Binary Classification Problem with Machine Learning

Overview

Solving Approach:

1) Ultimate Goal of the Assignment:

This assignment is about solving a binary classification problem: I need to come up with a binary classifier that classifies given instances
as class 1 (Positive) or class 0 (Negative) based on the numerical features provided.

2) Getting to know the Dataset:

Before selecting any machine learning algorithm for the given task, it is better to know and explore the dataset provided. We should look
for possible errors present inside the dataset. After analysing the data, I had the following findings.

I) The training set and test set are given separately, with the training CSV having 3910 records (instances) and the test CSV having 691 records.

II) There were no null values present in either the training or the test set, so there was no need to deal with them.

III) All the features present are numerical, with values ranging from just above 0.0 to pretty large numbers.

IV) training_set.csv comes with a label "Y" having two categories (binary values) of '0' and '1', but test_set.csv contains only instances (records), with no
labels provided for them.

V) From observation of the training and test datasets, the feature values show large variation: some vary between 0 and 5,
some between 0 and 1000, a few between 0 and 10000, and so on.

VI) Most importantly, the dataset is imbalanced. It has 1534 instances belonging to class '1' and 2376 instances of class '0', giving an imbalance
ratio of 1.5489. A short exploration sketch follows this list.
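
For reference, below is a minimal exploration sketch in pandas that reproduces these findings. The file names and the label column "Y" are as described above; the expected column counts assume the 57 features mentioned in section 4.

    import pandas as pd

    train = pd.read_csv("training_set.csv")
    test = pd.read_csv("test_set.csv")

    print(train.shape, test.shape)      # expected: (3910, 58) and (691, 57)
    print(train.isnull().sum().sum())   # 0 --> no null values (finding II)
    print(train["Y"].value_counts())    # 2376 of class '0', 1534 of class '1' (finding VI)
    print(2376 / 1534)                  # imbalance ratio ~ 1.5489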

3) Which Preprocessing techniques? And Why?

I) I used simple histograms, which helped to find the distribution of each feature, its density, and in what proportions the values vary.
II) The KDE plot is very important; it depicts the probability density at different values of a continuous variable.
III) The Box-Whiskers plot is very important and gives interesting insights on the dataset: the 1st quartile (25th percentile), the 2nd quartile
(median), the 3rd quartile (75th percentile), the upper bound, the lower bound, and especially outliers!!
IV) From the box plots, it is observed that the dataset has a lot of outliers, a few of them having very large values, hence giving scope for data
scaling or standardization.
V) Manually, I counted the number of features having values greater than 1.0. Some features are heavily concentrated between 0 and 1.0, but a few lie
totally outside this range. A plotting sketch follows this list.
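
Below is a sketch of these exploratory steps, assuming the train DataFrame from the previous snippet. The feature name "X1" and the choice of seaborn/matplotlib are illustrative assumptions, not part of the assignment.

    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.preprocessing import StandardScaler

    feature = "X1"  # hypothetical feature name, for illustration only

    fig, axes = plt.subplots(1, 3, figsize=(15, 4))
    train[feature].hist(bins=50, ax=axes[0])        # histogram: value distribution
    sns.kdeplot(data=train, x=feature, ax=axes[1])  # KDE: probability density
    sns.boxplot(data=train, x=feature, ax=axes[2])  # box-whiskers: quartiles + outliers
    plt.tight_layout()
    plt.show()

    # Count features whose maximum exceeds 1.0 (item V above).
    print((train.drop(columns="Y").max() > 1.0).sum())

    # The wide value ranges give scope for standardization (item IV above).
    X_scaled = StandardScaler().fit_transform(train.drop(columns="Y"))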

4) Feature Engineering and Feature Selection:

I) In feature engineering, we can combine existing features or use domain knowledge to design completely new features. Here I haven't explored the engineering
part, but focused on selection (though I removed only 1 feature!!).
II) There are 57 numerical features, so I decided to remove highly correlated features, as they cause redundancy in the dataset.
It is always advisable to remove highly correlated features.
III) I used the corr() function to find pairwise correlations between features and displayed them in the form of a correlation matrix.
IV) Due to the large number of features, the matrix was pretty messy!! So I manually filtered the features along with their lists of highly correlated
counterparts, using an 85% correlation threshold.
V) Only X32 and X34 were flagged by this criterion, and I decided to drop X32 (just a random decision, not based on p-value). A filtering sketch follows this list.
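
Below is a sketch of this correlation filtering, assuming the train DataFrame from earlier, the 0.85 threshold, and the decision to drop X32.

    import numpy as np

    X = train.drop(columns="Y")
    corr = X.corr().abs()

    # Keep only the upper triangle so every pair is inspected once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    high = upper.stack()          # stacking drops the NaNs of the lower triangle
    print(high[high > 0.85])      # per item V above: the X32/X34 pair

    X = X.drop(columns=["X32"])   # arbitrary choice between the correlated pair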

5) Algorithm Selection and Tuning:

I) Model selection has no strict rules; the decision is taken by considering a number of factors, such as number of features vs. number of instances,
linearity of the data, speed, accuracy, and so on.
II) From the feature pairplots, we found that the data is widely spread and very few feature pairs are linearly separable, so I decided to go with non-linear
models like KNN, Decision Tree / Random Forest, XGBoost, SVM, etc.
III) Since the total number of records is 3910 and the number of features is 57, records >> features; here KNN, kernel SVM, Decision Tree, and Random Forest are good choices.
IV) We have outliers in our data; KNN and tree-based models are fairly robust to outliers.
V) The given dataset is small, so I ignored the training-time criterion when filtering models.
VI) Finally I moved forward with the KNN, Random Forest Classifier, and XGBClassifier models, as sketched below.
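
Below is a sketch fitting the three shortlisted models on a stratified hold-out split. The hyperparameter values shown are illustrative defaults, not tuned choices; X comes from the previous snippet.

    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from xgboost import XGBClassifier

    X_tr, X_val, y_tr, y_val = train_test_split(
        X, train["Y"], test_size=0.2, stratify=train["Y"], random_state=42)

    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
        "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, model.score(X_val, y_val))  # plain accuracy on the hold-out split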

6) Which accuracy measures to use? And Why?

I) We are dealing with a binary classification task, so I decided to include multiple measures to assess the quality of the predictions and
the performance of the models (a sketch computing them follows at the end of this section).
II) Measures used --> model accuracy score, confusion matrix, precision score, recall score, F1-score, ROC_AUC score, ROC curve.
III) Accuracy Score - Accuracy is the most intuitive performance measure; it is simply the ratio of correctly predicted observations to the total observations.
IV) Confusion Matrix - The confusion matrix is a very popular measure used while solving classification problems. It can be applied to binary classification as well as multiclass classification problems.
A confusion matrix represents the counts of predicted vs. actual values. It gives four numbers: TP (True Positive), TN (True Negative), FP (False Positive), FN (False Negative).

          --------------------------------------------------------------------------------------------------------
          | True Negative (TN)  | number of negative examples classified accurately         | class '0' -> class '0' |
          | True Positive (TP)  | number of positive examples classified accurately         | class '1' -> class '1' |
          | False Positive (FP) | number of actual negative examples classified as positive | class '0' -> class '1' |
          | False Negative (FN) | number of actual positive examples classified as negative | class '1' -> class '0' |
          --------------------------------------------------------------------------------------------------------
V) Precision Score - Precision is the ratio of correctly predicted positive observations to the total predicted positive observations.
            -------------------------------------------------------------------------------
            | Precision = TP / (TP + FP) | Where TP = True Positive, FP = False Positive |
            -------------------------------------------------------------------------------
VI) Recall Score - This is also called 'Sensitivity'. It is the ratio of correctly predicted positive observations to all observations in the actual positive class.
            -------------------------------------------------------------------------------
            | Recall = TP / (TP + FN)    | Where TP = True Positive, FN = False Negative |
            -------------------------------------------------------------------------------
VII) F1 Score - F1 Score is the harmonic mean of Precision and Recall.
            ------------------------------------------------------------
            | F1 Score = 2*(Recall * Precision) / (Recall + Precision) |
            ------------------------------------------------------------
VIII) ROC Curve - It is a chart that visualizes the tradeoff between the true positive rate (TPR) and the false positive rate (FPR). Basically, for every threshold,
we calculate the TPR and FPR and plot them on one chart. The higher the TPR and the lower the FPR at each threshold, the better; classifiers whose curves sit
closer to the top-left are therefore better.
IX) ROC_AUC Score - The ROC_AUC score is nothing but the area under the ROC curve. The closer it is to 1, the better our classifier algorithm.
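
Below is a sketch computing all of the measures above with scikit-learn, assuming a fitted model and the X_val/y_val hold-out split from the previous snippet.

    import matplotlib.pyplot as plt
    from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                                 recall_score, f1_score, roc_auc_score, roc_curve)

    y_pred = model.predict(X_val)
    y_prob = model.predict_proba(X_val)[:, 1]   # scores for the positive class

    print("accuracy :", accuracy_score(y_val, y_pred))
    print("confusion:\n", confusion_matrix(y_val, y_pred))  # rows: actual, cols: predicted
    print("precision:", precision_score(y_val, y_pred))
    print("recall   :", recall_score(y_val, y_pred))
    print("f1 score :", f1_score(y_val, y_pred))
    print("roc auc  :", roc_auc_score(y_val, y_prob))

    fpr, tpr, _ = roc_curve(y_val, y_prob)      # one (FPR, TPR) point per threshold
    plt.plot(fpr, tpr)
    plt.plot([0, 1], [0, 1], linestyle="--")    # chance-level diagonal
    plt.xlabel("False Positive Rate")
    plt.ylabel("True Positive Rate")
    plt.show()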

7) How can we improve further?

    ------------------------------------------------------------------------------------------------------------------------------------
    | Data Imbalance         | We should reduce the data imbalance so that the model is not biased towards either class                 |
    | Remove Outliers        | We can use box-whiskers plots, Z-score or IQR based filtering, percentiles, Winsorization, etc.          |
    | Feature Engineering    | We can combine several features with each other to create new features, or use domain knowledge          |
    | Reduce Dimensionality  | We can use Principal Component Analysis (PCA) to keep the components carrying the most variance          |
    |                        | (t-SNE is mainly useful for visualization)                                                               |
    | Hyperparameter Tuning  | We can try different algorithms and tune their hyperparameters to optimal settings, avoiding overfitting |
    | Deep Neural Networks   | With a huge dataset, neural networks are very effective at capturing hidden representations, at the      |
    |                        | cost of reduced model interpretability                                                                   |
    ------------------------------------------------------------------------------------------------------------------------------------

    A sketch combining the imbalance and tuning ideas follows below.
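
As an illustration of two of these improvements, below is a sketch combining class weighting (for the imbalance) with cross-validated grid search (for hyperparameter tuning). The parameter grid is an assumption for demonstration, not a recommended setting; X_tr/y_tr come from the model-selection snippet.

    from sklearn.model_selection import GridSearchCV
    from sklearn.ensemble import RandomForestClassifier

    param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
    search = GridSearchCV(
        RandomForestClassifier(class_weight="balanced", random_state=42),
        param_grid, scoring="f1", cv=5)     # F1 is a sensible metric under imbalance
    search.fit(X_tr, y_tr)
    print(search.best_params_, search.best_score_)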

Please reach out with any doubts. Thank you!!

Owner
Dinesh Mali
Machine Learning Enthusiast, IITian, and Cricketer