
Overview

Home Credit Default Risk: Open Solution

Join the chat at https://gitter.im/minerva-ml/open-solution-home-credit

This is an open solution to the Home Credit Default Risk challenge 🏡 .

More competitions 🎇

Check our collection of public projects 🎁, where you can find multiple Kaggle competitions with code, experiments and outputs.

Our goals

We are building an entirely open solution to this competition. Specifically:

  1. Learning from the process - updates about new ideas, code and experiments are the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
  2. Encourage more Kagglers to start working on this competition.
  3. Deliver an open source solution with no strings attached. The code is available in our GitHub repository 💻. This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code 😃
  4. We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check: Home Credit Default Risk 📈 and the screens below.
Screenshots: train and validation results on folds 📊 and LightGBM learning curves 📊

Disclaimer

In this open source solution you will find references to neptune.ml. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution. You may run it as a plain Python script 🐍.

Note

As of 1.07.2019 we officially discontinued the neptune-cli client project, making neptune-client the only supported way to communicate with Neptune. That means you should run experiments via the python ... command or update your loggers to neptune-client. For more information about the new client, go to the neptune-client read-the-docs page.
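
For orientation, below is a minimal sketch of what experiment tracking with the legacy neptune-client API could look like; the project name, metric name and values are placeholders for illustration, not part of this repository:

    import neptune

    # Connect to your own Neptune project (placeholder name);
    # the API token is read from the NEPTUNE_API_TOKEN environment variable.
    neptune.init(project_qualified_name='my-workspace/home-credit')

    # Create an experiment and log an example per-fold metric.
    neptune.create_experiment(name='lightGBM-baseline', params={'learning_rate': 0.02})
    for auc in [0.791, 0.794, 0.789]:
        neptune.log_metric('fold_auc', auc)
    neptune.stop()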

How to start?

Learn about our solutions

  1. Check Kaggle forum and participate in the discussions.
  2. Check our Wiki pages 🏡 , where we document our work. See solutions below:
link to code | name                | CV     | LB                       | link to description
solution 1   | chestnut 🌰         | ?      | 0.742                    | LightGBM and basic features
solution 2   | seedling 🌱         | ?      | 0.747                    | Sklearn and XGBoost algorithms and groupby features
solution 3   | blossom 🌼          | 0.7840 | 0.790                    | LightGBM on selected features
solution 4   | tulip 🌷            | 0.7905 | 0.801                    | LightGBM with smarter features
solution 5   | sunflower 🌻        | 0.7950 | 0.804                    | LightGBM clean dynamic features
solution 6   | four leaf clover 🍀 | 0.7975 | 0.806 (priv. LB 0.79804) | Stacking by feature diversity and model diversity

Start experimenting with ready-to-use code

You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.

Installation (fast track)

  1. Clone the repository and install requirements (use Python 3.5)
pip3 install -r requirements.txt
  2. Register to neptune.ml (if you wish to use it)
  3. Run the experiment based on LightGBM:

🔱 with neptune:

neptune account login
neptune run --config configs/neptune.yaml main.py train_evaluate_predict_cv --pipeline_name lightGBM

🐍 as a plain Python script:

python main.py -- train_evaluate_predict_cv --pipeline_name lightGBM

Installation (step by step)

Step by step installation 🖥️

Hyperparameter Tuning

Various options for hyperparameter tuning are available:

  1. Random Search

    configs/neptune.yaml

      hyperparameter_search__method: random
      hyperparameter_search__runs: 100

    src/pipeline_config.py

        'tuner': {'light_gbm': {'max_depth': ([2, 4, 6], "list"),
                                'num_leaves': ([2, 100], "choice"),
                                'min_child_samples': ([5, 10, 15, 25, 50], "list"),
                                'subsample': ([0.95, 1.0], "uniform"),
                                'colsample_bytree': ([0.3, 1.0], "uniform"),
                                'min_gain_to_split': ([0.0, 1.0], "uniform"),
                                'reg_lambda': ([1e-8, 1000.0], "log-uniform"),
                                },
                  }
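
To make the spec concrete, here is a minimal sketch of how a random-search tuner could interpret the (values, method) pairs above and draw one parameter set per run; the sample_param helper is a hypothetical illustration, not the project's actual tuner code:

    import math
    import random

    def sample_param(values, method):
        """Draw a single value according to a hypothetical (values, method) spec."""
        if method == "list":         # pick one of the enumerated values
            return random.choice(values)
        if method == "choice":       # integer drawn from the [low, high] range
            low, high = values
            return random.randint(low, high)
        if method == "uniform":      # float drawn uniformly from [low, high]
            low, high = values
            return random.uniform(low, high)
        if method == "log-uniform":  # float drawn uniformly in log10 space
            low, high = values
            return 10 ** random.uniform(math.log10(low), math.log10(high))
        raise ValueError("Unknown sampling method: {}".format(method))

    space = {'max_depth': ([2, 4, 6], "list"),
             'subsample': ([0.95, 1.0], "uniform"),
             'reg_lambda': ([1e-8, 1000.0], "log-uniform")}

    # One parameter set per run; hyperparameter_search__runs would control the loop length.
    for run in range(100):
        params = {name: sample_param(values, method)
                  for name, (values, method) in space.items()}
        # ...train LightGBM with params and keep the configuration with the best CV score...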

Get involved

You are welcome to contribute your code and ideas to this open solution. To get started:

  1. Check competition project on GitHub to see what we are working on right now.
  2. Express your interest in a particular task by writing a comment in this task, or by creating a new one with your fresh idea.
  3. We will get back to you quickly in order to start working together.
  4. Check CONTRIBUTING for some more information.

User support

There are several ways to seek help:

  1. Kaggle discussion is our primary way of communication.
  2. Read the project's Wiki, where we publish descriptions of the code, pipelines and supporting tools such as neptune.ml.
  3. Submit an issue directly in this repo.
Comments
  • ModuleNotFoundError: No module named 'deepsense'

    There are two things that will make the processing of your issue faster:

    1. Make sure that you are using the latest version of the code,
    2. In case of a bug, it would be nice to provide more technical details, such as the execution command, error message or a script that reproduces your bug.

    Thanks!

    Kamil & Jakub,

    core contributors to the minerva.ml

    opened by poteman 9
  • use lightGBM_stacking pipeline raise error

    There are two things that will make the processing of your issue faster:

    1. Make sure that you are using the latest version of the code,
    2. In case of a bug, it would be nice to provide more technical details, such as the execution command, error message or a script that reproduces your bug.

    Thanks!

    Kamil & Jakub,

    core contributors to the minerva.ml

    While I run the script python -W ignore main.py -- train_evaluate_predict_cv --pipeline_name lightGBM_stacking

    it raises an error like the following:

    2018-08-10 21:03:04 steppy >>> done: initializing experiment directories
    2018-08-10 21:03:04 steppy >>> Step light_gbm_fold_0 initialized
    2018-08-10 21-03-04 home-credit >>> Start pipeline fit and transform on train
    2018-08-10 21:03:04 steppy >>> cleaning cache...
    2018-08-10 21:03:04 steppy >>> cleaning cache done
    Traceback (most recent call last):
      File "main.py", line 82, in <module>
        main()
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/click/core.py", line 722, in __call__
        return self.main(*args, **kwargs)
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/click/core.py", line 697, in main
        rv = self.invoke(ctx)
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/click/core.py", line 895, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/click/core.py", line 535, in invoke
        return callback(*args, **kwargs)
      File "main.py", line 78, in train_evaluate_predict_cv
        pipeline_manager.train_evaluate_predict_cv(pipeline_name, model_level, dev_mode, submit_predictions)
      File "/data1/huangzp/kaggle/home_risk_lightgbm/src/pipeline_manager.py", line 37, in train_evaluate_predict_cv
        train_evaluate_predict_cv(pipeline_name, model_level, dev_mode, submit_predictions)
      File "/data1/huangzp/kaggle/home_risk_lightgbm/src/pipeline_manager.py", line 173, in train_evaluate_predict_cv
        train_evaluate_predict_cv_first_level(pipeline_name, dev_mode, submit_predictions)
      File "/data1/huangzp/kaggle/home_risk_lightgbm/src/pipeline_manager.py", line 285, in train_evaluate_predict_cv_first_level
        model_level='first')
      File "/data1/huangzp/kaggle/home_risk_lightgbm/src/pipeline_manager.py", line 428, in _fold_fit_evaluate_predict_loop
        fold_id, pipeline_name, model_level)
      File "/data1/huangzp/kaggle/home_risk_lightgbm/src/pipeline_manager.py", line 517, in _fold_fit_evaluate_loop
        pipeline.fit_transform(train_data)
      File "/opt/anaconda2/envs/python3/lib/python3.6/site-packages/steppy/base.py", line 310, in fit_transform
        step_inputs[input_data_part] = data[input_data_part]
    KeyError: 'input'
    
    opened by ghost 8
  • How to export the feature correlation?

    Dear,

    Could you tell me how to export the feature correlation? I saw some features in your wiki with correlation scores. I would like to know how you obtained those scores. By the way, what scoring metric did you use? Thanks
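
    For reference, below is a minimal sketch of how pairwise feature correlations can be computed and exported with pandas; the file names and the choice of Pearson correlation are assumptions for illustration, not necessarily what the wiki used:

        import pandas as pd

        # Load the engineered feature table (hypothetical file name).
        features = pd.read_csv('features_train.csv')

        # Pairwise Pearson correlation between all numeric features.
        corr = features.corr(method='pearson')

        # Export the full correlation matrix for inspection.
        corr.to_csv('feature_correlation.csv')

        # If a TARGET column is present, rank features by absolute correlation with it.
        if 'TARGET' in corr.columns:
            corr['TARGET'].abs().sort_values(ascending=False).to_csv('target_correlation.csv')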

    opened by OsloAI 4
  • PermissionError: [WinError 5] Access is denied

    Hi. Neptune is a very handy tool and I'm getting started with it. However, I encountered some errors.

    It seems Neptune successfully read in the data; there are messages telling me the dataset has been initialized. But when the system starts the gradient boosting training, the following permission error happens.

    deepsense.neptune.client_library.threads.channel_values_thread WARNING channel_values_thread.py:389 - _validate() X-coordinate 1278101.8469238281 is not greater than the previous one 1278101.8469238281. Dropping point (x=1278101.8469238281, y=PermissionError: [WinError 5] Access is denied

    Besides, I'm wondering if such a runtime warning is normal.

    deepsense.neptune.client_library.threads.channel_values_thread WARNING channel_values_thread.py:389 - _validate() X-coordinate 1278101.8469238281 is not greater than the previous one 1278101.8469238281. Dropping point (x=1278101.8469238281, y= new_handle = steal_handle(parent_pid, pipe_handle) ) for channel stderr. X-coordinates must be strictly increasing for each channel.

    This warning comes up every second. Should it be like this?

    Could you please tell me why there is such an error?

    opened by MRrollingJerry 4
  • Notebook Updated ?

    Hello,

    Thank you for sharing your project, it is interesting! I have a question regarding the different notebooks that cover preprocessing (cleaning) and the creation of new features (hand-crafted and aggregated). I think the notebooks are not up to date compared with the code. For instance, if we look at the application data, we see five cleaning steps in the code, but only two are available in the notebook. Do you plan to update them? It is easier to understand all the data engineering from a notebook than from the complete code.

    opened by Shiro-LK 2
  • Add some logger info while reading data

    Pull Request template to Home Credit Default Risk Open Solution

    Code contributions

    The major - and most appreciated - contribution is a pull request with a feature or bug fix. Each pull request initiates a discussion about your code contribution.

    Each pull request should be provided with a minimal description of its contents.

    Thanks!

    Jakub & Kamil,

    core contributors to the minerva.ml

    opened by pranayaryal 2
  • KeyError: 'NAME_EDUCATION_TYPE_CODE_GENDER_AMT_CREDIT_min' while running the code

    Solution version: solution 5 | sunflower 🌻
    Command used to run the code: neptune run --config configs/neptune.yaml main.py train_evaluate_predict_cv --pipeline_name lightGBM

    Error message: KeyError: 'NAME_EDUCATION_TYPE_CODE_GENDER_AMT_CREDIT_min'. The error pops up in the feature_extraction.py file, in the GroupbyAggregateDiffs class and its _add_diff_features method. While iterating through self.groupby_aggregations with the line of code for groupby_cols, specs in self.groupby_aggregations:, the contents of self.groupby_aggregations are:

    [(['NAME_EDUCATION_TYPE', 'CODE_GENDER'], [('AMT_CREDIT', 'min'), ('AMT_CREDIT', 'mean'), ('AMT_CREDIT', 'max'), ('AMT_CREDIT', 'sum'), ('AMT_CREDIT', 'var'), ('AMT_ANNUITY', 'min'), ('AMT_ANNUITY', 'mean'), ('AMT_ANNUITY', 'max'), ('AMT_ANNUITY', 'sum'), ('AMT_ANNUITY', 'var'), ('AMT_INCOME_TOTAL', 'min'), ('AMT_INCOME_TOTAL', 'mean'), ('AMT_INCOME_TOTAL', 'max'), ('AMT_INCOME_TOTAL', 'sum'), ('AMT_INCOME_TOTAL', 'var'), ('AMT_GOODS_PRICE', 'min'), ('AMT_GOODS_PRICE', 'mean'), ('AMT_GOODS_PRICE', 'max'), ('AMT_GOODS_PRICE', 'sum'), ('AMT_GOODS_PRICE', 'var'), ('EXT_SOURCE_1', 'min'), ('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_1', 'max'), ('EXT_SOURCE_1', 'sum'), ('EXT_SOURCE_1', 'var'), ('EXT_SOURCE_2', 'min'), ('EXT_SOURCE_2', 'mean'), ('EXT_SOURCE_2', 'max'), ('EXT_SOURCE_2', 'sum'), ('EXT_SOURCE_2', 'var'), ('EXT_SOURCE_3', 'min'), ('EXT_SOURCE_3', 'mean'), ('EXT_SOURCE_3', 'max'), ('EXT_SOURCE_3', 'sum'), ('EXT_SOURCE_3', 'var'), ('OWN_CAR_AGE', 'min'), ('OWN_CAR_AGE', 'mean'), ('OWN_CAR_AGE', 'max'), ('OWN_CAR_AGE', 'sum'), ('OWN_CAR_AGE', 'var'), ('REGION_POPULATION_RELATIVE', 'min'), ('REGION_POPULATION_RELATIVE', 'mean'), ('REGION_POPULATION_RELATIVE', 'max'), ('REGION_POPULATION_RELATIVE', 'sum'), ('REGION_POPULATION_RELATIVE', 'var'), ('DAYS_REGISTRATION', 'min'), ('DAYS_REGISTRATION', 'mean'), ('DAYS_REGISTRATION', 'max'), ('DAYS_REGISTRATION', 'sum'), ('DAYS_REGISTRATION', 'var'), ('CNT_CHILDREN', 'min'), ('CNT_CHILDREN', 'mean'), ('CNT_CHILDREN', 'max'), ('CNT_CHILDREN', 'sum'), ('CNT_CHILDREN', 'var'), ('CNT_FAM_MEMBERS', 'min'), ('CNT_FAM_MEMBERS', 'mean'), ('CNT_FAM_MEMBERS', 'max'), ('CNT_FAM_MEMBERS', 'sum'), ('CNT_FAM_MEMBERS', 'var'), ('DAYS_ID_PUBLISH', 'min'), ('DAYS_ID_PUBLISH', 'mean'), ('DAYS_ID_PUBLISH', 'max'), ('DAYS_ID_PUBLISH', 'sum'), ('DAYS_ID_PUBLISH', 'var'), ('DAYS_BIRTH', 'min'), ('DAYS_BIRTH', 'mean'), ('DAYS_BIRTH', 'max'), ('DAYS_BIRTH', 'sum'), ('DAYS_BIRTH', 'var'), ('DAYS_EMPLOYED', 'min'), ('DAYS_EMPLOYED', 'mean'), ('DAYS_EMPLOYED', 'max'), ('DAYS_EMPLOYED', 'sum'), ('DAYS_EMPLOYED', 'var')]), (['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], [('AMT_CREDIT', 'min'), ('AMT_CREDIT', 'mean'), ('AMT_CREDIT', 'max'), ('AMT_CREDIT', 'sum'), ('AMT_CREDIT', 'var'), ('AMT_ANNUITY', 'min'), ('AMT_ANNUITY', 'mean'), ('AMT_ANNUITY', 'max'), ('AMT_ANNUITY', 'sum'), ('AMT_ANNUITY', 'var'), ('AMT_INCOME_TOTAL', 'min'), ('AMT_INCOME_TOTAL', 'mean'), ('AMT_INCOME_TOTAL', 'max'), ('AMT_INCOME_TOTAL', 'sum'), ('AMT_INCOME_TOTAL', 'var'), ('AMT_GOODS_PRICE', 'min'), ('AMT_GOODS_PRICE', 'mean'), ('AMT_GOODS_PRICE', 'max'), ('AMT_GOODS_PRICE', 'sum'), ('AMT_GOODS_PRICE', 'var'), ('EXT_SOURCE_1', 'min'), ('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_1', 'max'), ('EXT_SOURCE_1', 'sum'), ('EXT_SOURCE_1', 'var'), ('EXT_SOURCE_2', 'min'), ('EXT_SOURCE_2', 'mean'), ('EXT_SOURCE_2', 'max'), ('EXT_SOURCE_2', 'sum'), ('EXT_SOURCE_2', 'var'), ('EXT_SOURCE_3', 'min'), ('EXT_SOURCE_3', 'mean'), ('EXT_SOURCE_3', 'max'), ('EXT_SOURCE_3', 'sum'), ('EXT_SOURCE_3', 'var'), ('OWN_CAR_AGE', 'min'), ('OWN_CAR_AGE', 'mean'), ('OWN_CAR_AGE', 'max'), ('OWN_CAR_AGE', 'sum'), ('OWN_CAR_AGE', 'var'), ('REGION_POPULATION_RELATIVE', 'min'), ('REGION_POPULATION_RELATIVE', 'mean'), ('REGION_POPULATION_RELATIVE', 'max'), ('REGION_POPULATION_RELATIVE', 'sum'), ('REGION_POPULATION_RELATIVE', 'var'), ('DAYS_REGISTRATION', 'min'), ('DAYS_REGISTRATION', 'mean'), ('DAYS_REGISTRATION', 'max'), ('DAYS_REGISTRATION', 'sum'), ('DAYS_REGISTRATION', 'var'), ('CNT_CHILDREN', 'min'), ('CNT_CHILDREN', 'mean'), 
('CNT_CHILDREN', 'max'), ('CNT_CHILDREN', 'sum'), ('CNT_CHILDREN', 'var'), ('CNT_FAM_MEMBERS', 'min'), ('CNT_FAM_MEMBERS', 'mean'), ('CNT_FAM_MEMBERS', 'max'), ('CNT_FAM_MEMBERS', 'sum'), ('CNT_FAM_MEMBERS', 'var'), ('DAYS_ID_PUBLISH', 'min'), ('DAYS_ID_PUBLISH', 'mean'), ('DAYS_ID_PUBLISH', 'max'), ('DAYS_ID_PUBLISH', 'sum'), ('DAYS_ID_PUBLISH', 'var'), ('DAYS_BIRTH', 'min'), ('DAYS_BIRTH', 'mean'), ('DAYS_BIRTH', 'max'), ('DAYS_BIRTH', 'sum'), ('DAYS_BIRTH', 'var'), ('DAYS_EMPLOYED', 'min'), ('DAYS_EMPLOYED', 'mean'), ('DAYS_EMPLOYED', 'max'), ('DAYS_EMPLOYED', 'sum'), ('DAYS_EMPLOYED', 'var')]), (['NAME_FAMILY_STATUS', 'CODE_GENDER'], [('AMT_CREDIT', 'min'), ('AMT_CREDIT', 'mean'), ('AMT_CREDIT', 'max'), ('AMT_CREDIT', 'sum'), ('AMT_CREDIT', 'var'), ('AMT_ANNUITY', 'min'), ('AMT_ANNUITY', 'mean'), ('AMT_ANNUITY', 'max'), ('AMT_ANNUITY', 'sum'), ('AMT_ANNUITY', 'var'), ('AMT_INCOME_TOTAL', 'min'), ('AMT_INCOME_TOTAL', 'mean'), ('AMT_INCOME_TOTAL', 'max'), ('AMT_INCOME_TOTAL', 'sum'), ('AMT_INCOME_TOTAL', 'var'), ('AMT_GOODS_PRICE', 'min'), ('AMT_GOODS_PRICE', 'mean'), ('AMT_GOODS_PRICE', 'max'), ('AMT_GOODS_PRICE', 'sum'), ('AMT_GOODS_PRICE', 'var'), ('EXT_SOURCE_1', 'min'), ('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_1', 'max'), ('EXT_SOURCE_1', 'sum'), ('EXT_SOURCE_1', 'var'), ('EXT_SOURCE_2', 'min'), ('EXT_SOURCE_2', 'mean'), ('EXT_SOURCE_2', 'max'), ('EXT_SOURCE_2', 'sum'), ('EXT_SOURCE_2', 'var'), ('EXT_SOURCE_3', 'min'), ('EXT_SOURCE_3', 'mean'), ('EXT_SOURCE_3', 'max'), ('EXT_SOURCE_3', 'sum'), ('EXT_SOURCE_3', 'var'), ('OWN_CAR_AGE', 'min'), ('OWN_CAR_AGE', 'mean'), ('OWN_CAR_AGE', 'max'), ('OWN_CAR_AGE', 'sum'), ('OWN_CAR_AGE', 'var'), ('REGION_POPULATION_RELATIVE', 'min'), ('REGION_POPULATION_RELATIVE', 'mean'), ('REGION_POPULATION_RELATIVE', 'max'), ('REGION_POPULATION_RELATIVE', 'sum'), ('REGION_POPULATION_RELATIVE', 'var'), ('DAYS_REGISTRATION', 'min'), ('DAYS_REGISTRATION', 'mean'), ('DAYS_REGISTRATION', 'max'), ('DAYS_REGISTRATION', 'sum'), ('DAYS_REGISTRATION', 'var'), ('CNT_CHILDREN', 'min'), ('CNT_CHILDREN', 'mean'), ('CNT_CHILDREN', 'max'), ('CNT_CHILDREN', 'sum'), ('CNT_CHILDREN', 'var'), ('CNT_FAM_MEMBERS', 'min'), ('CNT_FAM_MEMBERS', 'mean'), ('CNT_FAM_MEMBERS', 'max'), ('CNT_FAM_MEMBERS', 'sum'), ('CNT_FAM_MEMBERS', 'var'), ('DAYS_ID_PUBLISH', 'min'), ('DAYS_ID_PUBLISH', 'mean'), ('DAYS_ID_PUBLISH', 'max'), ('DAYS_ID_PUBLISH', 'sum'), ('DAYS_ID_PUBLISH', 'var'), ('DAYS_BIRTH', 'min'), ('DAYS_BIRTH', 'mean'), ('DAYS_BIRTH', 'max'), ('DAYS_BIRTH', 'sum'), ('DAYS_BIRTH', 'var'), ('DAYS_EMPLOYED', 'min'), ('DAYS_EMPLOYED', 'mean'), ('DAYS_EMPLOYED', 'max'), ('DAYS_EMPLOYED', 'sum'), ('DAYS_EMPLOYED', 'var')]), (['CODE_GENDER', 'ORGANIZATION_TYPE'], [('AMT_ANNUITY', 'mean'), ('AMT_INCOME_TOTAL', 'mean'), ('DAYS_REGISTRATION', 'mean'), ('EXT_SOURCE_1', 'mean')]), (['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], [('AMT_ANNUITY', 'mean'), ('CNT_CHILDREN', 'mean'), ('DAYS_ID_PUBLISH', 'mean')]), (['CODE_GENDER', 'NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], [('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_2', 'mean')]), (['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], [('AMT_CREDIT', 'mean'), ('AMT_REQ_CREDIT_BUREAU_YEAR', 'mean'), ('APARTMENTS_AVG', 'mean'), ('BASEMENTAREA_AVG', 'mean'), ('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_2', 'mean'), ('EXT_SOURCE_3', 'mean'), ('NONLIVINGAREA_AVG', 'mean'), ('OWN_CAR_AGE', 'mean'), ('YEARS_BUILD_AVG', 'mean')]), (['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], [('ELEVATORS_AVG', 'mean'), ('EXT_SOURCE_1', 
'mean')]), (['OCCUPATION_TYPE'], [('AMT_ANNUITY', 'mean'), ('CNT_CHILDREN', 'mean'), ('CNT_FAM_MEMBERS', 'mean'), ('DAYS_BIRTH', 'mean'), ('DAYS_EMPLOYED', 'mean'), ('DAYS_ID_PUBLISH', 'mean'), ('DAYS_REGISTRATION', 'mean'), ('EXT_SOURCE_1', 'mean'), ('EXT_SOURCE_2', 'mean'), ('EXT_SOURCE_3', 'mean')])]
    

    I am assuming that this is where it combines 'NAME_EDUCATION_TYPE', 'CODE_GENDER', 'AMT_CREDIT' and 'min' into a single feature name, and that this key is missing. I tried to isolate the error, but the code base is quite large. I figured a community-based approach to resolving the bug might be ideal.
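
    For context, here is a minimal sketch of how groupby-aggregate diff features of this shape are often built with pandas and where such a combined column name comes from; the naming scheme and helper below are assumptions for illustration, not the project's exact implementation:

        import pandas as pd

        def add_groupby_diff_features(df, groupby_cols, specs):
            """Add aggregate features and their diffs for one (groupby_cols, specs) entry."""
            for agg_col, agg_method in specs:
                # Assumed naming convention, e.g. NAME_EDUCATION_TYPE_CODE_GENDER_AMT_CREDIT_min
                group_feature = '{}_{}_{}'.format('_'.join(groupby_cols), agg_col, agg_method)
                # Aggregate per group and merge the result back onto the rows.
                agg = (df.groupby(groupby_cols)[agg_col]
                         .agg(agg_method)
                         .reset_index()
                         .rename(columns={agg_col: group_feature}))
                df = df.merge(agg, on=groupby_cols, how='left')
                # Diff feature: raw value minus its group aggregate. A KeyError on
                # group_feature means the aggregate column was never created or merged.
                df[group_feature + '_diff'] = df[agg_col] - df[group_feature]
            return df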

    opened by KartikKannapur 2
  • ValueError: No transformer cached credit_card_balance_cleaning_fold_1

    When I run "python main.py -- train_evaluate_predict_cv --pipeline_name XGBoost", I get the error in the title. However, I can run the lightGBM pipeline successfully.

    opened by liyinxiao 1
  • Ming zhang

    Pull Request template to Home Credit Default Risk Open Solution

    Code contributions

    The major - and most appreciated - contribution is a pull request with a feature or bug fix. Each pull request initiates a discussion about your code contribution.

    Each pull request should be provided with a minimal description of its contents.

    Thanks!

    Jakub & Kamil,

    core contributors to the minerva.ml

    opened by Bowen-Guo 1
  • Update models.py - Neural Network framework

    Added a general framework for a Keras-based neural network (#136)

    Pull Request template to Home Credit Default Risk Open Solution

    Code contributions

    The major - and most appreciated - contribution is a pull request with a feature or bug fix. Each pull request initiates a discussion about your code contribution.

    Each pull request should be provided with a minimal description of its contents.

    Thanks!

    Jakub & Kamil,

    core contributors to the minerva.ml

    opened by yotamco100 1
  • CV improved LB not

    Hi guys,

    I have extracted some more features from the bureau file. It improved CV from 0.7950 to 0.7974 (std 0.0024), but the LB dropped from 0.804 to 0.802. Has anyone experienced this before? I don't use any TARGET-related features, and I don't think it is overfitting.

    opened by davutpolat 1
  • Best configurations and models used

    Hello, I would like to use your pipeline described as 'solution 6', but I don't get which models were used at the 1st level to produce the out-of-fold (oof) predictions.

    Also, I would like to know which configurations were used for the 1st level and the 2nd level respectively. In the configs directory there are two configuration files (neptune_stacking.yaml and neptune.yaml) and I'm confused about which one was used for each level.

    Could you let me know the models and the configurations used for the 1st and the 2nd layer respectively?

    Thank you!!

    opened by kssteven418 0
Releases (solution-6)