Automated Hyperparameter Optimization Competition

Overview

QQ Browser 2021 AI Algorithm Competition - Automated Hyperparameter Optimization Contest

ACM CIKM 2021 AnalyticCup

The choice of hyperparameters has a critical effect on models and strategies in recommendation systems, but hyperparameters are usually tuned by hand based on experience, which is costly to maintain and yields sub-optimal results. This track therefore focuses on automated hyperparameter optimization, evaluated on anonymized, realistic industrial tasks and datasets. The task is a black-box optimization problem: given the space of possible hyperparameter values, each iteration returns the reward for one set of hyperparameters, and participants must find hyperparameters whose reward is as large as possible within a fixed iteration budget. The final ranking is determined by the maximum reward each participant finds.
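
To make the setting concrete, the following self-contained sketch mimics the evaluation loop: a searcher proposes hyperparameters, each proposal costs one black-box query, and only the maximum reward matters. The toy reward function, the parameter names x and y, and the budget of 100 iterations are invented for illustration; the real tasks live under input/ and are only queried through the THPO-Kit.

from random import uniform

def evaluate(params):
    # Stand-in black-box reward; the real reward functions are the anonymized
    # industrial tasks and are only accessible through the evaluation kit.
    return -(params["x"] - 0.3) ** 2 - (params["y"] + 0.1) ** 2

def suggest():
    # Stand-in searcher: propose hyperparameters uniformly at random.
    return {"x": uniform(-1.0, 1.0), "y": uniform(-1.0, 1.0)}

best_reward = float("-inf")
for _ in range(100):                       # fixed iteration budget
    params = suggest()                     # searcher proposes hyperparameters
    reward = evaluate(params)              # one black-box query per proposal
    best_reward = max(best_reward, reward)

print("best reward found:", best_reward)   # ranking uses each team's maximum reward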

1. Resources

2. Repo structure

|--example_random_searcher   	    # example of random search
|  `--searcher.py
|
|--example_bayesian_optimization    # example of bayesian optimization
|  |--requirements.txt              # extra package requirements
|  `--searcher.py
|
|--input                            # testcases
|  |--data-2
|  `--data-30
|
|--thpo                             # thpo-kit
|  |--__init__.py
|  |--abstract_searcher.py
|  |--common.py
|  |--evaluate_function.py
|  |--reward_calculation.py
|  |--run_search_one_time.py
|  `--run_search.py
|
|--main.py                          # main
|--local_test.sh                    # script for local test
|--prepare_submission.sh            # script for submission
|--environments.txt                 # packages pre-installed in the evaluation environment
`--requirements.txt                 # demo requirements

3. Quick start

3.1 Environment setup

The THPO-Kit toolkit is written in Python 3. Its dependencies are listed in requirements.txt and must be installed before the scripts can run. Install them with pip3:

pip3 install -r requirements.txt

3.2 Create a searcher

  1. Referring to example_random_searcher, create a new directory my_algo for your algorithm
  2. Create a new searcher.py file in the my_algo directory
  3. Implement your own Searcher class in the searcher.py file (the file name and class name are not allowed to be customized)
  4. Implement __init__ and suggest functions
  5. Modify local_test.sh and change SEARCHER to my_algo
  6. Execute the local_test.sh script to get the results of the algorithm

Step 1 - Step 2:[root folder]

|--my_algo
|  |--requirements.txt
|  `--searcher.py 
|--local_test.sh

Step 3 - Step 4:[searcher.py]

# MUST import AbstractSearcher from thpo.abstract_searcher
from thpo.abstract_searcher import AbstractSearcher
from random import randint

class Searcher(AbstractSearcher):
    searcher_name = "RandomSearcher"

    def __init__(self, parameters_config, n_iter, n_suggestion):
        AbstractSearcher.__init__(self, 
                                  parameters_config, 
                                  n_iter,
                                  n_suggestion)

    def suggest(self, suggestion_history, n_suggestions=1):
        # Ignore the history: for each suggestion, pick one value uniformly
        # at random from every parameter's "coords" grid.
        next_suggestions = []
        for i in range(n_suggestions):
            next_suggest = {
                name: conf["coords"][randint(0, len(conf["coords"]) - 1)]
                for name, conf in self.parameters_config.items()
            }
            next_suggestions.append(next_suggest)
        return next_suggestions

Step 5:[local_test.sh]

SEARCHER="my_algo"

3.3 Local test

Execute the script local_test.sh for local evaluation

./local_test.sh

Execution output:

====================== run search result ========================
 err_code:  0  err_msg:  
========================= iters means ===========================
func: data-2 iteration best: [25.24271821 26.36435157 12.77928619 10.19180929 11.3147711  10.17430656
 12.77928619 27.79752169 26.36793589 11.12007615]
func: data-30 iteration best: [-0.95264345 -0.27725879 -0.36873091 -0.68088963 -0.28840479 -0.50006427
 -0.32088949 -0.78627201 -0.53204227 -0.98427191]
========================= fianl score ============================
example_bayesian_optimization final score:  0.47173337831255463
==================================================================

3.4 Submission

Use the prepare_submission.sh script to package your searcher into a zip file, then upload the zip file through the code submission entry on the competition website.

./prepare_submission.sh example_random_searcher

Execution output:

upload_example_random_searcher_08131917
  adding: requirements.txt (stored 0%)
  adding: searcher.py (deflated 66%)
----------------------------------------------------------------
Built achive for upload
Archive:  ./upload_example_random_searcher_08131917.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  08-13-2021 19:17   requirements.txt
     3767  08-13-2021 19:17   searcher.py
---------                     -------
     3767                     2 files
For scoring, upload upload_example_random_searcher_08131917.zip at address:
https://algo.browser.qq.com/