Bayesian Optimization using GPflow

Overview

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package would not exist without their effort.


Install

The easiest way to install GPflowOpt is to clone this repository and run

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.
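
For example, assuming the repository URL used elsewhere on this page, the full sequence is:

git clone https://github.com/GPflow/GPflowOpt.git
cd GPflowOpt
pip install . --process-dependency-links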

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
    title = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
     year = {2017},
      url = {https://arxiv.org/abs/1711.03845}
}
Comments
  • GPflow 1.0

    GPflow 1.0

    Following up on #86, this is the development of a 1.0-compatible GPflowOpt version. It is far from done; the biggest difficulty (Acquisition) is unfortunately still ahead.

    At the same time, lots of tests are affected: I'm reworking them to use some of the cool pytest features. When this work is over, I hope to do another PR to improve testing further (split out computationally demanding tests into system tests, use mocks to test things which currently trigger BO runs and model optimizations).

    do not merge yet Discussion 
    opened by javdrher 17
  • Cholesky failures due to inappropriate initial hyperparameters

    Cholesky failures due to inappropriate initial hyperparameters

    As mentioned in #4, tests often failed (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters didn't work all the time. Increasing the likelihood variance sometimes helps slightly, but isn't very robust either.

    Right now the tests specify lengthscales for the initial model and apply a hyperprior on the kernel variance. Each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point. In addition, restarts are applied by randomizing the Params. This approach made things a lot more stable, but it isn't perfect yet. Especially in longer BO runs, reverting to the supplied lengthscales each time ultimately causes crashes.

    Some things we may consider:

    • Normalizing the input/output data. Tested this a bit; it didn't solve the issue. Additionally, the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway.
    • Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and configures the initial state. I think this is ultimately required for more complex modeling approaches, but for simple scenarios with GPR it has to work automatically.
    • Applying hyperpriors is going to be important (see the sketch below).

    I'd love to hear thoughts on how to improve this.
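
    For reference, a minimal sketch (not the test code itself) of the current mitigations in the GPflow 0.x API; the concrete numbers are illustrative assumptions, not recommendations:

    import numpy as np
    import gpflow

    # toy stand-in for the initial BO design
    X = np.random.rand(20, 2)
    Y = np.random.rand(20, 1)

    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales = np.array([0.5, 0.5])             # specified initial lengthscales
    model.kern.variance.prior = gpflow.priors.Gamma(2.0, 0.5)  # hyperprior on the kernel variance
    model.likelihood.variance = 0.01                           # modest initial noise level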

    help wanted 
    opened by javdrher 14
  • Question regarding getting started example from documentation

    Question regarding getting started example from documentation

    Good morning guys,

    I am currently trying to get gpflowopt up and running for an optimization problem. Naturally, I first tried the example provided in the documentation. Now, while I do get the same result as in the documentation, I am a bit puzzled about the function evaluations the optimizer is choosing. To be more specific, the optimizer always chooses to evaluate the function at the point [0.0, 0.5] in all 15 iterations. I am probably overlooking something, as this does not seem to be the desired behavior, right? The optimizer does not really seem to be optimizing. Can anyone point out the mistake I made when setting up the problem? I am pretty sure I followed the instructions of the example in the documentation to the letter.

    This is the code that I am running:

    import numpy as np
    from gpflowopt.domain import ContinuousParameter
    import gpflow
    from gpflowopt.bo import BayesianOptimizer
    from gpflowopt.design import LatinHyperCube
    from gpflowopt.acquisition import ExpectedImprovement
    #from gpflowopt.optim import SciPyOptimizer
    
    def fx(X):
        X = np.atleast_2d(X)
        result = np.sum(np.square(X), axis=1)[:, None]
        print("X: {}".format(X))
        print("fx: {}".format(result))
        return result
    
    
    domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
    
    # Use standard Gaussian process Regression
    lhd = LatinHyperCube(21, domain)
    X = lhd.generate()
    Y = fx(X)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
    
    # Now create the Bayesian Optimizer
    alpha = ExpectedImprovement(model)
    optimizer = BayesianOptimizer(domain, alpha)
    
    # Run the Bayesian optimization
    #with optimizer.silent():
    r = optimizer.optimize(fx, n_iter=15)
    print(r)
    

    And this is the output I am seeing:

    python try_out_gpflowopt.py 
    2017-12-11 09:28:09.790044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    X: [[ 0.2   0.65]
     [ 0.   -0.1 ]
     [-0.8   0.5 ]
     [ 0.4   1.4 ]
     [-0.6   1.25]
     [ 1.    0.05]
     [ 1.2   0.8 ]
     [-1.   -0.25]
     [ 0.8  -0.7 ]
     [ 1.4   1.55]
     [-1.6   1.1 ]
     [-1.8   0.35]
     [-0.2  -0.85]
     [ 1.8   0.2 ]
     [-0.4   1.85]
     [ 0.6   2.  ]
     [ 2.    0.95]
     [ 1.6  -0.55]
     [-1.4   1.7 ]
     [-1.2  -1.  ]
     [-2.   -0.4 ]]
    fx: [[ 0.4625]
     [ 0.01  ]
     [ 0.89  ]
     [ 2.12  ]
     [ 1.9225]
     [ 1.0025]
     [ 2.08  ]
     [ 1.0625]
     [ 1.13  ]
     [ 4.3625]
     [ 3.77  ]
     [ 3.3625]
     [ 0.7625]
     [ 3.28  ]
     [ 3.5825]
     [ 4.36  ]
     [ 4.9025]
     [ 2.8625]
     [ 4.85  ]
     [ 2.44  ]
     [ 4.16  ]]
    Warning: optimization restart 4/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ -3.76012797e-10   4.99988509e-01]]
    fx: [[ 0.24998851]]
    Warning: optimization restart 1/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 4/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 3/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
         fun: array([ 0.01])
     message: 'OK'
        nfev: 15
     success: True
           x: array([[ 0. , -0.1]])
    

    For the sake of completeness, these are the steps I took to set up GPflowOpt:

    1. Created a new conda environment based on Python 3.5
    2. Cloned the gpflowopt repo
    3. Ran pip install . --process-dependency-links

    BTW, this is the first issue I've ever created on GitHub, so please forgive me if I am violating any conventions, and please let me know if I left out any crucial information.

    opened by jbi35 9
  • Overflow warnings

    Overflow warnings

    During calls to optimize(), sometimes UserWarnings pop up:

    /home/javdrher/.virtualenvs/gpflowopt/lib/python3.5/site-packages/GPflow/transforms.py:129: RuntimeWarning: overflow encountered in exp
      result = np.log(1. + np.exp(x)) + self._lower

    Typically this is quite harmless; if it's really causing trouble, it's usually followed by a Cholesky decomposition exception. However, those warnings mess up output, specifically in the documentation notebooks. I was thinking of silencing the warnings, any reason not to?
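
    For instance, a minimal sketch of silencing just this warning via the standard library (matching on the exact warning text is an assumption):

    import warnings

    # suppress only the overflow RuntimeWarning raised inside the softplus transform
    warnings.filterwarnings('ignore', message='overflow encountered in exp',
                            category=RuntimeWarning)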

    question 
    opened by javdrher 9
  • Bug fix in pareto.py, Pareto::divide_conquer_nd

    Bug fix in pareto.py, Pareto::divide_conquer_nd

    The algorithm in Pareto::divide_conquer_nd fails when two points in the Pareto set have the same value in a certain dimension. An example is included in the modified pareto.py: the Pareto set d21 contains three points, two of which have a value of 2.0 in the first dimension. Ordering the three points in different ways results in different values of the hypervolume (28 and 32), both of which are wrong (it should be 29).

    The issue is in the dominance test associated with the _is_test_required method and pseudo_pf. The array pseudo_pf assigns different ranks to the same values. Therefore, by reordering the Pareto set in the test case, different pseudo Pareto sets are generated for the same Pareto set.

    I figured out two ways to fix this. One is to fix pseudo_pf by sorting the Pareto set such that equal values are assigned the same rank (e.g. using scipy.stats.rankdata). The other is to fix the dominance test by checking the actual Pareto set. The first one leads to more iterations in the algorithm, so I implemented the second approach in pareto.py with minimal modification, although some simplifications of the code are possible.
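
    To illustrate the tie-breaking problem behind the first option, a small standalone example (the column values are hypothetical, not the d21 data from the PR):

    import numpy as np
    from scipy.stats import rankdata

    col = np.array([2.0, 2.0, 1.0])  # two tied values in one dimension

    # argsort-based pseudo ranks break ties arbitrarily, so they depend on input order
    print(np.argsort(np.argsort(col)))   # [1 2 0]

    # rankdata assigns tied values the same rank, independent of ordering
    print(rankdata(col, method='min'))   # [2. 2. 1.]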

    opened by smanist 5
  • When installing GPflowOpt, it downgrades GPflow from 1.2 to 0.4

    When installing GPflowOpt, it downgrades GPflow from 1.2 to 0.4

    I have installed the latest GPflow version (1.2) using pip. Later, when I tried to install GPflowOpt using the following command:

    pip install git+https://github.com/GPflow/GPflowOpt.git

    it downgraded GPflow from 1.2 to 0.4.

    opened by pullanagari 5
  • Is GPflowOpt still compatible with GPflow functions?

    Is GPflowOpt still compatible with GPflow functions?

    I just downloaded GPflowOpt, yet nothing can run due to slight changes made in GPflow. For example, there is now a 'core' subfolder, and the AutoFlow function seems to have changed. I have tried to apply some changes on my own (it is not hard to change 'from gpflow.param import DataHolder, Autoflow' to two separate imports from their correct modules, params and core), but note that @Autoflow calls now need to be @gpflow.autoflow. I spent several hours changing this for pretty much every function/class.

    Yet now it seems certain classes have also changed significantly. 'Parameterized' no longer has the attribute 'highest_parent', which is needed for SciPyOptimizer.

    At this point, not a single thing can be called from GPflowOpt without error.

    opened by grahamski2323 5
  • Max-Value Entropy Search

    Max-Value Entropy Search

    I implemented the recent acquisition function Max-Value Entropy Search from:

    Wang, Z. & Jegelka, S.. (2017). Max-value Entropy Search for Efficient Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:3627-3635

    We named it Min-Value Entropy Search because the GPflow framework seeks the minimum of the function. There is a notebook evaluating the method on the Shekel function, and it seems to perform well.
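
    For context, a hedged usage sketch, assuming the acquisition landed as MinValueEntropySearch and takes the model and domain (anything beyond that is an assumption):

    from gpflowopt.acquisition import MinValueEntropySearch
    from gpflowopt.bo import BayesianOptimizer

    # model and domain built as in the getting-started example above
    acquisition = MinValueEntropySearch(model, domain)
    optimizer = BayesianOptimizer(domain, acquisition)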

    opened by nknudde 5
  • MGP

    MGP

    In this pull request I implemented the approximately marginalised GP as described in issue #39. It currently supports multi-output GPs. A notebook and some tests are included.

    opened by nknudde 5
  • GPR works, VGP doesn't

    GPR works, VGP doesn't

    To improve the speed of optimisation, I replaced GPR with VGP as follows:

    domain = np.sum([GPflowOpt.domain.ContinuousParameter(f'mux{i}', mm[i], mx[i]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'muy{i}', mm[i+7], mx[i+7]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmax{i}', 1e-7, 1.) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmay{i}', 1e-7, 1.) for i in range(7)])
    domain += GPflowOpt.domain.ContinuousParameter('offset', endo * 0.7, endo * 1.3)

    design = GPflowOpt.design.RandomDesign(500, domain)
    X = design.generate()
    Y = np.vstack([obj(x.reshape(1, -1)) for x in X])

    model = GPflow.vgp.VGP(X, Y, GPflow.kernels.RBF(29, lengthscales=X.std(axis=0)),
                           likelihood=GPflow.likelihoods.Gaussian())
    acquisition = GPflowOpt.acquisition.ExpectedImprovement(model)
    opt = GPflowOpt.optim.StagedOptimizer([GPflowOpt.optim.MCOptimizer(domain, 500),
                                           GPflowOpt.optim.SciPyOptimizer(domain)])
    optimizer = GPflowOpt.BayesianOptimizer(domain, acquisition, optimizer=opt)
    optimizer.optimize(obj, n_iter=500)

    GPR works, but with VGP I receive the following error:

    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    2017-07-18 23:03:28.798171: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [501,1] vs. [500,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    Warning: optimization restart 1/5 failed
    2017-07-18 23:03:28.898935: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.898992: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899066: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899289: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    Warning: optimization restart 2/5 failed

    I'm using master GPflow and GPflowOpt on TensorFlow 1.2 and Python 3.6.

    Thanks.

    bug 
    opened by mccajm 5
  • Avoid (some) duplicate optimizes

    Avoid (some) duplicate optimizes

    Following #52, here is some code which gets rid of one optimize call (in the case where no initial design is specified). The PR also includes a context manager which can be used to suspend all optimizes. This is mostly useful for the lower-level API.

    Note: in the following release the data mechanism will undergo some changes, and enabling the scaling should be moved to avoid another scaling.

    enhancement do not merge yet 
    opened by javdrher 4
  • Issue: Install package gpflowopt

    Issue: Install package gpflowopt

    Hello, can you help me to solve this issue?

    ERROR: Could not find a version that satisfies the requirement GPflow==0.5.0 (from gpflowopt) (from versions: 1.4.1.linux-x86_64, 1.5.0.linux-x86_64, 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.1, 1.5.0, 1.5.1, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.2.0, 2.2.1, 2.3.0, 2.3.1, 2.4.0, 2.5.1, 2.5.2)
    ERROR: No matching distribution found for GPflow==0.5.0

    I tried to install it in Google Colab and Spyder, but it is not working.

    opened by SamirLamin 2
  • Coupled or decoupled constrained BO?

    Coupled or decoupled constrained BO?

    I am very new to the BO domain and am working on constrained BO, and I found this wonderful tool that supports it. As I understand from the code, the acquisition functions for the objective and the constraint are multiplied. I have a question about that: is this constrained BO then considered coupled?

    opened by pallavimitra 0
  • Ask/tell interface

    Ask/tell interface

    Is there an ask/tell interface? If not, is there a workaround?

    I'd like to initialize the optimizer with already known function evaluations. Likewise, I'd like to query the next data point that should be evaluated, according to the acquisition function.
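
    As a rough workaround sketch built on the lower-level API (this mirrors what BayesianOptimizer appears to do internally, but treat the exact calls as assumptions rather than a supported recipe):

    import numpy as np
    from gpflowopt.optim import SciPyOptimizer

    # "tell": push all known evaluations into the acquisition (and its models)
    acquisition.set_data(X_known, Y_known)

    # "ask": maximize the acquisition by minimizing its negation
    def inverse_acquisition(x):
        a, g = acquisition.evaluate_with_gradients(np.atleast_2d(x))
        return -a, -g

    x_next = SciPyOptimizer(domain).optimize(inverse_acquisition).x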

    opened by moi90 2
  • Discrete variables optimization

    Discrete variables optimization

    Hey there, I found your project, which seems promising for a problem I am currently working on. However, as far as I can see, the option to include discrete variables in the optimization is not yet implemented? Is this correct? And if so, is there any development in this direction currently going on?

    opened by HolmKiilerich 2
  • Can I use PyTorch within the objective function?

    Can I use PyTorch within the objective function?

    Hi, before I start writing my code with GPflowOpt, I need to check with you whether I can use a PyTorch model within the objective function. My objective function needs the decoder of a VAE defined using PyTorch. In addition to other essential parameters, I want to pass this model as one parameter to the objective function. I understand that GPflowOpt is based on TF, but I am not sure whether the objective can be any function independent of TF. Thanks in advance...
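
    From the optimizer's point of view the objective is just a numpy-in/numpy-out callable (compare fx in the getting-started example above), so a sketch along these lines should work; decoder is a hypothetical PyTorch module passed in via a closure:

    import numpy as np
    import torch

    def make_objective(decoder):
        def objective(X):
            X = np.atleast_2d(X)                   # (n, d) candidates from the optimizer
            with torch.no_grad():                  # BO treats the objective as a black box
                out = decoder(torch.from_numpy(X).float())
            return out.numpy().sum(axis=1, keepdims=True)  # (n, 1) as GPflowOpt expects
        return objective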

    opened by yifeng-li 0
Releases (v0.1.0)
  • v0.1.0(Sep 11, 2017)

    Initial version of the GPflowOpt framework, including some basic acquisition functions and support for standard Bayesian Optimization strategies.
