COVINS -- A Framework for Collaborative Visual-Inertial SLAM and Multi-Agent 3D Mapping


Version 1.0

COVINS is an accurate, scalable, and versatile visual-inertial collaborative SLAM system that enables a group of agents to simultaneously co-localize and jointly map an environment.

COVINS provides a server back-end for collaborative SLAM, running on a local machine or a remote cloud instance, that generates collaborative estimates from map data contributed by different agents running Visual-Inertial Odometry (VIO) and sharing their maps with the back-end. COVINS also provides a generic communication module to interface with the keyframe-based VIO of your choice. Here, we provide an example of COVINS interfaced with the VIO front-end of ORB-SLAM3, along with guidance and examples showing how to run COVINS on the EuRoC dataset.

Index

  1. Related Publications
  2. License
  3. Basic Setup
  4. Running COVINS
  5. Docker Implementation
  6. Extended Functionalities
  7. Limitations and Known Issues

1 Related Publications

[COVINS] Patrik Schmuck, Thomas Ziegler, Marco Karrer, Jonathan Perraudin and Margarita Chli. COVINS: Visual-Inertial SLAM for Centralized Collaboration. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021. PDF

[Redundancy Detection] Patrik Schmuck and Margarita Chli. On the Redundancy Detection in Keyframe-based SLAM. IEEE International Conference on 3D Vision (3DV), 2019. PDF.

[System Architecture] Patrik Schmuck and Margarita Chli. CCM-SLAM: Robust and Efficient Centralized Collaborative Monocular Simultaneous Localization and Mapping for Robotic Teams. Journal of Field Robotics (JFR), 2019. PDF

[Collaborative VI-SLAM] Patrik Schmuck, Marco Karrer and Margarita Chli. CVI-SLAM - Collaborative Visual-Inertial SLAM. IEEE Robotics and Automation Letters (RA-L), 2018. PDF


2 License

COVINS is released under a GPLv3 license. For a list of code/library dependencies (and associated licenses), please see thirdparty_code.md.

For license-related questions, please contact the authors: collaborative (dot) slam (at) gmail (dot) com.

If you use COVINS in an academic work, please cite:

@article{schmuck2021covins,
  title={COVINS: Visual-Inertial SLAM for Centralized Collaboration},
  author={Schmuck, Patrik and Ziegler, Thomas and Karrer, Marco and Perraudin, Jonathan and Chli, Margarita},
  journal={arXiv preprint arXiv:2108.05756},
  year={2021}
}

3 Basic Setup

This section explains how you can build the COVINS server back-end, as well as the provided version of the ORB-SLAM3 front-end able to communicate with the back-end. COVINS was developed under Ubuntu 18.04, and we provide installation instructions for 18.04 as well as 20.04. Note that we also provide a Docker implementation for simplified deployment of COVINS.

Environment Setup

Dependencies

  • sudo apt-get update
  • Doxygen: sudo apt-get install doxygen
  • SuiteSparse: sudo apt-get install libsuitesparse-dev
  • YAML: sudo apt-get install libyaml-cpp-dev
  • VTK: sudo apt-get install libvtk6-dev
  • catkin_tools (from the catkin_tools manual)
    • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu `lsb_release -sc` main" > /etc/apt/sources.list.d/ros-latest.list'
    • wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
    • sudo apt-get update
    • sudo apt-get install python3-catkin-tools
  • ws_tools: sudo apt-get install python3-wstool
  • OMP: sudo apt-get install libomp-dev
  • Glew: sudo apt install libglew-dev
  • ROS

Set up your workspace

This will create a workspace for COVINS as ~/ws/covins_ws. All further commands will use this path structure - if you decide to change the workspace path, you will need to adjust the commands accordingly.

  • cd ~
  • mkdir -p ws/covins_ws/src
  • cd ~/ws/covins_ws
  • catkin init
  • ROS Setup
    • U18/Melodic: catkin config --extend /opt/ros/melodic/
    • U20/Noetic: catkin config --extend /opt/ros/noetic/
  • catkin config --merge-devel
  • catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

COVINS Installation

We provide a script (covins/install_file.sh) that will perform a full installation of COVINS, including back-end, front-end, and third-party packages, if the environment is set up correctly. If the installation fails, we strongly recommend executing the steps in the build script manually one by one. The script might not perform a correct installation under certain circumstances if executed multiple times.

  • cd ~/ws/covins_ws/src
  • git clone https://github.com/VIS4ROB-lab/covins.git
  • cd ~/ws/covins_ws
  • chmod +x src/covins/install_file.sh
  • ./src/covins/install_file.sh 8
    • The argument 8 is optional, and specifies the number of jobs the build process should use.

Generally, when the build process of COVINS or ORB-SLAM3 fails, make sure you have correctly sourced the workspace, and that the libraries in the third-party folders, such as DBoW2 and g2o are built correctly.

A remark on fix_eigen_deps.sh: compiling code with dependencies against multiple Eigen versions is usually fatal and must be avoided. Therefore, we specify and download the Eigen version explicitly through the eigen_catkin package, and make sure all Eigen dependencies point to this package.
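To verify which Eigen version a header tree actually provides, the version macros can be read directly from Eigen's Macros.h. A minimal sketch (the header path in the example is an assumption and may differ on your system):

```shell
# Hedged helper: print the Eigen version provided by a given Macros.h, to check
# that everything resolves to the eigen_catkin-pinned Eigen.
eigen_version() {
  local macros_h="$1"   # path to Eigen/src/Core/util/Macros.h
  awk '$1 == "#define" && $2 ~ /^EIGEN_(WORLD|MAJOR|MINOR)_VERSION$/ { v[++n] = $3 }
       END { print v[1] "." v[2] "." v[3] }' "$macros_h"
}
# e.g.: eigen_version ~/ws/covins_ws/devel/include/eigen3/Eigen/src/Core/util/Macros.h
```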

Installing ROS Support for the ORB-SLAM3 Front-End

If you want to use rosbag files to pass sensor data to COVINS, you need to explicitly build the ORB-SLAM3 front-end with ROS support.

  • Install vision_opencv:
    • cd ~/ws/covins_ws/src
    • Clone: git clone https://github.com/ros-perception/vision_opencv.git
    • Check out the correct branch (run inside the cloned vision_opencv directory)
      • U18/Melodic: git checkout melodic
      • U20/Noetic: git checkout noetic
    • Open ~/ws/covins_ws/src/vision_opencv/cv_bridge/CMakeLists.txt
    • Add the opencv3_catkin dependency: change the line find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs) to find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs opencv3_catkin)
    • If you are running Ubuntu 20 (or generally have OpenCV 4 installed): remove the lines that search for an OpenCV 4 version in the CMakeLists.txt
    • source ~/ws/covins_ws/devel/setup.bash
    • catkin build cv_bridge
    • [Optional] Check correct linkage:
      • cd ~/ws/covins_ws/devel/lib
      • ldd libcv_bridge.so | grep opencv_core
      • This should only list libopencv_core.so.3.4 as a dependency
  • catkin build ORB_SLAM3
  • [Optional] Check correct linkage:
    • cd ~/ws/covins_ws/src/covins/orb_slam3/Examples/ROS/ORB_SLAM3
      • ldd Mono_Inertial | grep opencv_core
      • This should mention libopencv_core.so.3.4 as the only libopencv_core dependency

4 Running COVINS

This section explains how to run COVINS on the EuRoC dataset. If you want to use a different dataset, do not forget to use the correct parameter file instead of covins/orb_slam3/Examples/Monocular-Inertial/EuRoC.yaml.

Setting up the environment

  • In ~/ws/covins_ws/src/covins/covins_comm/config/config_comm.yaml: adjust the value of sys.server_ip to the IP of the machine where the COVINS back-end is running
  • In each of the provided scripts that run the ORB-SLAM3 front-end (e.g., euroc_examples_mh1.sh, in orb_slam3/covins_examples/), adjust pathDatasetEuroc to the path where the dataset has been uncompressed. The default expected path is /MH_01_easy/mav0/... (for euroc_examples_mh1.sh, in this case)
  • In ~/ws/covins_ws/src/covins/covins_backend/config/config_backend.yaml: adjust the path of sys.map_path0 to the directory where you would like to load maps from.
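The server-IP adjustment above can also be scripted. A minimal sketch, assuming the YAML file uses an indented server_ip key under sys; it is demonstrated on a scratch copy in the test, so point it at the real config_comm.yaml to apply it (the IP address is an example value):

```shell
# Hedged sketch: set sys.server_ip in config_comm.yaml without opening an editor.
# The key name follows the section above; the IP address is an example value.
set_server_ip() {
  local cfg="$1" ip="$2"
  sed -i "s|^\([[:space:]]*server_ip:\).*|\1 \"$ip\"|" "$cfg"
}
# e.g.: set_server_ip ~/ws/covins_ws/src/covins/covins_comm/config/config_comm.yaml 192.168.1.10
```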

Running the COVINS Server Back-End

  • Source your workspace: source ~/ws/covins_ws/devel/setup.bash
  • In a terminal, start a roscore: roscore
  • Start the COVINS backend by executing rosrun covins_backend covins_backend_node

Running the ORB-SLAM3 Front-End

Example scripts are provided in orb_slam3/covins_examples/. Don't forget to correctly set the dataset path in every script you want to use (see above: Setting up the environment). You can also check the original ORB-SLAM3 Repo for help on how to use the ORB-SLAM3 front-end.

  • Download the EuRoC dataset (ASL dataset format)
  • Source your workspace: source ~/ws/covins_ws/devel/setup.bash
  • Execute one of the example scripts provided in the orb_slam3/covins_examples/ folder, such as euroc_examples_mh123_vigba.sh
    • euroc_examples_mhX.sh runs the front-end with a single sequence from EuRoC MH1-5.
    • euroc_examples_mh123_vigba.sh runs a 3-agent collaborative SLAM session (sequential) followed by Bundle Adjustment.
    • euroc_examples_mh12345_vigba.sh runs a 5-agent collaborative SLAM session (sequential) followed by Bundle Adjustment.
    • Multiple front-ends can run in parallel. The front-ends can run on the same machine, or on different machines connected through a wireless network. However, when running multiple front-ends on the same machine, note that the performance of COVINS might degrade if the computational resources are overloaded by running too many agents simultaneously.
    • Common error sources:
      • If the front-end is stuck after showing Loading images for sequence 0...LOADED!, most likely your dataset path is wrong.
      • If the front-end is stuck after showing --> Connect to server or shows an error message Could no establish connection - exit, the server is not reachable - the IP might be incorrect, you might have forgotten to start the server, or there is a problem with your network (try pinging the server IP)

COVINS does not support resetting the map onboard the agent. Since map resets are most frequent at the beginning of a session or dataset (for example, due to faulty initialization), the COVINS communication module in the current implementation only starts sending data once a pre-specified number of keyframes has been created by the front-end. This number is specified by comm.start_sending_after_kf in covins/covins_comm/config/config_comm.yaml, and is currently set to 50. Also check Limitations for more details.

Visualization

COVINS provides a config file for visualization with RVIZ (covins.rviz in covins_backend/config/)

  • Run tf.launch in covins_backend/launch/ to set up the coordinate frames for visualization: roslaunch ~/ws/covins_ws/src/covins/covins_backend/launch/tf.launch
  • Launch RVIZ: rviz -d ~/ws/covins_ws/src/covins/covins_backend/config/covins.rviz
    • Covisibility edges between keyframes from different agents are shown in red, while edges between keyframes from the same agent are colored gray (the latter are not shown by default, but can be activated in RVIZ).
    • In case keyframes are visualized, removed keyframes are displayed in red (keyframes are not shown by default, but can be activated in RVIZ).
    • The section VISUALIZATION in config_backend.yaml provides several options to modify the visualization.

User Interaction

COVINS provides several options to interact with the map held by the back-end. This is implemented through ROS services.

  • Make sure your workspace is sourced: source ~/ws/covins_ws/devel/setup.bash
  • Map save: rosservice call /covins_savemap <AGENT_ID> - this saves the map associated with the agent specified by AGENT_ID.
    • The map will be saved to the folder ..../covins_backend/output/map_data. Make sure the folder is empty before you save a map (COVINS performs a brief check: if a folder named keyframes/ or mappoints/ exists in the target directory, it will show an error and abort the map save process; any other files or folders will not cause an error).
  • Map load: rosservice call /covins_loadmap 0 - loads a map stored on disk, from the folder specified by sys.map_path0 in config_backend.yaml.
    • Note: map load needs to be performed before registering any agent.
    • 0 specifies the operation mode of the load functionality. 0 means "standard" map loading, while 1 and 2 will perform place recognition (1) and place recognition and PGO (2). Note that both modes with place recognition are experimental, only "standard" map load is tested and supported for the open-source version of COVINS.
  • Bundle Adjustment: rosservice call /covins_gba <AGENT_ID> - performs visual-inertial bundle adjustment on the map associated with the agent specified by AGENT_ID. Modes: 0: BA without outlier rejection, 1: BA with outlier rejection.
  • Map Compression / Redundancy Removal: rosservice call /covins_prunemap <AGENT_ID> <MAX_KFs> - performs redundancy detection and removal on the map associated with the agent specified by AGENT_ID.
    • MAX_KFs specifies the target number of keyframes held by the compressed map. If MAX_KFs=0, the threshold value for measuring redundancy specified by the parameter kf_culling_th_red in config_backend.yaml will be used.
    • All experiments with COVINS were performed by specifying the target keyframe count; therefore, we recommend using this option.
    • The parameter kf_culling_max_time_dist in config_backend.yaml specifies the maximum time delta permitted between two consecutive keyframes, in order to limit the error of IMU integration. If no keyframe can be removed without violating this constraint, map compression stops even if the target number of keyframes has not been reached.
  • Note: After a merge of the maps associated with Agent 0 and Agent 1, the merged map is associated with both agents, i.e., rosservice call /covins_savemap 0 and rosservice call /covins_savemap 1 will save the same (shared) map.
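The "empty folder" precondition for map save can be checked with a small helper that mirrors the back-end's own check, as described above (a sketch; the directory path in the example is the output/map_data folder mentioned earlier):

```shell
# Hedged sketch of the pre-save check described above: the back-end aborts a map
# save if the target directory already contains keyframes/ or mappoints/.
map_dir_ready() {
  local dir="$1"
  if [ -d "$dir/keyframes" ] || [ -d "$dir/mappoints" ]; then
    echo "map_data not empty: move or remove keyframes/ and mappoints/ first" >&2
    return 1
  fi
}
# e.g.: map_dir_ready ~/ws/covins_ws/src/covins/covins_backend/output/map_data \
#       && rosservice call /covins_savemap 0
```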

Parameters

COVINS provides two parameter files to adjust the behavior of the system and algorithms.

  • ../covins_comm/config/config_comm.yaml contains all parameters related to communication and the agent front-end.
  • ../covins_backend/config/config_backend.yaml contains all parameters related to the server back-end.

The user should not be required to change any parameters to run COVINS, except paths and the server IP, as explained in this manual.

Output Files

  • COVINS automatically saves the trajectory estimates of each agent to a file in covins_backend/output. The file KF_<AGENT_ID>.csv stores the poses associated with the agent specified by AGENT_ID.

Running COVINS with ROS

  • Make sure your workspace is sourced: source ~/ws/covins_ws/devel/setup.bash
  • In ~/ws/covins_ws/src/covins/orb_slam3/launch_ros_euroc.launch: adjust the paths for voc and cam
  • cd to orb_slam3/ and run roslaunch launch_ros_euroc.launch
  • run the rosbag file, e.g. rosbag play MH_01_easy.bag
    • When using COVINS with ROS, we recommend skipping the initialization sequence performed at the beginning of each EuRoC MH trajectory. ORB-SLAM3 often performs a map reset after this sequence, which is not supported by COVINS and will therefore cause an error. For example, for MH1, this can be easily done by running rosbag play MH_01_easy.bag --start 45. (Start at: MH01: 45s; MH02: 35s; MH03-05: 15s)
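The recommended start offsets listed above can be wrapped in a small helper (a sketch; the offsets follow the note above, and the rosbag invocation in the comment is an example):

```shell
# Hedged helper: recommended `rosbag play --start` offset per EuRoC MH sequence,
# following the values listed above.
start_offset() {
  case "$1" in
    MH_01) echo 45 ;;
    MH_02) echo 35 ;;
    MH_03|MH_04|MH_05) echo 15 ;;
    *) echo 0 ;;
  esac
}
# e.g.: rosbag play MH_01_easy.bag --start "$(start_offset MH_01)"
```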

5 Docker Implementation

We also provide COVINS as a Docker implementation. A guide on how to install Docker can be found at https://docs.docker.com/engine/install/. To avoid needing sudo when running the commands below, you can add your user to the docker group:

sudo usermod -aG docker $USER (see https://docs.docker.com/engine/install/linux-postinstall/)

Building the docker image

Build the Docker image using the Makefile provided in the docker folder, specifying the number of jobs make and catkin build should use. This can take a while; if the build fails, try again with a reduced number of jobs.

  • make build NR_JOBS=14

Running the docker image

The docker image can be used to run different parts of COVINS (e.g. server, ORB-SLAM3 front-end, ...).

ROS core

To start the roscore, you can either use the host system's ROS installation (if ROS is installed), or start it using the docker image:

  • ./run.sh -c

COVINS Server Back-End

The COVINS server back-end needs a running roscore (see above). Furthermore, the server needs two configuration files, one for the communication server and one for the back-end. These two files need to be linked when running the docker image.

  • ./run.sh -s ../covins_comm/config/config_comm.yaml ../covins_backend/config/config_backend.yaml

ORB-SLAM3 Front-End

The ORB-SLAM3 front-end client needs the communication server config file, the script to execute, and the path to the dataset. The dataset has to be given separately since the file system of the docker container differs from the host system; the pathDatasetEuroc variable in the run script is therefore adapted automatically inside the docker container.

  • ./run.sh -o ../covins_comm/config/config_comm.yaml ../orb_slam3/covins_examples/euroc_examples_mh1

ORB-SLAM3 ROS Front-End

The ROS wrapper of the ORB-SLAM3 front-end can also be started in the docker container. It requires the server config file and the ROS launch file. A bag file can then for example be played on the host system.

  • ./run.sh -r ../covins_comm/config/config_comm.yaml ../orb_slam3/Examples/ROS/ORB_SLAM3/launch/launch_docker_ros_euroc.launch

Terminal

A terminal within the docker image can also be opened. This can for example be used to send rosservice commands.

  • ./run.sh -t

6 Extended Functionalities

Interfacing a Custom VIO System with COVINS

COVINS exports a generic communication interface that can be integrated into custom keyframe-based VIO systems in order to share map data with the server back-end and generate a collaborative estimate. The code for the communication interface is located in the covins_comm folder, which builds a library of the same name that facilitates communication between the VIO system onboard the agent and the COVINS server back-end.

To make it straightforward to see which steps need to be taken to interface a VIO front-end with COVINS, we have defined the preprocessor macro COVINS_MOD in covins/covins_comm/include/covins/covins_base/typedefs_base.hpp. This macro marks all modifications made to the original ORB-SLAM3 code in order to set up the communication with the server back-end.

In a nutshell, the communication interface provides a base communicator class, intended to be subclassed into a derived communicator class tailored to the VIO system. The communicator module runs in a separate thread, taking care of setting up a connection to the server and exchanging map data. For the derived class, the user only needs to define a function that passes data to the communicator module and fills the provided data containers, as well as the Run() function that is continuously executed by the thread allocated to the communicator module. Furthermore, the communicator module uses the predefined message types MsgKeyframe and MsgLandmark for transmitting data to the server; the user therefore needs to define functions that fill those messages from the custom data structures of the VIO system.

Map Re-Use Onboard the Agent

COVINS also provides the functionality to share data from the collaborative estimate on the server-side with the agents participating in the estimate. COVINS provides only the infrastructure to share this data; the method for map re-use needs to be implemented by the user.

By default, COVINS is configured to not send any data back to the agent. By setting comm.data_to_client to 1 in config_comm.yaml, this functionality can be activated. By default, the server then regularly sends information about one keyframe back to the agent. The agent will display a message that requests the user to define what to do with the received information.

  • In the function CollectDataForAgent() in covins_backend/src/covins_backend/, the data to send to the agent can be specified.
  • In the function ProcessKeyframeMessages() in orb_slam3/src/, the processing of the received keyframe can be specified.

7 Limitations and Known Issues

  • [MAP RESET] ORB-SLAM3 has the functionality to start a new map when tracking is lost, in order to improve robustness. This functionality is not supported by COVINS. The COVINS server back-end assumes that keyframes arriving from a specific agent are shared continuously and belong to the same map; if the agent's map is reset and a keyframe with a previously used ID arrives at the server again, the back-end will detect this inconsistency and throw an error. We have almost never experienced this behavior on the EuRoC sequences when using the ASL dataset format, and only rarely when using rosbag files.
    • Insufficient computational resources available to the front-end can cause more frequent map resets.
    • Map resets are more frequent at the beginning of a dataset, and occur less often once the VIO front-end is well initialized and has been tracking the agent's pose for some time. Therefore, the communication module only starts sending data to the server once a pre-specified number of keyframes has been created by the VIO front-end. This number is specified by comm.start_sending_after_kf in covins/covins_comm/config/config_comm.yaml, and is currently set to 50.
    • Particularly when running with rosbag files, setting the parameter orb.imu_stamp_max_diff: 2.0 in covins/covins_comm/config/config_comm.yaml, instead of the default (1.0), helped to significantly reduce map resets. We did not see any negative impact on the accuracy of the COVINS collaborative estimate from this change.
  • [Duplicate Files] The repository contains two copies of the ORB vocabulary, as well as two versions of the DBoW library. We chose this structure in order to keep the code of COVINS and ORB-SLAM3 as separate as possible.
  • [Mixed Notation] COVINS mainly follows the Google C++ Style Guide. However, some modules re-use code from other open-source software using Hungarian notation, such as CCM-SLAM and ORB-SLAM2, and this code has not yet been ported to the new notation convention (in particular, this applies to code related to the FeatureMatcher, KFDatabase, PlaceRecognition, and Se3Solver).
Owner: ETHZ V4RL - Vision for Robotics Lab, ETH Zurich