Source code for the paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022


ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

Hi, this is the source code of our paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs", accepted to the Findings of NAACL 2022.

News

  • 🎈 Released the camera-ready paper on arXiv. 2022.04.20
  • 🎈 We have released four trained models and the test scripts. 2022.04.10

Todos

  • 🎯 We are working on merging our training/preprocessing code into the amrlib repo.

Brief Introduction

TL;DR: SOTA AMR parsing with a single model using only 40k extra data. Ranks 1st on structure-related scores (SRL and Reentrancy).

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing. With carefully designed control experiments, we find that 1) semantic role labeling (SRL) and dependency parsing (DP) bring much more significant performance gains than unrelated tasks in the text-to-AMR transition; 2) to better fit AMR, data from auxiliary tasks should be properly "AMRized" into PseudoAMRs before training; 3) the intermediate-task training paradigm outperforms multitask learning when introducing auxiliary tasks to AMR parsing.

From an empirical perspective, we propose a principled method to choose, reform, and train auxiliary tasks to boost AMR parsing. Extensive experiments show that our method achieves new state-of-the-art performance on in-distribution, out-of-distribution, and low-resource benchmarks of AMR parsing.
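
For readers new to AMR, graphs are conventionally written in PENMAN notation. The standard textbook example below (illustrative, not taken from this repo) encodes "The boy wants to go"; note how the variable b is reused as the :ARG0 of both want-01 and go-02, i.e. a reentrancy, one of the structure-related phenomena mentioned in the TL;DR.

(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))

PseudoAMRs reuse this notation to serialize SRL and DP structures, so a seq2seq parser can be trained on them as an intermediate task before the final AMR stage.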

Requirements

Build environment for Spring

cd spring
conda create -n spring -y python=3.7
conda activate spring
pip install -r requirements.txt
pip install -e .
# we use torch==1.11.0 and an A40 GPU; lower torch versions are fine
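
A quick sanity check that the environment is usable (a minimal sketch, assuming the spring env is active):

python -c "import torch; print(torch.__version__)"  # e.g. 1.11.0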

Build environment for BLINK to do entity linking. Note that BLINK has some requirement conflicts with Spring, while the blinking script relies on both repos, so we build it on top of Spring.

conda create -n blink37 -y python=3.7 && conda activate blink37

cd spring
pip install -r requirements.txt
pip install -e .

cd ../BLINK
pip install -r requirements.txt
pip install -e .
bash download_blink_models.sh
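
If the download succeeded, the BLINK checkpoints should now be on disk; a quick check (assuming the models/ directory convention of the upstream BLINK repo):

ls models/  # should list the BLINK checkpoints fetched by the script above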

Preprocess and AMRization

coming soon ~

Training

(cleaning code and data in progress)

cd spring/bin
  • Train the ATP-DP task
python train.py --direction dp --config ../configs/config_dp.yaml
  • Train the ATP-SRL task
python train.py --direction dp --config ../configs/config_srl.yaml
# yes, the direction is also dp
  • Train the AMR task based on an intermediate ATP-SRL/DP model (a combined sketch follows this list)
python train.py --direction amr --checkpoint PATH_TO_SRL_DP_MODEL --config ../configs/config.yaml
  • Train the AMR, SRL, and DP tasks in a multitask manner
python train.py --direction multi --config ../configs/config_multitask.yaml
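
Putting the intermediate-task paradigm together, a typical ATP-DP → AMR run looks like the sketch below; PATH_TO_SRL_DP_MODEL stays a placeholder, since the checkpoint location depends on your training config.

cd spring/bin
# step 1: train the intermediate ATP-DP model
python train.py --direction dp --config ../configs/config_dp.yaml
# step 2: fine-tune on AMR, initialized from the checkpoint saved in step 1
python train.py --direction amr --checkpoint PATH_TO_SRL_DP_MODEL --config ../configs/config.yaml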

Inference

conda activate spring

cd script
bash intermediate_eval.sh MODEL_PATH
# generates the gold and the parsed AMR files; you should change the path of the AMR 2.0/3.0 dataset in the script first

conda activate blink37
# download the BLINK models first via ATP/BLINK/download_blink_models.sh in the BLINK repo
bash blink.sh PARSED_AMR BLINK_MODEL_DIR

cd ../amr-evaluation
bash evaluation.sh PARSED_AMR.blink GOLD_AMR_PATH
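
For convenience, the three steps above can be chained in one script; a sketch assuming the same placeholder paths (MODEL_PATH, PARSED_AMR, BLINK_MODEL_DIR, GOLD_AMR_PATH) as above:

eval "$(conda shell.bash hook)"           # lets `conda activate` work inside a script

conda activate spring
cd script
bash intermediate_eval.sh MODEL_PATH      # writes the gold and parsed AMR files

conda activate blink37
bash blink.sh PARSED_AMR BLINK_MODEL_DIR  # adds :wiki links to the parsed AMRs

cd ../amr-evaluation
bash evaluation.sh PARSED_AMR.blink GOLD_AMR_PATH  # fine-grained Smatch scores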

Models Release

You can refer to the Inference section and download the models below to reproduce the results in our paper. Each #scores block corresponds to one of the four released models.

#scores
Smatch       -> P: 0.858, R: 0.844, F: 0.851
Unlabeled    -> P: 0.890, R: 0.874, F: 0.882
No WSD       -> P: 0.863, R: 0.848, F: 0.855
Concepts     -> P: 0.914, R: 0.895, F: 0.904
Named Ent.   -> P: 0.928, R: 0.901, F: 0.914
Negations    -> P: 0.756, R: 0.758, F: 0.757
Wikification -> P: 0.849, R: 0.824, F: 0.836
Reentrancies -> P: 0.756, R: 0.744, F: 0.750
SRL          -> P: 0.840, R: 0.830, F: 0.835
#scores
Smatch       -> P: 0.859, R: 0.844, F: 0.852
Unlabeled    -> P: 0.891, R: 0.876, F: 0.883
No WSD       -> P: 0.863, R: 0.849, F: 0.856
Concepts     -> P: 0.917, R: 0.898, F: 0.907
Named Ent.   -> P: 0.942, R: 0.921, F: 0.931
Negations    -> P: 0.742, R: 0.755, F: 0.749
Wikification -> P: 0.851, R: 0.833, F: 0.842
Reentrancies -> P: 0.753, R: 0.741, F: 0.747
SRL          -> P: 0.837, R: 0.830, F: 0.833
#scores
Smatch       -> P: 0.859, R: 0.847, F: 0.853
Unlabeled    -> P: 0.891, R: 0.877, F: 0.884
No WSD       -> P: 0.863, R: 0.851, F: 0.857
Concepts     -> P: 0.917, R: 0.899, F: 0.908
Named Ent.   -> P: 0.938, R: 0.917, F: 0.927
Negations    -> P: 0.740, R: 0.755, F: 0.747
Wikification -> P: 0.849, R: 0.830, F: 0.840
Reentrancies -> P: 0.755, R: 0.748, F: 0.751
SRL          -> P: 0.837, R: 0.836, F: 0.836
#scores
Smatch       -> P: 0.844, R: 0.836, F: 0.840
Unlabeled    -> P: 0.875, R: 0.866, F: 0.871
No WSD       -> P: 0.849, R: 0.840, F: 0.845
Concepts     -> P: 0.908, R: 0.892, F: 0.900
Named Ent.   -> P: 0.900, R: 0.879, F: 0.889
Negations    -> P: 0.734, R: 0.729, F: 0.731
Wikification -> P: 0.816, R: 0.798, F: 0.807
Reentrancies -> P: 0.729, R: 0.749, F: 0.739
SRL          -> P: 0.822, R: 0.830, F: 0.826

Acknowledgements

We thank everyone who shared the open-source scripts used in this project, including the authors of SPRING, amrlib, smatch, amr-evaluation, BLINK, and all other related repos.

Citation

If you find our work helpful, please kindly cite:

@misc{chen2022atp,
  doi       = {10.48550/ARXIV.2204.08875},
  url       = {https://arxiv.org/abs/2204.08875},
  author    = {Chen, Liang and Wang, Peiyi and Xu, Runxin and Liu, Tianyu and Sui, Zhifang and Chang, Baobao},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}