Emergent Translation in Multi-Agent Communication

PyTorch implementation of the models described in the paper Emergent Translation in Multi-Agent Communication.

We provide code for training and decoding both word- and sentence-level models and baselines, as well as the preprocessed datasets.

Dependencies

Python

  • Python 2.7
  • PyTorch 0.2
  • Numpy

GPU

  • CUDA (we recommend the latest version; version 8.0 was used in all our experiments)
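
To verify the setup matches these requirements, a quick one-line check can help (this assumes PyTorch and Numpy are already importable; expected output is Python 2.7.x, torch 0.2.x, and True if CUDA is visible):

$ python -c "import sys, torch; print(sys.version); print(torch.__version__); print(torch.cuda.is_available())"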

Related code

  • Moses and Subword-NMT were used for preprocessing (see the attribution section below).

Downloading Datasets

The original corpora (Bergsma500, Multi30k, MS COCO) can be downloaded from their respective sources. For the preprocessed corpora, see below.

Preprocessed datasets:

  • Bergsma500 data
  • Multi30k data
  • MS COCO data

Before you run the code

  1. Download the datasets and place them in /data/word (Bergsma500) and /data/sentence (Multi30k and MS COCO).
  2. Set the correct paths in scr_path() in /src/word/util.py, and in scr_path(), multi30k_reorg_path() and coco_path() in /src/sentence/util.py (a sketch of the edited helpers follows this list).
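
For reference, a minimal sketch of what the edited helpers in /src/sentence/util.py might look like. The directory values below are illustrative, not the shipped defaults; point them at whatever locations you chose in step 1:

def scr_path():
    return "/data/sentence"

def multi30k_reorg_path():
    return "/data/sentence/multi30k"

def coco_path():
    return "/data/sentence/coco"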

Word-level Models

Running nearest neighbour baselines

$ python word/bergsma_bli.py 

Running our models

$ python word/train_word_joint.py --l1 <L1> --l2 <L2>

where <L1> and <L2> are any two of {en, de, es, fr, it, nl}.
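
For example, to train an English-German word-level model:

$ python word/train_word_joint.py --l1 en --l2 de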

Sentence-level Models

Baseline 1: Nearest neighbour

$ python sentence/baseline_nn.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>

Baseline 2: NMT with neighbouring sentence pairs

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --nn_baseline 

Baseline 3: Nakayama and Nishida, 2017

$ python sentence/train_naka_encdec.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --train_enc_how <ENC_HOW> --train_dec_how <DEC_HOW>

where <ENC_HOW> is either two or three, and <DEC_HOW> is img, des, or both.
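
For instance, to run this baseline on Multi30k Task 1 (en and de are assumed as the language pair here):

$ python sentence/train_naka_encdec.py --dataset multi30k --task 1 --src en --trg de --train_enc_how two --train_dec_how both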

Our models:

$ python sentence/train_seq_joint.py --dataset <DATASET> --task <TASK>

Aligned NMT:

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> 

where <DATASET> is multi30k or coco, and <TASK> is either 1 or 2 (the task flag applies only to Multi30k).
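
For example, to train our model on COCO and an aligned NMT model on Multi30k Task 1 (again assuming en and de as the language pair):

$ python sentence/train_seq_joint.py --dataset coco
$ python sentence/nmt.py --dataset multi30k --task 1 --src en --trg de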

Dataset & Related Code Attribution

  • Moses is licensed under the LGPL, and Subword-NMT is licensed under the MIT License.
  • MS COCO and Multi30k are licensed under Creative Commons.

Citation

If you find the resources in this repository useful, please consider citing:

@inproceedings{Lee:18,
  author    = {Jason Lee and Kyunghyun Cho and Jason Weston and Douwe Kiela},
  title     = {Emergent Translation in Multi-Agent Communication},
  year      = {2018},
  booktitle = {Proceedings of the International Conference on Learning Representations},
}