Official repository for "Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems"


Action-Based Conversations Dataset (ABCD)

This repository contains the code and data for ABCD (Chen et al., 2021).

Introduction

Whereas existing goal-oriented dialogue datasets focus mainly on identifying user intents, customer interactions in reality often involve agents following multi-step procedures derived from explicitly defined guidelines. For example, in an online shopping scenario, a customer might request a refund for a past purchase. However, before honoring such a request, the agent should check the company policies to see whether a refund is warranted. It is very likely that the agent will need to verify the customer's identity and check that the purchase was made within a reasonable timeframe.

To study dialogue systems in more realistic settings, we introduce the Action-Based Conversations Dataset (ABCD), where an agent's actions must be balanced between the desires expressed by the customer and the constraints set by company policies. The dataset contains over 10K human-to-human dialogues with 55 distinct user intents requiring unique sequences of actions to achieve task success. We also design a new technique called Expert Live Chat for collecting data when there are two unequal users engaging in real-time conversation. Please see the paper for more details.

Paper link: https://arxiv.org/abs/2104.00783

Blog link: https://www.asapp.com/blog/action-based-conversations-dataset/

Agent Dashboard

Customer Site

Usage

All code is run by executing the corresponding command within the shell script run.sh, which kicks off data preparation and training in main.py. To get started, first unzip data/abcd_v1.1.json.gz using gunzip (or a similar tool). Then comment or uncomment the appropriate lines in the shell script to get the desired behavior. Finally, enter sh run.sh on the command line. Use the --help option of argparse for flag details, or read through utils/arguments.py.
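A minimal sketch of those steps, assuming the commands are run from the repository root:

```sh
# Decompress the dataset so that data/abcd_v1.1.json exists
gunzip data/abcd_v1.1.json.gz

# After commenting/uncommenting the desired lines in run.sh,
# kick off data preparation and training
sh run.sh
```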

Preparation

Raw data will be loaded from the data folder and prepared into features that are placed into Datasets. If this has already occurred, the system will instead read the prepared features from the cache.

If running CDS for the first time, uncomment the code within the run script that executes embed.py, which prepares the utterances for ranking.

Training

To specify the task for training, use the --task option with either ast or cds, for Action State Tracking and Cascading Dialogue Success respectively. Options for the model type are bert, albert, and roberta. Loading scripts can be tuned to offer various other behaviors.
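For illustration, the kind of command run.sh would execute for AST training might look like the line below. This is a hedged sketch rather than a verbatim line from the repo: the exact invocation, including the flag that selects bert, albert, or roberta, is defined in run.sh and utils/arguments.py.

```sh
# Hypothetical example: train the Action State Tracking task.
# The model-type flag and other required arguments are defined in utils/arguments.py.
python main.py --task ast
```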

Evaluation

Activate evaluation using the --do-eval flag. By default, run.sh performs cascading evaluation. To include ablations, add --use-intent and/or --use-kb as appropriate.
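As a hedged example, a CDS evaluation run with both ablations enabled might look like the following; again, the canonical commands live in run.sh.

```sh
# Hypothetical example: evaluate CDS with both ablations enabled.
python main.py --task cds --do-eval --use-intent --use-kb
```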

Data

The preprocessed data is found in abcd_v1.1.json, which is a dictionary with the keys train, dev, and test. Each split is a list of conversations, where each conversation is a dict containing:

  • convo_id: a unique conversation identifier
  • scenario: the ground truth scenario used to generate the prompt
  • original: the raw conversation of speaker and utterances as a list of tuples
  • delexed: the delexicalized conversation used for training and evaluation; see below for details

We provide the delexed version so new models performing the same tasks have comparable pre-processing. The original data is also provided in case you want to use the utterances for some other purpose.

For a quick preview, a small sample of chats is provided to help get started. Concretely, abcd_sample.json is a list containing three random conversations from the training set.
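A quick way to inspect the data (a minimal sketch, assuming the gzipped file has already been extracted as described in Usage):

```python
import json

# Load the full dataset; data/abcd_sample.json is a lighter-weight alternative for a quick look.
with open("data/abcd_v1.1.json") as f:
    abcd = json.load(f)

# Top level: a dict with "train", "dev", and "test" splits, each a list of conversations.
for split, convos in abcd.items():
    print(split, len(convos))

# Each conversation carries the four keys described above.
first_convo = abcd["train"][0]
print(sorted(first_convo.keys()))  # convo_id, delexed, original, scenario
```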

Scenario

Each scenario dict contains details about the customer setup, along with the underlying flow and subflow information an agent should use to address the customer's concern; a small inspection sketch follows the list. The components are:

  • Personal: personal data related to the (fictional) customer including account_id, customer name, membership level, phone number, etc.
  • Order: order info related to what the customer purchased or would like to purchase. Includes address, num_products, order_id, product names, and image info
  • Product: product details if applicable, includes brand name, product type and dollar amount
  • Flow and Subflow: these represent the ground truth user intent. They are used to generate the prompt, but are not shown directly to the customer. The job of the agent is to infer this (latent) intent and then match it against the Agent Guidelines to resolve the customer issue.
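To inspect a scenario, a sketch along these lines works; the exact key names inside the scenario dict are not pinned down in the prose above, so the safest first step is simply to print it.

```python
import json

with open("data/abcd_v1.1.json") as f:
    abcd = json.load(f)

# Print the raw scenario dict, since the exact field names and casing
# (e.g. "personal" vs "Personal") may differ from the descriptions above.
scenario = abcd["train"][0]["scenario"]
print(json.dumps(scenario, indent=2))
```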

Guidelines

The agent guidelines are offered in their original form within Agent Guidelines for ABCD. They have also been transformed into a formatted document, data/guidelines.json, for parsing by a model. The intents with their button actions are found within kb.json. Lastly, the breakdown of all flows, subflows, and actions is found within ontology.json.
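The layout of these support files is easiest to discover by loading them directly. A small sketch, assuming kb.json and ontology.json sit in the data/ folder alongside guidelines.json:

```python
import json

# Paths are assumed to live in data/ alongside guidelines.json.
for path in ["data/guidelines.json", "data/kb.json", "data/ontology.json"]:
    with open(path) as f:
        contents = json.load(f)
    # Top-level keys give a sense of how flows, subflows, and actions are organized.
    summary = list(contents) if isinstance(contents, dict) else f"list of {len(contents)} items"
    print(path, "->", summary)
```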

Conversation

Each conversation is made up of a list of turns. Each turn is a dict with five parts:

  • Speaker: either "agent", "customer" or "action"
  • Text: the utterance of the agent/customer or the system generated response of the action
  • Turn_Count: integer representing the turn number, starting from 1
  • Targets: list of five items representing the subtask labels
    • Intent Classification (text) - 55 subflow options
    • Nextstep Selection (text) - take_action, retrieve_utterance or end_conversation; 3 options
    • Action Prediction (text) - the button clicked by the agent; 30 options
    • Value Filling (list) - the slot value(s) associated with the action above; 125 options
    • Utterance Ranking (int) - target position within list of candidates; 100 options
  • Candidates: list of utterance ids representing the pool of 100 candidates to choose from when ranking. The surface form text can be found in utterances.json where the utt_id is the index. Only applicable when the current turn is a "retrieve_utterance" step.

In contrast to the original conversation, the delexicalized version will replace certain segments of text with special tokens. For example, an utterance might say "My Account ID is 9KFY4AOHGQ". This will be changed into "my account id is <account_id>".
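Putting the fields together, a minimal sketch for walking the delexed turns of a single conversation is shown below; the lowercase key names are assumptions based on the descriptions above rather than confirmed field names.

```python
import json

with open("data/abcd_v1.1.json") as f:
    abcd = json.load(f)

convo = abcd["train"][0]
for turn in convo["delexed"]:
    # Lowercase keys ("speaker", "text", "targets") are assumptions;
    # call print(turn.keys()) to confirm the actual casing.
    intent, nextstep, action, values, utt_rank = turn["targets"]
    print(f'[{turn["speaker"]}] {turn["text"]}')
    print("  targets:", intent, nextstep, action, values, utt_rank)
```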

Contact

Please email [email protected] for questions or feedback.

Citation

@inproceedings{chen2021abcd,
    title = "Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems",
    author = "Chen, Derek and
        Chen, Howard and
        Yang, Yi and
        Lin, Alex and
        Yu, Zhou",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for 
    	Computational Linguistics: Human Language Technologies, {NAACL-HLT} 2021",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.239",
    pages = "3002--3017"
}