A Conditional Splitting Framework for Efficient Constituency Parsing
We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions. Our parsing model estimates the conditional probability distribution of possible splitting points in a given text span and supports efficient top-down decoding, which is linear in the number of nodes. The conditional splitting formulation, together with efficient beam-search inference, facilitates structural consistency without relying on expensive structured inference. Crucially, for discourse analysis we show that in our formulation, discourse segmentation can be framed as a special case of parsing, which allows us to perform discourse parsing without requiring segmentation as a prerequisite. Experiments show that our model achieves strong results on standard syntactic parsing tasks both with and without pre-trained representations and rivals state-of-the-art (SoTA) methods that are more computationally expensive than ours. In discourse parsing, our method outperforms the SoTA by a good margin.
A number of formalisms have been introduced to analyze natural language at different linguistic levels. This includes syntactic structures in the form of phrasal and dependency trees, semantic structures in the form of meaning representations In recent years, neural end-to-end parsing methods have outperformed traditional methods that use grammar, lexicon and hand-crafted features. These methods can be broadly categorized based on whether they employ a greedy transition-based, a globally optimized chart parsing or a greedy topdown algorithm. Transition-based parsers Chart based methods, on the other hand, train neural scoring functions to model the tree structure globally Discourse parsing in RST requires an additional step -discourse segmentation which involves breaking the text into contiguous clause-like units called Elementary Discourse Units or EDUs (Figure In this paper, we propose a generic top-down neural framework for constituency parsing that we validate on both syntactic and sentence-level discourse parsing. Our main contributions are: • We cast the constituency parsing task into a series of conditional splitting decisions and use a seq2seq architecture to model the splitting decision at each decoding step. Our parsing model, which is an instance of a Pointer Network • The conditional probabilities of the splitting decisions are optimized using a cross entropy loss and structural consistency is maintained through a global pointing mechanism. The training process can be fully parallelized without requiring structured inference as in • Our model enables efficient top-down decoding with O(n) running time like transition-based parsers, while also supporting a customized beam search to get the best tree by searching through a reasonable search space of high scoring trees. The beam-search inference along with the structural consistency from the modeling makes our approach competitive with existing structured chart methods for syntactic • For discourse analysis, we demonstrate that our method can effectively find the segments (EDUs) by simply performing one additional step in the top-down parsing process. In other words, our method can parse a text into the discourse tree without needing discourse segmentation as a prerequisite; instead, it produces the segments as a by-product. To the best of our knowledge, this is the first model that can perform segmentation and parsing in a single embedded framework. In the experiments with English Penn Treebank, our model without pre-trained representations achieves 93.8 F1, outperforming all existing methods with similar time complexity. With pre-training, our model pushes the F1 score to 95.7, which is on par with the SoTA while supporting faster decoding with a speed of over 1,100 sentences per second (fastest so far). Our model also performs competitively with SoTA methods on the multilingual parsing tasks in the SPMRL 2013/2014 shared tasks. In discourse parsing, our method establishes a new SoTA in end-to-end sentence-level parsing performance on the RST Discourse Treebank with an F1 score of 78.82. We make our code available at where l t is the label of the text span (i t , j t ) encompassing tokens from index i t to index j t . Previous approaches to syntactic parsing In contrary, we formulate constituency parsing as the problem of finding the splitting points in a recursive, top-down manner. 
For each parent node in a tree that spans over (i, j), our parsing model is trained to point to the boundary between the tokens at positions k and k + 1 to split the parent span into two child spans (i, k) and (k + 1, j). This is done through the Pointing mechanism. The correspondence between token- and boundary-based representations of a tree is straightforward: after including the start (<sos>) and end (<eos>) tokens, the token-based span (i, j) is equivalent to the boundary-based span (i - 1, j). Proposition 1 A binary syntactic tree T of a sentence containing n tokens can be transformed into a set of splitting decisions C(T) = {(i, j) → k : i < k < j} such that the parent span (i, j) is split into two child spans (i, k) and (k, j). An example of the splitting representation of a tree is shown in Figure
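To make this representation concrete, the following is a minimal sketch (plain Python; the nested-tuple tree encoding and function names are illustrative assumptions, not the paper's implementation) that derives the splitting decisions of Proposition 1 from a binarized tree over boundary spans:

```python
# Minimal sketch: derive the splitting decisions C(T) of Proposition 1 from a
# binarized tree. Trees are encoded as nested tuples over boundary indices
# (a hypothetical encoding chosen for illustration): a leaf is (i, j) with
# j == i + 1, and an internal node is (i, j, left_subtree, right_subtree).

def splitting_decisions(node):
    """Return a list of ((i, j), k) decisions: span (i, j) is split at boundary k."""
    if len(node) == 2:          # leaf span covering a single token
        return []
    i, j, left, right = node
    k = left[1]                 # the boundary shared by the two children
    assert left[0] == i and right[0] == k and right[1] == j and i < k < j
    return [((i, j), k)] + splitting_decisions(left) + splitting_decisions(right)


if __name__ == "__main__":
    # Toy 4-token sentence with boundaries 0..4:
    # (0,4) splits at 1; (1,4) splits at 3; (1,3) splits at 2.
    tree = (0, 4, (0, 1), (1, 4, (1, 3, (1, 2), (2, 3)), (3, 4)))
    for (i, j), k in splitting_decisions(tree):
        print(f"span ({i},{j}) -> split at {k}")
```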
Note that in syntactic parsing, the split position must be within the span but not at its edge, that is, k must satisfy i < k < j for each boundary span (i, j). Otherwise, it will not produce valid sub-trees. In this case, we keep splitting until each span contains a single leaf token. However, for discourse trees, each leaf is an EDU -a clause-like unit that can contain one or multiple tokens. Unlike previous studies which assume discourse segmentation as a pre-processing step, we propose a unified formulation that treats segmentation as one additional step in the top-down parsing process. To accommodate this, we relax Proposition 1 as: Proposition 2 A binary discourse tree DT of a text containing n tokens can be transformed into a set of splitting decisions C(DT ) = {(i, j) ) k : i < k ≤ j} such that the parent span (i, j) gets split into two child spans (i, k) and (k, j) for k < j or a terminal span or EDU for k = j (end of splitting the span further). We illustrate it with the DT example in Figure Let C(T ) and L(T ) respectively denote the structure (in split representation) and labels of a tree T (syntactic or discourse) for a given text x. We can express the probability of the tree as: This factorization allows us to first infer the tree structure from the input text, and then find the corresponding labels. As discussed in the previous section, we consider the structure prediction as a sequence of splitting decisions to generate the tree in a top-down manner. Specifically, at each decoding step t, the output y t represents the splitting decision (i t , j t ) ) k t and y <t represents the previous splitting decisions. Thus, we can express the probability of the tree structure as follows: This can effectively be modeled within a Seq2Seq pointing framework as shown in Figure We now describe the components of our parsing model: the sentence encoder, the span representation, the pointing model and the labeling model. Sentence Encoder Given an input sequence of n tokens x = (x 1 , . . . , x n ), we first add <sos> and <eos> markers to the sequence. After that, each token t in the sequence is mapped into its dense vector representation e t as where e char t , e word t are respectively the character and word embeddings of token t. Similar to To represent each boundary between positions k and k + 1, we use the fencepost representation where f k and b k+1 are the forward and backward LSTM hidden vectors at positions k and k + 1, re- spectively. To represent the span (i, j), we compute a linear combination of the two endpoints This span representation will be used as input to the decoder. Figure The Decoder Our model uses a unidirectional LSTM as the decoder. At each decoding step t, the decoder takes as input the corresponding span (i, j) (specifically, h i,j ) and its previous state d t-1 to generate the current state d t and then apply a biaffine function where each MLP operation includes a linear transformation with LeakyReLU activation to transform d and h into equal-sized vectors, and W dh ∈ IR d×d and w h ∈ IR d are respectively the weight matrix and weight vector for the biaffine function. The biaffine scores are then passed through a softmax layer to acquire the pointing distribution a t ∈ [0, 1] n for the splitting decision. When decoding the tree during inference, at each step we only examine the 'valid' splitting points between i and j -for syntactic parsing, it is i < k < j and for discourse parsing, it is i < k ≤ j. 
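A minimal sketch of the fencepost boundary representation and the biaffine pointing score described above (PyTorch; dimensions, module names, and the simple averaging of span endpoints are illustrative assumptions rather than the authors' exact configuration):

```python
# Sketch of the boundary/span representations and the biaffine pointing score
# MLP_d(d_t)^T W MLP_h(h_k) + w^T MLP_h(h_k). Names and sizes are assumptions.
import torch
import torch.nn as nn

hidden, d = 400, 500                         # LSTM hidden size, biaffine size

mlp_dec = nn.Sequential(nn.Linear(hidden, d), nn.LeakyReLU())
mlp_enc = nn.Sequential(nn.Linear(2 * hidden, d), nn.LeakyReLU())
W = nn.Parameter(torch.randn(d, d) * 0.01)   # bilinear term
w = nn.Parameter(torch.randn(d) * 0.01)      # linear term

def fencepost(fwd, bwd):
    """Boundary k is represented by [f_k ; b_{k+1}] (forward/backward LSTM states)."""
    return torch.cat([fwd[:-1], bwd[1:]], dim=-1)       # (n_boundaries, 2*hidden)

def span_repr(boundaries, i, j):
    """Span (i, j) as a simple linear combination of its two endpoint boundaries;
    this would be the decoder input h_{i,j}."""
    return 0.5 * (boundaries[i] + boundaries[j])

def pointing_scores(d_t, boundaries, i, j):
    """Biaffine scores of decoder state d_t over the valid split points i < k < j."""
    dd = mlp_dec(d_t)                                   # (d,)
    hh = mlp_enc(boundaries[i + 1:j])                   # candidate boundaries
    return hh @ (W @ dd) + hh @ w                       # (j - i - 1,)

# Usage with dummy encoder states for <sos> + 6 tokens + <eos> (8 positions).
fwd, bwd = torch.randn(8, hidden), torch.randn(8, hidden)
bnd = fencepost(fwd, bwd)
scores = pointing_scores(torch.randn(hidden), bnd, 0, 6)
split_dist = torch.softmax(scores, dim=-1)              # pointing distribution a_t
print(split_dist.shape)                                 # torch.Size([5])
```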
Label Classifier For syntactic parsing, we perform the label assignments for a span (i, j) as: where each of MLP l and MLP r includes a linear transformation with LeakyReLU activations to transform the left and right spans into equal-sized vectors, and W lr ∈ IR d×L×d , W l ∈ IR d×L , W r ∈ IR d×L are the weights and b is a bias vector with L being the number of phrasal labels. For discourse parsing, we perform label assignment after every split decision since the label here represents the relation between the child spans. Specifically, as we split a span (i, j) into two child spans (i, k) and (k, j), we determine the relation label as the following. where MLP l , MLP r , W lr , W l , W r , b are similarly defined. Training Objective The total loss is simply the sum of the cross entropy losses for predicting the structure (split decisions) and the labels: where θ = {θ e , θ d , θ label } denotes the overall model parameters, which includes the encoder parameters θ e shared by all components, parameters for splitting θ d and parameters for labeling θ label . As mentioned, existing top-down syntactic parsers do not consider the decoding history. They also perform greedy inference. With our conditional splitting formulation, our method can not only model the splitting history but also enhance the search space of high scoring trees through beam search. At each step, our decoder points to all the encoded boundary representations which ensures that the pointing scores are in the same scale, allowing a fair comparison between the total scores of all candidate subtrees. With these uniform scores, we could apply a beam search to infer the most probable tree using our model. Specifically, the method generates the tree in depth-first order while maintaining top-B (beam size) partial trees at each step. It terminates exactly after n -1 steps, which matches the number of internal nodes in the tree. Because beam size B is constant with regards to the sequence length, we can omit it in the Big O notation. Therefore, each decoding step with beam search can be parallelized (O(1) complexity) using GPUs. This makes our algorithm run at O(n) time complexity, which is faster than most top-down methods. If we strictly use CPU, our method runs at O(n Input: Sentence length n; beam width B; boundary-based encoder states: (h0, h1, . . . , hn); label scores: P θ (l|i, j), 0 ≤ i < j ≤ n, l ∈ {1, . . . , L}, initial decoder state s. Output: Parse tree T 1: L d = n -1 // Decoding length 2: beam = array of L d items // List of empty beam items 3: init_tree= [(0, n), (0, 0), . . . , (0, 0)] // n -2 paddings (0,0) 4: beam[0] = (0, s, init_tree) // Init 1st item(log-prob,state,tree) 5: for t = 1 to L d do 6: for (logp, s, tree) ∈ beam[t -1] do 7: (i, j) = tree[t -1] // Current span to split 8: a, s = decoder-step(s, hi,j) // a: split prob. dist. for (k, p k ) ∈ top-B(a) and i < k < j do 10: curr-tree = tree 11: // S * : best structure 23: labeled-spans = [(i, j, arg max l P θ (l|i, j)) ∀(i, j) ∈ S * ] 24: labeled-singletons = [(i, i + 1, arg max l P θ (l|i, i + 1)) for i = {0, . . . 
, n -1}] 25: T = labeled-spans ∪ labeled-singletons By enabling beam search, our method can find the best tree by comparing high scoring trees within a reasonable search space, making our model competitive with existing structured (globally) inference methods that use more expensive algorithms like CKY and/or larger models Datasets and Metrics To show the effectiveness of our approach, we conduct experiments on both syntactic and sentence-level RST parsing tasks. 2 We use the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) For evaluation on syntactic parsing, we report the standard labeled precision (LP), labeled recall (LR), and labelled F1 computed by evalb 3 . For evaluation on RST-DT, we report the standard span, nuclearity label, relation label F1 scores, computed using the implementation of (Lin et al., 2019). 4 4.1 English (PTB) Syntactic Parsing Setup We follow the standard train/valid/test split, which uses Sections 2-21 for training, Section 22 for development and Section 23 for evaluation. This results in 39,832 sentences for training, 1,700 for development, and 2,416 for testing. For our model, we use an LSTM encoder-decoder framework with a 3-layer bidirectional encoder and 3layer unidirectional decoder. The word embedding size is 100 while the character embedding size is 50; the LSTM hidden size is 400. The hidden dimension in MLP modules and biaffine function for split point prediction is 500. The beam width B is set to 20. We use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 5000 tokens, and an initial learning rate of 0.002 which decays at the rate 0.75 exponentially at every 5k steps. Model selection for final evaluation is performed based on the labeled F1 score on the development set. Plus, Results with Pre-training Similar to We use the identical hyper-parameters and optimizer setups as in English PTB. We follow the standard train/valid/test split provided in the SPMRL datasets; details are reported in the Table Setup For discourse parsing, we follow the standard split from Basque French German Hungarian Korean Polish Swedish Results Table We compare parsing speed of different models in Table Discourse Parsing For measuring discourse parsing speed, we follow the same set up as With the recent popularity of neural architectures, such as LSTMs In discourse parsing, existing parsers receive the EDUs from a segmenter to build the discourse tree, which makes them susceptible to errors when the segmenter produces incorrect EDUs Our approach differs from previous methods in that it represents the constituency structure as a series of splitting representations, and uses a Seq2Seq framework to model the splitting decision at each step. By enabling beam search, our model can find the best trees without the need to perform an expensive global search. We also unify discourse segmentation and parsing into one system by generalizing our model, which has been done for the first time to the best of our knowledge. Our splitting mechanism shares some similarities with Pointer Network We have presented a novel, generic parsing method for constituency parsing based on a Seq2Seq framework. Our method supports an efficient top-down decoding algorithm that uses a pointing function for scoring possible splitting points. The pointing mechanism captures global structural properties of a tree and allows efficient training with a cross entropy loss. Our formulation, when applied to discourse parsing, can bypass discourse segmentation as a pre-requisite step. 
Through experiments, we have shown that our method outperforms all existing top-down methods on the English Penn Treebank and RST Discourse Treebank sentence-level parsing tasks. With pre-trained representations, our method rivals state-of-the-art methods while being faster. Our model also establishes a new state-of-the-art for sentence-level RST parsing.
TripleNet: Triple Attention Network for Multi-Turn Response Selection in Retrieval-based Chatbots
We consider that the importance of different utterances in the context for selecting the response usually depends on the current query. In this paper, we propose the model TripleNet to fully model the task with the triple ⟨context, query, response⟩ instead of the pair ⟨context, response⟩ used in previous works. The heart of TripleNet is a novel attention mechanism named triple attention, which models the relationships within the triple at four levels. The new mechanism updates the representation of each element based on the attention with the other two concurrently and symmetrically. We match the triple ⟨C, Q, R⟩ centered on the response from the character to the context level for prediction. Experimental results on two large-scale multi-turn response selection datasets show that the proposed model significantly outperforms state-of-the-art methods.
To establish a human-machine dialogue system is one of the most challenging tasks in Artificial Intelligence (AI). Existing works on building dialogue systems are mainly divided into two categories: retrieval-based methods and generation-based methods.
Figure: an example multi-turn conversation with a true and a false candidate response.
A: i downloaded angry ip scanner and now it doesn't work and i can't uninstall it
B: you installed it via package or via some binary installer
A: i installed from ubuntu soft center
B: hm i don't know what package it is but it should let you remove it the same way
A: ah makes sense then ... hm was it a deb file
True Response: i think it was another format maybe sth starting with r
False Response: thanks i appreciate it try sudo apt-get install libxine-extracodecs
In this paper, we focus on the retrieval-based method because it is more practical in applications. Selecting a response from a set of candidates is an important and challenging task for the retrieval-based method. Many of the previous approaches are based on deep neural networks (DNNs) to select the response for single-turn conversation. Previous works
• We use a novel triple attention mechanism to model the relationships within ⟨C, Q, R⟩ instead of ⟨C, R⟩;
• We propose a hierarchical representation module to fully model the conversation from the character to the context level;
• Experimental results on the Ubuntu and Douban corpora show that TripleNet significantly outperforms the state-of-the-art results.
Earlier works on building the conversation systems are generally based on rules or templates As the portability and coverage of such systems are far from satisfaction, people pay more attention to the data-driven approaches for the opendomain conversation system In this paper, we focus on the task response selection which belongs to retrieval-based approach. The early studies of response selection generally focus on the single-turn conversation, which use only the current query to select the response Our model is different from the previous methods: first we model the task with the triple C, Q, R instead of C, R in the early works, and use a novel triple attention matching mechanism to model the relationships within the triple. Then we represent the context from low (character) to high (context) level, which constructs the representations for the context more comprehensively. In this section, we will give a detailed introduction of the proposed model TripleNet. We first formal- Bi-directional Attention Function (BAF) ize the problem of the response selection for multiturn conversation. Then we briefly introduce the overall architecture of the proposed model. Finally, the details of each part of our model will be illustrated. For the response selection, we define the task as given the context C, current query Q and candidate response R, which is different from almost all the previous works The information in context is composed of four levels: context, utterances, words and characters, which can be formulated as C = (u 1 , u 2 , ..., u i , ..., u n ), where u i represents the ith utterance, and n is the maximum utterance number. The last utterance in the context is query Q = U n ; we still use query as the end of context to maintain the integrity of the information in context. Each utterance can be formulated as u i = (w 1 , ..., w j , .., w m ), where w j is the jth word in the utterance and m is the maximum word number in the utterance. Each word can be represented by multiply characters w j = (ch 1 , ..., ch k , .., ch l ), where ch k is the kth char and l is the length of the word in char-level. The latter two levels are similar in the query and response. The overall architecture of the model TripleNet is displayed in Figure Then the triple attention mechanism is applied to update the representations. At last, the model matches them while focused on the response and fuses the result for prediction. In the hierarchical representation module, we represent the conversation in four perspectives including char, word, utterance, and context. In the char-level, a convolutional neural network (CNN) is applied to the embedding matrix of each word and produces the embedding of the word by convolution and maxpooling operations as the charlevel representation. In word-level, we use a shared LSTM layer to obtain the word-level embedding for each word. After that, we use selfattention to encode the representation of each utterance into a vector which is the utterance-level representation. At last, the utterance-level representation of each utterance is fed into another LSTM layer to further model the information among different utterances, forming the contextlevel representation. The structure of the triple attention mechanism can be seen in the right part of Figure Char-level Representation. 
At first, we embed the characters in each word into fixed size vectors and use a CNN followed by max-pooling to get character-derived embeddings for each word, which can be formulated by where W j 1 , b j 1 are parameters, x t:t+s j -1 refers to the concatenation of the embedding of (x t ,...,x t+s j -1 ), s j is the window size of jth filter, and the ch is the representation of the word in char-level. Word-level Representation. Furthermore, we embed word x by pre-trained word vectors, and we also introduce a word matching (MF) feature to the embedding to make the model more sensitive to concurrent words. If the word appears in the response and context or query simultaneously, we set the feature to 1, otherwise to 0. where e(x) to denotes the embedding representation, W e is the pre-trained word embedding, and ch(x) is the character embedding function. We use a shared bi-directional LSTM to get contextual word representations in each utterance, query, and the response. The representation of each word is formed by concatenating the forward and backward LSTM hidden output. where h(x) is the representation of the word. We denote the word-level representation of the context as h u ∈ R m * dw and the response as h r ∈ R m * dw , where d w is the dimension of Bi-LSTMs. Until now, we have constructed the representations of context, query, and response in char and word level, and we only represent the latter two in these two levels because they don't have such rich contextual information as the context. Utterance-level Representation. Given the , we construct the utterance-level representation by self-attention where W 2 ∈ R d * dw , W 3 ∈ R d are trainable weights, d is a hyperparameter, u k is the utterancelevel representation, and α k i is the attention weight for the ith word in the kth utterance, which signifies the importance of the word in the utterance. Context-level Representation. To further model the continuity and contextual information among the utterances, we fed the utterance-level representations into another bi-directional LSTM layer to obtain the representation for each utterance in context perspective. where c k ∈ R dc is the context-level representation for the kth utterance in the context and d c is the output size of the Bi-LSTM. In this part, we update the representations of context, query, and response in each level by triple attention, the motivation of which is to model the latent relationships within context, query, response . Given the triple C, Q, R , we fed each of its pairs into bi-directional attention function (BAF). where BN denotes the batch normalization layer where Att pq , Att qp are the attention between P and Q in two directions, P , Q are the new representations the two sequences (P, Q), and we apply a batch normalization layer upon them too. We find that the triple attention has some interesting features: (1) triple, the representation for each element in the triple C, Q, R is updated based on the attention to the other two concurrently; (2) symmetrical, which means each element in the triple plays the same role in the structure because their contents are similar in the whole conversation; (3) unchanged dimension, all the outputs of triple attention has the same dimensions as the inputs, so we can stack multiple layers as needed. Triple Matching. We match the triple C, Q, R in each level with the cosine distance using new representations produced by triple attention. This process focuses on the response because it is our target. 
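As a rough illustration of this response-centred matching, the sketch below (PyTorch; tensor shapes and names are assumptions for illustration, not the released code) computes one level's cosine matching matrix between response positions and the concatenated context and query positions:

```python
# Minimal sketch of one level of the response-centred triple matching:
# cosine similarity between every response position and every context/query
# position at a single representation level.
import torch
import torch.nn.functional as F

def match_level(resp, ctx, query):
    """resp: (m, d), ctx: (n_ctx, d) flattened context, query: (m, d).
    Returns an (m, n_ctx + m) cosine matching matrix for this level."""
    keys = torch.cat([ctx, query], dim=0)               # context then query
    resp_n = F.normalize(resp, dim=-1)
    keys_n = F.normalize(keys, dim=-1)
    return resp_n @ keys_n.t()                          # cosine similarities

# Dummy tensors: 20 response tokens, 80 context positions, 20 query tokens, d = 256.
m, n_ctx, d = 20, 80, 256
M1 = match_level(torch.randn(m, d), torch.randn(n_ctx, d), torch.randn(m, d))
print(M1.shape)                                         # torch.Size([20, 100])

# Stacking the four levels (char, word, utterance, context) would then give an
# (m, n_ctx + m, 4) cube for the fusion LSTM to consume.
```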
For example, in the char-level, we match the triple by M 1 rq (i, j) = cosine(ch r (i), ch q (j)) (24) where ch is the representation updated by triple attention, M 1 ∈ R m * (n+m) is the char-level matching result, the word-level matches the triple in the same way, and the utterance and the context level match the triple without the maxpooling operation. We use M 2 , M 3 , M 4 as the matching results in the word, utterance and context levels. Fusion. After obtaining the four-level matching matrix, we use hierarchical RNN to get highly abstract features. Firstly, we concatenate the four matrices to form a 3D cube M ∈ R m * (n+m) * 4 and we use m as one of the matrix in M , which denotes the matching result for one word in response in four levels. Where m i and mj are the ith, jth row in the matrix m and m. We merge the results from different time steps in the outputs of LSTM by max-pooling operation. Until now, we encode the matching result into a single feature vector v. Final Prediction. For the final prediction, we fed the vector V into a full-connected layer with sigmoid output activation. where W 4 , b 4 are trainable weights. Our purpose is to predict the matching score between the context, query and candidate response, which can be seen as a binary classification task. To train our model, we minimize the cross entropy loss between the prediction and ground truth. We first evaluate our model on Ubuntu Dialogue Corpus We implement our model by For better comparison with the baseline models, the main super parameters in TripleNet, such 3 We basically divided baseline models into two categories for comparisons. Non-Attention Models. The majority of the previous works on this task are designed without attention mechanisms, including the Sequential Matching Network (SMN) The overall results on two datasets are depicted in Table Our results are obviously better on the two datasets compared with recently attention-based model DAM, which exceeds 2.3% in R 10 @1 of Ubuntu and 2.6% in P @1 of Douban. Furthermore, our score is significantly exceeding in almost all metrics except the R 10 @5 in Douban when compared with DUA, which may be be-cause the metric is not very stable as the test set in Douban is very small (1000). To further improve the performance, we utilize pre-trained ELMo Compared to non-attention models such as the SMN and Multi-view, which match the context and response at two levels, TripleNet shows substantial improvements. On R 10 @1 for Ubuntu corpus, there is a 6.3% absolute improvement from SMN and 12.8% from Multi-view, showing the effectiveness of triple attention. To better demonstrate the effectiveness of TripleNet, we conduct the ablations on the model under the Ubuntu corpus for its larger data size. We first remove the triple attention and matching parts (-TAM); the result shows a marked decline (2.4% in R 10 @1), which is in the second part of Table When we remove the matching between context and response, we find that the performance of the model has a marked drop (2.1 in R 10 @1), which shows that the relationship within C, R is the base for selecting the response. The query and response matching part also leads to a significant decline. This shows that we should pay more attention to query within the whole context. Hierarchical representation ablation. To find out the calculation of which level is most important, we also tried to remove each level calculation from the hierarchical representation module, which can be seen in the fourth part of Table 2. 
To our surprise, when we remove char (char) and context level calculation (-context), we observe that the reduction (0.5 in R 10 @1) is more significant than the other two, indicating that we should pay more attention to the lowest and highest level information. Also by removing the other two levels, there is also a significant reduction from TripleNet, which means each level of the three is indispensable for our TripleNet . From the experiments in this part, we find that each subsection of the hierarchical representation module only leads to a slight performance drop. Maybe it's because the representation from each By decoding our model for the case in Figure In the query-context attention, the query mainly pays attention to the keyword 'package.' This is helpful to get the topic of the conversation. While the attention of context focuses on the word 'a' which is near the key phrase 'deb file,' which may be because the representation of the word catches some information from the words nearby by Bi-LSTM. In the query-response attention, the result shows that the attention of the query mainly focuses on the word 'format,' which is the most important word in the response. But we can also find that the response does not catch the important words in the query. In the response-context attention, the response pays more attention to the word 'binary,' which is another important word in the context. From the three maps, we find that each attention can catch some important information but miss some useful information too. If we join the information in query-context and response-context attention, we can catch the most import information in the context. Furthermore, the query-response attention can help us catch the most important word in the response. So it is natural for TripleNet to select the right response because the model can integrate the three attentions together. In this section, we will discuss the importance of different utterances in the context. To find out the importance of different utterances in the context, we conduct an experiment by removing each one of them with the model (-Query) in the ablation experiment part because the model deals all the utterances include the query in the same way. For each experiment in this part, we remove the ith (0 < i < 13 and Q = U 12 ) utterance in the context both in training and evaluation processes and report the decrease of performance in Figure From the whole result, we can conclude that it's better to model the query separately than deal all of the utterances in the same way for their significantly different importance; we also find that we should pay more attention to the utterances near the query because they are more important. In this paper, we propose a model TripleNet for multi-turn response selection. We model the context from low (character) to high (context) level, update the representation by triple attention within C, Q, R , match the triple focused on response, and fuse the matching results with hierarchical LSTM for prediction. Experimental results show that the proposed model achieves state-of-the-art results on both Ubuntu and Douban corpus, which ranges from a specific domain to open domain, and English to Chinese language, demonstrating the effectiveness and generalization of our model. In the future, we will apply the proposed triple attention mechanism to other NLP tasks to further testify its extensibility.
GL-CLEF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding
Due to the high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort. However, existing models rely solely on shared parameters, which can only perform implicit alignment across languages. We present a Global-Local Contrastive LEarning Framework (GL-CLEF) to address this shortcoming. Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, and then encourage their representations to be more similar than those of negative example pairs, which explicitly aligns representations of similar sentences across languages. In addition, a key step in GL-CLEF is the proposed Local and Global components, which achieve fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot). Experiments on MultiATIS++ show that GL-CLEF achieves the best performance and successfully pulls representations of similar sentences across languages closer.
Spoken language understanding (SLU) is a critical component in task-oriented dialogue systems [CLS] [CLS] [CLS] To this end, many works have been explored for zero-shot cross-lingual SLU. Multilingual BERT (mBERT) To solve the aforementioned challenges, we propose a Global-Local Contrastive LEarning Framework (GL-CLEF) for zero-shot crosslingual SLU. For the first challenge, as shown in Figure For the second challenge, SLU requires accomplishing tasks at two different levels: token-level slot filling and sentence-level intent detection. As such, simply leveraging ordinary sentence-level contrastive learning is ineffective for fine-grained knowledge transfer in token-level slot filling. Therefore, we first introduce a Local module in GL-CLEF to learn different granularity alignment representations (i.e., sentence-level Local intent CL and token-level local slot CL). To be specific, sentence-level Local intent CL and token-level local slot CL are introduced for aligning similar sentence and token representations across different languages for intent detection and slot filling, respectively. In addition, we further argue that slot and intent are highly correlated and have similar semantic meanings in a sentence. This phenomenon can serve as a signal for self-supervised alignment across intent and slots. Therefore, a Global module named semantic-level global intent-slot CL is further proposed to bring the representations of slot and intents within a sentence closer together. We conduct experiments on MultiATIS++ To facilitate further research, codes are publicly available at
We first describe traditional SLU before the specifics of zero-shot cross-lingual version of SLU. Traditional SLU in Task-oriented Dialogue. SLU in Task-oriented Dialogue contains two subtasks: Intent Detection and Slot Filling. • Intent Detection: Given input utterance x, this is a classification problem to decide the corresponding intent label o I . • Slot Filling: Often modeled as a sequence labeling task that maps an input word sequence x = (x 1 , . . . , x n ) to slots sequence o S = (o S 1 , . . . , o S n ), where n denotes the length of sentence x. Since the two tasks of intent detection and slot filling are highly correlated, it is common to adopt a joint model that can capture shared knowledge. We follow the formalism from Zero-shot Cross-lingual SLU. This means that a SLU model is trained in a source language, e.g., English (cf. Figure where tgt represents the target language. We describe the general approach to general SLU task first, before describing our GL-CLEF model which explicitly uses contrastive learning to explicitly achieve cross-lingual alignment. The main architecture of GL-CLEF is illustrated in Figure Encoder. Given each input utterance x = (x 1 , x 2 , . . . , x n ), the input sequence can be constructed by adding specific tokens x = ([CLS], x 1 , x 2 , ..., x n , [SEP]), where [CLS] denotes the special symbol for representing the whole sequence, and [SEP] can be used for separating non-consecutive token sequences Then, we employ mBERT model to take codeswitched data for encoding their representations H = (h CLS , h 1 , . . . , h n , h SEP ). Slot Filling. Since mBERT produces subwordresolution embeddings, we follow Intent Detection. We input the sentence representation h CLS to a classification layer to find the label o I : o I = softmax(W I h CLS + b I ), where W I and b I are tuneable parameters. We introduce our global-local contrastive learning framework (GL-CLEF) in detail, which consists of three modules: 1) a sentence-level local intent contrastive learning (CL) module to align sentence representation across languages for intent detection, 2) a token-level local slot CL module to align token representations across languages for slot filling, and 3) semantic-level global intent-slot CL to align representations between a slot and an intent. For contrastive learning, the key operation is to choose appropriate positive and negative pairs against to the original (anchor) utterance. Positive Samples. Positive samples should preserve the same semantics compared against the anchor utterance. Therefore, given each anchor utterance x = ([CLS], x 1 , x 2 , ..., x n , [SEP]), we follow Negative Samples. A natural approach for generating negative samples is randomly choosing other queries in a batch. However, this method requires the recoding of the negative samples, hurting efficiency. Inspired by , where K is the maximum capacity for negative queue. Sentence-level Local Intent CL. Since intent detection is a sentence-level classification task, aligning sentence representation across languages is the goal of zero-shot cross-lingual intent detection task. Therefore, in GL-CLEF, we propose a sentence-level local intent CL loss to explicitly encourage the model to align similar sentence representations into the same local space across languages for intent detection. Formally, this is formulated as: , where s(p, q) denotes the dot product between p and q; τ is a scalar temperature parameter. Token-level Local Slot CL. 
As slot filling is a token-level task, we propose a token-level local slot CL loss to help the model to consider token alignment for slot filling, achieving fine-grained cross-lingual transfer. We apply toke-level CL for all tokens in the query. Now, we calculate the ith token CL loss for simplicity: where the final L LS is the summation of all tokens CL loss. Semantic-level Global Intent-slot CL. We noted that slots and intent are often highly related semantically when they belong to the same query. Therefore, we think that the intent in a sentence and its own slots can naturally constitute a form of positive pairings, and the corresponding slots in other sentences can form negative pairs. We thus further introduce a semantic-level global intentslot CL loss to model the semantic interaction between slots and intent, which may further improve cross-lingual transfer between them. Formally: where we consider CL loss from both anchor sentences (L GIS1 ) and code-switched sentence (L GIS2 ), and add them to do semantic-level contrastive learning (L GIS ) . where ŷI i are the gold intent label and n I is the number of intent labels. where ŷi,S j are the gold slot label for jth token; n S is the number of slot labels. The overall objective in GL-CLEF is a tuned linear combination of the individual losses: where λ * are tuning parameters for each loss component. We use the latest multilingual benchmark dataset of MultiATIS++ We use the base case multilingual BERT (mBERT), which has N = 12 attention heads and M = 12 transformer blocks. We select the best hyperparameters by searching a combination of batch size, learning rate with the following ranges: learning rate {2 × 10 -7 , 5 × 10 -7 , 1 × 10 -6 , 2 × 10 -6 , 5 × 10 -6 , 6 × 10 -6 , 5 × 10 -5 , 5 × 10 -4 }; batch size {4, 8, 16, 32}; max size of negative queue {4, 8, 16, 32}; For all experiments, we select the best-performing model over the dev set and evaluate on test datasets. All experiments are conducted at TITAN XP and V100. To verify the effect of GL-CLEF, we compare our model with the following state-of-the-art baselines: 1) mBERT. mBERT 1 follows the same model architecture and training procedure as BERT Following From the results in Table To understand GL-CLEF in more depth, we perform comprehensive studies to answer the following research questions (RQs): (1) Do the local intent and slot CLs benefit sentenceand token-level representation alignment? (2) Can semantic-level global intent-slot CL boost the overall sentence accuracy? (3) Are local intent CL and local slot CL complementary? (4) Does GL-CLEF pull similar representa- tions across languages closer? (5) Does GL-CLEF improve over other pre-trained models? (6) Does GL-CLEF generalize to non pre-trained models? (7) Is GL-CLEF robust to the one-to-many translation problem? Answer 1: Local intent CL and slot CL align similar sentence and token representations across languages. We investigate the effect of the local intent CL and local slot CL mechanism, by removing the local intent CL and slot CL, respectively (Figure Similarly, considering the effectiveness of local slot CL, we find the performance of slot filling averaged on 9 languages drops by 2.44% against the full system. We attribute performance drops to the fact that local slot CL successfully make a fine-grained cross-lingual knowledge transfer for aligning token representation across languages, which is essential for token-level crosslingual slot filling tasks. 
Answer 2: Semantic-level global intent-slot successfully establishes a semantic connection across languages. We further investigate the effect of the semantic-level intent-slot CL mechanism when we remove the global intent-slot CL loss (Figure Answer 5: Contributions from contrastive learning and pre-trained model use are complementary. To verify the contribution from GL-CLEF is still effective when used in conjunction with other strong pre-trained models, we perform experiments with XLM-R Answer 6: GL-CLEF still obtains gains over BiLSTM. A natural question that arises is whether GL-CLEF is effective for non pre-trained models, in addition to transformers. To answer the question, we replace mBERT with BiLSTM, keeping other components unchanged. The results are shown in Table Answer 7: GL-CLEF is robust. It is worth noting that words in the source language can have multiple translations in the target language. We follow 5 Related Work In recent years, related work also considers aligning representations between source and target languages during fine-tuning, eschewing the need for an extra pre-training process. Specifically, Contrastive Learning. Contrastive learning is now commonplace in NLP tasks. We introduced a global-local contrastive learning (CL) framework (GL-CLEF) to explicitly align representations across languages for zero-shot crosslingual SLU. Besides, the proposed Local CL module and Global CL module achieves to learn different granularity alignment (i.e., sentence-level local intent alignment, token-level local slot alignment, semantic-level global intent-slot alignment). Experiments on MultiATIS++ show that GL-CLEF obtains best performance and extensive analysis indicate GL-CLEF successfully pulls closer the representations of similar sentence across languages. Spoken language understanding (SLU) is a core component in task-oriented dialogue system, which becomes sufficiently effective to be deployed in practice. Recently, SLU has achieved remarkable success, due to the evolution of pre-trained models. However, most SLU works and applications are English-centric, which makes it hard to generalize to other languages without annotated data. Our work focuses on improving zero-shot cross-lingual SLU model that do not need any labeled data for target languages, which potentially is able to build multilingual SLU models and further promotes the globalization of task-oriented dialog systems.
Towards Safer Operations: An Expert-involved Dataset of High-Pressure Gas Incidents for Preventing Future Failures
This paper introduces a new IncidentAI dataset for safety prevention. Unlike prior corpora, which usually contain a single task, our dataset comprises three tasks: named entity recognition, cause-effect extraction, and information retrieval. The dataset is annotated by domain experts who have at least six years of practical experience as high-pressure gas conservation managers. We validate the contribution of the dataset in the scenario of safety prevention. Preliminary results on the three tasks show that NLP techniques are beneficial for analyzing incident reports to prevent future failures. The dataset facilitates future research in the NLP and incident management communities. Access to the dataset is also provided.
Daily activities usually face incidents that can significantly affect risk management. In specific industries such as manufacturing, an incident can make a significant consequence that not only reduces the reputation of companies but also breaks the product chain and costs a lot of money. It motivates the introduction of the safety-critical area where AI solutions have been proposed to prevent repeated failures from historical samples There still exists a gap in the adoption of AI techniques for actual incident management scenarios due to the lack of high-quality annotated datasets. The main challenges arise from two main reasons. First, data annotation of incidents for AI-related tasks is a labor-expensive and time-consuming task that requires domain experts who have a deep understanding and excellent experience in their daily work. Second, the collection of historical incidents is also challenging due to its dependence on the policies of companies. We argue that the growth of the safety-critical area can be leveraged by introducing annotated incident datasets. To fill the gap, this paper takes the high-pressure gas domain, a sector of the gas industry, as a case study. This is because gas and its products are the major industry in the energy market that play an influential role in the global economy To address the aforementioned questions, this paper introduces a new Japanese dataset that focuses on high-gas incidents and demonstrates the potential NLP applications in analyzing high-gas incident reports. To do that, we first work closely with business members and domain experts to identify three potential NLP tasks: named entity recognition (NER), cause-effect extraction (CE), and infor- mation retrieval (IR) based on actual scenarios. The NER task allows analyzers to extract fundamental units of an incident in the form of entities, e.g., the product or the process of the product. This information is used to visualize statistics concerning key entities from past incidents retrieved through IR steps. The CE task allows analyzers to extract the cause and effect of an incident. The IR task is typically used to examine historical incidents similar to the current one and to develop countermeasures to prevent the recurrence of such incidents. In business scenarios, information from the three tasks is vital for safety-critical and risk management. This paper makes three main contributions as follows. • It introduces a new IncidentAI dataset that focuses on high-gas incidents for NER, CE, and IR. To the best of our knowledge, this is the first Japanese dataset that covers all three tasks in the context of high-gas incidents. It is annotated by domain experts to ensure a highquality dataset that can assist in the efficient analysis of incident reports using AI models. • It shows a scenario of IncidentAI in actual business cases. The scenario can serve as a reference for AI companies that are also interested in the analysis of incident reports. • It benchmarks the results of AI models on NER, CE, and IR tasks that facilitate future studies in NLP and safety prevention areas.
Incident databases There exist industry-specific incident databases in many industries 3 The HPGIncident Dataset The original dataset was collected from publicly available reports of high-gas incidents published in 2022 by the High-Pressure Gas Safety Institute of Japan. 2 The original data contains descriptions of incidents, types of incidents, dates of incidents, industries, etc. From the original 18,171 incident cases, 2,159 cases belonging to three industries: "general chemistry", "petrochemical", and "oil refining" were first extracted. These cases were used for the annotation of IR. Subsequently, we selected 970 cases from that 2,159 cases based on the most recent dates for both the annotation of NER and CE tasks. We used the description of incidents as the input for annotation shown in the next section. The dataset was created by three Japanese domain experts, each with at least six years of practical experience as high-pressure gas conservation managers. These experts possess qualifications as highpressure gas production safety managers, a national certification demonstrating a certain level of knowledge and experience necessary to ensure the safety of high-pressure gas manufacturing facilities. The process was divided into two steps: the creation of the guideline and the annotation of the entire dataset. In the first step, we randomly selected 100 samples from 970 collected samples for NER and CE, and from 2,159 collected samples for IR. Our team collaborated closely with experts to establish criteria for consistent annotations, including identifying the information types (entities) and their definitions for NER and CE, and determining the attributes that characterize incidents for IR. These criteria formed the basis of our guidelines. This initial stage was iterative, conducted in several rounds until a certain agreement score was achieved among the experts. This process played a vital role in training the annotators, ensuring that they shared a uniform understanding of the guidelines. Once a high agreement score had been achieved, the remaining samples were apportioned into three segments, each corresponding to an annotator, who then proceeded to annotate their respective parts. Subsequently, 100 random samples were selected from one annotator's portion. The other two annotators were tasked with annotating these 100 samples. For each task, NER, CE, and IR, an inter-annotator agreement was computed using these 100 samples. Due to space constraints, please refer to Appendix A for a more detailed explanation of annotation. NER annotation As mentioned, entities provide basic information about an incident. This repre-2 Cause-effect annotation Causes and effects provide critical information about a given incident for the analysis, in which causes contain information about the cause of an incident and effects mention the consequences of the causes. Similar to NER, we engaged in detailed discussions with domain experts to identify cause and effect types. We observed that the cause is quite easy to identify while the effect composes several types such as the leak- Tag sentences containing accident events other than gas leakage. For example, explosions, fires, etc. Example: It is estimated that hydrogen, which has a low ignition energy, was ignited by static electricity. Tag sentences that confirm the event causing Event_Leak and Event_others. Target not only direct causes but also indirect causes (e.g., Cause's Cause). 
In case of ignition or explosion, the three elements of combustion (combustibles, oxygen, and heat) shall be noted as a cause. Example: As a result of reduced tightening torque in some of the flange sections cooled by hydrogen The annotation of causes and effects is on the span level. The annotation was done in two steps (follows Section 3.2), in which the first step was conducted in several rounds to create the annotation guideline to annotate the whole CE dataset. For annotation, the definition of causes and effects in Table The objective of the IR annotation task is to realize a use case where users can -query incident descriptions to retrieve relevant past incidents. We found that the annotation of IR is challenging to measure the similarity of incidents by using single aspects, e.g., the description of incidents. Therefore, instead of directly assigning a relevance score to predefined levels like "Not Relevant," "Relevant," and "Highly Relevant," we first identified a set of key attributes to each incident report and then evaluated relevance on an attributeby-attribute basis. The attributes allow us to reflect the nature of similarity among incidents. We collaborated with domain experts to identify crucial attributes for determining how similar incident reports are. These specific attributes are shown in Table This section shows the statistics of recent incident databases and corpora. The databases include CVE (Common Vulnerabilities and Exposures) (Corporation, 2023), FAA Once the dataset has been created, NLP tasks were designed to establish the baselines of each task. The NER task was formulated as a sequence labeling problem Layered nested NER This model stacks flat NER layers for nested NER Multiple BiLSTM-CRF This model uses multiple flat BiLSTM-CRF, one for each entity type BINDER is an optimized bi-encoder model for NER by using contrastive learning CNN-Nested-NER It is a simple but effective model for nested NER Preliminary results Table The CE extraction task was formulated as a span extraction problem BERT-QA We followed BERT-QA FastQA Apart from BERT-QA, we also tested FastQA Guided-QA Guided-QA LLMs We tested ChatGPT Outputs should be in Japanese. Text: <example incident> Cause/effect: <example cause> Text: <target incident> Cause/effect: Preliminary results Table We analyze the success and failure cases of IR model BERT-finetuned. Most success retrieval cases such as Figure To better adapt the model to challenging technical terms and jargon in the incident reports, we further fine-tuned the aforementioned base encoder by using the unsupervised constrastive learning objective We also evaluated the recent commercial solution from OpenAI with the model name text-embedding-ada-002. Preliminary results Table Nested NER Figure Cause-Effect Extraction As we observe the data, cause and effect spans usually appear in phrases that indicate incidents such as leak, insufficient tightening. Because causes and effects share such common patterns, it is harder for our models to make correct predictions. Figures This paper introduces a new Japanese dataset for safety prevention by using AI models. The highquality dataset is annotated by domain experts for NER, CE, and IR tasks. The dataset contributes to IncidentAI in two important points. First, it composes the three NLP tasks in a corpus that facilitates the development of AI pipelines for safety prevention in a low-resource language. 
Second, it benchmarks the results of the three tasks which are beneficial for the next studies of analyzing incident reports. Future work will adapt the dataset to create AI pipelines for preventing failures of IncidentAI. Although the newly created dataset of incidents is a very high-quality corpus that is composed of three NLP tasks: NER, cause-effect extraction (CE), and IR, the size of the dataset is quite small with 970 annotated samples for NER and CE. The number of annotated samples for IR is also small with 2,159 samples. While collecting raw data is quite easy, data annotation is time-consuming and laborexpensive with the involvement of domain experts. It explains the size of our dataset is quite limited. So, it requires more effort for data augmentation when using the dataset in some cases. For example, LLMs need thousands annotated samples for finetuning. In addition, the dataset is in Japanese. On the one hand, it facilitates the introduction of AI models for IncidentAI in a low-resource language. However, the dataset requires translation to more popular languages, e.g., English for wider use. For evaluation, some models are quite straightforward because the purpose is to provide preliminary results of the dataset. We believe the performance of the three tasks can be still improved with stronger models, especially in the case of causeeffect extraction with BERT-QA and LLMs. The dataset and models experimented in this work have no unethical applications or risky broader impacts. The dataset was crawled from publicly available reports of high-pressure gas incidents published in 2022 by the High Pressure Gas Safety Institute of Japan. Raw data contains information such as descriptions of incidents at high-pressure gas plants, types of incidents, dates of incidents, industries, ignition sources, etc. It does not include any confidential or personal information of workers or companies. Three annotators are domain experts who have at least six years of experience in the high-pressure gas incident domain. They knew the purpose of data creation and agreed to join the annotation process with their responsibilities. Their personal information is kept for data publication. The models used for evaluation can be publicly accessed with GitHub links. There is no bias for the re-implementation that can affect the final results. The high-pressure gas that caused the reported accident was classified from the perspective of danger in the event of an accident. Cases where the gas could not be identified were included under "d. Not applicable". The definition of flammable gas and toxic gas shall conform to the High Pressure Gas Safety Act in Japan. The events that caused or triggered the accident were classified. Equipment factors refer to those caused by initial defects in parts built into the equipment. Human factors refer to errors made in operation or judgment by people on site. External factors indicate those caused by events from outside the equipment, such as falling objects. The events that occurred as a result of the accident were classified. Physical and human damage were only considered if they occurred as secondary events, such as gas leaks or fires. Property damage: Accidents resulting in damage to equipment or facilities due to fire or explosion. Do not include damage to equipment or other items that caused the accident. 
Human casualties: Accidents resulting in health hazards to humans due to leakage, fire, or explosion. Time span from cause to effect: The classification was made based on the time from when the cause or trigger of the accident occurred until the accident event took place. Sudden: Accidents where the effect generally occurs within a few minutes to several tens of minutes after the cause. Operational status of equipment: The classification was made based on the operational status of the equipment at the time of the accident. Non-steady state operation refers to operating conditions that differ from normal operation, such as immediately after the equipment starts running or during test operation. CNN-nested-NER: The number of training epochs is 10, with a learning rate of 3e-5 and a batch size of 8. The depth of the CNN layers is 3, with a dimension of 120 for each. The BERT-QA models were implemented using the BERT classes provided by Huggingface. The pre-trained model TurkuNLP/wikibert-base-ja-cased was also used for all CE models. We fine-tuned the base model distiluse-base-multilingual-cased-v2 from sentence-BERT. We utilize the unsupervised training objective from SimCSE.
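A minimal sketch of this unsupervised SimCSE-style fine-tuning using the common sentence-transformers recipe (each sentence is paired with itself so that dropout provides the two views). This is an illustration, not the authors' exact training code; the placeholder sentences and hyperparameters are assumptions.

```python
# Sketch of unsupervised SimCSE-style fine-tuning of a sentence-BERT encoder.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

# Placeholder incident descriptions; the real data would be the Japanese incident reports.
incident_texts = ["placeholder incident description 1", "placeholder incident description 2"]
train_examples = [InputExample(texts=[t, t]) for t in incident_texts]  # sentence paired with itself
train_loader = DataLoader(train_examples, shuffle=True, batch_size=32)

# In-batch negatives plus dropout noise approximate the unsupervised SimCSE objective.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=100)
model.save("incident-encoder")
```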
Neural semi-Markov CRF for Monolingual Word Alignment
Monolingual word alignment is important for studying fine-grained editing operations (i.e., deletion, addition, and substitution) in textto-text generation tasks, such as paraphrase generation, text simplification, neutralizing biased language, etc. In this paper, we present a novel neural semi-Markov CRF alignment model, which unifies word and phrase alignments through variable-length spans. We also create a new benchmark with human annotations that cover four different text genres to evaluate monolingual word alignment models in more realistic settings. Experimental results show that our proposed model outperforms all previous approaches for monolingual word alignment as well as a competitive QA-based baseline, which was previously only applied to bilingual data. Our model demonstrates good generalizability to three out-of-domain datasets and shows great utility in two downstream applications: automatic text simplification and sentence pair classification tasks. 1
Monolingual word alignment aims to align words or phrases with similar meaning across two sentences written in the same language. It is useful for improving interpretability in natural language understanding tasks, including semantic textual similarity
Insertion Figure One major challenge for automatic alignment is the need to handle not only alignments between words and linguistic phrases (e.g., a dozen ↔ more than 10), but also non-linguistic phrases that are semantically related given the context (e.g., tensions ↔ relations being strained in Figure Our experimental results show that the proposed semi-Markov CRF model achieves state-of-the-art performance with higher precision, in comparison to the previous monolingual word alignment models Word alignment has a long history and was first proposed for statistical machine translation. The most representative ones are the IBM models Neural methods have been explored in the past decade primarily for bilingual word alignment. Some early attempts In this section, we first describe the problem formulation for monolingual word alignment, then present the architecture of our neural semi-CRF word alignment model (Figure We formulate word alignment as a sequence tagging problem following previous works , where b i is the beginning word index, its corresponding label a i means every word within the span s i is aligned to the target span t a i . That is, the word-level alignments a w b i , a w b i +1 , ..., a w b i +d-1 have the same value j. We use a w to denote the label sequence of alignments between words and s w b i to denote the b i th word in the source sentence. There might be cases where span s i is not aligned to any words in the target sentence, then a i = [NULL]. When D ≥ 2, the Markov property would no longer hold for word- level alignment labels, but for span-level labels. That is, a i depends on a w b i -1 , the position in the target sentence where the source span (with ending word index b i -1) that precedes the current span s i is aligned to. We therefore design a discriminative model using semi-Markov conditional random fields The conditional probability of alignment a given a sentence pair s and t is defined as follows: a,s,t) a ∈A e ψ(a ,s,t) (1) where the set A denotes all possible alignments between the two sentences. The potential function ψ can be decomposed into: where i denotes the indices of a subset of source spans that are involved in the alignment a; a * represents the gold alignment sequence at spanlevel. The potential function ψ consists of three elements, of which the first two compose negative log-likelihood loss: the span interaction function υ, which accounts the similarity between a source span and a target span; the Markov transition function τ , which models the transition of alignment labels between adjacent source spans; the cost is implemented with Hamming loss to encourage the predicted alignment sequence to be consistent with gold labels. Function υ and τ are implemented as two neural components which we describe below. Span Representation Layer. First, source and target sentences are concatenated together and encoded by the pre-trained SpanBERT Span Interaction Layer. The semantic similarity score between source span s i and target span t j is calculated by a 2-layer feed-forward neural network FF sim with Parametric Relu (PReLU) where [; ] is concatenation and • is element-wise multiplication. We use h s i and h t j to denote the representation of source span s i and target span t j , respectively. Markov Transition Layer. Monolingual word alignment moves along the diagonal direction in most cases. To incorporate this intuition, we propose a scoring function to model the transition between the adjacent alignment labels a w b i -1 and a i . 
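As a concrete illustration of the span interaction layer described above, here is a minimal PyTorch sketch. The span vectors are assumed to be pooled SpanBERT states, and the hidden sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpanInteraction(nn.Module):
    """Sketch of the span interaction layer: a 2-layer feed-forward network with PReLU
    that scores a (source span, target span) pair from the concatenation of the two
    span vectors and their element-wise product. Hidden sizes are assumptions."""
    def __init__(self, span_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.ff_sim = nn.Sequential(
            nn.Linear(3 * span_dim, hidden_dim),
            nn.PReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h_src: torch.Tensor, h_tgt: torch.Tensor) -> torch.Tensor:
        # h_src, h_tgt: (batch, span_dim) span representations, e.g. pooled SpanBERT states
        features = torch.cat([h_src, h_tgt, h_src * h_tgt], dim=-1)
        return self.ff_sim(features).squeeze(-1)    # (batch,) similarity scores

scores = SpanInteraction()(torch.randn(4, 768), torch.randn(4, 768))
```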
The main feature we use is the distance between the beginning index of current target span and the end index of the target span that the prior source span is aligned to. The distance is binned into 1 of 13 buckets with the following boundaries Training and Inference. During training, we minimizes the negative log-likelihood of the gold alignment a * , and the model is trained from both directions (source to target, target to source): where a * s2t and a * t2s represent the gold alignment labels from both directions. During inference, we use the Viterbi algorithm to find the optimal alignment. There are different strategies to merge the outputs from two directions, including intersection, union, grow-diag We implement our model in PyTorch In this section, we present the manually annotated Multi-genre Monolingual Word Alignment (Mul-tiMWA) benchmark that consists of four datasets of different text genres. As summarized in Table In contrast to iSTS MultiMWA-MTRef. We create this dataset by annotating 3,998 sentence pairs from the MTReference To address the lack of reliable annotation, we hire two in-house annotators to correct the original labels using GoldAlign MultiMWA-Newsela. Newsela corpus MultiMWA-arXiv. The arXiv It has been used to study paraphrase generation MultiMWA-Wiki. Wikipedia has been widely used in text-to-text tasks, including text simpli-fication In this section, we present both in-domain and outof-domain evaluations for different word alignment models on our MultiWMA benchmark. We also provide a detailed error analysis of our neural semi-CRF model and an ablation study to analyze the importance of each component. We introduce a novel state-of-the-art baseline by adapting the QA-based method in A span prediction model based on fine-tuning multilingual BERT is then expected to extract performed from the target sentence. The predictions from both directions (source to target, target to source) are symmetrized to produce the final alignment, using a probability threshold of 0.4 instead of the typical 0.5. We change to use standard BERT in this model for monolingual alignment and find that the 0.4 threshold chosen by Following the literature The in-domain evaluation results are shown in Table Table Table We sample 50 sentence pairs from the dev set of MultiMWA-MTRef and analyze the errors under Sure+Poss setup. Phrase Boundary (58.6%). The phrase boundary error (see 3 in Figure Function Words (19.1%). Function words can be tricky to align when rewording and reordering happens, such as 2 . Adding on the complexity, same function word may appear more than once in one sentence. This type of error is common in all the models we experiment with. It attributes 4.7 points of F 1 for JacanaPhrase, 1.3 for QA aligner, and 1.5 for our neural semi-CRF aligner. Content Words (14.2%). Similar to function words, content words (e.g., security bureau ↔ defense ministry) can also be falsely aligned or missed, but the difference between neural and nonneural model is much more significant. This error type attributes 7.7 points of F 1 score for Jacana aligner, but only 1.1 and 0.8 for neural semi-CRF aligner and QA aligner, respectively. Context Implication (5.6%). Some words or phrases that are not strictly semantically equivalent can also be aligned if they appear in a similar context. 
For example, given the source sentence 'Gaza international airport was put into operation the day before' and the target sentence 'The airport began operations one day before', the phrase pair was put into ↔ began can be aligned. This type is related to 2.8 F 1 score improvement for Jacana aligner, but only 0.4 and 0.2 for neural semi-CRF and QA-based aligners, respectively. Debatable Labels (1.9%). Word alignment annotation can be subjective sometimes. Take phrase alignment two days of ↔ a two-day for example, it can go either way to include the function word 'a' in the alignment, or not. Name Variations (0.6%). While our neural semi-CRF model is designed to handle spelling variations or name abbreviations, it fails sometimes as shown by 1 in Figure Skip Alignment (0.0%). Non-contiguous tokens can be aligned to the same target token or phrase (e.g., owes ... to ↔ is a result of), posing a challenging situation for monolingual word aligners. However, this error is rare, as only 0.6% of all alignments in MTRef dev set are discontinuous. In this section, we apply our monolingual word aligner to some downstream applications, including both generation and understanding tasks. Text simplification aims to improve the readability of text by rewriting complex sentences with simpler language. We propose to incorporate word alignment information into the state-of-the-art Ed-itNTS model Table We can utilize our neural aligner in sentence pair classification tasks In this work, we present the first neural semi-CRF word alignment model which achieves competitive performance on both in-domain and outof-domain evaluations. We also create a manually annotated Multi-Genre Monolingual Word Alignment (MultiMWA) benchmark which is the largest and of higher quality compared to existing datasets. The original EditNTS model constructs expert program with the shortest edit path from complex sentence to simple sentence, specifically, it calculates the Levenshtein distances without substitutions and recovers the edit path with three labels: ADD, KEEP and DEL. Since edit distance relies on word identity to match the sentence pair, it cannot produce lexical paraphrases (e.g. conduct ↔ performed and simulations ↔ experiments in Figure In order to show the effectiveness of our modified model, we compared two more versions of EditNTS in Table After the first round of annotation, we discovery that the definition of phrasal alignment can be ambiguous, which will hinder the development and error analysis for word alignemnt models. There-
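The EditNTS expert-program construction described above (edit distance without substitutions, recovered as ADD/KEEP/DEL labels) can be sketched as follows; the tie-breaking between DEL and ADD is an assumption.

```python
def edit_labels(complex_toks: list[str], simple_toks: list[str]) -> list[tuple[str, str]]:
    """Recover an (operation, token) edit script from a complex sentence to a simple one
    using edit distance without substitutions (equivalently, a longest-common-subsequence
    alignment): KEEP / DEL act on complex-side tokens, ADD inserts simple-side tokens."""
    n, m = len(complex_toks), len(simple_toks)
    # lcs[i][j] = LCS length of complex_toks[i:] and simple_toks[j:]
    lcs = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if complex_toks[i] == simple_toks[j]:
                lcs[i][j] = 1 + lcs[i + 1][j + 1]
            else:
                lcs[i][j] = max(lcs[i + 1][j], lcs[i][j + 1])
    ops, i, j = [], 0, 0
    while i < n and j < m:
        if complex_toks[i] == simple_toks[j]:
            ops.append(("KEEP", complex_toks[i])); i += 1; j += 1
        elif lcs[i + 1][j] >= lcs[i][j + 1]:        # prefer deletion on ties (assumption)
            ops.append(("DEL", complex_toks[i])); i += 1
        else:
            ops.append(("ADD", simple_toks[j])); j += 1
    ops += [("DEL", t) for t in complex_toks[i:]] + [("ADD", t) for t in simple_toks[j:]]
    return ops

print(edit_labels("we conduct simulations".split(), "we performed experiments".split()))
```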
WHAT, WHEN, and HOW to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue
This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns. The proposed approach involves weighted dataset blending, negative persona information augmentation methods, and the design of personalized conversation datasets to address the challenges of WWH in personalized, open-domain dialogue systems. Our work effectively balances dialogue fluency and tendency to ground, while also introducing a response-type label to improve the controllability and explainability of the grounded responses. The combination of these methods leads to more fluent conversations, as evidenced by subjective human evaluations as well as objective evaluations.
A personalized dialogue (PD) system is capable of generating user-customized responses based on long-term memory about the user's persona, leading to more trustworthy and engaging conversations The key to enhanced user engagement in a PD system lies in finding a persona that is contextually relevant and appropriate, on which a model is grounded to generate a natural response. However, as shown in the example in Figure in the training dataset, deciding what persona attribute to select in each turn during model inference is a non-trivial problem. (We will refer to this problem as the "WHAT to ground" problem hereafter.) Another aspect to consider in a PD system is that under certain dialogue contexts, it is better not to generate a personalized response given retrieved persona attributes in order to create a more natural interaction (the second response in Figure Given such a challenge, designing a user-based persona-aware PD system capable of generating engaging and human-like personalized responses requires addressing the "WHAT," "WHEN," and "HOW" (WWH) questions: 1) What personal information should be grounded given the conversation context, 2) When to generate responses using personal information, and 3) How to make natural and human-like personalized response. Most previous research on personalized dialogue systems has focused on generating natural responses in ideal personalized conversation settings Large-scale Language Models (LLMs) such as GPT-3 have shown outstanding capabilities in various Natural Language Understanding (NLU) tasks and especially, in-context learning In addressing the research gap and real-world challenges, we propose a method that controls the inclination of models to generate personalized responses. Our technique blends persona-augmented datasets to construct a personalized dialogue system, thus enabling human-like natural conversations. Our approach involves the following steps: 1) We create a Multi-Session Personalized Conversation (MSPC) dataset. This trains the model to ground the provided persona information effectively for a personalized response. 2) We control the model's persona-grounding level by adjusting the blending weights of the conversational datasets. Furthermore, we enrich the dataset with negative samples of persona subsets at the turn level for model fine-tuning. 3) To enhance both generation quality and the controllability and interpretability of persona-grounded generation, we use a turn label. This label indicates whether a turn is personalized or casual and serves as one of the inputs. Ultimately, we build a personalized dialogue system by fine-tuning an 18-billion parameter large language model (LLM). This LLM has a high level of understanding of conversation history, the ability to generate high-quality responses, and the capacity to focus effectively on given inputs, including users' personas. We also propose four grounding type categorizations to allow for analysis of the model's grounding patterns and detailed performance in subjective evaluation using sensibleness and specificity, which complements the objective evaluation based on groundedness, and fluency.
Since the release of the PersonaChat dataset With respect to the How, However, to the best of our knowledge, there has been no work on addressing all three WWH questions in PD system. Therefore, considering the crucial importance of addressing the WWH issues in a commercial system, we propose novel methods to tackle all three WWH questions, which are mission-critical for a commercial system. To develop a PD system that addresses the WWH problems, we construct a Korean Multi-Session Personalized Dialogue dataset, which we refer to as MSPD. This dataset includes an agent that performs several unique roles, setting it apart from other PD datasets. Primarily, the agent is required to remember user persona attributes, including any persona attributes introduced during the conversation. The agent must also produce personalized responses that are both reasonably and timely grounded on the persona. The goal of this dataset is to enable a model to learn the HOW and WHEN of grounding. On average, the dataset contains 4 sessions per episode, with each session consisting of 10-12 turns between the user and the agent. This format allows the agent to learn how to sustain a natural conversation flow, both within and between sessions. As illustrated in the red and blue text in Figure Alongside the MSPD, we incorporate a variety of informal dialogue datasets, referred to as D casual , to train a more balanced model capable of generating high-quality daily, knowledgebased, empathetic, and personalized conversations. D casual consists of a comprehensive collection of approximately 12.5 million utterances. We use carefully-curated Korean dialogue datasets available online As shown in Figure In this study, every input of the training dataset consists of user demographic information d (e.g. gender, age), a subset of user persona ρ m , which consists of persona attributes, and dialogue con- u and a refer to the user and agent, respectively, and the target response Given the input, which is in the format of (d, ρ m , c m ), we optimize the model via the conditional probability for personalized response y m and a loss function with Negative Log-Likelihood (NLL) loss that can be formulated as: where ℓ is the length of the target response. Blending a variety of conversational datasets has been shown to improve the diversity, empathy, and knowledge of a dialogue system, leading to more natural and engaging conversations We define a data instance as (c, r) where c and r are the dialogue context and target response, respectively as described in section 4.1. We blend datasets by instance according to blending weights for each dataset. In particular, in order to finely control the WWH problems with the blending weights (w), the MSPD dataset is divided into the agent's personalized responses (D MSPD-PR ) and non-personalized responses (D MSPD-NPR ) (e.g., agent's red and black colored responses in Figure where a set of Control of WHEN To address the WHEN problem, it is important to control a model's propensity to generate a persona-grounded response. Given a persona, an agent must generate personalized responses at the right time to create coherent and natural conversations. Generating persona-grounded responses too frequently leads to unnatural conversations. On the other hand, a model that generates personalized responses too infrequently does not sufficiently enhance a user's engagement with the agent. 
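A minimal sketch of the instance-level blending described above (dataset names and weight values are illustrative, not the paper's settings); the negative persona augmentation discussed below operates on the instances drawn from this mixture.

```python
import random

def blend_datasets(datasets: dict, weights: dict, num_instances: int) -> list:
    """Instance-level blending: each training instance (c, r) is drawn from one of the
    conversational datasets with probability proportional to its blending weight."""
    names = list(datasets)
    probs = [weights[n] for n in names]
    blended = []
    for _ in range(num_instances):
        name = random.choices(names, weights=probs, k=1)[0]
        blended.append(random.choice(datasets[name]))
    return blended

mix = blend_datasets(
    datasets={"casual": [("c1", "r1")], "mspd_personalized": [("c2", "r2")],
              "mspd_non_personalized": [("c3", "r3")]},
    weights={"casual": 0.6, "mspd_personalized": 0.25, "mspd_non_personalized": 0.15},
    num_instances=1000,
)
```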
In particular situations where a persona subset is retrieved by a retrieval model at each turn, the model should generate a casual response instead of generating a personalized response, resulting in a more natural flow. In order to learn this natural flow, we intentionally include a persona subset consisting of all contextually irrelevant persona attributes in the input for non-personalized responses. We call this a negative persona subset augmentation in our study. This augmentation "suppresses" the model's inclination to ground too frequently. However, too much augmentation can hinder the model's ability to ground, so we perform the negative persona subset augmentation only for data in D MSPD-NPR , not all casual datasets D casual . Control of WHAT When a model generates a persona-grounded response, it needs to determine the WHAT, i.e., the specific persona attribute on which to base the response. By providing both the ground-truth persona attributes, ρ pos , which are relevant to the response, and "negative" persona at-tributes, ρ neg1 , ..., ρneg k-1 , which are not relevant to the target response, the model learns to select the appropriate persona attribute(s) from multiple options given the current dialogue context. We refer to the process of adding multiple negative persona attributes to a ground truth persona as negative persona attribute augmentation. Finally, we vary the subset of the persona ρ in (1) for negative persona augmentation depending on the response type: , where Controllability In a commercial setting, it is often necessary to determine whether to generate a personalized response based on business logic. For instance, this might include deciding when the agent should proactively send a message to users. We can exert explicit control over the model's decision regarding the WHEN by employing Response Type Labels (RTL), denoted as <RTL>. First, we train the model to generate both a response and corresponding RTL token: P (<RTL>, y|d, ρ, c) in (1). We have pre-defined special tokens <PRTL> for personalized response type labels and <CRTL> for casual response type labels. Then, at inference time, we can insert the RTL to generate a response that corresponds to the response type: Explainability Error analysis is a crucial element in commercial systems for swift debugging and resolution of issues. However, this process can often be labor-intensive, typically involving a manual review of log data to evaluate the quality and appropriacy of generated personalized responses. Therefore, besides enhancing controllability, we also employ the Response Type Label (RTL) to improve the explainability of the model's generated responses. In this regard, the level of explainability provided by the RTL facilitates easier and more efficient error analysis, leading to improved service operation. To validate the efficacy of our proposed methods in building a controllable Personalized Dialogue (PD) system that addresses the WWH problems, we compare the performance of several models. These are enhanced with fine-tuned baseline models, such as dataset blending and negative sampling methods. Additionally, by comparing models trained with different blending weights, we evaluate the impact of the blending weight on the model's grounding propensity and fluency. 
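Putting the negative persona augmentation and the response-type labels above into one place, here is a minimal sketch of turn-level input construction. The <PRTL>/<CRTL> tokens follow the paper; the flat serialization, the <persona>/<context> markers, and the number of negatives k are assumptions.

```python
import random

def build_input(demog: str, persona_pool: list[str], positive: list[str],
                context: list[str], response_type: str, k: int = 3) -> str:
    """Sketch of input construction with negative persona augmentation and a
    Response Type Label (RTL) appended for training."""
    candidates = [p for p in persona_pool if p not in positive]
    negatives = random.sample(candidates, k=min(k, len(candidates)))
    if response_type == "personalized":
        # gold attribute(s) plus distractor ("negative") attributes
        persona_subset = positive + negatives[: max(0, k - len(positive))]
        rtl = "<PRTL>"
    else:
        # non-personalized MSPD turn: only contextually irrelevant attributes (negative subset)
        persona_subset = negatives
        rtl = "<CRTL>"
    random.shuffle(persona_subset)
    return " ".join([demog, "<persona>", " | ".join(persona_subset),
                     "<context>", " ".join(context), rtl])

print(build_input("female, 30s", ["likes hiking", "has a cat", "works night shifts"],
                  positive=["has a cat"], context=["How was your evening?"],
                  response_type="personalized"))
```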
The baseline models are all derived from our in-house 18B parameter pretrained language model, which shares the same architecture as GPT-3 Objective Evaluation We use perplexity (PPL) to measure the fluency of the responses generated by the model. In addition, the F1 score between the persona attributes and the generated response acts as a proxy to evaluate the model's ability to ground. We also calculate the P-coverage score, which measures how well the user persona is reflected in the generated responses We complement objective evaluation metrics with subjective human evaluation at both the session and turn levels, specifically employing the Sensibleness and Specificity (SS) score rated as either 0 or 1 at the turn level Under the grounding level, we have two subcategories: 1) Hard Grounding, where there's a direct and explicit association between y and the persona attribute, ρ pos , characterized by high expressive similarity. 2) Soft Grounding, where there's an indirect and implicit association between y and ρ pos , marked by low expressive similarity. Under the consistency category, we have two subcategories: 1) Consistent Grounding, where there's consistency between y and the given ρ pos . 2) Inconsistent Grounding, where there's an inconsistency between y and the given ρ pos . Table Table As shown in Table Achieving natural and engaging conversations requires careful consideration of the trade-off between the model's inclination to ground and response fluency. To control the WWH balance, we can adjust the blending weights for datasets with different persona augmentations and select appropriate values for PPL and F1 scores. We set a F1 score of '1' as the minimum threshold for the model's grounding tendency, as we have consistently observed that models with F1 scores below 1 seldom attempt grounding in conversations. This approach ensures that optimal PD systems maintain a balance between a sufficient quantity of grounded responses and a high fluency score. As can be seen in We also evaluated explainability by analyzing whether the generated Response Type Labels (RTL) accurately reflect the model's decisions on persona grounding. For this purpose, we sampled 90 generated responses for each response type. The accuracy of the generated RTL for the casual and the personalized response type wa 96.7% and 98.8%, respectively. This confirms that generating the RTL provides a reliable explanation for the model's decision on the WHEN problem. The high average (over 0.88) scores for both turn and session levels in Table The RTL generation model (M odel 4 ) shows a lower inclination to ground, yet it had a better badsensible ratio. Therefore, in accordance with the objective evaluation result, we can confirm that generating both the response and the RTL can have a positive effect on fluency, even though there is no significant improvement in terms of session evaluation. We confirmed a positive correlation between fluency, as measured by PPL, and human sensibleness judgment. M odel 4 exhibited a decrease of 0.49 in PPL compared to M odel 3 , indicating improved fluency in Table
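As a concrete reading of the objective grounding proxy mentioned above, here is a minimal sketch of a token-overlap F1 between the persona attributes and a generated response; whitespace tokenization and pooling all attributes into one bag are assumptions.

```python
from collections import Counter

def grounding_f1(response: str, persona_attributes: list[str]) -> float:
    """Unigram precision/recall/F1 between a generated response and the persona text,
    used as a rough proxy for how strongly the response is persona-grounded."""
    resp = Counter(response.lower().split())
    pers = Counter(" ".join(persona_attributes).lower().split())
    overlap = sum((resp & pers).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(resp.values())
    recall = overlap / sum(pers.values())
    return 2 * precision * recall / (precision + recall)

print(grounding_f1("I remember you love hiking on weekends", ["loves hiking", "has a cat"]))
```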
Thesauruses for Prepositional Phrase Attachment
Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.
Prepositional phrases are an interesting example of syntactic ambiguity and a challenge for automatic parsers. The ambiguity arises whenever a prepositional phrase can modify a preceding verb or noun, as in the canonical example I saw the man with the telescope. In syntactic terms, the prepositional phrase attaches either to the noun phrase or the verb phrase. Many kinds of syntactic ambiguity can be resolved using structural information alone Fortunately it is possible to do well at this task just by considering the lexical preferences of the words making up the PP. Lexical preferences describe the tendency for certain words to occur together or only in specific constructions. For example, saw and telescope are more likely to occur together than man and telescope, so we can infer that the correct attachment is likely to be verbal. The most useful lexical preferences are captured by the quadruple (v, n 1 , p, n 2 ) where v is the verb, n 1 is the head of the direct object, p is the preposition and n 2 is the head of the prepositional phrase. A benchmark dataset of 27,937 such quadruples was extracted from the Wall Street Journal corpus by A major problem faced by any statistical attachment algorithm is sparse data, which occurs when plausible PPs are not well-represented in the training data. For example, if the observed frequency of a PP in the training is zero then the maximum likelihood estimate is also zero. Since the training corpus only represents a fraction of all possible PPs, this is probably an underestimate of the true probability. An appealing course of action when faced with an unknown PP is to consider similar known examples instead. For example, we may not have any data for eat pizza with fork, but if we have seen eat pasta with fork or even drink beer with straw then it seems reasonable to base our decision on these instead. Similarity is a rather nebulous concept but for our purposes we can define it to be distributional similarity, where two words are considered similar if they occur in similar contexts. For example, pizza and pasta are sim-ilar since they both often occur as the direct object of eat. A thesaurus collects together lists of such similar words. The first step in constructing a thesaurus is to collect co-occurrence statistics from some large corpus of text. Each word is assigned a probability distribution describing the probability of it occurring with all other words, and by comparing distributions we can arrive at a similarity score. The corpus, co-occurrence relationships and distributional similarity metric all affect the nature of the final thesaurus. There has been a considerable amount of research comparing corpora, co-occurrence relations and similarity measures for general-purpose thesauruses, and these thesauruses are often compared against wide-coverage and general purpose semantic resources such as Word-Net. In this paper we examine whether it is useful to tailor the thesaurus to the task. General purpose thesauruses list words that tend to occur together in free text; we want to find words that behave in similar ways specifically within prepositional phrases. To this end we create a PP thesaurus using existing similarity metrics but using a corpus consisting of automatically extracted prepositional phrases. A thesaurus alone is not sufficient to solve the PP attachment problem; we also need a model of the lexical preferences of prepositional phrases. 
Here we use the back-off model of Collins. In Section 2 we cover related work in PP attachment and smoothing techniques, with a brief comparison between similarity-based smoothing and the more common (for PP attachment) class-based smoothing. Section 3 describes Collins' PP attachment model and our thesaurus-based smoothing extensions. Section 4 discusses the thesauruses used in our experiment and describes how the specialist thesaurus is constructed. Experimental results are given in Section 5, where we show statistically significant improvements over the baseline model using generic thesauruses. Contrary to our hypothesis, the specialist thesaurus does not lead to significant improvements, and we discuss possible reasons why it underperforms on this task.
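For reference, here is a hedged sketch of a Collins-style back-off attachment decision. The exact back-off stages and counts appear in the paper's figure, which is not reproduced in the text, so the stages below are the standard ones and should be read as an assumption.

```python
from collections import Counter

def attach(v, n1, p, n2, noun_counts: Counter, total_counts: Counter) -> str:
    """Back off from the full quadruple to triples, pairs, and the preposition alone;
    noun_counts[t] counts occurrences of tuple t with noun attachment, total_counts[t]
    counts all occurrences of t."""
    stages = [
        [(v, n1, p, n2)],                         # 1. full quadruple
        [(v, n1, p), (v, p, n2), (n1, p, n2)],    # 2. triples containing p
        [(v, p), (n1, p), (p, n2)],               # 3. pairs containing p
        [(p,)],                                   # 4. preposition alone
    ]
    for tuples in stages:
        denom = sum(total_counts[t] for t in tuples)
        if denom > 0:
            p_noun = sum(noun_counts[t] for t in tuples) / denom
            return "noun" if p_noun >= 0.5 else "verb"
    return "noun"                                 # 5. default: noun attachment

decision = attach("saw", "man", "with", "telescope", Counter(), Counter())  # "noun" (default)
```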
Early work on PP attachment disambiguation used strictly syntactic or high-level pragmatic rules to decide on an attachment This marked a flowering in the field of PP attachment, with a succession of papers bringing the whole armoury of machine learning techniques to bear on the problem. Smoothing for statistical models involves adjusting probability estimates away from the maximum likelihood estimates to avoid the low probabilities caused by sparse data. Typically this involves mixing in probability distributions that have less context and are less likely to suffer from sparse data problems. For example, if the probability of an attachment given a PP p(a|v, n 1 , p, n 2 ) is undefined because that quadruple was not seen in the training data, then a less specific distribution such as p(a|v, n 1 , p) can be used instead. A wide range of different techniques have been proposed An alternative but complementary approach is to mix in probabilities from distributions over "similar" contexts. This is the idea behind both similarity-based and class-based smoothing. Class-based methods cluster similar words into classes which are then used in place of actual words. For example the class-based language model of This helps solve the sparse data problem since the number of classes is usually much smaller than the number of words. Class-based methods have been applied to the PP attachment task in several guises, using both automatic clustering and hand-crafted classes such as WordNet. Li and Abe (1998) use both WordNet and an automatic clustering algorithm to achieve 85.2% accuracy on the WSJ dataset. The maximum entropy approach of In contrast, similarity-based techniques do not discard any data. Instead the smoothed probability of a word is defined as the total probability of all similar words S(w) as drawn from a thesaurus, weighted by their similarity α(w, w ). For example, the similarity-based language model of where w 1 ∈S(w1) α(w 1 , w 1 ) = 1. The similarity function reflects how often the two words appear in the same context. For example, Lin's similarity metric Our use of specialist thesauruses for this task is also novel, although in they have been used in the somewhat related field of selectional preference acquisition by ,v,n1,p,n2) f 5. Default: noun attachment Figure Firstly we compare the different PP similarity functions. Figure On the other hand, if β is set quite low (for example Figure The reduction in the error rate with the single best policy on the development set is somewhat less than with the smoothed frequency models, and the results more errorprone and sensitive to the choice of k. These models are more likely to be unlucky with a choice of feature than with the smoothed frequencies. The training corpus is created from 3.3 million prepositional phrases extracted from the British National Corpus. These PPs are identified semi-automatically using a version of the weighted GR extraction scheme described in We use the similarity metric described in For our experiments we use the Wall Street Journal dataset created by A thesaurus providing better neighbours should do better on this task. Figure Clearly both generic thesauruses consistently outperform the specialist thesaurus. The latter tends to produce neighbours with have less obvious semantic similarity, for example providing pour as the first neighbour of fetch. 
We hypothesised that using syntactic rather than semantic neighbours could be desirable, but in this case it often generates contexts that are unlikely to occur: pour price of profit as a neighbour of fetch price of profit, for example. Although this may be a flaw in the approach, we may simply be using too few contexts to create a reliable thesaurus. Previous research has found that using more data leads to better quality thesauruses The WASPS and Lin models produce statistically significant (P < 0.05) improvements over the vanilla Collins model using a paired t-test with 10-fold crossvalidation on the entire dataset On the face of it, these are not resounding improvements over the baseline, but this is a very hard task. An inspection of the data shows that many of the remaining errors are due to poor neighbouring PPs being used for smoothing. For example, the PP in entrust company with cash modifies the verb, but no matching quadruples are present in the training data. The only matching (n 1 , p, n 2 ) triple using WASPS is (industry, for, income), which appears twice in the training data modifying the noun. The model therefore guesses incorrectly even though the thesaurus is providing what appear to be semantically appropriate neighbours. Another example is attend meeting with representative, where the (v, p, n 2 ) triple (talk, with, official) convinces the model to incorrectly guess verb attachment. Part of the problem is that words in the PP are replaced independently and without consideration to the remaining context. However we had hoped the specialist thesaurus might alleviate this problem by providing neighbours that are more appropriate for this specific task. Finding good neighbours for verbs is clearly more difficult than for nouns since subcategorisation and selectional preferences also play a role. Our results show that the similarity-based smoothing of frequency estimates significantly improves an already respectable probabilistic PP attachment model. However our hypothesis that a task-specific thesaurus would outperform a generic thesaurus was not borne out by our experiments. The neighbours provided by the specialist thesaurus are not as informative as those supplied by the generic thesauruses. Of course, this negative result is naturally good news for developers of generic thesauruses. We described ways of finding and scoring distributionally similar PPs. A significant number of errors in the final model can be traced to the way individual words in the PP are replaced without regard to the wider context, producing neighbouring PPs that have conflicting attachment preferences. The specialist thesaurus was not able to overcome this problem. A second finding is that distributional similarity scores provided by all thesauruses weight dissimilar neighbours too highly, and more aggressive weighting schemes are better for smoothing. Our aim is to apply similarity-based smoothing with both generic and specialist thesauruses to other areas in lexicalised parse selection, particularly other overtly lexical problems such as noun-noun modifiers and conjunction scope. Lexical information has a lot of promise for parse selection in theory, but there are practical problems such as sparse data and genre effects
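As a concrete illustration of the similarity-based smoothing evaluated in this paper, here is a minimal sketch using a rank-based neighbour weighting; the exact weighting scheme and the handling of the other PP slots are assumptions.

```python
def smoothed_count(tuple_counts: dict, v: str, n1: str, p: str, n2: str,
                   neighbours: dict, beta: float = 2.0) -> float:
    """Smooth the count of a sparse PP quadruple (v, n1, p, n2) by adding counts of
    quadruples formed by swapping n1 with its thesaurus neighbours, weighting the
    neighbour at rank r by 1 / r**beta. Only n1 is replaced here; the model in the
    paper replaces each word of the PP independently."""
    total = float(tuple_counts.get((v, n1, p, n2), 0))
    for rank, n1_alt in enumerate(neighbours.get(n1, []), start=1):
        total += tuple_counts.get((v, n1_alt, p, n2), 0) / rank ** beta
    return total

counts = {("eat", "pasta", "with", "fork"): 3}
thesaurus = {"pizza": ["pasta", "salad"]}          # ranked neighbour lists
print(smoothed_count(counts, "eat", "pizza", "with", "fork", thesaurus))   # 0 + 3/1 = 3.0
```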
Gender Inflected or Bias Inflicted: On Using Grammatical Gender Cues for Bias Evaluation in Machine Translation
Neural Machine Translation (NMT) models are state-of-the-art for machine translation. However, these models are known to have various social biases, especially gender bias. Most of the work on evaluating gender bias in NMT has focused primarily on English as the source language. For source languages different from English, most of the studies use gender-neutral sentences to evaluate gender bias. However, practically, many sentences that we encounter do have gender information. Therefore, it makes more sense to evaluate for bias using such sentences. This allows us to determine if NMT models can identify the correct gender based on the grammatical gender cues in the source sentence rather than relying on biased correlations with, say, occupation terms. To demonstrate our point, in this work, we use Hindi as the source language and construct two sets of gender-specific sentences: OTSC-Hindi and WinoMT-Hindi that we use to evaluate different Hindi-English (HI-EN) NMT systems automatically for gender bias. Our work highlights the importance of considering the nature of language when designing such extrinsic bias evaluation datasets.
Various models trained to learn from data are susceptible to picking up spurious correlations in their training data, which can lead to multiple social biases. In NLP, such biases have been observed in different forms: Even state-of-the-art NMT models develop such biases This problem also exists for HI-EN Machine Translation Prior research evaluating gender bias in machine translation has predominantly centered around English as the source language Therefore, in this work, we propose to evaluate NMT models for bias using sentences with grammatical gender cues of the source language. This allows us to ascertain whether NMT models can discern the accurate gender from context or if they depend on biased correlations. In this work, we contribute the following : • Using Hindi as source language in NMT, we highlight the limitations of existing bias evaluation methods that use gender-neutral sentences. • Additionally, we propose to use context-based gender bias evaluation using grammatical gender markers of the source language. We construct two evaluation sets for bias evaluation of NMT models: Occupation Testset with Simple Context (OTSC-Hindi) and WinoMT-Hindi. • Using these evaluation sets, we evaluate various blackbox and open-source HI-EN NMT models for gender bias. • We highlight the importance of creating such benchmarks for source languages with expressive gender markers. Code and data are publicly available
NMT Models : We test HI-EN NMT models which are widely popular and represent state-ofthe-art in both commercial or academic research : (1) IndicTrans Cho et al. ( For translation into English, TGBI uses the fraction of sentences in a sentence set S translated as "masculine", "feminine" or "neutral" in the target , i.e., p m , p f and p n , respectively to calculate P S as : P i is calculated for each sentence set S i (S 1 to S n ) to finally calculate TGBI = avg(P i ). Using lists from Often, using a metric like TGBI is not very practical. For example, when the original intent is not gender-neutral but constraints of the source language make it gender-neutral, then showing all versions We construct two sets of sentences, one with a simple gender-specified context and another with a more complex context. In creating these sets, we focus on the gender markers of the source language, i.e. Hindi. Also, we use template sentences which can help to automatically evaluate bias without using additional tools at the target side. Escudé Font and Costa-jussà (2019) created a test set with custom template sentences to evaluate the ". The possessive pronoun " " or " " and the verb " " or " " specify friend's gender. Here, the pronoun "u " references speaker's friend. gender bias for English to Spanish Translation. Inspired by this template, we create a Hindi version with grammatical gender cues: " { , } । " (I have known [him/her] for a long time, my friend works as a [occupation].) Figure (meri)" is used for female friend. Based on the use of " (mera)" or " (meri)", the verb " (karta)" and " (karti)" is used for a male friend and female friend, respectively. So in this template, there are four possibilities based on the gender of the speaker and the gender of the speaker's friend. Using 1071 occupations, we construct these four sets with 1071 sentences each and check the percentage of sentences where the speaker's friend is translated as male or female. This is because English translation only specifies the gender of the friend while the gender of the speaker is lost in translation. In the real world, NMT models deal with more complex sentences: long sentences with further context, Figure The lawyer shouts at the secretary as he got angry. The lawyer yelled at the secretary there because she had done a bad job. The lawyer shouts at the secretary as she got angry. The lawyer yelled at the secretary there because he had done a bad job. However, since it is in English, using it for evaluating bias for other source languages is not possible. Therefore we contextualize this test set for the evaluation of bias in HI-EN Translation by manually creating "WinoMT-Hindi", which consists of 704 WinoBias-like sentences in Hindi, but modified to include gender cues of the language, mainly: gender-inflected adjectives, postpositions, and verbs. Construction of "WinoMT-Hindi" is explained in Figure We don't need reference translations in English, as automatic evaluation is possible. Due to the nature of our source sentences, we can mark the gender of the target by simply checking for the presence of male pronouns (he, him or his) or female pronouns (she or her) in the translation. Interestingly, we also observe that few sentences are translated into gender-neutral form. For example, the sentence: " Ê ÚÚ e ky " (Secretary asks mover what he should do to help) is translated as "The secretary asks the mover what to do to help" by Google Translate. 
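Because the templates fix which participant's gender is expressed in English, the automatic check described above reduces to pronoun matching. A minimal sketch follows (the pronoun sets are as listed in the text; the regex tokenization is an assumption).

```python
import re

MALE, FEMALE = {"he", "him", "his"}, {"she", "her"}

def predicted_gender(translation: str) -> str:
    """Label an English translation as male / female / neutral by checking which
    pronouns it contains, as done for the OTSC-Hindi and WinoMT-Hindi templates."""
    tokens = set(re.findall(r"[a-z']+", translation.lower()))
    has_m, has_f = bool(tokens & MALE), bool(tokens & FEMALE)
    if has_m and not has_f:
        return "male"
    if has_f and not has_m:
        return "female"
    return "neutral"   # neither pronoun present, or both (ambiguous)

outputs = ["I have known her for a long time, my friend works as a doctor.",
           "The secretary asks the mover what to do to help."]
share = {g: sum(predicted_gender(o) == g for o in outputs) / len(outputs)
         for g in ("male", "female", "neutral")}
```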
While there is an increased interest in promoting Gender-Neutral translation for inclusivity For gender bias evaluation, we use the metrics: Acc, ∆ G and ∆ S given by The results are shown in Table The problem with the TGBI metric is that it may not accurately capture the true fairness of an NMT system since evaluation is only done on genderneutral sentences. The results are shown in Table The results are shown in We also observe that ∆ S values are very low for all NMT systems. There are two potential reasons. First, it is observed that these HI-EN NMT systems strongly prefer masculine outputs irrespective of occupation stereotypes. Hence they give the "masculine default" in most cases leading to a similar performance on pro-stereotypical and antistereotypical sentences. Another reason can be the poor contextualisation of occupation stereotype. We rely on stereotype labels provided by original English occupation lists by However, WinoMT-Hindi provides a way to generalise and motivate the creation of such evaluation benchmarks for other languages. Many works have focused on evaluating gender translation accuracy by creating various benchmarks. WinoMT benchmark by Other benchmarks include MuST-SHE Bias evaluation of NMT models on source lan-guages other than English has mainly focused on the translation of gender-neutral sentences. To conclude our study, we highlighted the need for contextualising NMT bias evaluation for non-English source languages, especially for languages that capture gender-related information in different forms. We demonstrated this using Hindi as a source language by creating evaluation benchmarks for HI-EN Machine Translation and comparing various state-of-the-art translation systems. In future, we plan to extend our evaluation to more languages and use natural sentences for evaluation without following a particular template. We are also looking forward to developing evaluation methods that are more inclusive of all gender identities.
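For reference, here is a hedged sketch of the evaluation metrics referenced above, assuming the usual WinoMT-style definitions (overall gender-translation accuracy, the male-female F1 gap ∆G, and the pro- minus anti-stereotypical accuracy gap ∆S); the exact formulas are not reproduced in the text.

```python
def winomt_metrics(records: list[dict]) -> dict:
    """Each record: {"gold": "male"/"female", "pred": ..., "stereotype": "pro"/"anti"}.
    Acc = overall accuracy, Delta_G = F1(male) - F1(female),
    Delta_S = Acc(pro-stereotypical) - Acc(anti-stereotypical)."""
    def acc(rs):
        return sum(r["pred"] == r["gold"] for r in rs) / len(rs) if rs else 0.0
    def f1(gender):
        tp = sum(r["pred"] == gender and r["gold"] == gender for r in records)
        fp = sum(r["pred"] == gender and r["gold"] != gender for r in records)
        fn = sum(r["pred"] != gender and r["gold"] == gender for r in records)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    pro = [r for r in records if r["stereotype"] == "pro"]
    anti = [r for r in records if r["stereotype"] == "anti"]
    return {"Acc": acc(records), "Delta_G": f1("male") - f1("female"),
            "Delta_S": acc(pro) - acc(anti)}

records = [{"gold": "female", "pred": "male", "stereotype": "anti"},
           {"gold": "male", "pred": "male", "stereotype": "pro"}]
print(winomt_metrics(records))
```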
SDR: Efficient Neural Re-ranking using Succinct Document Representation
BERT based ranking models have achieved superior performance on various information retrieval tasks. However, the large number of parameters and complex self-attention operations come at a significant latency overhead. To remedy this, recent works propose late-interaction architectures, which allow precomputation of intermediate document representations, thus reducing latency. Nonetheless, having solved the immediate latency issue, these methods now introduce storage costs and network fetching latency, which limit their adoption in real-life production systems.
Information retrieval (IR) systems traditionally comprise of two stages: retrieval and ranking. Given a user query, the role of the retrieval stage is to quickly retrieve a set of candidate documents * Both authors contributed equally to the paper. † Work carried out while working at Amazon. BERT SPLIT is a distilled late-interaction model with reduced vector width and no compression ( § 4.2). For MRR@10 above 0.35, SDR is 4x-11.6x more efficient compared to the baseline. from a (very large) search index. Retrieval is typically fast but not accurate enough; in order to improve the quality of the end result for the user, the candidate documents are re-ranked using a more accurate but computationally expensive algorithm. Neural approaches have achieved the state of the art ranking performance in IR applications To rank k documents, the ranker is called k times with an input of the form (query, document), where the query is the same, but the document is different. Several works Precomputing document representations has shown to significantly reduce latency and at the same time retain comparable scores to BERT models In this work, we propose Succinct Document Representation (SDR), a general scheme for compressing document representations. It enables lateinteraction rankers to be efficient in both latency and storage, while maintaining high ranking quality. SDR is suitable for any ranking scheme that uses contextual embeddings, and achieves extreme compression ratios (2-3 orders of magnitude) with little to no impact on retrieval accuracy. SDR consists of two major components: (1) embedding dimension reduction using an autoencoder with side information and (2) distribution-optimized quantization of the reduced-dimension vectors. In SDR, the autoencoder consists of two subnetworks: an encoder that reduces the vector's dimensions and a decoder that reconstructs the compressed vector. The encoder's output dimension represents the tradeoff between reconstruction fidelity and storage requirements. To improve the compression-reliability tradeoff, we leverage static token embeddings, which are available since the ranker has access to the document text (as it needs to render it to the user), and are computationally cheap to obtain. We feed these embeddings to both the encoder and decoder as side information, allowing the autoencoder to focus more on storing "just the context" of a token, and less on its original meaning that is available in the static embeddings. Ablation tests verify that adding the static vectors significantly improves the compression rates for the same ranking accuracy. Since data storage is measured in bits rather than floating-point numbers, SDR uses quantization techniques to reduce storage size further. Given that it is hard to evaluate the amount of information in each of the encoder's output dimensions, we perform a randomized Hadamard transform on the vectors, resulting in (1) evenly spread information across all coordinates and (2) transformed vectors that follow a Gaussian-like distribution. We utilize known quantization techniques to represent these vectors using a small number of bits, controlling for the amount of quantization distortion. Existing late-interaction schemes either ignore the storage overhead, or consider basic compression techniques, such as a simple (1 layer) autoencoder and float16 quantization. 
However, this is insufficient to reach a reasonable storage size. To summarize, here are the contributions of this work: • We propose the Succinct Document Representation (SDR) scheme for compressing the document representations required for fast Transformer-based rankers. The scheme is based on a specialized autoencoder architecture and subsequent quantization. • For the MSMARCO passage retrieval task, SDR shows compression ratios of 121x with no noticeable decrease in ranking performance. Compared to existing approaches for producing compressed representations, our method attains better compression rates (between 4x and 11.6x) for the same ranking quality. Similar results are demonstrated on the TREC CAR dataset. • We provide a thorough analysis of the SDR system, showing that the contribution of each of the components to the compression-ranking effectiveness is significant.
Late-interaction models. The idea of running several transformer layers for the document and the query independently, and then combining them in the last transformer layers, was developed concurrently by multiple teams: PreTTR Several other works Compressed embeddings. Our work reduces storage requirements by reducing the number of bits per floating-point value. Quantization gained attention and success in reducing the size of neural network parameters Our work is based on the late-interaction architecture Our compression scheme for the document representations consists of two sequential steps, (i) dimensionality reduction and (ii) block-wise quantization, described in § 3.1 and § 3.2 respectively. AutoEncoders with Side Information (AESI) To compress document representations, we reduce the dimensionality of token representations (i.e., the output of BERT's L-th layer) using an autoencoder. Standard autoencoder architectures typically consist of a neural network split into an encoder and a decoder: the encoder projects the input vector into a lower-dimension vector, which is then reconstructed back using the decoder. Our architecture, AESI, extends the standard autoencoder by using the document's text as side information to both the encoder and decoder. Such an approach is possible since, no matter how the document scores are computed, re-ranking systems have access to the document's text in order to render it back to the user. In the rest of this section, we add the precise details of the AESI architecture. Side Information. In line with our observation that the ranker has access to the document's raw text, we propose utilizing the token embedding information, which is computed by the embedding layer used in BERT's architecture. The token embeddings encode rich semantic information about the token itself; however, they do not fully capture the context in which they occur; hence, we refer to them as static embeddings. For example, through token embeddings, we cannot disambiguate between the different meanings of the token bank, which can refer to either a geographical location (e.g., "river bank") or a financial institution, depending on the context. Static embeddings are key for upper BERT layers, which learn the contextual representation of tokens via the self-attention mechanism. We use the static embeddings as side information to both the encoder and decoder parts of the autoencoder. This allows the model to focus on encoding the distilled context, and less on the token information since it is already provided to the decoder directly. AESI Approach. For a token whose representation we wish to compress, our approach proceeds as follows. We take the L-th layer's output contextual representation of the token together with its static embedding and feed both inputs to the autoencoder. The information to be compressed (and reconstructed) is the contextual embedding, and the side-information, which aids in the compression task, is the static embedding. The decoder takes the encoder output, along with the static embedding, and attempts to reconstruct the contextual embedding. Figure AESI approach has two parameters that are determined empirically. First, the L-th transformer layer of the contextual representation provided as input, which has a direct impact on latency 3 . Second, the size of the encoder's output directly impacts the compression rate and thus storage costs. 
Encoding starts by concatenating the input vector (i.e., the output of layer L, the vector we compress) and the static token embedding (i.e., the output of BERT's embedding layer), and then passes the concatenated vector through an encoder network, which outputs a c-dimensional encoded vector. Decoding starts by concatenating the encoded vector with the static token embedding, then passes the concatenated vector through a decoder layer, which reconstructs the input vector. Specifically, we use a two-layer dense network for both the encoder and the decoder, which can be written using the following formula: Figure bedding (the output of the L-th layer), u ∈ R h is the static token embedding (the output of the embedding layer, which is the input to BERT's layer 0 and includes token position embeddings and type embeddings), and u; v means concatenation of these vectors. h is the dimension of token embeddings (e.g., 384), i is the intermediate autoencoder size, and c is the dimension of the projected (encoded) vector. gelu(•) is an non-linear activation function Storing the compressed contextual representations in a naive way consumes 32 bits (float32) per coordinate per token, which is still costly. To further reduce storage overhead, we propose to apply a quantization technique, which uses a predetermined B bits per coordinate. However, different coordinates and different tokens have different importance and possibly also different scales, so using the same number of bits and same quantization threshold for all of them increases the quantization error. To remedy this issue, we follow an approach similar to EDEN quantization Efficiently applying the Hadamard transform requires the size of the input to be a power of two. In addition, the input dimension should be large enough (specifically, larger than the output of AESI) so that information can be shuffled effectively. Therefore, we concatenate the AESI vectors of all tokens from a single document, then segment it to a larger block size (we use 128), padding the last block with zeros when necessary. The padding slightly increases space requirements and is considered when evaluating the compression efficiency. In this section we describe the datasets used to evaluate the competing approaches for ranking documents given a query. Next, we describe the baseline and the different configurations of SDR with emphasis on how we measure the compression ratio. To evaluate the effectiveness of our proposed approach (SDR) and the competing baseline, we consider two information retrieval datasets, each with different characteristics. MSMARCO passage re-ranking In this task (1) MSMARCO-DEV, the development set for the MSMARCO passage reranking task, which consists of 6,980 queries. On average, each query has a single relevant passage, and other passages are not annotated. The models are measured using the mean reciprocal rank metric (MRR@10). (2) TREC 2019 DL Track. Here we consider the test queries from TREC 2019 DL Track passage reranking dataset. Unlike MSMARCO-DEV, there are multiple passages annotated for each query with graded relevance labels (instead of binary labels), allowing us to use the more informative nDCG@10 metric. Due to the excessive annotation overhead, this dataset consists of just 200 queries, so results are noisier compared to MSMARCO-DEV. 
TREC Complex Answer Retrieval (CAR) is a dataset For both datasets, in addition to the quality metrics, we also measure the Compression Ratio (CR) as the amount of storage required to store the token embeddings when compared to the baseline model. E.g., CR = 10 implies storage size that is one tenth of the baseline vectors. Our algorithm is based on the late-interaction architecture We trained autoencoder variants on a random subset of 500k documents to reduce training time. We incorporate the quantization overhead into the computation of the compression ratios, including metadata and the overhead of padding (cf. Appendix A). In the following sections, we denote the SDR variants as "AESI-{c}-{B}b" where {c} is replaced with the width of the encoded vector and {B} is replaced with the number of bits in the quantization scheme. When discussing AESI with no quantization, we simply write "AESI-{c}". To measure end to end latency, we configured an OpenSearch In this section, we present the end to end latency results ( § 5.1), show compression ratios and quality tradeoff of the SDR scheme ( § 5.2). We then examine how the proposed autoencoder ( § 5.3) compares with other baselines and present additional measurements ( § 5.4). Table We also consider variants of the algorithms where the documents are pre-tokenized, and the tokenization output is retrieved instead of computing at runtime (marked as +tok in the table). This further improves the ranking latency at the expense of a slight increase in index size. Note that the baseline does not use the raw text and therefore does not benefit from precomputed tokens. Table In Appendix D we explore additional configurations and show that the baseline with 52 features reaches the same quality as SDR-16-6b. However, we do not measure end-to-end latency for this case due to the excessive storage size and indexing time. Note that using 52 features for the baseline is expected to have a negative impact on retrieval latency, making the benefits of SDR even more pronounced. Table AESI-16-6b reduces storage requirements by 121x, while at the same time showing no significant ranking performance drop. Using AESI-16-6b, a document's embedding can be stored with only 947 bytes and the entire MSMARCO collection can be stored within 8.6GB. There are several advantages of fitting the entire collection's representation into the main memory of the hosting machine, allowing for fast access, further fine-tuning, etc. If further compression rates are required, AESI-8-5b uses just 5 bytes per token, reaching a compression rate of 277x and 487 bytes per document on average. At this level of compression, the entire MSMARCO corpus fits in 3.8GB. The MRR@10 drop is noticeable (0.0119) but still quite low. Finally, for TREC19-DL, the impact of compressing token embeddings is less evident. Only in the most extreme cases such as AESI-4-4b we see a significant drop in nDCG@10 performance. These results demonstrate that the performance drop is very small, showing the effectiveness of our method. To better understand the impact of the autoencoder, we present MRR@10 results as a function of autoencoder dimensions (i.e., number of floats stored per token) and with the different autoencoder configurations. In addition to the 2-layer AESI architecture we described in § 3.1 (AESI-2L), we consider the following variations: AutoEncoder with 2 Layers (AE-2L). Standard 2-layer autoencoder with gelu activation. This is the same as AESI, only without the side information. 
AutoEncoder with 1 Layer (AE-1L). Standard autoencoder with a single dense layer in the encoder and decoder. AESI with 1 Layer (AESI-1L). AESI with a single dense encoder and decoder layer. ). Provides side information to the decoder but not the encoder. To reduce measurement overhead, we ran the experiment only over the MSMARCO dataset. In addition, we took only the top 25 BERT SPLIT passages for each query, denoted MSMARCO-DEV-25, which has a negligible impact on the results. Figure Quantization Techniques we compare the quantization technique we use to several other techniques, including Deterministic Rounding Our scheme uses a fixed number of bits per coordinate, which is essential for performance. However, variable-rate compression can further reduce storage. We used rate-distortion theory (from the information theory field) to upper bound the benefits of such techniques by 11%, which does not seem to justify the added system complexity (cf. Appendix B). To better understand the impact of side information, we measure the error rate between an input vector and its reconstructed vector (i.e., after encoding and decoding). As expected, in practically all cases, adding the side information reduces error rate compared to a 2-layer autoencoder (AE-2L) with the same code dimension. In IR, the document frequency of a token is known to be negatively correlated with the token's importance. We found that the error rate for AE-2L decreases with frequency, while the error rate for AESI increases with frequency. This shows that the AESI scheme can better focus on tokens that are important for ranking. A possible explanation for this phenomena is that the static embeddings for infrequent tokens are more informative (i.e., more helpful as side information) compared to static embeddings for frequent tokens (e.g., 'the'). We also found AESI excels more in compressing nouns, verbs, and adjectives, while AE-2L excels more in compressing punctuation, determiners, and adpositions. Again, this demonstrate that the static embeddings is most helpful in encoding tokens that are crucial for ranking. The details of this evaluation are provided in Appendix C. In this paper, we proposed a system called SDR to solve the storage cost and latency overhead of existing late-interaction transformer based models for passage re-ranking. The SDR scheme uses a novel autoencoder architecture that uses static token embeddings as side information to improve encoding quality. In addition, we explored different quantization techniques and showed that the recently proposed EDEN performs well in our use case and presented extensive experimentation. Overall, the SDR scheme reduces pre-computed document representation size by 4x-11.6x compared to a baseline that uses existing approaches. In future work, we plan to continue investigating means to reduce pre-computed document representation size.We believe that additional analysis of BERT's vector and their interaction with the context would be fundamental in such an advancement. Definition 1 Definition 2 where H 2 k is a normazlized Walsh-Hadmard matrix, and D is a diagonal matrix whose diagonal entries are i.i.d. Rademacher random variables (i.e., taken uniformly from {+1, -1}). While H is randomized and thus defines a distribution, when D is known, we abuse the notation and define the inverse Hadamard transform as The quantization operates as follows. 
Given a vector, denoted x ∈ R d , we first precondition it using a randomized Hadamard transform, H, and normalize by multiplying by √ d / x 2 . There are several desired outcomes of this transform To retrieve an estimate of the original vector, we perform the same steps in reverse. We replace Algorithm 1 B-bits Vector Quantization (EDEN) the vector of cluster assignments X with a vector ŷ containing each assigned cluster's centroid, denormalize, and then apply the inverse randomized Hadamard transform, H -1 . To avoid encoding D directly, we recreate it using shared randomness Block-wise Quantization. The AESI encoder reduces the dimension of the contextual embeddings from hundreds (e.g., 384) to a much smaller number (e.g., 12). On the other hand, the randomized Hadamard transform's preconditioning effect works best in higher dimensions To study the impact of quantization, we fix AESI-16 as our baseline and measure how different quantization strategies and number of bits affect the MRR@10 score. Note that we do not measure quantization over the baseline BERT SPLIT since it can only achieve a compression ratio of up to 32x per coordinate (using 1 bit per coordinate). In addition to EDEN (Appendix A, Algorithm 1), we consider the following quantization strategies: Deterministic Rounding (DR) Figure The current quantization scheme requires padding to full 128 blocks. For AESI with a small code size, the padding overhead may reach 10% -20% percent. In addition, we send a normalization value per 128-block, which we currently send as a float32 value, adding 4% -5% additional overhead. Padding can be reduced by treating the last 128-block separately, e.g., applying a method that does not require Hadamard transform. Normalization overhead can be reduced, e.g., by sending normalization factors as float16 instead of full float32. However, such solutions complicate the implementation while providing limited storage benefits, hence, they were not explored in the context of this paper. Beyond Scalar Quantization. Scalar quantization using a fixed number of bits is a suboptimal technique in general since it does not allocate fewer bits for more frequent cases. Entropy coding In order to estimate the potential gains of all these methods combined, we turn to information theory, and rate-distortion theory in particular, which studies the optimal tradeoffs between distortion and compression rate In the body of the paper, we showed the effectiveness in ranking and utility in compression rates of AESI over AE architectures. However, such evaluations do not capture the encoded information at the token-level. In this intrinsic evaluation we try to discern when and why adding the static embedding as side information contributes to better capturing the token meaning. We study the effectiveness of different autoencoder configurations in reconstructing back the original token vector, as measured through the MSE between the original vector and the reconstructed vector: where v is a contextualized vector (BERT SPLIT output at layer 10), u is the static embedding, and the encoder E(v, u) and the decoder D(e, u) are as defined in § 3.1. High MSE scores indicate the inability of the autoencoder to encode the original vector's information. Document Frequency: One way to assess the importance of a document w.r.t. a query is through the inverse document frequency of query tokens, typically measured through TF-IDF or BM25 schemes Based on this premise, we study how MSE varies across token frequency. 
We selected a random sample of 256k documents from MSMARCO, tokenized them, and ran them through BERT SPLIT to obtain 20M contextualized token representations. For each token t we measured its document frequency as DF(t) = log10(|{d ∈ D : t ∈ d}| / |D|), where D is our document collection, and plotted the reconstruction MSE as a function of DF. Two trends stand out. First, on all encoded-width configurations, our approach, AESI, consistently achieves lower MSE than the AE architecture (for all DF values). Lower MSE correlates with better ranking quality, as shown in § 5.3. Furthermore, for tokens with low DF, adding the static side information during the training of AESI provides a large advantage, which shrinks when the token is present in many documents in the collection. Second, at the high-frequency end of the spectrum, we note a downward trend for AE and an upward trend for AESI, especially for DF ∈ [-1, 0]. The MSE decrease for AE is expected, since the training data contains more of the frequent tokens. The increase for AESI can be explained by the fact that, in this frequency range, we mostly deal with function words (e.g., 'the'), whose role is to tie content together within a sentence and which carry little standalone meaning. In this case, static embeddings cannot capture context, which reduces the contribution provided by the side information.
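As a schematic illustration of the block-wise quantization described in Appendix A, the sketch below preconditions a 128-dimensional block with a randomized Hadamard transform and quantizes each coordinate to B bits. It deviates from EDEN in using uniform quantization levels over an assumed value range instead of distribution-aware centroids, and the shared-seed mechanism shown is only one possible way to realize shared randomness.

```python
# Schematic block quantizer: randomized Hadamard preconditioning followed by
# B-bit scalar quantization. Simplified relative to EDEN (uniform levels over
# an assumed [-3, 3] range instead of fitted centroids).
import numpy as np

def hadamard(k):
    """Normalized 2^k x 2^k Walsh-Hadamard matrix."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def quantize_block(x, bits, seed):
    d = x.shape[0]                      # block size, a power of two (e.g., 128)
    rng = np.random.default_rng(seed)   # shared randomness: sender and receiver agree on the seed
    signs = rng.choice([-1.0, 1.0], d)  # recreate the diagonal D without storing it
    H = hadamard(int(np.log2(d)))
    scale = np.linalg.norm(x) / np.sqrt(d)
    y = H @ (signs * x) / scale         # precondition and normalize
    levels = 2 ** bits
    q = np.clip(np.round((y + 3) / 6 * (levels - 1)), 0, levels - 1)
    return q.astype(np.uint8), scale    # scale is stored per block

def dequantize_block(q, scale, bits, seed):
    d = q.shape[0]
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], d)
    H = hadamard(int(np.log2(d)))
    y_hat = q.astype(np.float64) / (2 ** bits - 1) * 6 - 3
    return signs * (H.T @ (y_hat * scale))   # H is orthogonal, so H^-1 = H^T

x = np.random.randn(128)
q, scale = quantize_block(x, bits=6, seed=7)
x_hat = dequantize_block(q, scale, bits=6, seed=7)
```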
Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders
Knowledge distillation (KD) has been a ubiquitous method for model compression to strengthen the capability of a lightweight model with the transferred knowledge from the teacher. In particular, KD has been employed in quantization-aware training (QAT) of Transformer encoders like BERT to improve the accuracy of the student model with the reduced-precision weight parameters. However, little is understood about which of the various KD approaches best fits the QAT of Transformers. In this work, we provide an in-depth analysis of the mechanism of KD on attention recovery of quantized large Transformers. In particular, we reveal that the previously adopted MSE loss on the attention score is insufficient for recovering the self-attention information. Therefore, we propose two KD methods: attention-map and attention-output losses. Furthermore, we explore the unification of both losses to address the task-dependent preference between attention-map and output losses. The experimental results on various Transformer encoder models demonstrate that the proposed KD methods achieve state-of-the-art accuracy for QAT with sub-2-bit weight quantization.
Knowledge distillation (KD) Quantization-aware training (QAT) stands out for its recent success in reducing not only the memory requirements but also the computational complexity of Transformer models In this work, we provide an in-depth analysis of KD on attention recovery for QAT of Transformers in terms of the knowledge sources and the objectives. We first reveal that all-layer KD of the intermediate Transformer layer is essential for QAT, in contrast to the KD-based model compression. In the case of BERT-Base, we further discover that the KL-Div-based KD on attention-map (called attention-map loss) outperforms the prior KD technique that takes MSE loss on the attention score. However, the attention-map loss is insufficient for the large Transformer encoders since weight quantization disrupts attention propagation for specific NLP tasks when there are many layers. Therefore, we devise an insightful KD, MSE loss on attention output (called attention-output loss), and help preserve attention recovery along with many layers. The proposed attention-map and output losses and their combination are evaluated on various Transformer encoder models (BERT-Base/Large and a BERT-like Korean language model (ULM). The experimental results demonstrate that the proposed KD methods significantly boost the model accuracy surpassing the state-of-the-art for QAT with aggressive sub-2-bit weight quantization. We summarize our contributions as follows: • We improve the prior KD techniques for QAT to boost the accuracy of large Transformer encoders. • We quantitatively reveal that the attentionmap loss (based on KL-Div) outperforms the existing attention-score loss (based on MSE). The proposed attention-map loss is particularly beneficial for the BERT-Base model. • We discover the task-dependent attention characteristics, particularly noticeable in BERT-Large. In particular, we reveal that specific tasks on large Transformers suffer homogenization of attention output when weights are quantized. We propose a new KD method, attention-output loss, to address this issue. • We further explore the potential of unifying the attention-map and output losses to handle task-dependent attention characteristics ubiquitously. • We evaluate the proposed KD methods on various large-scale Transformer encoders and NLP tasks, achieving state-of-the-art accuracy for sub-2-bit aggressive QAT. 2 Related Work
Transformer-based encoder models like BERT Motivated by where f (x) := (xW V + b V )W O and α i,j is j'th attention probability of i'th token in AM h . Therefore, MHA can be decomposed into two parts: selfattention generation (SA-GEN) corresponding to the attention map (α), and self-attention propagation (SA-PROP) corresponding to f (x). Fig. FFN consists of two fully-connected layers with weight parameters W 1 and W 2 : Therefore, output of a Transformer layer X l+1 is defined as: Here, Y l and X l+1 are called attention output (AO) and Transformer output, respectively. Knowledge distillation (KD) Since KD provides the student information to reach the teacher's capability, KD has been widely adopted for model compression of large-scale Transformer models like BERT. A basic distillation approach is to match the probability distribution at the output of the teacher and student models via CE loss, as in DistilBERT Quantization is a promising technique to reduce the high inference cost of large-scale models without changing the model structure. Instead of representing numbers in 32-bit floating-point (FP32), employing fixed-point representation, such as 8-bit integer (INT8) quantization, has achieved significant speedup and storage savings for BERT Recently, QAT has been applied for compressing BERT with a precision lower than 2-bit. Ternary-BERT Although KD has been a de-facto technique for QAT, there is a lack of understanding about why. In particular, the aforementioned QAT methods all employed the layer-wise KD on the self-attention score (AS l ) and Transformer output (X l ) along with the KD on soft labels. Considering numerous KD techniques with various choices for the knowledge sources and the objective, it is not clear if the current recipe helps QAT the most. This work investigates the prior layer-wise KD techniques and improves them with new objectives and knowledge sources. In this section, we investigate prior KD techniques for QAT evaluated on BERT-Base. As discussed earlier, KD techniques commonly used for QAT include 1) all-layer distillation and 2) distillation on SA-GEN. First, we provide justification and improvement on these techniques. Then we further showcase the limitation when they are applied to large-scale Transformer encoders. Generally, the internal representation of the teacher, such as a layer output, is widely used for knowledge distillation for model compression We conjecture that quantization applied to the weight parameters disrupts the functionality of the Transformer layer, necessitating layer-wise guidance. To validate this conjecture, we conducted two experiments. First, we compared the accuracy of the uniformly selected layer distillation with a varying number of distilled layers. As shown in Fig. We further investigate the objective of all-layer KD. As discussed earlier, prior QAT methods employed MSE loss on the attention score (called attentionscore loss) for all-layer KD, as follows: Given that the attention map captures the correlation of one token to all the others, it is essential to maintain the relative importance of tokens. However, quantization in nature clamps and coarsely represents the weight parameters, making attention less distinguishable. 
We expect KD to help maintain disparity, but the attention-score loss is not a proper objective since it mainly focuses on logit matching As an alternative, we propose to use the KL-Div loss on the attention-map (called attention-map loss) defined as follows: (6) Assuming that the temperature hyper-parameter (τ ) is one, KL-Div focuses on label-matching To further understand the impact of the objectives of KD on the QAT accuracy, we conducted the temperature sweep of KL-Div. Since the gradients of KL-Div loss can be simplified into the gradients of MSE loss when the temperature is sufficiently large As shown in the table, the accuracy of the quantized model increases as the loss term becomes similar to the attention-map loss. Such performance improvement supports our understanding that 1) label matching is crucial for compensating QAT on SA-GEN, and 2) the attention-map loss is more effective for label matching. We extend the investigation of KD techniques to QAT on large transformer models. In this section, we first reveal the limitation of the attention-map loss due to the task-dependent characteristics. Then we propose a new KD loss, the attention-output loss, to address this challenge. Lastly, we propose a combination of the two losses to handle taskdependent characteristics. Although the same pre-trained models are employed for the downstream fine-tuning, the characteristics of attention vary depending on the tasks Fig. Since the quantization clamps and coarsely represents the values, it is challenging to maintain the distinct attention for the tasks in Case-1. As dis-cussed in Sec. 3.2, in the case of BERT-Base, the attention-map loss was capable of recovering the disparity in the attention (Fig. We conjecture that the attention-map loss fails due to the increased number of layers of BERT-Large. We adopt the analysis framework of Observations from Fig. The benefits of the attention-output loss are apparent. As shown in Fig. To further understand the task-dependent characteristics, we empirically observe the attentionoutput loss's impact on the attention map's selfattention probability. To quantify the modification in the attention map, we introduce the ranking ratio, defined as a ranking of the attention probability of an individual token normalized by the sequence length. Fig. Considering task-dependent attention characteristics of BERT-Large, we further explore the potential of unifying the attention-map and output losses for QAT. Note that the preference between the attention-map and output losses varies according to the model size (e.g., BERT-Base vs. Large) and tasks (Case-1 vs. Case-2). As for exploration, we formulate a unified attention-map and output loss with γ as a mixing parameter as follows: where γ ∈ {0.1, 0.2, 0.3, . . . , 0.9}. As will be discussed in Sec.5.2, the unified loss can boost the accuracy of the best performing KD loss (either the attention-map or output loss). As applying this unified loss in KD-QAT, we identified that every tasks has its own score favorable mixing parameters which shows task-dependent characteristics. Detailed mixing parameter information for each task is in Appendix. A.3. The configuration of each model is as follows: 1. BERT-Base. It is a 12-layer Transformer encoder with a hidden dimension of 768 using 12 attention heads and contains about 110M parameters. 2. BERT-Large. It is composed of 24 Transformer encoder layers, and uses a hidden dimension of 1024 with 16 attention heads. This model contains about 340M parameters. 3. 
ULM-Encoder-Large. It also has the same configuration as BERT-large except for feedforward dimension, which is 2816 for ULM-Encoder-Large while BERT-Large has 4096. It contains about 280M parameters. We initiate QAT from the task-specific finetuned models. Our experiments were performed on A6000 GPUs. Our implementation is based on the TernaryBERT PyTorch codebase. For performance comparison, we consider the following KD options: • Baseline. The standard TernaryBERT with the attention-score and Transformer output loss along with KD on soft labels. • Map. Use the attention-map loss instead of the attention-score loss of TernaryBERT. • Output. Use the attention-output loss instead of the attention-score loss of TernaryBERT. • Map+Output. Use the unified attention-map and output loss instead of the attention-score loss of TernaryBERT. Tables • The GLUE tasks can be categorized into two cases. Case-1( †): RTE, CoLA, STS-B. Case-2(⋆): SST-2, QNLI, MNLI, QQP. • In the case of BERT-Base, attention-map loss benefits all the tasks in Case-1 and Case-2, whereas attention-output loss is ineffective. • In the case of BERT-Large, the attention-map loss is marginally helpful for Case-1 and Case-2, while the attention-output loss significantly boosts the accuracy of Case-1 tasks. • Overall, the unified loss facilitates QAT accuracy, except for BERT-Large on Case-1 tasks (in which the attention-output loss works the best). • MRPC is a corner case; the QAT accuracy often outperforms the Full-Precision accuracy, implying that quantization noise regularizes the model favorably for this task. Table In Sec. 4.2, we proposed attention-output loss to suppress the quantization error along the SA-PROP. As shown in Fig. Table . 5 shows that the MHA loss method improves performance marginally in Case-1 tasks. When the residual connection is added to the MHA loss objective (MHA loss + Residual in Table. 5), which is equivalent to attention-output loss, the performance of all tasks increases. (especially in Case-1 GLUE tasks). These observations indicate that incorporating residual connection as an objective of attention-output loss is significant in recovering disruption of SA-PROP under the quantization. In this work, we investigate the mechanism of Knowledge distillation (KD) for QAT of large Transformers. We propose two KD methods, attention-map, and attention-output losses, to improve the recovery of the self-attention information. The experimental results on various Transformer encoder models demonstrate that the proposed KD methods and their combination achieve state-of-the-art accuracy for QAT with sub-2-bit weight quantization. Our code is available at This work investigates how KD works for QAT on Transformer Encoders. Although the analysis techniques employed in this work reveal many exciting insights, a more theoretical analysis of the impact of quantization under KD would be highly appreciated. Also, we explore the potential of unifying the two proposed KD techniques; incorporating automatic balancing of the two (or more) KD losses would be an interesting future research direction. We evaluate our method on all datasets of the GLUE benchmark 1. GLUE. The General Language Understanding Evaluation is a collection of resources for training, evaluating, and analyzing natural language understanding systems. 2. KLUE-TC. The KLUE Topic Classification is a single sentence classification task, and it classifies which topic the input sentence belongs to among the 7 representative topics. 
We averaged the accuracy and F1 score as the metric. 3. KLUE-STS. The KLUE Semantic Textual Similarity task measures the degree of semantic similarity between two Korean sentences. We averaged the Pearson Correlation Coefficient (PCC) and the Spearman Correlation Coefficient (SCC) to measure performance. 4. NSMC. The NAVER Sentiment Movie Corpus is a collection of movie reviews scraped from NAVER Movies.

For evaluating our methods, we use a batch size of 16 for CoLA and 32 for the other GLUE tasks. The learning rate starts from zero, gradually increases to 2e-5 during the warm-up stage, and decays linearly to 2e-9 over 3 epochs. The dropout probability was always kept at 0.1. For the optimizer, we use BertAdam. For ULM-Large, we train for 10 epochs using AdamW and sweep over the following hyperparameters:
• Batch size: 16, 32, 64
• Learning rate: 1e-5, 2e-5, 5e-5
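For reference, the following is a minimal PyTorch-style sketch of the per-layer KD objectives compared in this work: the prior attention-score loss, the proposed attention-map (KL-Div) and attention-output (MSE) losses, and their unified combination. The tensor shapes, reductions, and the direction in which the mixing parameter γ weights the two terms are illustrative assumptions rather than the exact training code.

```python
# Sketch of per-layer KD objectives. Inputs are the student's and teacher's
# attention scores (pre-softmax) and attention outputs for one Transformer layer.
import torch
import torch.nn.functional as F

def attention_score_loss(score_s, score_t):
    # Prior QAT recipe: MSE on raw attention scores (logit matching).
    return F.mse_loss(score_s, score_t)

def attention_map_loss(score_s, score_t):
    # Proposed: KL divergence between attention probability distributions
    # (label matching), taken over the key dimension.
    log_p_s = F.log_softmax(score_s, dim=-1)
    p_t = F.softmax(score_t, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")

def attention_output_loss(attn_out_s, attn_out_t):
    # Proposed: MSE on the attention output (after self-attention propagation,
    # including the residual path), to preserve SA-PROP in deep models.
    return F.mse_loss(attn_out_s, attn_out_t)

def unified_loss(score_s, score_t, attn_out_s, attn_out_t, gamma=0.5):
    # gamma in {0.1, ..., 0.9}; which term it weights is an assumption here.
    return (gamma * attention_map_loss(score_s, score_t)
            + (1.0 - gamma) * attention_output_loss(attn_out_s, attn_out_t))

# Example shapes: (batch, heads, seq_len, seq_len) for scores,
# (batch, seq_len, hidden) for attention outputs.
score_s, score_t = torch.randn(2, 12, 16, 16), torch.randn(2, 12, 16, 16)
out_s, out_t = torch.randn(2, 16, 768), torch.randn(2, 16, 768)
loss = unified_loss(score_s, score_t, out_s, out_t, gamma=0.3)
```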
Discovering Differences in the Representation of People using Contextualized Semantic Axes
A common paradigm for identifying semantic differences across social and temporal contexts is the use of static word embeddings and their distances. In particular, past work has compared embeddings against "semantic axes" that represent two opposing concepts. We extend this paradigm to BERT embeddings, and construct contextualized axes that mitigate the pitfall where antonyms have neighboring representations. We validate and demonstrate these axes on two people-centric datasets: occupations from Wikipedia, and multi-platform discussions in extremist, men's communities over fourteen years. In both studies, contextualized semantic axes can characterize differences among instances of the same word type. In the latter study, we show that references to women and the contexts around them have become more detestable over time.
Warning: This paper contains content that may be offensive or upsetting. Quantifying and describing the nature of language differences is key to measuring the impact of social and cultural factors on text. Past work has compared English embeddings for people to adjectives or concepts embeddings of less frequent minority names are closer to words related to unpleasantness. The use of "semantic axes" is enticing in that it offers an interpretable measurement of word differences beyond a single similarity value Our work investigates the extension and application of semantic axes to contextualized embeddings. We present a novel approach for constructing semantic axes with English BERT embeddings (Figure We demonstrate the use of contextualized axes on two datasets: occupations from Wikipedia, and people discussed in misogynistic online communities. We use the former as a case where terms appear in definitional contexts, and characteristics of people are well-known. In the latter longitudinal, cross-platform case study, we examine lexical choices made by communities whose attitudes towards women tend to be salient and extreme. We chose this set of online communities as a substantive use case of our method, in light of recent attention in web science on analyzing online extremism and hate at scale (e.g. Our code, vocabularies, and other resources can be found in our Github repo:
Static embeddings. Several formulae for calculating the similarity of a target word to two sets of pole words have been proposed in prior work on static semantic axes. These differ in whether they take the difference between a target word's similarities to each pole Relying on single-word poles for axes can be unstable to the choice of each word Our type-based embedding baseline, GLOVE, uses 300-dimensional GloVe vectors pretrained on Wikipedia and Gigaword Contextualized embeddings. Static embeddings, however, present a number of limitations. Such embeddings cannot easily handle polysemy or homonymy For contextualized axes, we obtain a potential pool of contexts for adjectives sampled over all of Wikipedia from December 21, 2021, preprocessed using To select contexts, we mask out the target adjective in each of its 1000 sentences, and have BERT-base predict the probabilities of synonyms and antonyms for that masked token. We remove contexts where the average probability of antonyms is greater than that of synonyms, sort by average synonym probability, and take the top 100 contexts. One limitation of our approach is that predictions are restricted to adjectives that can be represented by one wordpiece token. If none of the words on a pole of an axis appear in BERT's vocabulary, we backoff to BERT-DEFAULT to represent that axis. For each axis type, we also have versions where words' embeddings are z-scored, which has been shown to improve BERT's alignment with humans' word similarity judgements We internally validate our axes for self-consistency. For each axis, we remove one adjective's embeddings from either side, and compute its cosine similarity to the axis constructed from the remaining adjectives. For BERT approaches, we average the adjective's multiple embeddings to produce only one before computing its similarity to the axis. In a "consistent" axis, a left-out adjective should be closer to the pole it belongs to. That is, if it belongs to S l , its similarity to the axis should be positive. We average these leave-one-out similarities for each pole, negating the score when the adjective belongs to S r , to produce a consistency metric, C. Table The best method for producing consistent axes is z-scored BERT-PROB, with a significant difference in C from z-scored BERT-DEFAULT and GLOVE (Mann-Whitney U-test, p < 0.001). It also produces the highest number of consistent axes. GLOVE presents itself as a formidable baseline, Previous work on static semantic axes validates them using sentiment lexicons, exploratory anal- yses, and human-reported associations We perform external validation of self-consistent axes on a dataset where people appear in a variety of well-defined and known contexts: occupations from Wikipedia. We conduct two main experiments. In the first, we test whether contextualized axes can detect differences across occupation terms, and in the second, we investigate whether they can detect differences across contexts. We use a taxonomy of subreddits and external forums described by We use Reddit posts and comments from March 2008 to December 2019 from subreddits listed in Ribeiro et al. (2021a)'s study, downloaded from Pushshift We also include seven external forums provided by We use a list We call this dataset GEN-ERAL_REL, and it contains 1.2 billion tokens from September 2009 to December 2019. For Reddit data, we do not use posts and comments written by usernames who have bot-like behavior, which we define as repeating any 10-gram more than 100 times. 
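Before turning to the occupation experiments, the context-selection step behind BERT-PROB can be sketched as follows. The Hugging Face checkpoint name and the single string replacement of the target adjective are illustrative simplifications rather than the exact pipeline; as noted above, pole words are assumed to be single wordpieces.

```python
# Sketch: keep contexts where BERT's masked-token prediction favors an
# adjective's synonyms over its antonyms, then take the top-n by synonym
# probability. Checkpoint name and preprocessing are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def synonym_antonym_scores(sentence, target, synonyms, antonyms):
    # Mask the target adjective and read off MLM probabilities for
    # single-wordpiece synonyms and antonyms.
    masked = sentence.replace(target, tok.mask_token, 1)
    inputs = tok(masked, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        probs = mlm(**inputs).logits[0, mask_pos].softmax(-1)
    syn_ids = [tok.convert_tokens_to_ids(w) for w in synonyms]
    ant_ids = [tok.convert_tokens_to_ids(w) for w in antonyms]
    return probs[syn_ids].mean().item(), probs[ant_ids].mean().item()

def select_contexts(sentences, target, synonyms, antonyms, top_n=100):
    scored = []
    for s in sentences:
        syn, ant = synonym_antonym_scores(s, target, synonyms, antonyms)
        if syn > ant:                       # drop contexts where antonyms are more likely
            scored.append((syn, s))
    scored.sort(reverse=True)               # sort by average synonym probability
    return [s for _, s in scored[:top_n]]
```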
Each occupation is represented by a pre-trained GloVe embedding or a BERT embedding averaged over all occurrences on its page. If an axis uses z-scored adjective embeddings, we also z-score the occupation embeddings compared to it. We assign poles to occupations based on which side of the axis they are closer to via cosine similarity. Top poles are highly related to their target occupation category, as seen by the examples for z-scored BERT-PROB in embeddings' proximity can reflect any type of semantic association, not just that a person actually has the attributes of an adjective. For example, adjectives related to unhealthy are highly associated with Health occupations, which can be explained by doctors working in environments where unhealthiness is prominent. Therefore, embedding distances only provide a foggy window into the nature of words, and this ambiguity should be considered when interpreting word similarities and their implications. This limitation applies to both static embeddings and their contextualized counterparts. We conduct human evaluation on this task of using semantic axes to differentiate and characterize occupations. Three student annotators examined the top three poles retrieved by each axisbuilding approach and ranked these outputs based on semantic relatedness to occupation categories (Appendix B). These annotators had fair agreement, with an average Kendall's W of 0.629 across categories and experiments. Though GLOVE is a competitive baseline, z-scored BERT-PROB is the highest-ranked approach overall (Table The identity of a word, and prior associations learned from BERT's training data, have the potential to overpower its in-context use Each person embedding is averaged over one occupation's contexts. The identity of person tends to overpower its similarity to axes across contexts, in that the top-ranked poles are similar across occupation categories. So, in contrast to the previous occupation experiment, additional steps are needed to draw out meaningful differences in how person is used in one group of contexts from its typical use. To do this, we estimate the average cosine similarity to axes of n person embeddings in occupational contexts using 1000 bootstrapped samples, where n is the number of terms in an occupation category. We take the axes with the highest statistically significant (p < 0.001, one-sample t-test) difference in cosine similarity. We assume that occupations' Wikipedia pages mention them within definitional contexts, so topranked poles should reflect the original occupation replaced by person. These top poles are less intuitive than those outputted by the earlier term-level experiment (Table Now that we have contextualized semantic axes that can measure differences across words and contexts, we apply them onto a domain that can showcase salient and socially meaningful variation. NLP research on harmful language often employs methods that focus on the target group, such as measuring their association with other words The manosphere has been linked to acts of violence in the physical world Our case study extends beyond prior work with its methodology and scale. We use contextualized semantic axes to tackle one question: how have references to women and contexts around them changed over fourteen years? We use a mix of NER, online glossaries, and manual inspection to curate a unique vocabulary of people (details in Appendix D). 
This vocabulary has 2,434 unigrams and 4,179 bigrams, tokenized using BERT's tokenizer without splitting words into wordpieces Since gender is central to the manosphere, we infer these labels based on terms' social gender in a dataset. For example, accuser is not semantically gendered like girl and woman, but its social gender, estimated using pronouns, is more feminine in EXTREME_REL than GENERAL_REL. We use two stages of gender inference to account for pronoun sparsity and noise. First, we use a list of semantically gendered nouns, and second, we use feminine and masculine pronouns linked to terms via coreference resolution (details in Appendix E). We label each vocabulary term based on its fraction of cooccurring feminine pronouns in EXTREME_REL and GENERAL_REL, separately. We are able to label 72.5% of the vocabulary in EXTREME_REL and 67.0% of it in GENERAL_REL. Contextualized semantic axes can reveal how word and phrase types change over time. Here, our analyses focus on 1,482 feminine (genderleaning > 0.75) terms in EXTREME_REL. To capture broad snapshots of words' use, we randomly sample up to 500 sentence-level occurrences of each term in each platform and ideology (e.g. a specific forum or Reddit category) in each year. Overall z-scored BERT embeddings for each vocab word are averages over this stratified sample of its contexts. The history of the manosphere is characterized by waves of different ideological communities where α = x T y/||y|| 2 . "Waves" of term types for people correspond to ideological change. Figure We examine the shifts of high variance, substantive axes across temporal clusters. High variance axes include those related to gender, appearance, and desirability (Table Contextualized semantic axes can reveal how the contexts around people have changed over time. Women in online communities can be referenced in a variety of ways (Figure replacements to respect singular/plural forms to ensure ecological validity and not perturb BERT's sensitivity to grammaticality In comparison to GENERAL_REL, EX-TREME_REL has more detestable, sickening, and dirty contexts for women (Figure Contextualized semantic axes can also illuminate differences among lexical variables, or different linguistic forms that share the same referential meaning As prominent examples, men-led communities use the lexical innovations femoids and foids, which are shortenings of female humanoids, as dehumanizing words for all women We sample up to 100 occurrences of each variant in each platform and ideology per year, limiting time ranges to when domain-specific variants are widely used by their home community. We examine the use of variants for men by Femcels and FDS in 2018-2019, and the use of variants for women by all other communities in EXTREME_REL in 2017-2019. Unlike in the person experiment for occupations, we have substantial pools of occurrences to compare. Thus, to find axes that distinguish one variant from another, we use axis scores as features in random forest classifiers In this work, we examine the capability of contextualized embeddings for discovering differences among words and contexts. Our method uses predicted word probabilities to pinpoint which contexts to include when aggregating BERT embeddings to construct axes. This approach creates more self-consistent axes that better fit different occupation categories, in comparison to baselines. We further demonstrate the use of these axes in a longitudinal, cross-platform case study. 
Overall, contextualized embeddings offer more flexibility and granularity compared to static ones for the analysis of content across time and communities. That is, rather than train static word embeddings for various subsets of data, we can characterize change and variation at the token-level. Though we focus on analyzing associations between adjectives and people, our approach can generalize to other types of entities as well. Measuring and comparing the contexts of other entity types should include many of the same considerations we did, such as reducing the conflation of antonyms, controlling for word identity by replacing target words with a shared hypernym, and experimenting with z-scoring. Future work includes understanding why some opposing concepts are conflated in large language models, and how a word embed-ding's identity influences its encoding of contexts. Aside from computing power requirements (Appendix H), we outline a few additional limitations of our methodology and its application not discussed in the main text. Domain shift. The use of pretrained BERT on a niche set of communities makes our approaches susceptible to domain shift, such as rare words having less robust embeddings WordNet. WordNet is a popular lexical resource for NLP, but its senses for words can be overly finegrained Errors. Our method for drawing out differences in words is better than common baselines yet still imperfect, and some of the opposing concepts in embedding space that BERT struggles to separate may be important for an application domain. Therefore, domain expertise is needed to recognize spurious patterns from real ones and fill these gaps. In the main text we mention that embeddings offer a "foggy window" into how two concepts may be associated or related, and the exact type of relation is not always clear. For example, if contexts for women are closer to unpleasant, does it mean that the text discusses unpleasant events that affect women, or that the writers believe that women are unpleasant, or both? Some of this uncertainty could be resolved qualitatively by inspecting sentences at poles' extremes. We compare embeddings for people to axes, but it is also possible to include relation-based approaches such as dependency parsing and compare words that share specific relations with people to axes (e.g. User privacy. Online data opens many doors for research, but its use raises concerns around user privacy. For our use case, we believe that the benefits of our work outweigh privacy-related harms. Consent is infeasible to obtain for large datasets All online discussions included in our work were public when downloaded by their original curators, mainly Communities may expect their posts to stay within their in-group, but the content in our work was posted on public platforms. This publicness and increased visibility plays a key role in how this content impacts others, such as those who view this information and propagate it elsewhere, or those who are direct targets of hate. Common targets such as women and people of color carry a bigger burden when participating in online spaces (Hoffmann and Jonas, 2017), and our broader research agenda aims to mitigate this issue. Social biases in models and resources. We use WordNet to group similar adjectives into semantic axes, but we observe some socially harmful asso-ciations in this resource. For example, gross and fat are listed as similar lemmas. 
As another example, WordNet conflates gender and sexuality when androgynous and bisexual are also listed as similar lemmas. The BERT language model, like all large, pretrained models, is also susceptible to social biases in its training data Gender inference. In this paper's main case study, we perform gender inference for word and phrase types. This step was necessary to study how women are portrayed over time, which is a key question due to the centrality of misogyny in these communities. However, perfect prediction of each word's perceived gender in our dataset using pronouns is impossible (Cao and Restricting pronouns to the traditional binary of feminine and masculine is limiting, since individuals use other pronouns as well. They/them pronouns are predominantly used to reference plural terms in this dataset, and the coreference model we use does not handle neopronouns. The manosphere and the typical framing under which it is studied is heavily cisheteronormative. We use a frequency cutoff to determine our vocabulary (Appendix D), so references to transgender and nonbinary people may be filtered out. Vocab terms retained for transgender people are outdated or typically offensive terms such as transsexuals and transgenders, and no vocab term includes non-binary, nb, or nonbinary. Table We recruited three student volunteers with familiarity with NLP coursework and tasks to rank the top poles provided by each axis-building method for our occupation and person experiments. We used Qualtrics to design and launch the survey. Since we were not asking about personal opinions but rather evaluating models, we were determined exempt from IRB review by the appropriate office at our institution. Each question pertains to a specific occupation category, and within each experiment, question order and answer option order are randomly shuffled. Each model option is presented with its top three poles, in order of most to less Hi! Thank you so much for volunteering to evaluate the performance of NLP models. Please read these instructions carefully. In this task, you will judge how much lists of adjectives from WordNet outputted by models are semantically related to occupational differences described in Wikipedia. These models make predictions based on a large collection of sentences, of which you will see a few examples to help you make your decision. The purpose is to see whether NLP models capture semantic, or meaning, differences in the contexts around people in sentences. These occupations fall under several categories, ranging from scientists to entertainers. You are deciding which models' outputs are typically more related to occupations, which may not reflect your personal opinions about occupations. There are two sets of questions, and 11 questions in each set. Examples of occupations in Fairytales include fairy godmothers, prince charming, evil villains, and wizards. You are given the sets of adjectives below. Adjective sets include "MORE" and "LESS" labels based on how people in the category above are more or less related to them, in comparison to other people who work as artists, government workers, and scientists: The above three models are ranked from most related to the occupation category to least related. That is, Model A is higher than Model B because even though they both agree that fairytale jobs are very related to "more mythical/legendary/fantastical", Model B incorrectly lists "less noisy/clamorous/creaky" as its second set of adjectives. 
Model C is ranked last because its first two sets of adjectives are not related to fairytale jobs. Try to be consistent in your rankings. That is, in the example above, you should not rank Model C after A and before B because A and B agree on the first set and overall share two valid adjective sets. Model C is more of an outlier, with only one valid third adjective set. relevant. Figure We used a list of subreddits In total we have 12 subreddits in TRP, 11 in MRA, 7 in PUA, 22 in Incels, 3 in MGTOW, 4 in Femcels, and 6 in FDS. The complete list of subreddits and their categories is also in our Github repo. First, we extract nominal and proper persons using NER, keeping ones that are popular (occur at least 500 times in EXTREME_REL), and unambiguous, where at least 20% of its instances in these datasets are tagged as a person. Gathering a substantial number of labels from our domain to train an indomain NER system from scratch is outside the scope of our work, so we experimented with three models trained on other labeled datasets: ACE, contemporary literature, and a combination of both. We evaluated these models on a small set of posts and comments labeled by one author, after retrieving 25 examples per forum or Reddit ideological category using reservoir sampling. The annotator only labeled spans for nominal and named PERSON entities. Table We extract bigrams and unigrams from detected spans, excluding determiners and possessives whose heads are the root of the span. Named entities that refer to types of people rather than specific individuals were estimated through their co-occurrence with the determiner a, e.g. a Chad. Then, one author consulted community glossaries and examined in-context use of words to manually correct the list of automatically extracted terms. We include additional popular and unambiguous words not tagged sufficiently often enough by NER, but defined as people in prior work and online resources. Table The resulting vocabulary contains niche language, where 20.7% of unigrams are not found in WordNet, and 85.1% of those missing are also not in the Internet resource Urban Dictionary. This section includes additional details around our gender inference process. Our list of semantically gendered terms, or words gendered by definition, expands upon the one used by We check if any of the above words appear in a unigram or bigram vocabularly term. Around 29.9% of our vocabulary in EXTREME_REL is gendered through this word list approach. To infer gender for the remaining words using pronouns, we ran coreference resolution on EX-TREME_REL, and extracted all pronouns that are clustered in coreference chains with terms in our vocabulary Table Our main goal here is to tease out which axes differentiate the contexts of lexical variants, rather than find the best model that performs well on a classification task. Therefore, we choose to use a random forest classifier for its interpretability: it outputs weights that indicate what features were most important across its decisions. We use scikitlearn's implementation, and perform randomized search with 5-fold cross validation and weighted F1 scoring to select model parameters (Table We only use BERT-base for inference, but the overall runtime cost is high due to the size of our corpora: English Wikipedia and social media discussions. We use one Titan XP GPU with 8 CPU cores for most of the paper, and occasionally expanded to multiple machines with 1080ti and K80 GPUs in parallel when handling social media data. 
Table 10: Parameter choices for random forest classification. Symbols mark selected parameters for each task, where † refers to men vs. moids, ‡ refers to women vs. femoids, and * refers to women vs. foids. These models had weighted F1 scores of 0.670, 0.759, and 0.781, respectively.
We use BERT for two main purposes: predicting word probabilities to select contexts for constructing axes, and obtaining word embeddings. On one Titan XP GPU, the former takes ∼1 hour for one million sentences containing one masked target word each, and the latter takes ∼2.5 hours for one million sentences, including wordpiece aggregation.
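Downstream of these two steps, scoring a term against an axis is inexpensive. The sketch below shows how an axis can be formed from two pole sets of (contextualized) adjective embeddings and how a target embedding is scored against it by cosine similarity; the simple mean-difference construction, helper names, and toy inputs are illustrative assumptions rather than the exact code.

```python
# Sketch: build a semantic axis from two pole sets and score targets against it.
# Pole and target embeddings would come from averaging BERT representations of
# each word over its selected contexts (with optional z-scoring applied to both).
import numpy as np

def build_axis(left_pole_embs, right_pole_embs):
    """Each argument: array of shape (n_words, dim); axis points from right to left pole."""
    return np.mean(left_pole_embs, axis=0) - np.mean(right_pole_embs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zscore(vecs, mean, std):
    # Optional per-dimension standardization, applied consistently to poles and targets.
    return (vecs - mean) / (std + 1e-8)

# Toy usage with random vectors standing in for averaged BERT embeddings.
rng = np.random.default_rng(0)
dim = 768
left_pole = rng.normal(size=(5, dim))    # e.g., adjectives on the "healthy" side
right_pole = rng.normal(size=(5, dim))   # e.g., adjectives on the "unhealthy" side
axis = build_axis(left_pole, right_pole)

target = rng.normal(size=dim)            # e.g., averaged embedding for an occupation term
score = cosine(target, axis)             # > 0: closer to the left pole; < 0: the right pole
```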
Scalable Term Selection for Text Categorization
In text categorization, term selection is an important step for the sake of both categorization accuracy and computational efficiency. Different dimensionalities are expected under different practical resource restrictions of time or space. Traditionally in text categorization, the same scoring or ranking criterion is adopted for all target dimensionalities, which considers both the discriminability and the coverage of a term, such as χ 2 or IG. In this paper, the poor accuracy at a low dimensionality is imputed to the small average vector length of the documents. Scalable term selection is proposed to optimize the term set at a given dimensionality according to an expected average vector length. Discriminability and coverage are separately measured; by adjusting the ratio of their weights in a combined criterion, the expected average vector length can be reached, which means a good compromise between the specificity and the exhaustivity of the term subset. Experiments show that the accuracy is considerably improved at lower dimensionalities, and larger term subsets have the possibility to lower the average vector length for a lower computational cost. The interesting observations might inspire further investigations.
Text categorization is a classical text information processing task which has been studied adequately • Many irrelevant terms have detrimental effect on categorization accuracy due to overfitting • Some text categorization tasks have many relevant but redundant features, which also hurt the categorization accuracy (i) Many sophisticated learning machines are very slow at high dimensionalities, such as LLSF (ii) In Asian languages, the term set is often very large and redundant, which causes the learning and the predicting to be really slow. (iii) In some practical cases the computational resources (time or space) are restricted, such as hand-held devices, real-time applications and frequently retrained systems. (iv) Some deeper analysis or feature reconstruction techniques rely on matrix factorization (e.g. LSA based on SVD), which might be computationally intractable while the dimensionality is large. Sometimes an aggressive term selection might be needed particularly for (iii) and (iv). But it is notable that the dimensionality is not always directly connected to the computational cost; this issue will be touched on in Section 6. Although we have many general feature selection techniques, the domain specified ones are preferred • discriminability: how unbalanced is the distribution of the term among the categories. • coverage: how many documents does the term occur in. (Borrowing the terminologies from document indexing, we can say the specificity of a term set corresponds to the discriminability of each term, and the exhaustivity of a term set corresponds to the coverage of each term.) The main difference among these criteria is to what extent the discriminability is emphasized or the coverage is emphasized. For instance, empirically IG prefers high frequency terms more than χ 2 does, which means IG emphasizes the coverage more than χ 2 does. The problem is, these criteria are nonparametric and do the same ranking for any target dimensionality. Small term sets meet the specificity-exhaustivity dilemma. If really the sparseness is the main reason of the low performance of a small term set, the specificity should be moderately sacrificed to improve the exhaustivity for a small term set; that is to say, the term selection criterion should consider coverage more than discriminability. Contrariwise, coverage could be less considered for a large term set, because we need worry little about the sparseness problem and the computational cost might decrease. The remainder of this paper is organized as follows: Section 2 describes the document collections used in this study, as well as other experiment settings; Section 3 investigates the relation between sparseness (measured by average vector length) and categorization accuracy; Section 4 explains the basic idea of scalable term selection and proposed a potential approach; Section 5 carries out experiments to evaluate the approach, during which some empirical rules are observed to complete the approach; Section 6 makes some further observations and discussions based on Section 5; Section 7 gives a concluding remark.
Two document collections are used in this study. CE (Chinese Encyclopedia): This is from the electronic version of the Chinese Encyclopedia. We choose a Chinese corpus as the primary document collection because Chinese text (as well as other Asian languages) has a very large term set and a satisfying subset is usually not smaller than 50000 20NG (20 Newsgroups For CE collection, character bigrams are chosen to be the indexing unit for its high performance Term weighting is done by tfidf 3 , in which t i denotes a term, d j denotes a document, N d denotes the total document number. The classifiers used in this study are support vector machines Performance is evaluated by microaveraged F 1measure. For single-label tasks, microaveraged precision, recall and F 1 have the same value. χ 2 is used as the term selection baseline for its popularity and high performance. (IG was also reported to be good. In our previous experiments, χ 2 is generally superior to IG.) In this study, features are always selected globally, which means the maximum are computed for category-specific values In this study, vector length (how many different terms does the document hold after term selection) is used as a straightforward sparseness measure for a document Therefore, it is quite straightforward a thought to measure the "sparseness of a term subset" (or more precisely, the exhaustivity) by the corresponding average vector length (AVL) of all documents. 4 In the 4 Due to the lognormal distribution of vector length, it seems more plausible to average the logarithmic vector length. However, for a fixed number of documents , log remainder of this paper, (log) AVL is an important metric used to assess and control the sparseness of a term subset. Since the performance droping down at low dimensionalities is attributable to low AVLs in the previous section, a scalable term selection criterion should automatically accommodate its favor of high coverage to different target dimensionalities. The first step is to separately measure the discriminability and the coverage of a term. A basic guideline is that these two metrics should not be highly (positive) correlated; intuitively, they should have a slight negative correlation. The correlation of the two metrics can be visually estimated by the joint distribution figure. A bunch of term selection metrics were explored by It is a symmetric ratio, so log(PR) is likely to be more appropriate. For multi-class categorization, a global value can be assessed by PR max (t i ) = max c PR(t i , c), like χ 2 max for χ 2 (Yang and Pedersen, 1997; Now we have the two metrics: log(PR) for discriminability and log(df ) for coverage, and a parametric A weighted harmonic averaging is adopted here because either metric's being too small is a severe detriment. λ ∈ [0, 1] is the weight for log(PR), which denotes how much the discriminability is emphasized. When the dimensionality is fixed, a smaller λ leads to a larger AVL and a larger λ leads to a smaller AVL. The optimal λ should be a function of the expected dimensionality (k): and F 1 is the default evaluation criterion. Naturally, this optimal λ leads to a corresponding optimal AVL: For a concrete implementation, we should have an (empirical) function to estimate λ * or AVL * : In the next section, the values of AVL * (as well as λ * ) for some k-s are figured out by experimental search; then an empirical formula, AVL • (k), comes forth. 
It is interesting and inspiring that by adding the "corpus AVL" as a parameter this formula is universal for different document collections, which makes the whole idea valuable. 5 Experiments and Implementation The expected dimensionalities (k) chosen for experimentation are CE: 500, 1000, 2000, 4000, . . . , 32000, 64000; 20NG: 500, 1000, 2000, . . . , 16000, 30220. 5 For a given document collection and a given target dimensionality, there is a corresponding AVL for a λ, and vice versa (for the possible value range of AVL). According to the observations in Section 5.2, AVL other than λ is the direct concern because it is more intrinsic, but λ is the one that can be tuned directly. So, in the experiments, we vary AVL by tuning λ to produce it, which means to calculate λ(AVL). AVL(λ) is a monotone function and fast to calculate. For a given AVL, the corresponding λ can be quickly found by a Newton iteration in [0,1]. In fact, AVL(λ) is not a continuous function, so λ is only tuned to get an acceptable match, e.g. within ±0.1. 5 STS is tested to the whole T on 20NG but not on CE, because (i) TCE is too large and time consuming for training and testing, and (ii) χ 2 was previously tested on larger k and the performance (F1) is not stable while k > 64000. For each k, by the above way of fitting λ, we manually adjust AVL (only in integers) until F 1 (S k (λ(AVL))) peaks. By this way, Figure Figure Same experiments are done on 20NG and the results are shown in Figure In Figure in which λ(•) can be calculated as in Section 5.1. The target dimensionality, k, is involved as a parameter, so the approach is named scalable term selection. As stated in Section 5.1, AVL • (k) has a very close performance to AVL * (k) and its performance is not plotted here. 6 Further Observation and Discussion An investigation shows that for a quite large range of λ, term rankings by ζ(t i ; λ) and χ 2 (t i ) have a strong correlation (the Spearman's rank correlation coefficient is bigger than 0.999). In order to com- In Figure There are actually two kinds of sparseness in a (vectorized) document collection: collection sparseness: the high-dimensional learning space contains few training samples; document sparseness: a document vector has few nonzero dimensions. In this study, only the document sparseness is investigated. The collection sparseness might be a backroom factor influencing the actual performance on different document collections. This might explain why the explicit characteristics of STS are not the same on CE to 20NG: (comparing with χ 2 , see Figure 20NG. The F 1 improvements at low dimensionalities is not quite significant, but AVL remains a lower level. For higher k, there is less difference in F 1 , but the smaller AVL yield lower computational cost than χ 2 . Nevertheless, STS shows a stable behavior for various dimensionalities and quite different document collections. The existence of the universal constant γ empowers it to be adaptive and practical. As shown in Figure In this paper, Scalable Term Selection (STS) is proposed and supposed to be more adaptive than traditional high-performing criteria, viz. χ 2 , IG, BNS, etc. The basic idea of STS is to separately measure discriminability and coverage, and adjust the relative importance between them to produce a optimal term subset of a given size. Empirically, the constant relation between target dimensionality and the optimal relative average vector length is found, which turned the idea into implementation. 
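To make the λ–AVL tuning loop described above concrete, here is a minimal sketch under stated assumptions: it reuses the zeta scorer sketched earlier, replaces the paper's Newton iteration with plain bisection (valid because AVL(λ) is monotone, with smaller λ giving larger AVL), and takes the ±0.1 tolerance on AVL; everything else is illustrative.

```python
import numpy as np

def avl_of_lambda(X, y, lam, k, score_fn):
    """Average vector length after keeping the top-k terms ranked by score_fn."""
    keep = np.argsort(-score_fn(X, y, lam))[:k]
    return float((X[:, keep] > 0).sum(axis=1).mean())

def lambda_for_target_avl(X, y, k, target_avl, score_fn, tol=0.1):
    lo, hi = 0.0, 1.0                      # lambda is searched in [0, 1]
    for _ in range(50):                    # bisection; AVL decreases as lambda grows
        mid = (lo + hi) / 2.0
        avl = avl_of_lambda(X, y, mid, k, score_fn)
        if abs(avl - target_avl) <= tol:
            return mid
        if avl > target_avl:
            lo = mid                       # too much coverage -> increase lambda
        else:
            hi = mid
    return (lo + hi) / 2.0
```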
STS showed considerable adaptivity and stability across various dimensionalities and quite different document collections. We observed an increase in categorization accuracy at low dimensionalities and a decrease in computational cost at high dimensionalities. Several observations are notable: the log-linear relation between the optimal average vector length (AVL*) and the dimensionality (k), the semi-log-linear relation between the weight λ and the dimensionality, and the universal constant γ. As future work, STS needs to be tested on more document collections to check whether γ is really universal. In addition, there could be other implementations of the general STS idea, via other metrics of discriminability and coverage, other weighted combination forms, or other term subset evaluations.
1,237
3,115
1,237
Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care. The ability to accurately and efficiently predict mood from easily collectible data has several important implications for the early detection, intervention, and treatment of mental health disorders. One promising data source to help monitor human behavior is daily smartphone usage. However, care must be taken to summarize behaviors without identifying the user through personal (e.g., personally identifiable information) or protected (e.g., race, gender) attributes. In this paper, we study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors. Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood. However, we find that models trained to predict mood often also capture private user identities in their intermediate representations. To tackle this problem, we evaluate approaches that obfuscate user identity while remaining predictive. By combining multimodal representations with privacy-preserving learning, we are able to push forward the performance-privacy frontier.
Mental illnesses can have a damaging permanent impact on communities, societies, and economies all over the world
Figure Intensive monitoring of behaviors via adolescents' natural use of smartphones may help identify realtime predictors of mood in high-risk youth as a proxy for suicide risk Recent work in affective computing has begun to explore the potential in predicting mood from mobile data. Studies have found that typing patterns Prior work has also shown that private information is predictable from digital records of human behavior In this paper, as a step towards using multimodal privacy-preserving mood prediction as fine-grained signals to aid in mental health assessment, we analyze a recent dataset of mobile behaviors collected from adolescent populations at high suicidal risk. With consent from participating groups, the dataset collects fine-grained features spanning online communication, keystroke patterns, and application usage. Participants are administered daily questions probing for mood scores. By collecting and working on ground-truth data for this population, we are able to benchmark on a more accurate indica-tor of mood rather than proxy data such as mood signals inferred from social media content or behavior Intensive monitoring of behaviors via adolescents' frequent use of smartphones may shed new light on the early risk of suicidal thoughts and ideations We begin with a brief review of the data collection process. This data monitors adolescents spanning (a) recent suicide attempters (past 6 months) with current suicidal ideation, (b) suicide ideators with no past suicide attempts, and (c) psychiatric controls with no history of suicide ideation or attempts. Passive sensing data is collected from each participant's smartphone across a duration of 6 months. Participants are administered clinical interviews probing for suicidal thoughts and behaviors (STBs), and self-report instruments regarding symptoms and acute events (e.g., suicide attempts, psychiatric hospitalizations) are tracked weekly via a questionnaire. All users have given consent for their mobile data to be collected and shared with us for research purposes. This study has been carefully reviewed and approved by an IRB. We follow the NIH guidelines, with a central IRB (single IRB) linked to secondary sites. We have IRB approval for the central institution and all secondary sites. 2.1 Mood Assessment via Self-Report Every day at 8am, users are asked to respond to the following question -"In general, how have you been feeling over the last day?" -with an integer score between 0 and 100, where 0 means very negative and 100 means very positive. To construct our prediction task, we discretized these scores into the following three bins: negative (0 -33), neutral (34 -66), and positive (67 -100), which follow a class distribution of 12.43%, 43.63%, and 43.94% respectively. For our 3-way classification task, participants with fewer than 50 daily self-reports were removed since these participants do not provide enough data to train an effective model. In total, our dataset consists of 1641 samples, consisting of data coming from 17 unique participants. We focused on keyboard data, which includes the time of data capture, the mobile application used, and the text entered by the user. For each daily score response at 8am, we use information collected between 5am on the previous day to 5am on the current day. We chose this 5am-5am window by looking at mobile activity and finding the lowest activity point when most people ended their day: 5am. 
Since users report the previous day's mood (when prompted at 8am), we decided to use this 5am-5am time period to summarize the previous day's activities. Through prototyping, this prompt time and frequency were found to give reliable indicators of the previous day's mood. From this window, we extracted the following features to characterize and contextualize typed text. Text: After removing stop-words, we collected the top 1000 words (out of approximately 3.2 million) used across all users in our dataset and created a bag-of-words feature that contains the daily number of occurrences of each word. We show some sample bag-of-timing histograms in Figure App usage: Similar to "positive" words, we define "positive" apps to be those with higher than overall positive mood relative frequency and lower than overall negative mood relative frequency, and "negative" apps to be the opposite. Apps were also then sorted by difference in relative frequency. D.3.2 Understanding the multimodal features Characters with keystrokes: For each user, we plotted histograms of keystroke timings of alphanumeric characters, symbols (punctuation and emojis), spacebar, enter, delete, and use of autocorrect, split across daily mood categories. See Figure Words with keystrokes: For each user, we plotted histograms of the word-level keystroke timings of the top 500 words, split across the daily mood categories of positive, neutral, and negative. We also performed Wilcoxon rank-sum tests at 5% signifi- cance level In this paper, we focus on studying approaches for learning privacy-preserving representations from mobile data for mood prediction. Our processed data comes in the form of {(x t,i , x k,i , x a,i , y i )} n i=1 with x t ∈ N |Vt|=2000 denoting the bag-of-words features, x k ∈ N |V k |=100 denoting the bag-oftimings features, and x a ∈ N |Va|=274 denoting the bag-of-apps features. y denotes the label which takes on one of our 3 mood categories: negative, neutral, and positive. In parallel, we also have data representing the corresponding (one-hot) user identity x id which will be useful when learning privacypreserving representations that do not encode information about user identity x id and evaluating privacy performance. We considered two unimodal baselines: 1. Support Vector Machines (SVMS) project training examples to a chosen kernel space and finds the optimal hyperplane that maximally separates each class of instances. We apply an SVM classifier on input data x uni ∈ {x t , x k , x a } and use supervised Boxes with numbers denote which parameters are being optimized in the corresponding step. For example, in the addition phase (3), NI-MLP optimizes parameters δ in g(.; δ). (2a) depicts identity-dependent dimensions z id , which is a sparse vector of size dim(z feat ) whose nonzero values (colored purple) signify dimensions of the identity-dependent subspace in z feat . learning to predict daily mood labels y. 2. Multilayer Perceptrons (MLPS) have seen widespread success in supervised prediction tasks due to their ability in modeling complex nonlinear relationships. Because of the small size of our dataset, we choose a simple multilayer perceptron with two hidden layers. Similarly, we apply an MLP classifier on input data x uni ∈ {x t , x k , x a } to predict daily mood labels y. 
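To make the setup above concrete, the following sketch discretises the daily self-report into the three mood bins and applies a two-hidden-layer MLP to one bag-of-features input in PyTorch. The layer sizes and the random toy inputs are illustrative assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

def mood_bin(score: int) -> int:
    """0-33 -> negative (0), 34-66 -> neutral (1), 67-100 -> positive (2)."""
    return 0 if score <= 33 else (1 if score <= 66 else 2)

class UnimodalMLP(nn.Module):
    """Two-hidden-layer MLP applied to one of x_t, x_k, or x_a."""
    def __init__(self, in_dim: int, hidden: int = 128, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage with random bag-of-words counts (2000-dim, matching x_t above).
x = torch.randint(0, 5, (8, 2000)).float()
y = torch.tensor([mood_bin(s) for s in [10, 50, 80, 90, 20, 60, 70, 40]])
model = UnimodalMLP(in_dim=2000)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
```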
We extend both SVM and MLP classifiers using early fusion While classifiers trained with traditional supervised learning can learn useful representations for mood prediction, they carry the risk of memorizing the identity of the user along with their sensitive mobile usage and baseline mood scores, and possibly revealing these identities to adversarial thirdparties We adapt the Selective-Additive Learning (SAL) framework (1) Pretrain phase: The input is a set of (multimodal) features x that are likely to contain both identity-dependent and independent information. The intermediate representation z feat = f feat (x; θ * feat ) is obtained from an MLP classifier pretrained for mood prediction. f feat denotes the classifier with pretrained parameters θ * feat . (2) Selection phase: Our goal is to now disentangle the identity-dependent and independent information within z feat . We hypothesize that dependent and independent information are encoded in separate subspaces of the feature vector z feat . This allows us to disentangle them by training a separate classifier to predict z feat as much as possible given only the user identity: (1) where x id denotes a one hot encoding of user identity as input, f id denotes the identity encoder with parameters θ id , and λ denotes a hyperparameter that controls the weight of the 1 regularizer. f id projects the user identity encodings to the feature space learned by f feat . By minimizing the objective in equation (1) for each (x, x id ) pair, f id learns to encode user identity into a sparse vector z id = f id (x id ; θ * id ) representing identity-dependent features: the nonzero values of z id represent dimensions of the identity-dependent subspace in z feat , while the remaining dimensions belong to the identity-independent subspace. (3) Addition phase: Given two factors z feat and z id , to ensure that our prediction model does not capture identity-related information z id , we add multiplicative Gaussian noise to remove information from the identity-related subspace z id while repeatedly optimizing for mood prediction with a final MLP classification layer g(z feat , z id ; δ). This resulting model should only retain identity-independent features for mood prediction: where ∼ N (0, σ 2 ) is repeatedly sampled across batches and training epochs. We call this approach NOISY IDENTITY MLP, or NI-MLP for short, and summarize the final algorithm in Figure There is often a tradeoff between privacy and prediction performance. To control this tradeoff, we vary the parameter σ, which is the variance of noise added to the identity-dependent subspace across batches and training epochs. σ = 0 recovers a standard MLP with good performance but reveals user identities, while large σ effectively protects user identities but at the possible expense of mood prediction performance. In practice, the optimal tradeoff between privacy and performance varies depending on the problem. For our purposes, we automatically perform model selection using this performance-privacy ratio R computed on the validation set, where is defined as the improvement in privacy per unit of performance lost. Here, s is defined as the accuracy in user prediction and t is defined as the F1 score on mood prediction. We perform experiments to test the utility of text, keystroke, and app features in predicting daily mood while keeping user privacy in mind. Data splits: Given that our data is longitudinal, we split our data into 10 partitions ordered chronologically by users. 
We do so in order to maintain independence between the train, validation, and test splits in the case where there is some form of time-level dependency within our labels. Evaluation: For each model, we run a nested kfold cross-validation (i.e., we perform 9-fold validation within 10-fold testing). For each test fold, we identify the optimal parameter set as the one that achieves the highest mean validation score over the validation folds. To evaluate NI-MLP, we use the best performing MLP model for each test fold as our base classifier before performing privacypreserving learning. For all experiments, we report the test accuracy and macro F1 score because our classes are imbalanced. Given the low number of cross-validation folds, we use the Wilcoxon signedrank test We make the following observations regarding the learned language and multimodal representations for mood prediction: set) as our baseline. From Table Observation 2: Pretrained sentence encoders struggle on this task. We also applied pretrained sentence encoders such as BERT 1. BERT is suitable for written text on the web (Wikipedia, BookCorpus, carefully humanannotated datasets) which may not generalize to informal typed text that contains emojis, typos, and abbreviations (see Section 4.4 for a qualitative analysis regarding the predictive abilities of emojis and keystrokes for mood prediction). 2. We hypothesize that it is difficult to capture such long sequences of data (>1000 time steps) spread out over a day. Current work has shown that BERT struggles with long sequence lengths Observation 3: Fusing both text and keystroke timings improves performance. This dataset presents a unique opportunity to study representations of typed text as an alternative to conventionally studied written or spoken text. While the latter two use language alone, typed text includes keystroke features providing information about the timings of when each character was typed. In Table 1, we present some of our initial results in learning text and keystroke representations for mood prediction and show consistent improvements over text alone. We further study the uniqueness of typed text by comparing the following baselines: 1. Text: bag-of-words only. 2. Text + char keystrokes: bag-of-words and bagof-timings across all characters. 3. Text + split char keystrokes: bag-of-words and bag-of-timings subdivided between 6 groups: alphanumeric characters, symbols, spacebar, enter, delete, and use of autocorrect. This baseline presents a more fine-grained decomposition of the typing speeds across different semantically related character groups. 4. Text + word keystrokes: bag-of-words and bagof-timings summed up over the characters in each word. This presents a more interpretable model to analyze the relationships between words and the distribution of their typing speeds. From Table Observation 4: Multimodal representation learning achieves the best performance. In Table Despite these promising results in mood prediction, we ask an important question: Does the model capture user identities as an intermediate step towards predicting mood? To answer this question, we an- alyze the privacy of raw mobile data and trained models. We then study our proposed method of learning privacy-preserving features to determine whether it can obfuscate user identity while remaining predictive of daily mood. How private is the mobile data? 
We evaluate how much the data reveal user identities by training predictive models with typed text, keystroke timings, and app usage as input and user identity as the prediction target. From Table How private are the learned privacy-preserving features? We also study whether our learned features are correlated with user identity through both visualizations and quantitative evaluations. Visualizations: We use t-SNE (Van der As an attempt to reduce reliance on user identity, we train NI-MLP which is designed to obfuscate user-dependent features. After training NI-MLP, we again visualize the representations learned in Figure Quantitative evaluation: To empirically evaluate how well our models preserve privacy, we extracted the final layer of each trained model and fit a logistic regression model to predict user identity using these final layer representations as input. The more a model preserves privacy, the harder it should be to predict user identity. From Table Figure NI-MLP provides a tunable parameter σ to control the tradeoff, which allows us to plot a range of (performance, privacy) points. Using a multimodal model on text, keystroke, and app features obtains better performance and privacy at the same time. for the best multimodal model, which indicates the possibility of NI-MLP as a means of achieving privacy-preserving mood prediction. Understanding the tradeoff between performance and privacy: NI-MLP provides a tunable parameter σ to control the variance of noise applied on the identity-related dimensions. This parameter σ has the potential to give a tradeoff between privacy and prediction performance. In Figure To further shed light on the relationships between mood prediction performance and privacy, we performed a more in-depth study of the text, keystroke, and app usage features learned by the model (see Appendix D.3 for more examples). Text: We find that several words are particularly indicative of mood: can't/cant, don't/don't, and sorry are negative for more users than positive, while yes is overwhelmingly positive across users (9 pos, 1 neg), but yeah is slightly negative (5 pos, 7 neg). We also analyze the use of emojis in typed text and find that while there are certain emojis that lean positive (e.g., ), there are ones (e.g., :( and ) that used in both contexts depending on the user (see Table Apps: In Table We also analyze how the same characters and words can contribute to different mood predictions based on their keystroke patterns. As an example, the distribution of keystrokes for the enter character on the keyboard differs according to the daily mood of one user (see Figure In this paper, we investigated the learning of language and multimodal representations of typed text collected from mobile data. We studied the challenge of learning markers of daily mood as a step towards early detection and intervention of mental health disorders for social good. Our method also shows promising results in obfuscating user identities for privacy-preserving learning, a direction crucial towards real-world learning from sensitive mobile data and healthcare labels. In addition, our findings illustrate several challenges and opportunities in representation learning from typed text as an understudied area in NLP. 
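As a rough, non-authoritative sketch of the NI-MLP mechanism summarised earlier: an identity encoder is trained with an L1 penalty to reconstruct the pretrained features from the one-hot user identity, its approximately nonzero output dimensions are treated as the identity-dependent subspace, and multiplicative Gaussian noise is applied there before the final mood classifier. The exact way z_feat and z_id are combined in the original selective-additive formulation may differ; the dimensions, threshold, and λ/σ values below are assumptions.

```python
import torch
import torch.nn as nn

feat_dim, n_users, n_classes = 128, 17, 3
sigma, lam = 1.0, 1e-3

id_encoder = nn.Linear(n_users, feat_dim)        # f_id: one-hot identity -> feature space

def selection_loss(z_feat, x_id):
    """Selection phase: ||z_feat - f_id(x_id)||^2 + lam * ||f_id(x_id)||_1."""
    z_id = id_encoder(x_id)
    return ((z_feat - z_id) ** 2).sum(dim=1).mean() + lam * z_id.abs().sum(dim=1).mean()

def noisy_features(z_feat, x_id):
    """Addition phase: multiplicative Gaussian noise on identity-dependent dimensions."""
    z_id = id_encoder(x_id).detach()
    mask = (z_id.abs() > 1e-6).float()           # nonzero dims approximate the identity subspace
    noise = 1.0 + sigma * torch.randn_like(z_feat)
    return z_feat * (1 - mask) + z_feat * mask * noise

classifier = nn.Linear(feat_dim, n_classes)      # final mood classifier g(.) on obfuscated features
```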
Limitations & future work: While our approach shows promises in learning representations for mood prediction, several future directions on the modeling and NLP side include: 1) better models and pre-training algorithms for NLP on typed text, 2) algorithms that provide formal guarantees of privacy Applications in mental health: Suicide is the second leading cause of death among adolescents. In addition to deaths, 16% of high school students report seriously considering suicide each year, and 8% make one or more suicide attempts "Just-in-time" adaptive interventions delivered via mobile health applications provide a platform of exciting developments in low-intensity, high-impact interventions To realize this goal, we need accurate and timely methods that predict when interventions are most needed. Monitoring (with participants' permission) mobile data to assess mental health and provide early interventions is, therefore, a rich opportunity for scalable deployment across high-risk populations. Our data collection, experimental study, and computational approaches provide a step towards data-intensive longitudinal monitoring of human behavior. However, one must take care to summarize behaviors from mobile data without identifying the user through personal (e.g., personally identifiable information) or protected attributes (e.g., race, gender). This form of anonymity is critical when implementing these technologies in real-world scenarios. Our goal is to be highly predictive of mood while remaining as privacy-preserving as possible. We outline some of the potential privacy and security concerns below. Limitations: While we hope that our research can provide a starting point on the potential of detecting mood unobtrusively throughout the day in a privacy-preserving way, we strongly acknowledge there remain methodological issues where a lot more research needs to be done to enable the realworld deployment of such technologies. We emphasize that healthcare providers and mobile app startups should not attempt to apply our approach in the real world until the following issues (and many more) can be reliably resolved: 1. We do not make broad claims across teenage populations from only 17 participants in this study. Furthermore, it remains challenging for models to perform person-independent prediction which makes it hard to deploy across large populations. 2. Our current work on predicting daily mood is still a long way from predicting imminent suicide risk. Furthermore, any form of prediction is still significantly far away from integrating methods like this into the actual practice of mental health, which is a challenging problem involving a broad range of medical, ethical, social, and technological researchers 3. Text and keystrokes can differ for participants who speak multiple languages or non-prestige vernaculars. One will need to ensure that the method works across a broad range of languages to ensure accessibility in its desired outcomes. 4. This study assumes that participants have no restrictions for data/network connections & data plans on their phones, which may leave out vulnerable populations that do not meet this criterion. Privacy and security: There are privacy risks associated with making predictions from mobile data. To deploy these algorithms across at-risk populations, it is important to keep data private on each device without sending it to other locations. 
Even if data is kept private, it is possible to decode data from gradients (Zhu and Han, 2020) or pretrained models We acknowledge that there is a risk of exposure bias due to imbalanced datasets, especially when personal mobile data and sensitive health labels (e.g., daily mood, suicidal thoughts and behaviors, suicide risk). Models trained on biased data have been shown to amplify the underlying social biases especially when they correlate with the prediction targets Overall, we believe that our proposed approach can help quantify the tradeoffs between performance and privacy. We hope that this brings about future opportunities for large-scale real-time analytics in healthcare applications. The Mobile Assessment for the Prediction of Suicide (MAPS) dataset was designed to elucidate real-time indicators of suicide risk in adolescents ages 13 -18 years. Current adolescent suicide ideators and recent suicide attempters along with aged-matched psychiatric controls with no lifetime suicidal thoughts and behaviors completed baseline clinical assessments (i.e., lifetime mental disorders, current psychiatric symptoms). Following the baseline clinical characterization, a smartphone app, the Effortless Assessment of Risk States (EARS), was installed onto adolescents' phones, and passive sensor data were acquired for 6-months. Notably, during EARS installation, a keyboard logger is configured on adolescents' phones, which then tracks all words typed into the phone as well as the apps used during this period. Each day during the 6month follow-up, participants also were asked to rate their mood on the previous day on a scale ranging from 1 -100, with higher scores indicating a better mood. After extracting multimodal features and discretizing the labels (see Section 2), we summarize the final dataset feature and label statistics in Table We provide additional details on the model implementation and experimental setup. All models and analyses were done in Python. SVM models were implemented with Scikitlearn and MLP/NI-MLP models were implemented with PyTorch. BERT, XLNet, and Longformer models were fine-tuned using Hugging Face (website: We performed a small hyperparameter search over the ranges in Table Each model has about two million parameters. See Table All experiments were conducted on a GeForce RTX 2080 Ti GPU with 12 GB memory. See Table We present several additional analysis of the data and empirical results: There is often a tradeoff between privacy and prediction performance. To control this tradeoff, we vary the parameter σ, which is the amount of noise added to the identity-dependent subspace across batches and training epochs. In practice, we automatically perform model selection using this performance-privacy ratio R computed on the validation set, where is defined as the improvement in privacy per unit of performance lost. Here, s is defined as the accuracy in the user prediction task and t is defined as the F1 score on the mood prediction task. In the rare cases where NI-MLP performed better than the original MLP and caused R to become negative, we found this improvement in performance always came at the expense of worse privacy as compared to other settings of λ and σ in NI-MLP. Therefore, models with negative R were not considered for Table For Table Interestingly, in Figure Note that we do not include privacy results for features learned by SVM, which finds a linear separator in a specified kernel space rather than learning a representation for each sample. 
Explicitly projecting our features is computationally infeasible due to the high dimensionality of our chosen kernel spaces. In this section, we provide more empirical analysis on the unimodal and multimodal features in the MAPS dataset. Text: We begin with some basic statistics regarding word distributions. For each user, we tallied the frequencies of each word under each daily mood category (positive, neutral, and negative), as well as the overall number of words in each mood category. We define "positive" words and emojis to be those with a higher relative frequency of positive mood compared to the overall positive mood frequency, and lower than overall negative mood frequency. Likewise, "negative" words and emojis have higher than overall negative mood frequency and lower than overall positive mood frequency. We filtered out words for specific users if the word was used less than 40 times. Finally, we ranked the words by the difference in relative frequency (i.e., a word is "more positive" the larger the difference between its positive mood relative frequency and the user's overall positive mood relative frequency). See Table Since this is a new dataset, we explored several more methods throughout the research process. In this section we describe some of the approaches that yielded initial negative results despite them working well for standard datasets: 1. User specific models: We also explored the setting of training a separate model per user but we found that there was too little data per user to train a good model. As part of future work, we believe that if NI-MLP can learn a user-independent classifier, these representations can then be used for further finetuning or few-shot learning on each specific user. Previous work in federated learning 2. User-independent data splits: We have shown that text, keystrokes, and app usage features are highly dependent on participant identities. Consequently, models trained on these features would perform poorly when evaluated on a user not found in the training set. We would like to evaluate if better learning of user-independent features can improve generalization to new users (e.g., split the data such that the first 10 users are used for training, next 3 for validation, and final 4 for testing). Our initial results for these were negative, but we believe that combining better privacy-preserving methods that learn user-independent features could help in this regard. 3. Fine-grained multimodal fusion: Our approach of combining modalities was only at the input level (i.e., early fusion
1,319
113
1,319
Dating Greek Papyri with Text Regression
Dating Greek papyri accurately is crucial not only for editing their texts but also for understanding numerous aspects of ancient writing, document and book production and circulation, as well as administration, everyday life, and the intellectual history of antiquity. Although a substantial number of Greek papyrus documents bear a date or other conclusive evidence for their chronological placement, an even larger number can only be dated tentatively or approximately, due to the lack of decisive evidence. By creating a dataset of 389 transcriptions of documentary Greek papyri, we train 389 regression models and predict a date for the papyri with an average MAE of 54 years and an MSE of 1.17, outperforming image classifiers and other baselines. Last, we release date estimations for 159 manuscripts for which only the upper limit is known.
Ancient textual artefacts are arguably the richest source of information on the ancient world. In the Graeco-Roman world and particularly in its Greekspeaking part, the most extensive coeval texts come from inscriptions and papyri. The latter is a collective term used for all ancient manuscripts, regardless of their writing material which, apart from papyrus, may be parchment, pottery, wood, and others. To correctly evaluate and make good use of these texts, we need to determine their date, provenance and historical context of their production and use. As far as dating is concerned, the value of the relevant evidence provided by the artefacts themselves varies considerably, ranging from a direct date in the text (following, of course, the calendar and dating system of the respective historical period) to no evidence at all. In between, there are texts containing references to known historical figures and events of a certain period, papyri which have been found next to other objects that can be dated, or other indirect evidence. The presence or absence of a date depends on the type of text preserved on the papyrus and its use through time, as well as on its state of conservation. Just like in modern times, it is much more likely to include a date in an official letter than in a page torn from a novel book. At the same time, it is more probable to find a date in a fully surviving letter than in a damaged one missing, for instance, the upper part of the first page. Greek papyri, which mostly survive in fragments, are divided into two broad categories: books (literary and sub-literary papyri) and documents of all kinds (documentary papyri). The former ones never carry a date, whereas the latter often do, albeit not always unambiguously convertible by modern scholars. Most importantly for our study, literary papyri contain copies of works authored many years (often centuries) before the production of the actual manuscripts. On the other hand, documentary texts were usually written down as they were composed or shortly after that, making the content of their texts contemporary to their writing style or script. Therefore, any temporal indication in the text is also dating evidence regarding the production of the document. Even when there is no direct date in the text (e.g. Figure When neither direct or indirect dating is possible, papyrologists resort to palaeography, the study of the script. In palaeography, particular writing styles are associated with certain chronological periods. Therefore, similar writing styles point to similar dates In this study we focus on computational dating of Greek documentary papyri based on their transcriptions, contributing in the following three ways: 1. We present and publicly release a machineactionable dataset of 389 documentary Greek papyri, containing texts of various aspects of daily life (e.g. contracts, receipts, letters). 2. We draw the baseline in text regression for the tasks of dating experimenting with Monte Carlo and leave one out cross validation. 3. We apply a committee of regressors to three papyri, which present different types of dating challenges, and on 159 manuscripts for which only the upper date limit is known. This approach does not apply to literary papyri and our research involves solely documents. Apart from their texts being contemporary with the actual manuscripts (by dating the text, we date the papyrus), nonliterary papyri also include vastly more numerous objectively dated specimens than literary ones. 
Specific dates in our training set also allow our models to make more accurate predictions, with narrower date spans.
Dating historical documents with computational means has been studied for many languages The studied languages are Latin The employed methods usually were standard machine learning methods, such as KNN Pre-trained convolutional neural networks have been used to extract features, which are passed to a classifier or regressor Our dataset, which we release publicly, The dataset was compiled mainly from PA-PYRI. Nonliterary papyri in Greek from the 3rd c. BCE to the 7th c. CE are written in a great variety of cursive hands The date of a manuscript may be found in different forms. It can be an exact date, a range of years, a starting date (not before that date), or an ending date (not after that date), or two-three alternative dates. Our dataset has been curated so that dating applies at the level of the quarter of the century, by considering manuscripts dated exactly or with a period ranging within that quarter. We did not consider manuscripts that were dated only before or after a specific date. Our first dataset comprised 400 manuscripts, 40 samples per century. Our initial pool consisted of 77,040 items and we opted for ones that satisfy the following conditions: • The transcriptions must be available in machine actionable form. • The papyri must contain documents (not works of literature) to ensure that text and papyrus are contemporary. • The papyri must be securely and accurately dated. Many papyri do not carry a date and are, therefore, dated with subjective criteria or with a large date span (e.g. 1st-2ndCE). • The image is available, to allow image-based dating and potentially jointly from different modalities: text and image. Given these limitations, it was the 7thCE that dictated the size per century of a balanced dataset, since there are not more than 40 securely dated papyri from 7thCE. For each of these records, the text was retrieved afterwards from PAPYRI.INFO by parsing the respective XML files. We discarded records whose extracted text was less than ten characters, which resulted in our final 389 records. From these records, we extracted the entire text from one side of the papyrus (the side that had more text than the other). In the few cases of papyri with more than one fragment, we only included the first one. This decision was based on weighing the benefit of avoiding a considerable amount of noise during automatic parsing against eliminating a portion of text, in a dataset whose nature is by definition fragmentary. The transcribed text comprises a variety of characters and symbols. We preprocessed the data by lowercasing and normalising the text (see Table The transcriptions available are not diplomatic (reflecting exactly what is written) but normalised according to modern conventions, for example as far as punctuation and word separation (or sometimes spelling) are concerned. Therefore, we chose to disregard these conventions, because they do not represent data present in our sources, but normalisation on the papyrologists' part for the purpose of scholarly editions. To provide some more concrete examples, there is no capitalization of proper names or initial words in sentences in papyri. Punctuation is very scarce and sometimes completely absent. Diacritics are not meaningless, but they are extremely rare in documentary papyri (i.e., except diaresis which is used in a different way than modern conventions, to mark iota and upsilon as the first letter of a word). Breathings and accents are marked inconsistently (if at all) by different scribes. 
Hence, removing diacritics leads to inclusion and can help avoid multiple variations of what is in fact the same word. Regarding spelling, we kept both the original and the corrected form (if provided by the editors), because spelling mistakes reflect language evolution. The overall text length per quarter of century varies over time, as can be seen in Figure The most frequent character in our dataset is 'α' (35,101 occurrences), followed by 'ο' (33,176), 'ι' In order to assess the quality of the ground truth, we employed the Callimachus' Conservation number (CCN), To estimate the date of production of manuscripts, we opted for text regression, taking advantage of the continuous target objective. Statistical validity was established with 5-fold Monte Carlo crossvalidation. The best regression method was used to form a committee of models, which were applied on unseen data in order to analyse the predictions. We performed Monte Carlo cross-validation, by sampling 90% for training, 10% for validation, and then re-sampling with replacement five times. We report the mean absolute error (MAE), the mean squared error (MSE), and the explained variance (R 2 ). Besides the average results across folds, we also report the best score achieved per metric. Fernández-Delgado et al. ( Using the best-performing regression method out of the ones examined, we performed leave one out cross-validation, which allowed an evaluation using the whole dataset. Furthermore, it yielded as many regressors as the data points, which in our case is 389. We used these models to form a committee and date unseen papyri (further discussed in §6). This section presents our experimental results using regression on textual features to date Greek manuscripts. First, we present preliminary experiments and then we analyse the experimental findings from our regression analysis. Preliminary experiments comprised image classification By using the open-access web interface, Experiments were undertaken with Google Colaboratory, using a 12GB NVIDIA Tesla K80 GPU. We extracted term-frequency-inverse-documentfrequency features using lower-cased text and character n-grams (from 1-to 5-grams). 9 All other parameters were set to default values. 10 Linear regression achieved a MAE of 86 years on average (Table Using the best performing XTrees, we performed leave one out cross validation, by hiding one instance, training the algorithm on the remaining instances, and then using the model to predict the hidden record. 11 The MAE was found to be 55 years, MSE was 1.11, and R 2 was 85.89, close to the Monte Carlo evaluation scores. In order to better understand the errors, we rounded the predictions and the ground truth, evaluating as if we would in a classification setting. Predictions most often fall on or close to the diagonal (Figure 10 Manual hyper-parameter tuning per regressor yielded insignificant improvements. 11 The experiment lasted 15 hours. achieved for the 1st and 2nd CE, followed by the 7th CE (see Table In very few cases, our leave-one-out regression fell considerably out of its predictions (Figure We applied our 389 regressors, produced upon leave-one-out cross-validation, to three use cases, which present different types of dating challenges. This document This papyrus, also shown in Figure The last manuscript 14 contains a request for transfer of taxation from 538 CE. It is a geographical outsider since it does not come from Egypt but from Petra (Jordan). 
We tested this manuscript since many of the words found in the text are infrequent in Egyptian manuscripts, on which our models are trained. The date mentioned in the papyrus is "second indiction". This refers to the second year of a repeated fifteen-year cycle (indiction) and the year 538 is relative, since it could be the second year of the previous or the next indiction (523 or 553). 538 is logically deduced by the editors in view of the whole dossier of papyri from Petra. Our models date this manuscript to 555 CE (521-575 CE), overcoming the geographical variation. The computational, quantitative method suggested in this work is intended to complement human expertise. Its main contribution lies in providing an additional dating criterion for ancient Greek documents, in addition to the ones usually employed by papyrologists (palaeography, onomastics, prosopography, toponymy, archaeological evidence, etc.). It can predict a date for those papyri that do not include one, narrow down the possible time-span of doubtful dating, or contribute to deciding on one particular date when several alternatives seem possible. Despite the fact that limitations exist (discussed in §7.3), compared to traditional approaches the models trained in this study are expected to reduce biases. Their value is not limited to predicting dates for individual manuscripts, but they can be applied to any attribute of a group of papyri, e.g. the place of provenance or the text's type. At the same time, easily accessible open-source metadata exist for most published papyri ( §3.1). The use of supervised learning, such as the work of In the case of PSI 8 934 ( §6.1), our investigation showed that the mention of the name 'Aurelios Victor' ('Αὐρήλιος Βίκτωρ') influenced the decision, resulting to a more recent date than what would have been predicted otherwise. Similarly, in the case of P. Petra 1 5 ( §6.3), the decision was influenced by a reference to 'indiction' ('ἰνδικτίωνος'), a word that refers to a periodic reassessment of taxation in the Late Roman Empire. Computational dating can facilitate a macroscopic analysis of vaguely dated or undated manuscripts. By generating estimated dates for hundreds of such manuscripts, the expert can view from distance the collection, potentially drawing useful conclu-sions or making significant remarks. To test this hypothesis, we collected 220 manuscripts dated with an upper CE date limit (i.e., not after that date). We formed a committee of regressors, Our experimental analysis proved that text regression is a considerably reliable and accurate tool in dating nonliterary papyri. Limitations and challenges stem mainly from the composition of our dataset, which is balanced as far as the dates of the papyri included are concerned, both at the level of the century (approx. 40 records per century) and at the level of the quarter of the century (albeit less strictly and with the exception of the 7th CE). Furthermore, although we retained a substantial text sample of each papyrus, in approximately 1/4 of the records some text was eliminated. Despite our effort to balance the dataset in terms of dates, biases are present. Since our main concern in collecting the data was for the date distribution, no deliberate selection was made on the basis of the document types. Some types are thus over or underrepresented (e.g. private letters that do not usually bear a date; §6.2). 
Each type of document has however distinctive linguistic characteristics, such as the level of formality or unusual constructions (e.g. accounts). This uneven typological representation probably affects the performance of the models. Other possible biases in the dataset concern the provenance of papyri, the length of their text, and the state of conservation (sizeable portions of missing text or entirely missing parts of the documents). Chronological analysis of word occurrence is possible if we detect and collect terms only attested in the papyrological material during a limited period. The word 'denarius' only appears after the 2nd CE and before the 5th CE, its presence in a text thus means that the text must have been written during this timespan. Likewise a text containing the word 'indiction' cannot have been written before the 4th CE. The investigation should also regard the possibility that the models make a prediction for a papyrus based on typical dating formulas present in the text like the name of the ruling emperor. Although our investigation of explanations did not yield any major concerns, a bigger sample of test cases should be created and more explainability methods should be employed Transcription of the papyri is required (at least partial, but substantial) to reach this high degree of accuracy with our method. Thus, while there are transcriptions available for most already published papyri, it is less practical for dating unpublished papyri that have not been yet transcribed to a relatively high standard. In that case, image classification on the scripts can provide a less accurate prediction of the date as starting point. We presented a machine-actionable dataset of 389 Greek documentary papyri of (mostly) Egyptian provenance, dated and balanced in terms of chronological quarter-century distribution. We trained extremely randomised trees on top of character n-gram-based features, reaching a mean absolute error of 54 years and 60% in century-level classification accuracy. We then formed a committee of regressors, which we applied to three use cases: a land lease, a private letter, and a geographical outsider (not from Egypt). To assist future research, our committee dated 159 manuscripts, for which only the upper limit is known. Future endeavours for this research extend far beyond the dating of individual manuscripts. It can produce valuable data for the study of the Greek language and its evolution through a millennium, help identify and trace linguistic habits and trends, as well as the history of document production, circulation, and use (e.g. which period produces what kind of texts, which administration relied on what type of documents, etc.). It can also produce further data and resources towards the typology of ancient Greek documents, completing with computational methods the work already underway and well-advanced of the grammateus project. Last, it can in the future fruitfully be combined with computational paleography to analyse the script and content of a given text.
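For reference, a minimal reproduction sketch of the pipeline described above (not the authors' released code): lowercasing and diacritic stripping, character 1-5-gram tf-idf features, an extremely-randomised-trees regressor, and 5-split 90/10 Monte Carlo cross-validation scored with MAE. The function signature, the number of trees, and the year-valued targets are assumptions.

```python
import unicodedata
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import ShuffleSplit, cross_val_score

def normalise(text: str) -> str:
    """Lowercase and strip diacritics, as in the preprocessing described above."""
    text = text.lower()
    return "".join(c for c in unicodedata.normalize("NFD", text)
                   if unicodedata.category(c) != "Mn")

def evaluate(transcriptions, dates):
    """transcriptions: list of papyrus texts; dates: array-like of target dates (years)."""
    texts = [normalise(t) for t in transcriptions]
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 5)),
        ExtraTreesRegressor(n_estimators=100, random_state=0),
    )
    cv = ShuffleSplit(n_splits=5, test_size=0.1, random_state=0)   # Monte Carlo CV
    mae = -cross_val_score(model, texts, np.asarray(dates, dtype=float),
                           cv=cv, scoring="neg_mean_absolute_error").mean()
    return mae
```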
870
3,639
870
A Fast Boosting-based Learner for Feature-Rich Tagging and Chunking
Combinations of features contribute to a significant improvement in accuracy on tasks such as part-of-speech (POS) tagging and text chunking, compared with using atomic features. However, selecting combinations of features when learning from large-scale and feature-rich training data requires long training times. We propose a fast boosting-based algorithm for learning rules represented by combinations of features. Our algorithm constructs a set of rules by repeatedly selecting several rules from a small proportion of candidate rules. The candidate rules are generated from a subset of all the features with a technique similar to beam search. We then propose POS tagging and text chunking based on our learning algorithm. Our tagger and chunker use candidate POS tags or chunk tags of each word collected from automatically tagged data. We evaluate our methods on English POS tagging and text chunking. The experimental results show that our algorithm trains about 50 times faster than Support Vector Machines with a polynomial kernel on average, while maintaining state-of-the-art accuracy and faster classification speed.
Several boosting-based learning algorithms have been applied successfully to Natural Language Processing problems, including text categorization. Furthermore, classifiers based on boosting-based learners have shown fast classification speeds. However, boosting-based learning algorithms require long training times. One of the reasons is that boosting creates a final hypothesis by repeatedly generating a weak hypothesis at each training iteration with a given weak learner; these weak hypotheses are combined into the final hypothesis. The training speed of boosting-based algorithms becomes even more of a problem when considering combinations of features, which contribute to improvements in accuracy. This paper proposes a fast boosting-based algorithm for learning rules represented by combinations of features. Our learning algorithm uses the following methods to learn rules from large-scale training samples in a short time while maintaining accuracy: 1) using a rule learner that learns several rules as our weak learner while ensuring a reduction in the theoretical upper bound of the training error of a boosting algorithm, 2) repeatedly learning rules from a small proportion of candidate rules that are generated from a subset of all the features with a technique similar to beam search, and 3) dynamically changing the subsets of features used by the weak learner to alleviate overfitting. We also propose feature-rich POS tagging and text chunking based on our learning algorithm. Our POS tagger and text chunker use candidate tags of each word, obtained from automatically tagged data, as features. The experimental results on English POS tagging and text chunking show drastic improvements in training speed while maintaining accuracy competitive with previous best results and fast classification speeds. 2 Boosting-based Learner
We describe the problem treated by our boostingbased learner as follows. Let X be the set of examples and Y be a set of labels {-1, +1}. Let F = {f 1 , f 2 , ..., f M } be M types of features represented by strings. Let S be a set of training sam-## S = {(xi, yi)} m i=1 : xi ⊆ X ,yi ∈ {±1} ## a smoothing value ε =1 ## rule number r: the initial value is 1. Initialize: For i=1,...,m: w 1,i = exp( 1 2 log( Figure ) be the number of features included in a feature-set x i , which we call the size of x i , and x i,j ∈ F (1 ≤ j ≤ |x i | ) be a feature included in x i . 1 We call a feature-set of size k a k-feature-set. Then we define subsets of feature-sets as follows. Definition 1 Subsets of feature-sets If a feature-set x j contains all the features in a feature-set x i , then we call x i is a subset of x j and denote it as Then we define weak hypothesis based on the idea of the real-valued predictions and abstaining (RVPA, for short) 1 Our learner can handle binary vectors as in 2 We use the RVPA because training with RVPA is faster than training with Real-valued-predictions (RVP) while maintaining competitive accuracy Our boosting-based learner selects R types of rules for creating a final hypothesis F on several training iterations. The F is defined as We use a learning algorithm that generates several rules from a given training samples S = {(x i , y i )} m i=1 and weights over samples {w r,1 , ..., w r,m } as input of our weak learner. w r,i is the weight of sample number i after selecting r -1 types of rules, where 0<w r,i , 1 ≤ i ≤ m and 1 ≤ r ≤ R. Given such input, the weak learner selects ν types of rules {f j } ν j=1 (f j ⊆ F) with gain: where f is a feature-set, and W r,y (f ) is and The weak learner selects a feature-set having the highest gain as the first rule, and the weak learner finally selects ν types of feature-sets having gain in top ν as {f j } ν j=1 at each boosting iteration. Then the boosting-based learner calculates the confidence value of each f in {f j } ν j=1 and updates the weight of each sample. The confidence value c j for f j is defined as After the calculation of c j for f j , the learner updates the weight of each sample with wr+1,i = wr,iexp(-yih f j ,c j ). (1) Then the learner adds (f j , c j ) to F as the rth rule and its confidence value. After the updates of weights with {f j } ν j=1 , the learner starts the next boosting iteration. The learner continues training until obtaining R rules. Our boosting-based algorithm differs from the other boosting algorithms in the number of rules learned at each iteration. The other boosting-based algorithms usually learn a rule at each iteration ## sortByW (F,f q): Sort features (f ∈ F ) ## in ascending order based on weights of features ## (a % b): Return the reminder of (a ÷ b) Figure The initial weights are defined with the default rule. 3 Fast Rule Learner We use a method to generate candidate rules without duplication We assign smaller integer to more infrequent features as id. If there are features having the same frequency, we assign id to each feature with lexicographic order of features. Training based on this candidate generation showed faster training speed than generating candidates by an arbitrary order We propose a method for learning rules by repeating to select a rule from a small portion of candidate rules. 
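For illustration, the per-rule statistics driving the weak learner can be written out as below. The exact gain and confidence expressions are not fully preserved in this text, so the forms used here (gain = |sqrt(W_+1) - sqrt(W_-1)|, confidence c = 0.5 * log((W_+1 + ε)/(W_-1 + ε)) with the smoothing value ε = 1 from the pseudocode, and the weight update w_{r+1,i} = w_{r,i} * exp(-y_i * h_{f,c}(x_i)) with an abstaining hypothesis) follow the standard confidence-rated RVPA formulation and should be treated as assumptions.

```python
import math

def rule_stats(samples, weights, feature_set):
    """W_{r,y}(f): total weight of samples with label y whose feature sets contain f.

    samples: list of (set_of_features, label in {-1, +1}); weights: list of floats.
    """
    w_pos = w_neg = 0.0
    for (x, y), w in zip(samples, weights):
        if feature_set <= x:                     # f is a subset of the example's feature set
            if y == +1:
                w_pos += w
            else:
                w_neg += w
    return w_pos, w_neg

def gain(samples, weights, feature_set):
    w_pos, w_neg = rule_stats(samples, weights, feature_set)
    return abs(math.sqrt(w_pos) - math.sqrt(w_neg))

def confidence(samples, weights, feature_set, eps=1.0):
    w_pos, w_neg = rule_stats(samples, weights, feature_set)
    return 0.5 * math.log((w_pos + eps) / (w_neg + eps))

def update_weights(samples, weights, feature_set, c):
    """w_{r+1,i} = w_{r,i} * exp(-y_i * h(x_i)); h abstains (outputs 0) if f is not in x_i."""
    return [w * math.exp(-y * (c if feature_set <= x else 0.0))
            for (x, y), w in zip(samples, weights)]
```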
We evaluated the effectiveness of four types of methods to learn a rule from a subset of features on boosting-based learners with a text chunking task The results showed that Frequency-based distribution (F-dist) has shown the best accuracy. F-dist Figure However, we guess training using a subset of features depends on how to distribute features to buckets like online learning algorithms that generally depend on the order of the training examples To alleviate the dependency on selected buckets, we propose a method that redistributes features, called Weight-based distribution (W-dist). W-dist redistributes features to buckets based on the weight of feature defined as for each f ∈ F after examining all buckets. Figure 2 describes an overview of W-dist. We propose a weak learner that learns several rules from a small portion of candidate rules. Figure We also use the following pruning techniques • Frequency constraint: We examine candidates seen on at least ξ different examples. • Size constraint: We examine candidates whose size is no greater than a size threshold ζ. • Upper bound of gain: We use the upper bound of gain defined as For any feature-set f ⊆F, which contains f (i.e. Figure • words, words that are turned into all capitalized, prefixes and suffixes (up to 4) in a 7-word window. • labels assigned to three words on the right. • whether the current word has a hyphen, a number, a capital letter • whether the current word is all capital, all small • candidate POS tags of words in a 7-word window Figure Figure The training of AdaBoost.SDF with (ν = 1, ω = ∞, 1 < |B| ) is equivalent to the approach of AdaBoost.DF 4.1 English POS Tagging We used the Penn Wall Street Journal treebank We collect candidate POS tags of each word from the automatically tagged corpus provided for the shared task of English Named Entity recognition in CoNLL 2003. 4 The corpus includes 17,003,926 words with POS tags and chunk tags 4 • labels assigned to two words on the right. • candidate chunk tags of words in a 5-word window Figure We collected candidate POS tags of words that appear more than 9 times in the corpus. We express these candidates with one of the following ranges decided by their frequency f q; 10 ≤ f q < 100, 100 ≤ f q < 1000 and 1000 ≤ f q. For example, we express 'work' annotated as NN 2000 times like "1000≤NN". If 'work' is current word, we add 1000≤NN as a candidate POS tag feature of the current word. If 'work' appears the next of the current word, we add 1000≤NN as a candidate POS tag of the next word. We used the data prepared for CoNLL-2000 shared tasks. The data consists of subsets of Penn Wall Street Journal treebank; training (sections 15-18) and test (section 20). We prepared the development set from section 21 of the treebank as in For instance, "[He] (NP) [reckons] (VP) [the current account deficit] (NP)..." is represented by IOE2 as follows; "He/E-NP reckons/E-VP the/I-NP current/I-NP account/I-NP deficit/E-NP". We used features shown in Figure • Candidate tags expressed with frequency information as in POS tagging • The ranking of each candidate decided by frequencies in the automatically tagged data • Candidate tags of each word For example, if we collect "work" annotated as I-NP 2000 times and as E-VP 100 time, we generate the following candidate features for "work"; 1000≤I-NP, 100≤E-VP<1000, rank:I-NP=1 rank:E-NP=2, candidate=I-NP and candidate=E-VP. 
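As a concrete illustration of the candidate-tag features described above, the sketch below turns an automatically tagged corpus into frequency-banded features of the form 1000<=NN, rank:I-NP=1, and candidate=I-NP. The function name, the word-frequency cutoff, and the decision to skip tags seen fewer than ten times are assumptions.

```python
from collections import Counter, defaultdict

def candidate_tag_features(tagged_corpus, min_word_freq=10):
    """tagged_corpus: iterable of (word, tag) pairs from the automatically tagged data."""
    tag_counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        tag_counts[word][tag] += 1

    features = defaultdict(list)
    for word, counts in tag_counts.items():
        if sum(counts.values()) < min_word_freq:       # "words that appear more than 9 times"
            continue
        for rank, (tag, fq) in enumerate(counts.most_common(), start=1):
            if fq >= 1000:
                features[word].append("1000<=" + tag)
            elif fq >= 100:
                features[word].append("100<=" + tag + "<1000")
            elif fq >= 10:
                features[word].append("10<=" + tag + "<100")
            else:
                continue                               # assumed: very rare tags are ignored
            # additional candidate features of the kind used for chunking
            features[word].append("rank:%s=%d" % (tag, rank))
            features[word].append("candidate=" + tag)
    return features
```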
We converted the chunk representation of the automatically tagged corpus to IOE2 and we collected chunk tags of each word appearing more than nine times. AdaBoost.SDF treats the binary classification problem. To extend AdaBoost.SDF to multi-class, we used the one-vs-the-rest method. To identify proper tag sequences, we use Viterbi search. We map the confidence value of each classifier into the range of 0 to 1 with sigmoid function 7 , and select a tag sequence which maximizes the sum of those log values by Viterbi search. We compared AdaBoost.SDF with Support Vector Machines (SVM). SVM has shown good performance on POS tagging We tested R=100,000, |B|=1,000, ν = {1,10,100}, ω={1,10,100,∞}, ζ={1,2,3}, and ξ={1,5} for AdaBoost.SDF. We tested the soft margin parameter C={0.1,1,10} and the kernel degree d={1,2,3} for SVM. 9 We used the followings for comparison; Training time is time to learn 100,000 rules. Best training time is time for generating rules to show the best F-measure (F β=1 ) on development data. Accuracy is F β=1 on a test data with the rules at best training time. 7 s(X) = 1/(1 + exp(-βX)), where X = F (x) is a output of a classifier. We used β=5 in this experiment. 8 We used TinySVM ( Table Figure Figure These results in Table We compared F β=1 and best training time of Fdist and W-dist. We used ζ = 2 that has shown We measured testing speeds of taggers and chunkers based on rules or models listed in Table Text Chunking F β=1 Regularized Winnow + full parser output 6 Related Works We list previous best results on English POS tagging and Text chunking in Table These results have also shown that AdaBoost.SDF-based taggers and chunkers show competitive accuracy by learning combination of features automatically. Most of these previous works manually selected combination of features except for SVM with polynomial kernel and Learners LazyBoosting randomly selects a small proportion of features and selects a rule represented by a feature from the selected features at each iteration Collins and Koo proposed a method only updates values of features co-occurring with a rule feature on examples at each iteration Kudo et al. proposed to perform several pseudo iterations for converging fast AdaBoost.MH KR learns a weak-hypothesis represented by a set of rules at each boosting iteration AdaBoost.SDF differs from previous works in the followings. AdaBoost.SDF learns several rules at each boosting iteration like AdaBoost.MH KR . However, the confidence value of each hypothesis in AdaBoost.MH KR does not always minimize the upper bound of training error for AdaBoost because the value of each hypothesis consists of the sum of the confidence value of each rule. Compared with AdaBoost.MH KR , AdaBoost.SDF computes the confidence value of each rule to minimize the upper bound of training error on given weights of samples at each update. Furthermore, AdaBoost.SDF learns several rules represented by combination of features from limited search spaces at each boosting iteration. The creation of subsets of features in Ad-aBoost.SDF enables us to recreate the same classifier with same parameters and training data. Recreation is not ensured in the random selection of subsets in LazyBoosting. We have proposed a fast boosting-based learner, which we call AdaBoost.SDF. AdaBoost.SDF repeats to learn several rules represented by combination of features from a small proportion of candidate rules. 
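A minimal sketch of the decoding step described above: one-vs-the-rest confidences are squashed with the sigmoid s(X) = 1/(1 + exp(-5X)) and a Viterbi search selects the tag sequence maximizing the sum of log values. The sketch assumes, for illustration, that each classifier's score can be computed given only the previous tag (a first-order simplification); classifier_score is a hypothetical stand-in for the boosted rule classifiers.

```python
import math

def sigmoid(score, beta=5.0):
    """Map a classifier confidence F(x) into (0, 1), with beta = 5 as in the paper."""
    return 1.0 / (1.0 + math.exp(-beta * score))

def viterbi_decode(sentence, tags, classifier_score):
    """classifier_score(sentence, i, tag, prev_tag) -> confidence of `tag` at position i."""
    n = len(sentence)
    best = [dict() for _ in range(n)]        # best[i][tag] = (log-score, back-pointer)
    for tag in tags:
        best[0][tag] = (math.log(sigmoid(classifier_score(sentence, 0, tag, None))), None)
    for i in range(1, n):
        for tag in tags:
            best[i][tag] = max(
                (best[i - 1][prev][0]
                 + math.log(sigmoid(classifier_score(sentence, i, tag, prev))), prev)
                for prev in tags
            )
    # follow the back-pointers from the best final tag
    tag = max(best[n - 1], key=lambda t: best[n - 1][t][0])
    path = [tag]
    for i in range(n - 1, 0, -1):
        tag = best[i][tag][1]
        path.append(tag)
    return list(reversed(path))
```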
We have also proposed methods that use candidate POS tags and chunk tags of each word, obtained from automatically tagged data, as features in POS tagging and text chunking. The experimental results have shown a drastic improvement in training speed while maintaining accuracy competitive with previous best results. Future work should examine our approach on additional tasks and compare our algorithm with other learning algorithms.
1,155
1,866
1,155
Cree Corpus: A Collection of nêhiyawêwin Resources
Plains Cree (nêhiyawêwin) is an Indigenous language that is spoken in Canada and the USA. It is the most widely spoken dialect of Cree and a morphologically complex language that is polysynthetic, highly inflective, and agglutinative. It is an extremely low resource language, with no existing corpus that is both available and prepared for supporting the development of language technologies. To support nêhiyawêwin revitalization and preservation, we developed a corpus covering diverse genres, time periods, and texts for a variety of intended audiences. The data has been verified and cleaned; it is ready for use in developing language technologies for nêhiyawêwin. The corpus includes the corresponding English phrases or audio files where available. We demonstrate the utility of the corpus through its community use and its use to build language technologies that can provide the types of support that community members have expressed are desirable. The corpus is available for public use 1 .
Recent work with Indigenous persons has shown that some want advanced technologies to support the learning and use of their languages. The Cree and Métis persons involved in this study stated a desire for technologies such as an app to help with learning the structure of the language for conversation, translation, and AI agents that resemble a speaker Government policies have contributed towards supporting the preservation and revitalization of some Indigenous languages, e.g., Inuktitut These practices prevented and continue to prevent the development of language technologies because state-of-the-art statistical and neural models require large amounts of text. To work towards addressing this issue, we created a nêhiyawêwin corpus from various sources. Our corpus is composed of 49,038 words and 3,727 lines of text in Standard Roman Orthography (SRO), 10 texts in syllabics, and 1,026 lines of English-nêhiyawêwin parallel data. To the best of our knowledge, this is the first collection of processed nêhiyawêwin data ready for use to build language technologies. The most similar existing work includes a small collection of nêhiyawêwin text, lexical, and audio resources in their original formats (Open Language Archives Community). There is also a morphosyntactic tagged corpus In response to the limited availability of resources and tools, this work contributes a collection of ready to use resources to enable the development of language technologies that can support the preservation and revitalization of nêhiyawêwin. We demonstrate the practicality of the corpus through its use by community-based teachers of nêhiyawêwin. Using these materials has informed their lesson plans. Further, we describe the ongoing development of predictive language models using the contributed corpus. These models enable predictive text that is expected to provide some of the language support needs that have been expressed by nêhiyawêwin speakers. With this work, we aim to inspire future data collection and sharing of nêhiyawêwin resources that are aligned with community interests.
Plains Cree is called nêhiyawêwin by its speakers, and it is not capitalized. nêhiyawêwin is a widely-spoken dialect of the Indigenous language that English-speakers call Cree: nêhiyawêwin is the mother tongue for approximately 3,655 speakers, and it is the language spoken most at home for approximately 2,165 persons Current language technologies for nêhiyawêwin include Finite State Transducers (FSTs) that have been used for tasks such generating word forms and conjugating verbs in online dictionaries It is not surprising that FSTs are one of the few technologies that exist given that nêhiyawêwin is a polysynthetic, agglutinative, and highly inflective language, which complicates the task of creating language technologies. These characteristics allow the meaning of a single token or word to map to that of a full phrase or sentence in English. For example, 'kimîciso' maps to 'you all eat' in English. nêhiyawêwin has two writing systems: SRO and syllabics. A single character in syllabics represents one or more SRO characters (e.g., σ is ni in SRO and △ is i). Complicating this, is the variability in how these writing systems have been used and continue to be used across regions and time. This variability means that choices must be made with respect to the writing systems and 'standards' that are followed when developing language technologies. These are difficult choices and each community may have different preferences, which means that tools for converting across varied writing systems would help to maintain community norms. An example of such a tool is the SRO-syllabics converter Our corpus contains text from several domains making it a diverse collection of nêhiyawêwin resources (see Table The material is organized into folders by category or source along with its copyright information for how the public can use them. Where nêhiyawêwin-English parallel texts exist, the folder contains a cleaned and aligned version of these texts; a given line in one language file corresponds to the same line in the other language file. Syllabics versions of texts are provided where available. Some texts also have an accompanying audio file. Before adding a text to our corpus, we checked the copyright and license or obtained permission from the content creator. We provide a bag-ofwords (BoW) representation when text was under copyright or the content owners felt this was an acceptable alternative to sharing the original text. These BoW files contain a list of words from the original text and their usage counts. As these files only contain individual words, there is no nêhiyawêwin-English mapping because there is often no one-to-one translation between nêhiyawêwin and English words. Table To build this corpus we first identified sources of nêhiyawêwin text. We then extracted the text. Following extraction, we aligned the texts across languages and performed additional processing. We used Google search to find nêhiyawêwin text online and entered keywords such as 'nêhiyawêwin text' and 'plains cree text'. Please see Appendix A for a full list of keywords. Some websites continually updated their content with new material (e.g., Cree Literacy Network 2 ) so we returned and checked those sites for additional content. Data were identified as nêhiyawêwin by carefully inspecting the source and its description. The contents of the text were also checked by one of our team members who had been trained in how to differentiate between dialects of Cree. This step ensured the text was in the targeted dialect. 
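A minimal sketch of producing the bag-of-words files described above, which keep only word types and their usage counts so that copyrighted texts are not reproduced; the file names and the tab-separated output format are illustrative choices.

```python
from collections import Counter

def text_to_bow(in_path, out_path):
    """Write one '<word>\t<count>' line per word type, most frequent first."""
    counts = Counter()
    with open(in_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    with open(out_path, "w", encoding="utf-8") as out:
        for word, freq in counts.most_common():
            out.write(f"{word}\t{freq}\n")

# hypothetical file names
text_to_bow("story_sro.txt", "story_sro_bow.tsv")
```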
If uncertainties arose, such as when facing unfamiliar accents, hyphens, or characters, a nêhiyawêwin speaker would verify whether the text was Plains Cree. Copyright information was verified to see if the text could be shared or perhaps if the copyright would allow BoW format. For texts that contain Elders' stories, described below, permission from speakers was obtained to share the stories. The resources in the corpus can be publicly used as allowed by the copyright information detailed on GitHub for a particular source. Text was extracted from the original sources (e.g., PDFs, webpages) and converted into plain text. Care was taken to ensure the text was properly copied and that it excluded irrelevant information (e.g., HTML markup or English annotations). Some data was collected by scraping websites, where licensing allowed it. When licensing did not permit scraping, we contacted site owners to obtain permission. In some cases, they shared the raw materials with us for inclusion in the corpus. Parallel phrases in English and nêhiyawêwin were extracted when available. The retrieved nêhiyawêwin texts Language Text nêhiyawêwin Before: êwîpîk'skwâtamân tân'si êkîpêhisikiskinohamâsoyân nêhiyawêwin. âskaw âyiman ôma ôta, ôtênâhk. tâpitaw mâna ayisiyiniwak êhâpacihtâcik âkayâsîmowin. After: êwîpîk'skwâtamân tân'si êkîpêhisikiskinohamâsoyân nêhiyawêwin. âskaw âyiman ôma ôta, ôtênâhk. tâpitaw mâna ayisiyiniwak êhâpacihtâcik âkayâsîmowin. English Before: I'm going to speak about how I came to teach myself Cree. Sometimes it's hard here, in the city. People usually always use English. After:I'm going to speak about how I came to teach myself Cree. Sometimes it's hard here, in the city. People usually always use English. Table nêhiyawêwin English êpêkakwêcim'kawiyân ôma, tanêhiyawîyân. êkwa anima âya, k'tisipîk'skwêwin'nân niyanân kayâs kâkîpêhohpikêyâhk. I've been asked this, to speak Cree, and well, of our language a long time ago when we were growing up Table may have used SRO or syllabics. Some were accompanied by an audio file. The availability of formats varied from resource to resource. Beyond these publicly available online resources, we collected resources from the field. These resources are recordings of Elders who chose a story to tell us. They gave us permission to use and share these stories for the purposes of supporting learning and developing language technologies that could do the same. Most of the shared stories relate to their personal lives or socio-political issues. These recordings were made over a summer by attending cultural events and interacting with community members. The recordings were transcribed and translated into English in some cases. Three speakers of nêhiyawêwin took part in the transcription, translation, and verification process. Where parallel texts were available, alignment was performed before other preprocessing or data cleaning. Most parallel texts contained some spacing markers, such as line breaks for paragraphs or spaces for phrases. In these scenarios, single sentences or phrases were easily aligned to each other. Challenges arose when a paragraph contained a different number of sentences across languages. Since we aimed to provide sentence or phrase alignments in the corpus, we needed to distinguish how a sen-tence in one language is expressed in the other. In longer texts, when multiple sentences in nêhiyawêwin mapped to one sentence in English, or vice versa, this mapping was used as the alignment to maintain the original meaning of the text. 
This situation was prominent in Biblical texts. In shorter texts, a nêhiyawêwin speaker reviewed the text and decided on the appropriate alignment. We note that this process of aligning paragraphs, then text within paragraphs is demonstrated to outperform alignment that does not account for paragraph boundaries Preprocessing was only performed on texts that used the SRO writing system. Texts in syllabics did not undergo the below-described preprocessing. We focused on preprocessing SRO texts for several reasons. It was relatively easy to obtain texts in SRO, which meant that there were more of them. SRO representations of the language vary in their use of diacritics and other conventions, which means that combining sources requires some element of normalization so that the texts can be jointly used. Moreover, one of the intended uses for our corpus is to support instructional activities for local courses, and SRO is the first writing The writing system used by speakers differs by community, where some use SRO and others use syllabics. This is also the case for the communities with which we have worked. The choice of writing systems and the considerations surrounding that choice are further discussed in Sections 5 and 7. Before running the processing script 1 , we manually identified the use of slashes or parentheses. When slashes were used, usually in English text to denote gender or possible alternative phrasings, we ensured that the nêhiyawêwin data would represent all possibilities (see Table Parentheses were mainly used in English sentences to provide additional context. If the text in parentheses provided alternative phrasing, the alternative sentence in English would be constructed with the same nêhiyawêwin meaning mapped to it. This follows a similar pattern to that used with slashes for options like he or she. If the parenthetical expression did not provide an alternative or additional context, it was removed. Parentheses were removed manually in this process and not considered as punctuation to be kept in the preprocessing script, which we describe below. This initial manual process addressed the varying nature of each case and our desire to extract as much information as possible from the text. Following the manual preprocessing, a Python script was run on the data files. The script follows a similar pattern for both nêhiyawêwin and English, with slight modifications for each. Since nêhiyawêwin can be written with different types of diacritics used to represent the same information in SRO (e.g., ā, á, â), we converted all accents to circumflex to maintain consistency within the corpus. A different choice could have easily been made. Because each community may have a different preference, we have included a script that can be modified so that the corpus can be re-standardized according to a specific community's preferences. All text was converted to lowercase. The only punctuation the script does not remove is periods, exclamation marks, question marks, colons, commas, apostrophes, and single quotes. Each of these punctuation markers are represented as a single token by inserting a space before them. Hyphens are preprocessed differently from other punctuation. Because nêhiyawêwin and English use hyphens differently, we applied rules specific to each language. 
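A minimal sketch of the SRO normalization described above, assuming a simple character-translation approach: long-vowel diacritics are mapped to circumflexes, text is lower-cased, the retained punctuation marks get a preceding space so they become separate tokens, and hyphens are replaced with a space as a rough stand-in for the language-specific rules discussed next. The released script may differ in details such as apostrophe handling inside words.

```python
import re

CIRCUMFLEX = str.maketrans({
    "ā": "â", "á": "â", "ē": "ê", "é": "ê",
    "ī": "î", "í": "î", "ō": "ô", "ó": "ô",
})
KEPT_PUNCT = ".!?:,'’"   # periods, exclamation/question marks, colons, commas, apostrophes/quotes

def normalise_sro(line):
    line = line.translate(CIRCUMFLEX).lower()
    line = line.replace("-", " ")                                         # simplified hyphen rule
    line = re.sub(r"[^\w\s" + re.escape(KEPT_PUNCT) + r"]", "", line)     # drop other punctuation
    line = re.sub(r"([" + re.escape(KEPT_PUNCT) + r"])", r" \1", line)    # space before kept marks
    return re.sub(r"\s+", " ", line).strip()

print(normalise_sro("ātiht kinosēwak misikitiwak māka ātiht apisīsisiwak"))
# -> âtiht kinosêwak misikitiwak mâka âtiht apisîsisiwak
```

Because the diacritic table is a plain mapping, a community that prefers macrons or a different standard can swap it out without touching the rest of the pipeline, which is the kind of re-standardization the released script is meant to allow.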
In English, hyphens were removed and replaced with a space because the words surrounding the hyphen could often stand alone Language Text nêhiyawêwin Before: ātiht kinosēwak misikitiwak māka ātiht apisīsisiwak After: âtiht kinosêwak misikitiwak mâka âtiht apisîsisiwak Before: kâ-pimwêwêhahk okakêskîhkêmowina After: kâ pimwêwêhahk okakêskîhkêmowina English Before: Some fish are big, but some are small. After: some fish are big , but some are small . Before: he-drums-people-into-the-afterlife's counselling speeches After: he drums people into the afterlife 's counselling speeches Now that we have described how the corpus was created, we need to discuss ethical considerations around the creation and use of such resources. The process of creating language technologies for any community of speakers should be guided by the goals and interests of the respective community. Natural language processing (NLP) research should directly involve the language communities for which the technologies are being designed, as it will directly impact the speakers of the language. Further, the process of constructing these technologies should be clear to the community so there is an understanding of the data required for the model and how it will be used. For example, communities may wish to see language technologies such as text-to-speech to honor an oral tradition. However, these systems require an underlying model trained on corresponding audio and text for the language, which may or may not be in accordance with a community's wishes. In direct terms, the existence of this corpus in itself is not an invitation to make Indigenous language models and technologies independently and without consultation. As discussed by Pine and Turin ( An important consideration when developing language technologies using corpora and language models is the nature of the language used to train those models. For example, language models trained on Internet texts (e.g., GPT-3) have been subject to scrutiny following the revelation of racist and generally offensive outputs An additional important note is that, between aligning texts and automatically extracting text from varied sources, it is possible for there to be mistakes or inconsistencies. This should be taken into consideration when using the corpus. We also welcome edits and contributions. Beyond the above considerations, each of the choices that we made during data cleaning has the potential to have normative effects on the language. Some may view norming and standardization as a benefit The corpus is already being used to support community needs as part of a broader project for developing language learning technologies and technologies to support language use. Within this context, corpus materials are being used to help people learn nêhiyawêwin. Materials are also being used to develop language models that support tasks that community members who are learning nêhiyawêwin would like supported. We briefly discuss these ongoing activities to demonstrate the utility of the corpus. As part of developing language-learning technologies, several teachers of nêhiyawêwin who work in and come from different nêhiyawêwin-speaking communities have joined our group. These teachers provide guidance on how to teach the language and help us to develop curricula and teaching materials. Upon listening to the recordings in the corpus, one of the teachers was struck by the richness of the language and thematic content of the personal stories that Elders told. 
As a result of this experience with the corpus materials, she decided to work with those recordings to develop learning materials. She started by identifying the relevant cultural themes and values that were conveyed through the recorded stories. She then developed lesson plans around those recordings, the thematic and cultural content, and the grammatical structures used within the stories. This resulted in up to four lessons per recording. She developed accompanying worksheets to al-low students to practice the grammatical concepts she decided to add to her course. She also developed read-along activities. To do this, she had to convert the recordings from .m4a to .mp3 so that they could be played using technologies that are provided in her classroom, which demonstrates the potential barriers that file formats can introduce. Building on her work, we have developed interactive online learning activities using her newly created worksheets. These interactive learning activities provide students with feedback and have been integrated into a computer assisted language learning (CALL) system. In addition to the interactive worksheet activities, we have been developing a read-along activity as part of this CALL system. This read-along activity specifically uses the shadowing approach As the above case illustrates, the corpus materials can be used to develop and expand teaching materials. As reported by collaborating teachers, these materials have also influenced how teachers approach their students and courses. One teacher decided to start teaching certain aspects of the language, such as the transitive animate verb paradigm, sooner. Before listening to the stories from Elders, she would only teach the transitive animate paradigm to more advanced students. She thinks it is not taught in many settings because of its inherent complexity. Listening to the stories helped her realize what a central part it was of fluent speakers' speech. This realization came after analyzing the recorded stories. Upon reflection, she recognized that the adults in her life would use it when speaking to her as a child. Consequently, she now teaches it to young children with the expectation that they will gain knowledge and familiarity with this paradigm even though they are unlikely to produce language using verbs in the transitive animate form soon after they learn it. She expects that they will start using the transitive animate paradigm once they are older and more fluent. Beyond supporting the development of learning materials and activities for use in person or online, the corpus has helped to identify gaps in existing materials. As part of preparing accompanying learning materials for students, language teachers often decompose new vocabulary items into their constituent morphemes because this helps students to learn the language and build upon their existing knowledge when they encounter new words Text prediction is a language technology that many people use daily without noticing it. For many, they rely on it when typing on their phones to compose an email or text. They also use it to help them fill in forms. This language technology may be taken for granted in high-resource languages. The absence of support tools like these for nêhiyawêwin speakers has been noted, and learners of nêhiyawêwin have expressed a desire for similar types of support Text prediction is a subtask of one of the projects that is being run out of the National Research Council Canada. 
This project aims to create "software to assist Indigenous communities in preserving their languages and extending their use" To extend the work by The corpus was divided into 90% for the training set and 10% for the development set. We used KenLM The COVID-19 pandemic has brought about informational materials translated into several languages in an attempt to reach as many members of the public as possible with general health guidance around this issue. Usually these pamphlets contain This corpus can be used to support several lines of future work. An immediate next direction would be further supporting the development of nêhiyawêwin learning materials using the corpus. For example, creating additional read-along activities and other game-based learning activities. SoundHunters is one such game that aims to improve learner phonological awareness Another avenue, would be applying the corpus to support the further creation of NLP technologies for nêhiyawêwin. As mentioned, predictive text models were created for nêhiyawêwin because this type of language technology is both desired and can be supported through the corpus. To determine if these models are helpful for nêhiyawêwin speakers when typing, we will perform user studies. From these studies, we aim to learn if the predictive models support text entry in a timely way and whether people perceive them to be useful. We will collect perceptual data and feedback from potential users after they have completed several text-entry tasks through the developed predictivetext system. We will use the same measures that are commonly employed to determine the performance of new text-entry techniques. These measures include response time, error rates, and key strokes per character We recognize that by preprocessing SRO text, we have enabled easier use of this writing system for developing language technologies compared to syllabics. Future work should create a similar pipeline for syllabics that aligns with language rules used by communities, so that it can receive the same status and attention in the development of language technologies. This work contributes a collection of nêhiyawêwin resources that have been cleaned, processed, and shared for creating language technologies. Care was taken to collect, align, and preprocess the material so it could be used by others. It is hoped that sharing these resources along with the documentation of how they have been prepared will support language preservation and revitalization efforts. The utility of this corpus was shown via its community use in teaching nêhiyawêwin and by building language models to enable the creation of language technologies desired by speakers. This preliminary and on-going work demonstrates the value of the developed corpus for this low-resource language. Through these efforts in developing the corpus we hope to pave the way for the future creation of language technologies for and by nêhiyawêwin speakers.
1,000
2,087
1,000
Exploring Dynamic Selection of Branch Expansion Orders for Code Generation
Due to the great potential in facilitating software development, code generation has attracted increasing attention recently. Generally, dominant models are Seq2Tree models, which convert the input natural language description into a sequence of tree-construction actions corresponding to the pre-order traversal of an Abstract Syntax Tree (AST). However, such a traversal order may not be suitable for handling all multi-branch nodes. In this paper, we propose to equip the Seq2Tree model with a context-based Branch Selector, which is able to dynamically determine optimal expansion orders of branches for multibranch nodes. Particularly, since the selection of expansion orders is a non-differentiable multi-step operation, we optimize the selector through reinforcement learning, and formulate the reward function as the difference of model losses obtained through different expansion orders. Experimental results and in-depth analysis on several commonly-used datasets demonstrate the effectiveness and generality of our approach. We have released our code at https: //github.com/DeepLearnXMU/CG-RL.
Code generation aims at automatically generating a source code snippet given a natural language (NL) description, and it has attracted increasing attention recently due to its potential value in simplifying programming. Instead of modeling the abstract syntax tree (AST) of code snippets directly, most methods for code generation convert the AST into a sequence of tree-construction actions. This allows the use of natural language generation (NLG) models, such as the widely-used encoder-decoder models, and has achieved great success. Generally, during the generation of dominant Seq2Tree models based on pre-order traversal, the branches of each multi-branch node are expanded in a left-to-right order, yet such a fixed order may not be suitable for every multi-branch node. To verify this conjecture, we choose TRANX
Only TRANX 8.47 Only TRANX-R2L 7.66 Table ones. Table In this paper, we explore dynamic selection of branch expansion orders for code generation. Specifically, we propose to equip the conventional Seq2Tree model with a context-based Branch Selector, which dynamically quantifies the priorities of expanding different branches for multi-branch nodes during AST generations. However, such a non-differentiable multi-step operation poses a challenge to the model training. To deal with this issue, we apply reinforcement learning to train the extended Seq2Tree model. Particularly, we augment the conventional training objective with a reward function, which is based on the model training loss between different expansion orders of branches. In this way, the model is trained to determine optimal expansion orders of branches for multi-branch nodes, which will contribute to AST generations. To summarize, the major contributions of our work are three-fold: demonstrate the effectiveness and generality of our model on various datasets. As shown in Figure In the following subsections, we first describe the basic ASDL grammars of Seq2Tree models. Then, we introduce the details of TRANX Formally, an ASDL grammar contains two components: type and constructors. The value of type can be composite or primitive. As shown in the 'ActionSequence' and 'AST z' parts of Figure There are three kinds of ASDL grammar-based actions that can be used to generate the action sequence: 1) APPLYCONSTR Obviously, a constructor with multiple fields can produce multiple AST branches 2 , of which generation order has important effect on the model performance, as previously mentioned. Similar to other NLG models, TRANX is trained to minimize the following objective function: where a t is the t-th action, and p(a t |a <t , x) is modeled by an attentional encoder-decoder network For an NL description x=x 1 , x 2 , ..., x N , we use a BiLSTM encoder to learn its word-level hidden states. Likewise, the decoder is also an LSTM network. Formally, at the timestep t, the temporary hidden state h t is updated as 2 We also note that the field with sequential cardinality will be expanded to multiple branches. However, in this work, we do not consider this scenario, which is left as future work. where E(a t-1 ) is the embedding of the previous action a t-1 , s t-1 is the previous decoder hidden state, and p t is a concatenated vector involving the embedding of the frontier field and the decoder hidden state for the parent node. Furthermore, the decoder hidden state s t is defined as where c t is the context vector produced from the encoder hidden states and W is a parameter matrix. Here, we calculate the probability of action a t according to the type of its frontier field: • Composite. We adopt an APPLYCONSTR action to expand the field or a REDUCE action to complete the field. where E(c) denotes the embedding of the constructor c. • Primitive. We apply a GENTOKEN action to produce a token v, which is either generated from the vocabulary or copied from the input NL description. Formally, the probability of using = p (gen |a <t , x) p gen (v|a <t , x) + (1 -p (gen |a <t , x))p copy (v|a <t , x) , (5) where p (gen |a <t , x) is modeled as sigmoid (Ws t ). Please note that our proposed dynamic selection of branch expansion orders does not affect other aspects of the model. 
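A condensed PyTorch-style sketch of the decoding step summarized above: the LSTM input concatenates the previous action embedding with the parent-feeding vector p_t, the state s_t combines the LSTM output with the attention context, and GENTOKEN probabilities interpolate generating from the vocabulary with copying from the input. The layer names, the tanh nonlinearity, the dimensions, and the assumption that the copy distribution is already projected onto the output vocabulary are illustrative simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    def __init__(self, d_act, d_hid, n_constructors, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(d_act + d_hid, d_hid)   # input: [E(a_{t-1}); p_t]
        self.W = nn.Linear(2 * d_hid, d_hid)            # s_t from [h_t; c_t]
        self.constr_emb = nn.Embedding(n_constructors, d_hid)
        self.gen_gate = nn.Linear(d_hid, 1)
        self.gen_vocab = nn.Linear(d_hid, vocab_size)

    def forward(self, a_prev_emb, p_t, state, ctx, copy_probs):
        h_t, c_mem = self.cell(torch.cat([a_prev_emb, p_t], -1), state)
        s_t = torch.tanh(self.W(torch.cat([h_t, ctx], -1)))
        # composite frontier field: distribution over APPLYCONSTR actions
        p_constr = F.softmax(s_t @ self.constr_emb.weight.T, dim=-1)
        # primitive frontier field: mixture of generating from the vocabulary and copying
        p_gen = torch.sigmoid(self.gen_gate(s_t))
        p_token = p_gen * F.softmax(self.gen_vocab(s_t), dim=-1) + (1 - p_gen) * copy_probs
        return s_t, (h_t, c_mem), p_constr, p_token
```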
In this section, we extend the conventional Seq2Tree model with a context-based branch selector, which dynamically determines optimal expansion orders of branches for multi-branch AST nodes. In the following subsections, we first illustrate the elaborately-designed branch selector module and then introduce how to train the extended Seq2Tree model via reinforcement learning in detail. As described in Section 2.2, the action prediction at each timestep is mainly affected by its previous action, frontier field and the action of its parent node. Thus, it is reasonable to construct the branch selector determining optimal expansion orders of branches according to these three kinds of information. Specifically, given a multi-branch node n t at timestep t, where the ASDL grammar of action a t contains m fields [f 1 , f 2 , ...f m ], we feed the branch selector with three vectors: 1) E(f i ): the embedding of field f i , 2) E(a t ): the embedding of action a t , 3) s t : the decoder hidden state, and then calculate the priority score of expanding fields as follows: where W 1 ∈R d 1 ×d 2 and W 2 ∈R d 2 ×1 are learnable parameters. 4 Afterwards, we normalize priority scores of expanding all fields into a probability distribution: (7) Based on the above probability distribution, we can sample m times to form a branch expansion order o = [f o 1 , ..., f om ], of which the policy probability is computed as (8) 4 We omit the bias term for clarity. It is notable that during the sampling of f o i , we mask previously sampled fields f o <i to ensure that duplicate fields will not be sampled. During the generation of ASTs, with the above context-based branch selector, we deal with multibranch nodes according to the dynamically determined order instead of the standard left-to-right order. However, the non-differentiability of multistep expansion order selection and how to determine the optimal expansion order lead to challenges for the model training. To deal with these issues, we introduce reinforcement learning to train the extended Seq2Tree model in an end-to-end way. Concretely, we first pre-train a conventional Seq2Tree model. Then, we employ self-critical training with a reward function that measures loss difference between different branch expansion orders to train the extended Seq2Tree model. It is known that a well-initialized network is very important for applying reinforcement learning Concretely, for each multi-branch node in an AST, we sample a branch expansion order from a uniform distribution, and then reorganize the corresponding actions according to the sampled order. We conduct the same operations to all multi-branch nodes of the AST, forming a new training instance. Finally, we use the regenerated training instances to pre-train our model. In this way, the pre-trained Seq2Tree model acquires the preliminary capability to make predictions in any order. With the above initialized parameters, we then perform self-critical training Specifically, we train the extended Seq2Tree model by combining the MLE objective and RL objective together. 
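A sketch of the context-based branch selector: each field of a multi-branch node is scored from [E(f_i); E(a_t); s_t] with the two-layer scorer (W1, W2), the scores are normalized with a softmax, and an expansion order is sampled field by field while masking already-chosen fields, accumulating the log policy probability. The tanh nonlinearity and the class and method names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchSelector(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.W1 = nn.Linear(d_in, d_hidden, bias=False)
        self.W2 = nn.Linear(d_hidden, 1, bias=False)

    def forward(self, field_embs, action_emb, s_t):
        """field_embs: (m, d_f); action_emb, s_t: 1-D vectors shared by all m fields."""
        m = field_embs.size(0)
        ctx = torch.cat([action_emb, s_t], -1).expand(m, -1)
        scores = self.W2(torch.tanh(self.W1(torch.cat([field_embs, ctx], -1)))).squeeze(-1)
        return scores                                   # one priority score per field

    def sample_order(self, scores):
        """Sample an expansion order; return field indices and the log policy probability."""
        order, log_prob = [], 0.0
        mask = torch.zeros_like(scores)
        for _ in range(scores.size(0)):
            probs = F.softmax(scores + mask, dim=-1)
            idx = torch.multinomial(probs, 1).item()
            log_prob = log_prob + torch.log(probs[idx])
            order.append(idx)
            mask[idx] = float("-inf")                   # never sample the same field twice
        return order, log_prob
```

The accumulated log_prob is exactly what the REINFORCE-style training described next needs, since gradients flow from it back into W1 and W2.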
Formally, given the training instance (x, a), we first apply the sampling method described in section 3.1 to all multi-branch nodes, reorganizing the initial action sequence a to form a new action sequence a o , and then define the model training objective as where L mle ( * ) denotes the conventional training objective defined in Equation 1, L rl ( * ) is the negative expected reward of branch expansion order o for the multi-branch node n, λ is a balancing hyperparameter, N mb denotes the set of multi-branch nodes in the training instance, and θ denotes the parameter set of our enhanced model. More specifically, L rl ( * ) is defined as where we approximate the expected reward with the loss of an order o sampled from the policy π. Inspired by successful applications of selfcritical training in previous studies Please note that we extend the standard reward function by setting a threshold η to clip the reward, which can prevent the network from being overconfident in current expansion order of branches. Finally, we apply the REINFORCE algorithm 4 Experiments To investigate the effectiveness and generalizability of our model, we carry out experiments on several commonly-used datasets. Following previous studies • DJANGO To facilitate the descriptions of experimental results, we refer to the enhanced TRANX model as TRANX-RL. In addition to TRANX, we compare our enhanced model with several competitive models: • TRANX (w/ pre-train). Table compare with it because our model involves a pre-training stage. • COARSE2FINE This model adopts a two-stage decoding strategy to produce the action sequence. It first generates a rough sketch of its meaning, and then fills in missing detail. • TREEGEN To ensure fair comparisons, we use the same experimental setup as TRANX Table First, our reimplemented TRANX model achieves comparable performance to previously reported results Second, compared with TRANX, TRANX-R2L and TRANX-RAND, our TRANX-RL exhibits better performance. This result demonstrates the advantage of dynamically determining branch expansion orders on dealing with multi-branch AST nodes. Third, the TRANX model with pre-training does not gain a better performance. In contrast, removing the model pre-training leads to the performance degradation of our TRANX-RL model. This result is consistent with the conclusion of previous studies As implemented in related studies on other NLG tasks, such as machine translation Given a multi-branch node, its child nodes have an important influence in the subtree. Therefore, we focus on the accuracy of action predictions for the child nodes. For fair comparison, we predict actions with pre-vious ground-truth history actions as inputs. Table Figure In the second example, TRANX incorrectly predicts the second child node at the t 10 -th timestep, while TRANX-RL firstly predicts it at the timestep t 6 . We think this error results from the sequentially generated nodes and the errors in early timesteps would accumulatively harm the predictions of later sibling nodes. By comparison, our model can flexibly generate subtrees with shorter lengths, alleviating error accumulation. With the prosperity of deep learning, researchers introduce neural networks into code generation. 
Finally, it should be noted that there have been many NLP studies exploring other decoding methods to improve other NLG tasks.
In this work, we first point out that the generation of dominant Seq2Tree models based on pre-order traversal is not optimal for handling all multi-branch nodes. Then we propose an extended Seq2Tree model equipped with a context-based branch selector, which is capable of dynamically determining optimal branch expansion orders for multi-branch nodes. In particular, we adopt reinforcement learning to train the whole model with an elaborate reward that measures the model loss difference between different branch expansion orders. Extensive experimental results and in-depth analyses demonstrate the effectiveness and generality of our proposed model on several commonly-used datasets. In the future, we will study how to extend our branch selector to deal with the indefinite number of branches caused by fields with sequential cardinality.
1,104
863
1,104
Learning Prototypical Goal Activities for Locations
People go to different places to engage in activities that reflect their goals. For example, people go to restaurants to eat, libraries to study, and churches to pray. We refer to an activity that represents a common reason why people typically go to a location as a prototypical goal activity (goal-act). Our research aims to learn goal-acts for specific locations using a text corpus and semi-supervised learning. First, we extract activities and locations that co-occur in goal-oriented syntactic patterns. Next, we create an activity profile matrix and apply a semi-supervised label propagation algorithm to iteratively revise the activity strengths for different locations using a small set of labeled data. We show that this approach outperforms several baseline methods when judged against goal-acts identified by human annotators.
Every day, people go to different places to accomplish goals. People go to stores to buy clothing, go to restaurants to eat, and go to the doctor for medical services. People travel to specific destinations to enjoy the beach, go skiing, or see historical sites. For most places, people typically go there for a common set of reasons, which we will refer to as prototypical goal activities (goal-acts) for a location. For example, a prototypical goal-act for restaurants would be "eat food" and for IKEA would be "buy furniture". Previous research has established that recognizing people's goals is essential for narrative text understanding and story comprehension Goals and plans are essential to understand people's behavior and we use our knowledge of prototypical goals to make inferences when reading. For example, consider the following pair of sentences: "Mary went to the supermarket. She needed milk." Most people will infer that Mary purchased milk, unless told otherwise. But a purchase event is not explicitly mentioned. In contrast, a similar sentence pair "Mary went to the theatre. She needed milk." feels incongruent and does not produce that inference. Recognizing goals is also critical for conversational dialogue systems. For example, if a friend tells you that they went to a restaurant, you might reply "What did you eat?", but if a friend says that they went to Yosemite, a more appropriate response might be "Did you hike?" or "Did you see the waterfalls?". Our knowledge of prototypical goal activities also helps us resolve semantic ambiguity. For example, consider the following sentences: (a) She went to the kitchen and got chicken. (b) She went to the supermarket and got chicken. (c) She went to the restaurant and got chicken. In sentence (a), we infer that she retrieved chicken (e.g., from the refrigerator) but did not pay for it. In (b), we infer that she paid for the chicken but probably did not eat it at the supermarket. In (c), we infer that she ate the chicken at the restaurant. Note how the verb "got" maps to different presumed events depending on the location. Our research aims to learn the prototypical goalacts for locations using a text corpus. First, we extract activities that co-occur with locations in goaloriented syntactic patterns. Next, we construct an activity profile matrix that consists of an activity vector (profile) for each of the locations. We then apply a semi-supervised label propagation algorithm to iteratively revise the activity profile strengths based on a small set of labeled locations. We also incorporate external resources to measure similarity between different activity expressions. Our results show that this semi-supervised learning approach outperforms several baseline methods in identifying the prototypical goal activities for locations.
Recognizing plans and goals is fundamental to narrative story understanding Goals and plans can also function to trigger scripts Graph-based semi-supervised learning has been successfully used for many tasks, including sentiment analysis Our aim is to learn the most prototypical goal-acts for locations. To tackle this problem, we first extract locations and related activities from a large text corpus. Then we use a semi-supervised learning method to identify the goal activities for individual locations. In the following sections we describe these processes in detail. To collect information about locations and activities, we use the 2011 Spinn3r dataset We use the text data to identify activities that are potential goal-acts for a location. However we also need to identify locations and want to include both proper names (e.g., Disneyland) as well as nominals (e.g., store, beach), so Named Entity Recognition will not suffice. Consequently, we extract (Loc, Act) pairs using syntactic patterns. First, we apply the Stanford dependency parser This syntactic structure was chosen to identify activities that are described as being the reason why someone went to the location. However it is not perfect. In some cases, X is not a location (e.g., "go to great lengths to ..." yields "lengths" as a location), or Y is not a goal-act for X (e.g., "go to the office to retrieve my briefcase ..." yields "retrieve briefcase" which is not a prototypical goal for "office"). Interestingly, the pattern extracts some nominals that are not locations in a strict sense, but behave as locations. For example, "go to the doctor" extracts "doctor" as a location. Literally a doctor is a person, but in this context it really refers to the doctor's office, which is a location. The pattern also extracts entities such as "roof", which are not generally thought of as locations but do have a fixed physical location. Other extracted entities are virtual but function as locations, such as "Internet". For the purposes of this work, we use the term location in a general sense to include any place or object that has a physical, virtual or implied location. The "go to" pattern worked quite well at extracting (Loc, Act) pairs, but in relatively small quantities due to the very specific nature of the syntactic structure. So we tried to find additional activities for those locations. Initially, we tried harvesting activities that occurred in close proximity (within 5 words) to a known location, but the results were too noisy. Instead, we used the pattern "Y in/at X" with the same syntactic constraints for Y (the extracted activity) and X (a location extracted by the "go to" pattern). We discovered many sentences in the corpus that were exactly or nearly the same, differing only by a few words, which resulted in artificially high frequency counts for some (Loc, Act) pairs. So we filtered duplicate or near-duplicate sentences by computing the longest common substring of sentence pairs that extracted the same (Loc, Act). If the shared substring had length ≥ 5, then we discarded the "duplicate" sentence. Finally, we applied three filters. To keep the size of the data manageable, we discarded locations and activities that were each extracted with frequency < 30 by our patterns. And we manually filtered locations that are Named Entities corresponding to cities or larger geo-political regions (e.g., provinces or countries). 
Large regions defined by government boundaries fall outside the scope of our task because the set of activities that typically occur in (say) a city or country is so broad. Finally, we added a filter to try to remove extremely general activities that can occur almost anywhere (e.g., visit). If an activity co-occurred with > 20% of the extracted (distinct) locations, then we discarded it. After these filters, we extracted 451 distinct locations, 5143 distinct activities, roughly 200, 000 distinct (Loc, Act) pairs, and roughly 500, 000 instances of (Loc, Act) pairs. We define an activity profile matrix Y of size n × m, where n is the number of distinct locations and m is the number of distinct activities. Y i,j represents the strength of the jth activity a j being a goal-act for l i . We use y i ∈ R m to denote the ith row of Y . Table We could build the activity profile for location l i using the co-occurrence data extracted from the blog corpus. For example, we could estimate P (a j | l i ) directly from the frequency counts of the activities extracted for l i . However, a high co-occurrence frequency doesn't necessarily mean that the activity represents a prototypical goal. For example, the activity "have appointment" frequently co-occurs with "clinic" but doesn't reveal the underlying reason for going to the clinic (e.g., probably to see a doctor or undergo a medical test). To appreciate the distinction, imagine that you asked a friend why she went to a health clinic, and she responded with "because I had an appointment". You would likely view her response as being snarky or evasive (i.e., she didn't want to tell you the reason). In Section 4, we will evaluate this approach as a baseline and show that it does not perform well. Our aim is to learn the activity profiles for locations using a small amount of labeled data, so we frame this problem as a semi-supervised learning task. Given a small number of "seed" locations coupled with predefined goal-acts, we want to learn the goal-acts for new locations. We use l i ∈ L to represent location l i , where |L| = n. We define an undirected graph G = (V, E) with vertices representing locations (|V | = n) and edges E = V ×V , such that each pair of vertices v i and v k is connected with an edge e ik whose weight represents the similarity between l i and l k . We can then represent the edge weights as an n × n symmetric weight matrix W indicating the similarity between locations. There could be many ways to define the weights, but for now we use the following definition from To assess the similarity between locations, we measure the cosine similarity between vectors of their co-occurrence frequencies with activities. Specifically, let matrix We use the same value σ 2 = 0.03 as where f i is a vector of length m capturing the co-occurrence frequencies between location l i and each activity a j in the extracted data (i.e., F i,j is the number of times that activity a j occurred with location l i ). We then define location similarity as: We use semi-supervised learning with a set of "seed" locations from human annotations, and another set of locations that are unlabeled. So we subdivide the set of locations into S = {l 1 , ..., l s }, which are the seed locations, and U = {l s+1 , ..., l s+u }, which are the unlabeled locations, such that s + u = n. For an unlabeled location l i ∈ U , the initial activity profile is the normalized co-occurrence frequency vector f i . 
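A sketch of building the location-similarity weights described above: each location's activity co-occurrence counts form a profile vector, similarity is the cosine of two profiles, and the result is passed through a Gaussian-style kernel with sigma^2 = 0.03. The exact distance fed to the kernel (here 1 - cosine, squared) is an assumption, since the formula itself is not reproduced above.

```python
import numpy as np

def location_weights(F, sigma2=0.03):
    """F: (n_locations, n_activities) co-occurrence frequency matrix."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    unit = F / np.clip(norms, 1e-12, None)
    cos = unit @ unit.T                            # pairwise cosine similarities
    return np.exp(-((1.0 - cos) ** 2) / sigma2)    # Gaussian-style kernel on cosine distance
```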
For each seed location l i ∈ S, we first automatically construct an activity profile vector h i based on the gold goal-acts which were obtained from human annotators as described in Section 4.1. All activities not in the gold set are assigned a value of zero. Each activity a j in the gold set is assigned a probability P (a j | l i ) based on the gold answers. However, the gold goal-acts may not match the activity phrases found in the corpus (see discussion in Section 4.3), so we smooth the vector created with the gold goal-acts by averaging it with the normalized co-occurrence frequency vector f i extracted from the corpus. The activity profiles of seed locations stay constant through the learning process. We use y 0 i to denote the initial activity profiles. So when l i ∈ S, We apply a learning framework developed by into four blocks by the sth row and column: From We then use the label propagation algorithm described in One problem with the above algorithm is that it only takes advantage of relations between vertices (i.e., locations). If there are intrinsic relations between activities, they could be exploited as a complementary source of information to benefit the learning. Intuitively, different pairs of activities share different similarities, e.g., "eat burgers" should be more similar to "have lunch" than "read books". Under this idea, similar to the previous location similarity weight matrix W , we want to define an activity similarity weight matrix A m×m where A i,k indicates the similarity weight between activity a i and a k : where σ 2 is the same as in Eq (1). We explore 3 different similarity functions sim(a i , a k ) based on co-occurrence with locations, word matching, and embedding similarities. First, similar to Eq (2), we can use each activity's co-occurrence frequency with all locations as its location profile and define a similarity score based on cosine values of location profile vectors: where the predefined co-occurrence frequency matrix As a second option, the similarity between activities can often be implied by their lexical overlap, e.g., two activities sharing the same verb or noun might be related. For each word belonging to any of our activities, we use WordNet if verb and noun match 0.5 if verb or noun match 0 otherwise (8) As a third option, we can use 300-dimension word embedding vectors Finally, we can plug these similarity functions into Eq (6). We use A L , A O , A E to denote the corresponding matrix. We can also plug in multiple similarity metrics such as (sim L + sim E )/2 and use combination symbols A L+E to denote the matrix. Once we have a similarity matrix for activities, the next question is how will it help with the activity profile computation? Recall from Eq (5), we know that the activity profile of an unlabeled location can be represented by a linear combination of other locations' activity profiles. The activity profile matrix Y is an n × m matrix where each row is the activity profile for a location. We can also view Y as a matrix whose each column is the location profile for an activity. Using the same idea, we can make each column approximate a linear combination of its highly related columns (i.e., the location profile of an activity will become more similar to the location profiles of its similar activities). Our expectation is that this approximation will help improve the quality of Y . By being right multiplied by matrix A, Y gets updated from manipulating its columns (activities) as well. 
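A sketch of the seed-clamped label propagation described above, with the optional activity-similarity refinement: unlabeled rows of Y repeatedly absorb their neighbours' profiles through the row-normalized weight matrix, seed rows are reset to their initial profiles after every step, and, when an activity matrix A is supplied, Y is additionally right-multiplied by a normalized A so that similar activities reinforce each other. The column normalization of A and the fixed iteration count are assumptions.

```python
import numpy as np

def propagate(W, Y0, seed_mask, A=None, n_iters=100):
    """W: (n, n) location similarities; Y0: (n, m) initial activity profiles;
    seed_mask: boolean vector, True for seed locations whose profiles stay fixed;
    A: optional (m, m) activity-similarity matrix."""
    P = W / W.sum(axis=1, keepdims=True)      # row-normalise to get propagation weights
    Y = Y0.copy()
    if A is not None:
        Q = A / A.sum(axis=0, keepdims=True)  # normalised activity similarities (assumption)
    for _ in range(n_iters):
        Y = P @ Y                             # each location absorbs its neighbours' profiles
        if A is not None:
            Y = Y @ Q                         # smooth over similar activities as well
        Y[seed_mask] = Y0[seed_mask]          # clamp the seed locations
    return Y
```

With the seeds clamped, repeated multiplication by P spreads the seed locations' goal-act mass along high-similarity edges, which is the behaviour the algorithm relies on.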
We modify the algorithm accordingly as below: Since this is a new task and there is no existing dataset for evaluation, we use crowd-sourcing via Amazon Mechanical Turk (AMT) to acquire gold standard data. First, we released a qualification test containing 15 locations along with detailed annotation guidelines. 25 AMT workers finished our assignment, and we chose 15 of them who did the best job following our guidelines to continue. We gave the 15 qualified workers 200 new locations, consisting of 152 nominals and 48 proper names, 4 randomly selected from our extracted data and set aside as test data. For each location, we asked the AMT workers to complete the following sentence: People go to LOC to VERB NOUN LOC was replaced by one of the 200 locations. Annotators were asked to provide an activity that is the primary reason why a person would go to that location, in the form of just a VERB or a VERB NOUN pair. Annotators also had the option to label a location as an "ERROR" if they felt that the provided term is not a location, since our location extraction was not perfect. 4 Same distribution as in the whole location set. Only 10 annotators finished labeling our test cases, so we used their answers as the gold standard. We discarded 12 locations that were labeled as an "ERROR" by ≥ 3 workers. A key question that we wanted to investigate through this manual annotation effort is to know whether people truly do associate the same prototypical goal activities with locations. To what extent do people agree when asked to list goalacts? Also, some places clearly have a smaller set of goal-acts than others. For example, the primary reason to go to an airport is to catch a flight, but there's a larger set of common reasons why people go to Yosemite (e.g.,"hiking camping", "rock climbing", "see waterfalls", etc.). Complicating matters, the AMT workers often described the same activity with different words (e.g., "buy book" vs. "purchase book"). Automatically recognizing synonymous event phrases is a difficult NLP problem in its own right. Figure In Table To assess the difficulty of this NLP task, we created 3 baseline systems for comparison with our learning approach. All of these methods take the list of activities that co-occurred with a location l i in our extracted data and rank them. The first baseline, FREQ, ranks the activities based on the co-occurrence frequency F i,j between l i and a j in our patterns. The second baseline, PMI, ranks the activities using point-wise mutual information. The third baseline, EMBED, ranks the activities based on the cosine similarity of the semantic embedding vectors for l i and a j . We use GloVe The gold standard contains a set of goal-acts for each location. Since the same activity can be expressed with many different phrases, the only way to truly know whether two phrases refer to the same activity is manual evaluation, which is expensive. Furthermore, many activities are very similar or highly related, but not exactly the same. For example, "eat burger" and "eat food" both describe eating activities, but the latter is more general than the former. Considering them to be the same is not always warranted (e.g., "eat burger" is a logical goal-act for McDonald's but not for Baskin-Robbins which primarily sells ice cream). As another example, "buy chicken" and "eat chicken" refer to different events (buying and eating) so they are clearly not the same semantically. 
But at a place like KFC, buying chicken implies eating chicken, and vice versa, so they seem like equally good answers as goal-acts for KFC. Due to the complexities of determining which gold standard answers belong in equivalence classes, we considered all of the goal-acts provided by the human annotators to be acceptable answers. To determine whether an activity a j produced by our system matches any of the gold goal-acts for a location l i , we report results using two types of matching criteria. Exact Match judges a j to be a correct answer for l i if (1) it exactly matches (after lemmatization) any activity in l i 's gold set, or (2) a j 's verb and noun both appear in l i 's gold set, though possibly in different phrases. For example, if a gold set contains "buy novels" and "browse books", then "buy books" will be a match. Since Exact Match is very conservative, we also define a Partial Match criterion to give 50% credit for answers that partially overlap with a gold answer. An activity a j is a partial match for l i if either its verb or noun matches any of the activities in l i 's gold set of goal-acts. For example, "buy burger" would be a partial match with "buy food" because their verbs match. All of our methods produce a ranked list of hypothesized goal-acts for a location. So we use Mean Reciprocal Rank (MRR) to judge the quality of the top 10 activities in each ranked list. We report two types of MRR scores. MRR based on the Exact Match criteria (MRR E ) is computed as follows, where n is the number of locations in the test set: We also compute MRR using both the Exact Match and Partial Match criteria. First, we need to identify the "best" answer among the 10 activities in the ranked list, which depends both on each activity's ranking and its matching score. The matching score for activity a j is defined as: Given 10 ranked activities a 1 ... a 10 for l i , we then compute: best score(l i ) = max j=1..10 score(a j ) rank(a j ) And then finally define MRR P as follows: Unless otherwise noted, all of our experiments report results using 4-fold cross-validation on the 200 locations in our test set. We used 4 folds to ensure 50 seed locations for each run (i.e., 1 fold for training and 3 folds for testing). The first two columns of Table Table To gain more insight about the behavior of the models, Table Table The goal-acts learned by our system were extracted from the Spinn3r dataset, while the gold standard answers were provided by human annotators, so the same (or very similar) activities are often expressed in different ways (see Section 4.3). This raises the question: what is the upper bound on system performance when evaluating against human-provided goal-acts? To answer this, we compared all of the activities that co-occurred with each location in the corpus against its gold goalacts. Only 36% of locations had at least one gold goal-act among its extracted activities when matching identical strings (after lemmatization). Because of this issue, our Exact Match criteria also allowed for combining verbs and nouns from different gold answers. Under this Exact Match criteria, 73% of locations had at least one gold goal-act among the extracted activities, so this represents an upper bound on performance using this metric. Under the Partial Match criteria, 98% of locations had at least one gold goal-act among the extracted activities, but only 50% credit was awarded for these cases so the maximum score possible would be ∼86%. 
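The two scores can be computed as sketched below, where `exact_match` and `partial_match` stand in for the matching criteria defined above and are illustrative placeholders rather than functions from the original implementation.

```python
def mrr_exact(ranked_lists, exact_match):
    """MRR_E: mean reciprocal rank of the first exact-match answer among
    the top 10 ranked activities for each test location."""
    total = 0.0
    for loc, acts in ranked_lists.items():
        rr = 0.0
        for rank, act in enumerate(acts[:10], start=1):
            if exact_match(loc, act):
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

def mrr_partial(ranked_lists, exact_match, partial_match):
    """MRR_P: each activity scores 1.0 for an exact match and 0.5 for a
    partial match; the best score(a_j)/rank(a_j) over the top 10 is taken
    per location and averaged across locations, following the definitions
    above."""
    total = 0.0
    for loc, acts in ranked_lists.items():
        best = 0.0
        for rank, act in enumerate(acts[:10], start=1):
            if exact_match(loc, act):
                score = 1.0
            elif partial_match(loc, act):
                score = 0.5
            else:
                score = 0.0
            best = max(best, score / rank)
        total += best
    return total / len(ranked_lists)
```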
We also manually inspected 200 gold locations to analyze their types. We discovered some related groups, but substantial diversity overall. The largest group contains ∼20% of the locations, which are many kinds of stores (e.g., Ikea, WalMart, Apple store, shoe store). Even within a group, different locations often have quite different sets of co-occurring activities. In fact, we discovered some spelling variants (e.g., "WalMart" and "wal mart"), but they also have substantially different activity vectors (e.g., because one spelling is much more frequent), so the model learns about them independently. 8 Other groups include restaurants (∼5%), home-related (e.g., bathroom) (∼5%), education (∼5%), virtual (e.g., Wikipedia) (∼3%), medical (∼3%) and landscape (e.g., hill) (∼3%). It is worth noting that our locations were extracted by two syntactic patterns and it remains to be seen if this has brought in any bias -detecting location nouns (especially nominals) 7 A lemmatization error for the verb "enrolled". 8 Of course normalizing location names beforehand may be beneficial in future work. is a challenging problem in its own right. We introduced the problem of learning prototypical goal activities for locations. We obtained human annotations and showed that people do associate prototypical goal-acts with locations. We then created an activity profile framework and applied a semi-supervised label propagation algorithm to iteratively update the activity strengths for locations. We demonstrated that our learning algorithm identifies goal-acts for locations more accurately than several baseline methods. However, this problem is far from solved. Challenges also remain in how to evaluate the accuracy of goal knowledge extracted from text corpora. Nevertheless, our work represents a first step toward learning goal knowledge about locations, and we believe that learning knowledge about plans and goals is an important direction for natural language understanding research. In future work, we hope to see if we can take advantage of more contextual information as well as other external knowledge to improve the recognition of goalacts.
Sentence-Level Agreement for Neural Machine Translation
The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented implicitly through the entire neural network, and the training objective is computed at the word level. In this paper, we propose a sentence-level agreement module that directly minimizes the difference between the representations of the source and target sentences. The proposed agreement module can be integrated into NMT as an additional training objective and can also be used to enhance the representation of the source sentences. Empirical results on the NIST Chinese-to-English and WMT English-to-German tasks show that the proposed agreement module significantly improves NMT performance.
Neural network based methods have been applied to several natural language processing tasks Based on this hypothesis, Sentence-level agreement method has been applied to many natural language processing tasks. Aliguliyev (2009) used sentence similarity measure technique for automatic text summarization. In human translation, a translator's primary concern is to translate a sentence through its entire meaning rather than word-by-word meaning. Therefore, in early machine translation studies, such as example-based machine translation • Sentence-Level Agreement as Training Objective: we use the sentence-level agreement as a part of the training objective function. In this way, we not only consider the translation of the word level but also consider the sentence level. • Enhance Source Representation: As our model can make the vector distribution of the sentence-level between source-side and target-side closer, we can combine their sentence-level embeddings to enhance the source representation. Experimental results on Chinese-to-English and English-to-German translation tasks demonstrate that our model is able to effectively improve the performance of NMT.
In this section, we take the Transformer architecture proposed by As an encoder-to-decoder architecture, X = {x 1 , x 2 , ..., x J } represents a source sentence and Y = {y 1 , y 2 , ..., y I } represents a target sentence. The encoder-to-decoder model learns to estimate the conditional probability from the source sentence to the target sentence word by word: where θ is a set of model parameters and y <i denotes a partial translation. Different from the other NMT, Transformer has the self-attention layers that can operate in parallel. A single self-attention layer has two sub-layers: a multi-head self-attention layer and a feed forward network. The feed forward network consists of two simple fully connected networks with a ReLU activation function in between: where W 1 and W 2 are both linear transformation networks, b 1 and b 2 are both bias. We define H enc as the sentence representation of X via the self-attention layers in encoder, and H dec as the sentence representation of words Y via embedding layers in decoder. The parameters of Transformer are trained to minimize the following objective function on a set of training examples {(X n , Y n )} N n=1 : (3) 3 Agreement on Source and Target Sentence Some studies In this paper, we investigate the sentence-level relationship between the source and target sentences. We propose a sentence-level agreement method which can make the sentencelevel semantics of the source and target closer. The entire architecture of the proposed method is illustrated in Figure First, we need to get the sentence-level representation of the source and target. Some studies showed that the Mean operation is an effective method to represent sentence of sequence words Denote H enc is the mean of H enc and H dec is the mean of H dec . We design a Sentence Agreement Loss L mse to measure the distance between the source and target sentence-level vectors: Finally, our goal is to improve translation with shortening the distance in sentence-level. Thus, the final objective of our model is composed of parts, the formula is as follows: (5) Sentence-level agreement helps make the targetside sentence representation closer to the source. Intuitively, we can also use this mechanism to strengthen the source representation to improve the translation. Further, we propose a simple and efficient architecture in Figure In particular, we use a Tanh activation function instead of ReLU in the feed forward network. The value range of Tanh is -1 to 1, which indicates some information should be counterproductive. Our Enhanced Sentence Agreement Loss LE mse is to measure the distance between the source and target sentence-level vectors: where EH enc is the mean of EH enc . Le and Mikolov (2014) use concatenation as the method to combine the sentence vectors to strengthen the capacity of representation. We also use the same method to combine H enc and EH enc : In this way, we can enhance the source representation with a sentence-level representation closer to the target-side. The updated translation training objective is: Thus, the final objective is as follows: NIST04, NIST05, NIST06 datasets are testsets. We use the case-insensitive 4-gram NIST BLEU score as our evaluation metric To efficiently train NMT models, we train each model with sentences of length up to 50 words. In this way, about 90% and 89% of ZH-EN and EN-DE parallel sentences are covered in the experiments. 
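A minimal PyTorch sketch of the agreement term is given below, assuming mean pooling over the encoder states and the embedded target words, a plain mean-squared-error distance, and a free weight for combining the two losses; padding masks and the exact weighting used in the paper are not reproduced.

```python
import torch
import torch.nn.functional as F

def sentence_agreement_loss(H_enc, H_dec):
    """Sentence-level agreement loss L_mse.

    H_enc : (batch, src_len, d) encoder states of the source sentence
    H_dec : (batch, tgt_len, d) embedded target words fed to the decoder

    Both sides are mean-pooled over the length dimension and compared with
    a mean-squared-error distance; masking of padding tokens is omitted.
    """
    h_src = H_enc.mean(dim=1)          # mean of H_enc
    h_tgt = H_dec.mean(dim=1)          # mean of H_dec
    return F.mse_loss(h_src, h_tgt)

def total_loss(nll_loss, H_enc, H_dec, agreement_weight=1.0):
    # The translation objective is combined with L_mse; the relative
    # weighting is treated here as a tunable hyperparameter (assumption).
    return nll_loss + agreement_weight * sentence_agreement_loss(H_enc, H_dec)
```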
In addition, we use byte pair encoding Table In In particular, by comparing Row 3 and 4, we find that our proposed methods achieve a similar performance with the Transformer(Big) and gain a faster speed with fewer parameters. It indicates that enhancing source representation with a sentence-level representation is an effective method for improving translation performance. We further study how the proposed models influenced sentence-level similarity in translation. For this, we follow the method of (11) As Table In addition, there is a correlation between NMT performance (BLEU) and the sentence-level similarity. This indicates that the proposed method can improve the sentence-level similarity between source and target sentences and the performance of NMT. In this work, we have presented a sentence-level agreement method for NMT. Our goal is to bring the sentence representation of the source-side and the target-side closer together. At the same time, we can utilize this information to enhance source representation. Our study suggests the source-totarget sentence-level relationship is very useful for translation. In future work, we intend to apply these methods to other natural language tasks.
Factored Translation Models
We present an extension of phrase-based statistical machine translation models that enables the straight-forward integration of additional annotation at the word level, may it be linguistic markup or automatically generated word classes. In a number of experiments we show that factored translation models lead to better translation performance, both in terms of automatic scores and in terms of grammatical coherence.
The current state-of-the-art approach to statistical machine translation, so-called phrase-based models, is limited to the mapping of small text chunks without any explicit use of linguistic information, may it be morphological, syntactic, or semantic. Such additional information has been demonstrated to be valuable by integrating it in pre-processing or postprocessing steps. However, a tighter integration of linguistic information into the translation model is desirable for two reasons: • Translation models that operate on more general representations, such as lemmas instead of surface forms of words, can draw on richer statistics and overcome the data sparseness problems caused by limited training data. • Many aspects of translation can be best explained on a morphological, syntactic, or semantic level. Having such information available to the translation model allows the direct modeling of these aspects. For instance: reordering at the sentence level is mostly driven Therefore, we extended the phrase-based approach to statistical translation to tightly integrate additional information. The new approach allows additional annotation at the word level. A word in our framework is not only a token, but a vector of factors that represent different levels of annotation (see Figure We report on experiments with factors such as surface form, lemma, part-of-speech, morphological features such as gender, count and case, automatic word classes, true case forms of words, shallow syntactic tags, as well as dedicated factors to ensure agreement between syntactically related items. This paper describes the motivation, the modeling aspects and the computationally efficient decoding methods of factored translation models. We present briefly results for a number of language pairs. However, the focus of this paper is the description of the approach. Detailed experimental results will be described in forthcoming papers.
Many attempts have been made to add richer information to statistical machine translation models. Most of these focus on the pre-processing of the input to the statistical system, or the post-processing of its output. Our framework is more general and goes beyond recent work on models that back off to representations with richer statistics Rich morphology often poses a challenge to statistical machine translation, since a multitude of word forms derived from the same lemma fragment the data and lead to sparse data problems. If the input language is morphologically richer than the output language, it helps to stem or segment the input in a pre-processing step, before passing it on to the translation system Structural problems have also been addressed by pre-processing: On the other end of the translation pipeline, additional information has been used in post-processing. The goal of integrating syntactic information into the translation model has prompted many researchers to pursue tree-based transfer models One example to illustrate the short-comings of the traditional surface word approach in statistical machine translation is the poor handling of morphology. Each word form is treated as a token in itself. This means that the translation model treats, say, the word house completely independent of the word houses. Any instance of house in the training data does not add any knowledge to the translation of houses. In the extreme case, while the translation of house may be known to the model, the word houses may be unknown and the system will not be able to translate it. While this problem does not show up as strongly in English -due to the very limited morphological inflection in English -it does constitute a significant problem for morphologically rich languages such as Arabic, German, Czech, etc. Thus, it may be preferably to model translation between morphologically rich languages on the level of lemmas, and thus pooling the evidence for different word forms that derive from a common lemma. In such a model, we would want to translate lemma and morphological information separately, and combine this information on the output side to ultimately generate the output surface words. Such a model can be defined straight-forward as a factored translation model. See Figure Note that while we illustrate the use of factored translation models on such a linguistically motivated example, our framework also applies to models that incorporate statistically defined word classes, or any other annotation. The translation of factored representations of input words into the factored representations of output words is broken up into a sequence of mapping steps that either translate input factors into output factors, or generate additional output factors from existing output factors. Recall the example of a factored model motivated by morphological analysis and generation. In this model the translation process is broken up into the following three mapping steps: 1. Translate input lemmas into output lemmas 2. Translate morphological and POS factors 3. Generate surface forms given the lemma and linguistic factors Factored translation models build on the phrasebased approach Our current implementation of factored translation models follows strictly the phrase-based approach, with the additional decomposition of phrase translation into a sequence of mapping steps. Translation steps map factors in input phrases to factors in output phrases. Generation steps map output factors within individual output words. 
To reiterate: all translation steps operate on the phrase level, while all generation steps operate on the word level. Since all mapping steps operate on the same phrase segmentation of the input and output sentence into phrase pairs, we call these synchronous factored models. Let us now take a closer look at one example, the translation of the one-word phrase häuser into English. The representation of häuser in German is: surface The three mapping steps in our morphological analysis and generation model may provide the following applicable mappings: 1. Translation: Mapping lemmas • haus → house, home, building, shell We call the application of these mapping steps to an input phrase expansion. Given the multiple choices for each step (reflecting the ambiguity in translation), each input phrase may be expanded into a list of translation options. The German häuser|haus|NN|plural-nominative-neutral may be expanded as follows: 1. Factored translation models follow closely the statistical modeling approach of phrase-based models (in fact, phrase-based models are a special case of factored models). The main difference lies in the preparation of the training data and the type of models learned from the data. The training data (a parallel corpus) has to be annotated with the additional factors. For instance, if we want to add part-of-speech information on the input and output side, we need to obtain part-of-speech tagged training data. Typically this involves running automatic tools on the corpus, since manually annotated corpora are rare and expensive to produce. Next, we need to establish a word-alignment for all the sentences in the parallel training corpus. Here, we use the same methodology as in phrase-based models (typically symmetrized GIZA++ alignments). The word alignment methods may operate on the surface forms of words, or on any of the other factors. In fact, some preliminary experiments have shown that word alignment based on lemmas or stems yields improved alignment quality. Each mapping step forms a component of the overall model. From a training point of view this means that we need to learn translation and generation tables from the word-aligned parallel corpus and define scoring methods that help us to choose between ambiguous mappings. Phrase-based translation models are acquired from a word-aligned parallel corpus by extracting all phrase-pairs that are consistent with the word alignment. Given the set of extracted phrase pairs with counts, various scoring functions are estimated, such as conditional phrase translation probabilities based on relative frequency estimation or lexical translation probabilities based on the words in the phrases. In our approach, the models for the translation steps are acquired in the same manner from a wordaligned parallel corpus. For the specified factors in the input and output, phrase mappings are extracted. The set of phrase mappings (now over factored representations) is scored based on relative counts and word-based translation probabilities. The generation distributions are estimated on the output side only. The word alignment plays no role here. In fact, additional monolingual data may be used. The generation model is learned on a word-for-word basis. For instance, for a generation step that maps surface forms to part-of-speech, a table with entries such as (fish,NN) is constructed. 
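As an illustration of this word-level estimation, the sketch below builds such a generation table by relative-frequency counting over output-side (surface, part-of-speech) pairs; it is a sketch of the data preparation rather than the actual Moses training code.

```python
from collections import Counter, defaultdict

def train_generation_step(output_side_pairs):
    """Estimate a word-level generation table, e.g. surface form <-> POS,
    from output-side (surface, pos) pairs by maximum likelihood (relative
    frequency) estimation. Since word alignment plays no role here,
    additional monolingual data could be used as well."""
    pairs = list(output_side_pairs)
    pair_counts = Counter(pairs)
    surface_counts = Counter(s for s, _ in pairs)
    pos_counts = Counter(p for _, p in pairs)

    p_pos_given_surface = defaultdict(dict)   # e.g. p(NN | fish)
    p_surface_given_pos = defaultdict(dict)   # e.g. p(fish | NN)
    for (surface, pos), c in pair_counts.items():
        p_pos_given_surface[surface][pos] = c / surface_counts[surface]
        p_surface_given_pos[pos][surface] = c / pos_counts[pos]
    return p_pos_given_surface, p_surface_given_pos

# Toy usage with the (fish, NN) example from the text.
fwd, bwd = train_generation_step([("fish", "NN"), ("fish", "NN"), ("fish", "VB")])
print(fwd["fish"])   # {'NN': 0.666..., 'VB': 0.333...}
```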
One or more scoring functions may be defined over this table, in our experiments we used both conditional probability distributions, e.g., p(fish|NN) and p(NN|fish), obtained by maximum likelihood estimation. An important component of statistical machine translation is the language model, typically an ngram model over surface forms of words. In the framework of factored translation models, such sequence models may be defined over any factor, or any set of factors. For factors such as part-of-speech tags, building and using higher order n-gram models (7-gram, 9-gram) is straight-forward. As in phrase-based models, factored translation models can be seen as the combination of several components (language model, reordering model, translation steps, generation steps). These components define one or more feature functions that are combined in a log-linear model: Z is a normalization constant that is ignored in practice. To compute the probability of a translation e given an input sentence f, we have to evaluate each feature function h i . For instance, the feature function for a bigram language model component is (m is the number of words e i in the sentence e): = p(e 1 ) p(e 2 |e 1 )..p(e m |e m-1 ) (2) Let us now consider the feature functions introduced by the translation and generation steps of factored translation models. The translation of the input sentence f into the output sentence e breaks down to a set of phrase translations {( fj , ēj )}. For a translation step component, each feature function h T is defined over the phrase pairs ( fj , ēj ) given a scoring function τ : For a generation step component, each feature function h G given a scoring function γ is defined over the output words e k only: The feature functions follow from the scoring functions (τ , γ) acquired during the training of translation and generation tables. For instance, recall our earlier example: a scoring function for a generation model component that is a conditional probability distribution between input and output factors, e.g., γ(fish,NN,singular) = p(NN|fish). The feature weights λ i in the log-linear model are determined using a minimum error rate training method, typically Powell's method Compared to phrase-based models, the decomposition of phrase translation into several mapping steps creates additional computational complexity. Instead of a simple table look-up to obtain the possible translations for an input phrase, now multiple tables have to be consulted and their content combined. In phrase-based models it is easy to identify the entries in the phrase table that may be used for a specific input sentence. These are called translation options. We usually limit ourselves to the top 20 translation options for each input phrase. The beam search decoding algorithm starts with an empty hypothesis. Then new hypotheses are generated by using all applicable translation options. These hypotheses are used to generate further hypotheses in the same manner, and so on, until hypotheses are created that cover the full input sentence. The highest scoring complete hypothesis indicates the best translation according to the model. How do we adapt this algorithm for factored translation models? Since all mapping steps operate on the same phrase segmentation, the expansions of these mapping steps can be efficiently pre-computed prior to the heuristic beam search, and stored as translation options. 
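The log-linear combination of component scores can be sketched as follows; the feature names and weights are illustrative placeholders (in practice the λ_i are tuned by minimum error rate training), and the normalization constant Z is ignored as in the model definition.

```python
import math

def loglinear_score(feature_values, weights):
    """Combine component feature functions h_i in a log-linear model:
    score = sum_i lambda_i * h_i. Feature values are assumed to already be
    log scores (e.g. the log language model probability and the logs of
    the translation- and generation-step scoring functions tau and gamma)."""
    return sum(weights[name] * value for name, value in feature_values.items())

# Illustrative usage with made-up feature names, scores and weights
# (placeholders, not values from the paper).
features = {
    "lm": math.log(0.01),                 # n-gram language model
    "translation_step": math.log(0.2),    # phrase-level tau score
    "generation_step": math.log(0.5),     # word-level gamma score
}
weights = {"lm": 0.5, "translation_step": 1.0, "generation_step": 0.3}
print(loglinear_score(features, weights))
```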
For a given input phrase, all possible translation options are thus computed before However, we need to be careful about combinatorial explosion of the number of translation options given a sequence of mapping steps. In other words, the expansion may create too many translation options to handle. If one or many mapping steps result in a vast increase of (intermediate) expansions, this may be become unmanageable. We currently address this problem by early pruning of expansions, and limiting the number of translation options per input phrase to a maximum number, by default 50. This is, however, not a perfect solution. We are currently working on a more efficient search for the top 50 translation options to replace the current bruteforce approach. We carried out a number of experiments using the factored translation model framework, incorporating both linguistic information and automatically generated word classes. This work is implemented as part of the open source Moses In the first set of experiments, we translate surface forms of words and generate additional output factors from them (see Figure The English-German systems were trained on the full 751,088 sentence Europarl corpus and evaluated on the WMT 2006 test set English-Spanish systems were trained on a 40,000 sentence subset of the Europarl corpus. Here, we also used morphological and part-of-speech fac-tors on the output side with an 7-gram sequence model, resulting in absolute improvements of 1.25% (only morph) and 0.84% (morph+POS). Improvements on the full Europarl corpus are smaller. English-Czech systems were trained on a 20,000 sentence Wall Street Journal corpus. Morphological features were exploited with a 7-gram language model. Experimentation suggests that it is beneficial to carefully consider which morphological features to be used. Adding all features results in lower performance (27.04% BLEU), than considering only case, number and gender (27.45% BLEU) or additionally verbial (person, tense, and aspect) and prepositional (lemma and case) morphology (27.62% BLEU). All these models score well above the baseline of 25.82% BLEU. An extended description of these experiments is in the JHU workshop report The next model is the one described in our motivating example in Section 4 (see also Figure We carried out experiments for the language pair German-English, using the 52,185 sentence News Commentary corpus Experimental results are summarized in Table Note that this model completely ignores the surface forms of input words and only relies on the To overcome this problem, we introduce an alternative path model: Translation options in this model may come either from the surface form model or from the lemma/morphology model we just described. For surface forms with rich evidence in the training data, we prefer surface form mappings, and for surface forms with poor or no evidence in the training data we decompose surface forms into lemma and morphology information and map these separately. The different translation tables form different components in the log-linear model, whose weights are set using standard minimum error rate training methods. The alternative path model outperforms the surface form model with POS LM, with an BLEU score of 19.47% vs. 19.05%. The test set has 3276 unknown word forms vs 2589 unknown lemmas (out of 26,898 words). Hence, the lemma/morph model is able to translate 687 additional words. 
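Returning to the pre-computation of translation options described above, one way to sketch the expansion with early pruning is given below; the mapping-step interface and the intermediate pruning threshold are illustrative assumptions rather than details of the actual implementation.

```python
import heapq

def expand_translation_options(input_phrase, mapping_steps,
                               max_options=50, prune_k=200):
    """Pre-compute translation options for one input phrase by chaining a
    sequence of mapping steps (translation or generation), pruning the
    intermediate expansions early and keeping at most `max_options`
    options, mirroring the default limit mentioned above.

    Each mapping step is assumed to be a function from a partial option to
    a list of (extended_option, log_score) pairs; this interface is an
    assumption made for illustration."""
    options = [(0.0, input_phrase)]
    for step in mapping_steps:
        expanded = []
        for score, option in options:
            for new_option, step_score in step(option):
                expanded.append((score + step_score, new_option))
        # Early pruning of intermediate expansions.
        options = heapq.nlargest(prune_k, expanded, key=lambda x: x[0])
    return heapq.nlargest(max_options, options, key=lambda x: x[0])
```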
Finally, we went beyond linguistically motivated factors and carried out experiments with automatically trained word classes. By clustering words together by their contextual similarity, we are able to find statistically similarities that may lead to more generalized and robust models. We trained models on the IWSLT 2006 task (39,953 sentences). Compared to a baseline English-Chinese system, adding word classes on the output side as additional factors (in a model as pre- viously illustrated in Figure To demonstrate the versatility of the factored translation model approach, consider the task of recasing With factored translation models, it is possible to integrate this step into the model, by adding a generation step. See Table Factored translation models have also been used for the integration of CCG supertags We presented an extension of the state-of-the-art phrase-based approach to statistical machine translation that allows the straight-forward integration of additional information, may it come from linguistic tools or automatically acquired word classes. We reported on experiments that showed gains over standard phrase-based models, both in terms of automatic scores (gains of up to 2% BLEU), as well as a measure of grammatical coherence. These experiments demonstrate that within the framework of factored translation models additional information can be successfully exploited to overcome some short-comings of the currently dominant phrasebased statistical approach. The framework of factored translation models is very general. Many more models that incorporate different factors can be quickly built using the existing implementation. We are currently exploring these possibilities, for instance use of syntactic information in reordering and models with augmented input information. We have not addressed all computational problems of factored translation models. In fact, computational problems hold back experiments with more complex factored models that are theoretically possible but too computationally expensive to carry out. Our current focus is to develop a more efficient implementation that will enable these experiments. Moreover, we expect to overcome the constraints of the currently implemented synchronous factored models by developing a more general asynchronous framework, where multiple translation steps may operate on different phrase segmentations (for instance a part-of-speech model for large scale reordering).
Finding Syntax in Human Encephalography with Beam Search
Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics, such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, these metrics derive two amplitude effects: an early peak and a P600-like later peak.
Computational psycholinguistics has "always been...the thing that computational linguistics stood the greatest chance of providing to humanity" As The contribution of the present paper is situated precisely at this intersection. It combines a probabilistic generative grammar (RNNG; Comparison with language models based on long short term memory networks (LSTM, e.g. Following this Introduction, section 2 presents recurrent neural network grammars, emphasizing their suitability for incremental parsing. Sections 3 then reviews a previously-proposed beam search procedure for them. Section 4 goes on to introduce the novel application of this procedure to human data via incremental complexity metrics. Section 5 explains how these theoretical predictions are specifically brought to bear on EEG data using regression. Sections 6 and 7 elaborate on the model comparison mentioned above and report the results in a way that isolates the operative element. Section 8 discusses these results in relation to established computational models. The conclusion, to anticipate section 9, is that syntactic processing can be found in naturalistic speech stimuli if ambiguity resolution is modeled as beam search.
Recurrent neural network grammars (henceforth: RNNGs Figure Each step of this generative story depends on the state of a stack, depicted inside the gray box in Figure Phrase-closing actions trigger a syntactic composition function (depicted in Figure The parameters of all these components are adaptively adjusted using backpropagation at training time, minimizing the cross entropy relative to a corpus of trees. At testing time, we parse incrementally using beam search as described below in section 3. Beam search is one way of addressing the search problem that arises with generative grammars -constructive accounts of language that are sometimes said to "strongly generate" sentences. Strong generation in this sense simply means that they derive both an observable word-string as well as a hidden tree structure. Probabilistic grammars are joint models of these two aspects. By contrast, parsers are programs intended to infer a good tree from a given word-string. In incremental parsing with history-based models this inference task is particularly challenging, because a decision that looks wise at one point may end up looking foolish in light of future words. Beam search addresses this challenge by retaining a collection called the "beam" of parser states at each word. These states are rated by a score that is related to the probability of a partial derivation, allowing an incremental parser to hedge its bets against temporary ambiguity. If the score of one analysis suddenly plummets after seeing some word, there may still be others within the beam that are not so drastically affected. This idea of ranked parallelism has become central in psycholinguistic modeling (see e.g. As Algorithm 1 Word-synchronous beam search with fast-tracking. After In Algorithm 1 the beam is held in a set-valued variable called nextword. Beam search continues until this set's cardinality exceeds the designated action beam size, k. If the beam still isn't large enough (line 3) then the search process explores one more action by going around the while-loop again. Each time through the loop, lexical actions compete against structural actions for a place among the top k (line 5). The imbalance mentioned above makes this competition fierce, and on many loop iterations nextword may not grow by much. Once there are enough parser states, another threshold called the word beam k word kicks in (line 15). This other threshold sets the number of analyses that are handed off to the next invocation of the algorithm. In the study reported here the word beam remains at the default setting suggested by Stern and colleagues, k/10. Table In order to relate computational models to measured human responses, some sort of auxiliary hypothesis or linking rule is required. In the domain of language, these are traditionally referred to as complexity metrics because of the way they quantify the "processing complexity" of particular sentences. When a metric offers a prediction on each successive word, it is an incremental complexity metric. Table They quantify unexpectedness and uncertainty, respectively, about alternative syntactic analyses at a given point within a sentence. Electroencephalography (EEG) is an experimental technique that measures very small voltage fluctuations on the scalp. For a review emphasizing its implications vis-á-vis computational models, see We analyzed EEG recordings from 33 participants as they passively listened to a spoken recitation of the first chapter of Alice's Adventures in Wonderland. 
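The control flow of this word-synchronous search, together with one common way to read off word surprisal from beam scores, can be sketched as below. The successor interface, the GEN action label, and the omission of fast-tracking are simplifications of Algorithm 1 rather than the original implementation.

```python
import heapq
from math import exp, log

def word_synchronous_beam_step(beam, next_word, k, k_word=None):
    """One step of word-synchronous beam search over generative parser
    states. `beam` holds (log_prob, state) pairs; `state.successors(next_word)`
    is an assumed interface enumerating (action, new_log_prob, new_state)
    tuples, where the lexical action "GEN" generates `next_word`."""
    k_word = k_word or max(1, k // 10)       # word beam, default k/10
    nextword = []                            # states that generated next_word
    frontier = list(beam)
    while len(nextword) <= k and frontier:   # loop until |nextword| exceeds k
        candidates = []
        for log_prob, state in frontier:
            for action, new_log_prob, new_state in state.successors(next_word):
                candidates.append((new_log_prob, action, new_state))
        frontier = []
        # Lexical and structural actions compete for the top k slots.
        for new_log_prob, action, new_state in heapq.nlargest(
                k, candidates, key=lambda c: c[0]):
            if action == "GEN":
                nextword.append((new_log_prob, new_state))
            else:
                frontier.append((new_log_prob, new_state))
    # Only the top k_word analyses are handed to the next invocation.
    return heapq.nlargest(k_word, nextword, key=lambda s: s[0])

def surprisal(prev_beam, new_beam):
    """Word surprisal from beam scores: the negative log of the probability
    mass that survives integrating the word, relative to the mass before it
    (log-sum-exp stabilization omitted for brevity)."""
    prev_mass = sum(exp(s) for s, _ in prev_beam)
    new_mass = sum(exp(s) for s, _ in new_beam)
    return -(log(new_mass) - log(prev_mass))
```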
2 This auditory stimulus was delivered via earphones in an isolated booth. All participants scored significantly better than chance on a post-session 8-question comprehension quiz. An additional ten datasets were excluded for not meeting this behavioral criterion, six due to excessive noise, and three due to experimenter error. All participants provided written informed consent under the oversight of the University of Michigan HSBS Institutional Review Board (#HUM00081060) and were compensated $15/h. 3 Data were recorded at 500 Hz from 61 active electrodes (impedences < 25 kΩ) and divided into 2129 epochs, spanning -0.3-1 s around the onset of each word in the story. Ocular artifacts were removed using ICA, and remaining epochs with excessive noise were excluded. The data were filtered from 0.5-40 Hz, baseline corrected against a 100 ms pre-word interval, and separated into epochs for content words and epochs for function words because of interactions between parsing variables of interest and word-class Linear regression was used per-participant, at each time-point and electrode, to identify content-word EEG amplitudes that correlate with complexity metrics derived from the RNNG+beam search combination via the complexity metrics in Table Each Target predictor was included in its own model, along with several Control predictors that are known to influence sentence processing: sentence order, word-order in sentence, log word frequency All predictors were mean-centered. We also constructed null regression models in which the rows of the design matrix were randomly permuted. 4 β coefficients for each effect were tested against these null models at the group level across 2 4 Temporal auto-correlation across epochs could impact model fits. Content-words are spaced 1 s apart on average and a spot-check of the residuals from these linear models indicates that they do not show temporal auto-correlation: AR(1) < 0.1 across subjects, time-points, and electrodes. all electrodes from 0-1 seconds post-onset, using a non-parametric cluster-based permutation test to correct for multiple comparisons across electrodes and time-points We compare the fit against EEG data for models that are trained on the same amount of textual data but differ in the explicitness of their syntactic representations. At the low end of this scale is the LSTM language model. Models of this type treat sentences as a sequence of words, leaving it up to backpropagation to decide whether or not to encode syntactic properties in a learned history vector RNNGs are higher on this scale because they explicitly build a phrase structure tree using a symbolic stack. We consider as well a degraded version, RNNG -comp which lacks the composition mechanism shown in Figure In all cases, these language models were trained on chapters 2-12 of Alice's Adventures in Wonderland. This comprises 24941 words. The stimulus that participants saw during EEG data collection, for which the metrics in Table RNNGs were trained to match the output trees provided by the Stanford parser During RNNG training, the first chapter was used as a development set, proceeding until the per-word perplexity over all parser actions on this set reached a minimum, 180. This performance was obtained with a RNNG whose state vector was 170 units wide. The corresponding LSTM language model state vector had 256 units; it reached a per-word perplexity of 90.2. 
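The per-participant regression at a single electrode and time-point can be sketched as an ordinary least-squares fit with a mean-centered Target predictor and Control predictors; the cluster-based permutation test across electrodes and time-points is not shown.

```python
import numpy as np

def fit_target_beta(eeg, target, controls):
    """Regress single-trial EEG amplitudes at one electrode/time-point on a
    mean-centered Target predictor plus Control predictors (with an
    intercept) and return the Target coefficient.

    eeg      : (n_epochs,) amplitudes for content-word epochs
    target   : (n_epochs,) complexity metric values
    controls : (n_epochs, n_controls) control predictors
    """
    X = np.column_stack([np.ones(len(eeg)),
                         target - target.mean(),
                         controls - controls.mean(axis=0)])
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return beta[1]     # coefficient of the Target predictor
```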
Of course the RNNG estimates the joint probability of both trees and words, so these two perplexity levels are not directly comparable. Hyperparameter settings were determined by grid search in a region near the one which yielded good performance on the Penn Treebank benchmark reported on Table To explore the suitability of the RNNG + beam search combination as a cognitive model of language processing difficulty, we fitted regression models as described above in section 5 for each of the metrics in Table Surprisal from the LSTM sequence model did not reliably predict EEG amplitude at any timepoint or electrode. The DISTANCE predictor did derive a central positivity around 600 ms post-word onset as shown in Figure We compared RNNG to its degraded cousin, RNNG -comp , in three regions of interest shown in Figure Single-trial data were averaged across electrodes and time-points within each region and fit with a linear mixed-effects model with fixed effects as described below and random intercepts by-subjects Statistically significant results obtained for DIS-TANCE from RNNG -comp in the P600 region and for SURPRISAL for RNNG in the ANT region. No significant results were observed in the N400 region. These results are detailed in Table Since beam search explores analyses in descending order of probability, DISTANCE and SUR-PRISAL ought to be yoked, and indeed they are correlated at r = 0.33 or greater across all of the beam sizes k that we considered in this study. However they are reliably associated with different EEG effects. SURPRISAL manifests at anterior electrodes relatively early. This seems to be a different effect from that observed by This pattern of results suggests an approach to the overall modeling task. In this approach, both grammar and processing strategy remain the same, and alternative complexity metrics, such as SUR-PRISAL and DISTANCE, serve to interpret the unified model at different times or places within the brain. This inverts the approach of Recurrent neural net grammars indeed learn something about natural language syntax, and what they learn corresponds to indices of human language processing difficulty that are manifested in electroencephalography. This correspondence, between computational model and human electrophysiological response, follows from a system that lacks an initial stage of purely stringbased processing. Previous work was "two-stage" in the sense that the generative model served to rerank proposals from a conditional model
Modeling Dual Read/Write Paths for Simultaneous Machine Translation
Simultaneous machine translation (SiMT) outputs the translation while reading the source sentence and hence requires a policy to decide whether to wait for the next source word (READ) or generate a target word (WRITE); the sequence of these actions forms a read/write path. Although the read/write path is essential to SiMT performance, no direct supervision is given to the path in existing methods. In this paper, we propose a dual-path SiMT method which introduces duality constraints to direct the read/write path. According to the duality constraints, the read/write paths of the source-to-target and target-to-source SiMT models can be mapped to each other. As a result, the two SiMT models can be optimized jointly by forcing their read/write paths to satisfy this mapping. Experiments on En↔Vi and De↔En tasks show that our method outperforms strong baselines under all latency levels.
Simultaneous machine translation (SiMT) The sequence of READ and WRITE actions in the translation process form a read/write path, which is key to SiMT performance. Improper read/write path will bring damage to translation performance as compared to the following WRITE actions too many but not necessary READ actions will result in high translation latency while too few but not sufficient READ actions will exclude indispensable source information. Therefore, an ideal read/write path is that the READ actions compared to the following WRITE actions are just sufficient and necessary, which means the source words covered by consecutive READ actions and the target words generated by the following consecutive WRITE actions should be semantically equivalent. Ensuring sufficiency and necessity between READ/WRITE actions will lead to a proper read/write path and thereby good SiMT performance. But unfortunately, the existing SiMT methods, which employ a fixed or adaptive policy, do not consider the sufficiency or necessity in their policy. The fixed policy performs SiMT based on a pre-defined read/write path Under these grounds, we aim at introducing the evaluation of sufficiency and necessity between READ/WRITE actions to direct the read/write path without involving external information. As mentioned above, in an ideal read/write path, the source segment (i.e., source words read by the consecutive READ actions) and the corresponding target segment (i.e., target words generated by the following consecutive WRITE actions) are supposed to be semantically equivalent and thus translation to each other, which constitutes a separate segment pair. Hence, an ideal read/write path divides the whole sentence pair into a sequence of segment pairs where the source sentence and the target sentence should be translation to each other segment by segment. That means if the translation direction is reversed, an ideal read/write path for target-to-source SiMT can also be deduced from the same sequence of segment pairs. For example, according to the alignment in Figure Based on the above reasoning, we propose a method of Dual-Path SiMT, which uses the SiMT model in the reverse direction to guide the SiMT model in the current direction according to duality constraints between their read/write paths. With duality constraints, the read/write paths in sourceto-target and target-to-source SiMT should reach an agreement on the corresponding segment pairs. Along this line, our method maintains a source-totarget SiMT model and a target-to-source SiMT model concurrently, which respectively generate their own read/write path using monotonic multi-head attention
We first briefly introduce SiMT with a focus on monotonic multi-head attention For a SiMT task, we denote the source sentence as Read/write path can be represented in multiple forms, such as an action sequence of READ and WRITE (e.g., RRWWWRW• • • ), or a path from (0, 0) to (I, J) in the attention matrix from the target to source, where moving right (i.e., →) means READ action and moving down (i.e., ↓) means WRITE action, as shown in Figure Mathematically, a read/write path can be represented by a monotonic non-decreasing sequence {g i } I i=1 of step i, where the g i represents the number of source words read in when writing the i th target word y i . The value of {g i } I i=1 depends on the specific SiMT policy, where monotonic multi-head attention (MMA) Monotonic multi-head attention MMA processes the source words one by one, and concurrently predicts a selection probability p ij to indicates the probability of writing y i when reading x j , and accordingly a Bernoulli random variable z ij is calculated to determine READ or WRITE action: where V K and V Q are learnable parameters, d k is dimension of head. If z ij = 0, MMA performs READ action to wait for the next source word x j+1 . If z ij = 1, MMA sets g i = j and performs WRITE action to generate y i based on x ≤g i . Therefore, the decoding probability of y with parameters θ is where x ≤g i are first g i source tokens, and y <i are previous target tokens. Note that when integrated into multi-head attention, all attention heads in decoder layers independently determine the READ/WRITE action. If and only when all heads decide to perform WRITE action, the model starts translating, otherwise the model waits for the next source word. Expectation training Since sampling a discrete random variable z ij precludes back-propagation, MMA applies expectation training where α ij calculates the expectation probability of writing y i when reading x j . Then, the attention distribution and context vectors are accordingly calculated in the expected form. To trade-off between translation quality and latency, MMA introduces a latency loss L g to the training loss: where L g measures the total latency, and λ is the weight of latency loss. Please refer to Our dual-path SiMT model employs a source-totarget (forward) model and a target-to-source (backward) model, called single-path SiMT, which generate their own read/write path based on MMA. According to duality constraints that the read/write paths of the two single-path SiMT models should share the same segment pair sequence, the two read/write paths should be transposed to each other in principle as shown in Figure The purpose of transposing a read/write path is to get a new read/write path in the reverse direction based on the same segment pairs as the original path. As the transposing process works in the same way for the two directions, we just introduce the process for the forward single-path SiMT. Since there is no explicit read/write path in the training of single-path SiMT model, the transposing process can only use the expected writing probability matrix α as the input, shown in Eq.( The transposing process consists of three steps. First, derive the read/write path from the expected writing probability matrix α and segment the sentence pair into a sequence of segment pairs. Second, transpose the sequence of segment pairs into the corresponding one for the backward SiMT. Last, merge the transposed segment pairs to get the transposed path and then project it to γ. 
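A minimal sketch of the selection probability and of a hard read/write path derived from it is given below; the exact energy parameterization of MMA and the use of a 0.5 threshold at inference time are simplifying assumptions.

```python
import torch

def selection_probability(s_i, h_j, W_q, W_k, d_k):
    """Selection probability p_ij of writing target word y_i when reading
    source word x_j, modeled as a sigmoid over a scaled dot-product energy
    between the decoder state s_i and the encoder state h_j. The exact
    parameterization used by MMA (bias and offset terms) is not reproduced."""
    energy = torch.dot(s_i @ W_q, h_j @ W_k) / d_k ** 0.5
    return torch.sigmoid(energy)

def greedy_read_write_path(p):
    """Derive g_i from an I x J matrix of selection probabilities with hard
    decisions (WRITE when p_ij >= 0.5, otherwise READ), never moving the
    source pointer backwards. Inference-time sketch only; during training
    the expected form is used instead of hard decisions."""
    I, J = p.shape
    g, j = [], 0
    for i in range(I):
        while j < J - 1 and p[i, j] < 0.5:
            j += 1                      # READ the next source word
        g.append(j + 1)                 # WRITE y_i after reading j + 1 words
    return g
```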
In the following, we will introduce the steps of segment, transpose and merge in details. Segment Given the expected writing probability matrix α, to get the read/write path, we first find out the source position d i that the WRITE action for each target position i corresponds to, which is According to the property of monotonic attention, there are some consecutive WRITE actions that corresponds to the same source position, so the target words generated by the consecutive WRITE actions form a target segment. Formally, we assume there are K target segments in total, denoted as ), where b y k and e y k are its beginning and end target positions, we can get the corresponding source segment as and Thus the sentence pairs ⟨x, y⟩ can be segmented into the sequence of segment pairs as By replacing the source words with READ actions and target words with WRITE actions, we can get the action segment pairs. Then, the read/write path is formed by concatenating all the action segment pairs, where the length of the read/write path is equal to the total number of source words and target words. Transpose After getting the sequence of segment pairs, the transposed read/write path can be derived from it. As the transposed read/write path is in the form to fit the backward single-path SiMT, the sequence of segment pairs should also be transposed to fit the another direction. According to duality constraints, the sequence of segment pairs is shared by forward and backward SiMT, so we only need to exchange the source segment and target segment in each segment pair, that is from ⟨x k , ȳk ⟩ to ⟨ȳ k , xk ⟩, where the beginning and end positions of each source/target segment remain the same. Then, we get the corresponding transposed action segment pairs by replacing target words with READ actions and source words with WRITE actions. In this way, we accomplish the transposing of segment pairs. Let's review the example in Figure Assuming the expected writing probability matrix for the forward single-path SiMT is α F and its transposed expected writing probability matrix is γ F , and similarly in the backward single-path SiMT, the matrices are α B and γ B , respectively. We reduce the gap between the read/write path with the transposed path of read/write path in another direction by minimizing L 2 distance between their corresponding expected writing probability matrix as follows: Two L 2 distances are added to the training loss as a regularization term and final training loss is where L θ F and L θ B are the loss function of the forward and backward single-path SiMT model respectively, calculated as Eq.( In the inference time, the forward and backward single-path SiMT models can be used separately, depending on the required translation direction. Dual learning is widely used in dual tasks, especially machine translation. For both unsupervised SiMT policy falls into two categories: fixed and adaptive. For fixed policy, the read/write path is defined by rules and fixed during translating. For adaptive policy, the read/write path is learned and adaptive to the current context. Early adaptive policies used segmented translation We evaluated our method on four translation directions of the following two public datasets. IWSLT15 1 English↔Vietnamese (En↔Vi) (133K pairs) WMT15 2 German↔English (De↔En) (4.5M pairs) Following We conducted experiments on following systems. 
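For concreteness, the Segment, Transpose and Merge steps described earlier, together with the L2 duality term, can be sketched as follows; the argmax-based choice of d_i, the indexing conventions and the handling of trailing source words are simplifications of the procedure described above.

```python
import numpy as np

def segment_pairs_from_alpha(alpha):
    """Segment step: d_i = argmax_j alpha[i, j] gives the source position
    each WRITE corresponds to, and consecutive target positions sharing
    the same d_i form one target segment, paired with the source span
    read just before it (0-based, inclusive indices)."""
    d = alpha.argmax(axis=1)
    I, _ = alpha.shape
    pairs, start = [], 0
    for i in range(1, I + 1):
        if i == I or d[i] != d[start]:
            src_end = int(d[start])
            src_start = pairs[-1][0][1] + 1 if pairs else 0
            pairs.append(((src_start, src_end), (start, i - 1)))
            start = i
    return pairs

def transpose_path(pairs, I, J):
    """Transpose and Merge steps: swap source and target segments in every
    pair and merge them into a target-to-source path, returned as a J x I
    one-hot matrix gamma where gamma[j, i] = 1 means source word j is
    written after reading i + 1 target words."""
    gamma = np.zeros((J, I))
    for (src_start, src_end), (_, tgt_end) in pairs:
        for j in range(src_start, src_end + 1):
            gamma[j, tgt_end] = 1.0
    return gamma

def duality_loss(alpha_one_dir, gamma_other_dir):
    """L2 distance between the expected writing matrix of one direction and
    the transposed path derived from the other direction (same shape),
    used as the regularization term added to the training loss."""
    return ((alpha_one_dir - gamma_other_dir) ** 2).sum()
```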
Offline Conventional Transformer Wait-k Wait-k policy, the widely used fixed policy MMA 3 Monotonic multi-head attention (MMA) proposed by Single Path SiMT model of one translation direction based on monotonic multi-head attention. To avoiding outlier heads 4 that are harmful for the read/write path, we slightly modified MMA for more stable performance. We no longer let the heads in all decoder layers independently determine the READ/WRITE action, but share the READ/WRITE action between the decoder layers. Dual Paths Dual-path SiMT described in Sec.3. The implementations of all systems are adapted from Fairseq Library We evaluate these systems with BLEU (Papineni 3 github.com/pytorch/fairseq/tree/ master/examples/simultaneous_translation 4 Since MMA requires all heads in decoder layers to independently decide READ/WRITE action and starts translating only when all heads select WRITE action, some outlier heads that perform too many READ actions will result in higher latency. where τ = argmax Figure We conducted extensive analyses to understand the specific improvements of our method. Unless otherwise specified, all results are reported on De→En. We conducted ablation studies on the duality constraints, where we use direct transposition to replace transposing process of read/write path, only constrain the forward single-path model or remove the duality constraints. As shown in Table The read/write path needs to ensure sufficient content for translation and meanwhile avoid unnecessary latency, where the aligned source position where the best case is A N ec = 1 for g i = a i , performing WRITE just at the aligned position and there is no unnecessary waiting. The more detailed description please refers to Appendix A. As shown in Figure To verify that our method improves the duality of two read/write paths, we conduct duality evaluation between source-to-target and target-to-source read/write paths. Specifically, we first express both the original read/write path on target-to-source and the transposed path of source-to-target read/write path in the form of matrices, and then calculate the Intersection over Union score (IoU) between the area below them (see Figure Figure To analyze the relationship between the forward and backward single-path SiMT model in terms of the latency setting, we set the latency weight (λ in Eq.( In this paper, we develop the dual-path SiMT to supervise the read/write path by modeling the duality constraints between SiMT in two directions. Experiments and analyses we conducted show that our method outperforms strong baselines under all latency and achieves a high-quality read/write path. The black line indicates the ground-truth alignments between the target and source. g i is the number of source words read in when generating the i th target word. a i is the ground-truth aligned source position of the i th target word. a i > g i (numbers colored in red) means that the i th target word is forced to be translated in advance before reading its aligned source word. A Evaluation Metrics of Read/Write Path In Sec.6.2, we propose two metrics A Suf and A N ec to measure the sufficiency and necessity of the read/write path using alignments. Here, we give a more detailed calculation of them. Given the ground-truth alignments, we denote the aligned source position of the i th target word as a i . Specifically, for one-to-many alignment from target to source, we choose the furthest source word as it aligned source position. 
For a read/write path, we denote the number of source words read in when generating the i th target word as g i . Figure Sufficiency A Suf measures how many aligned source words are read before translating the target word (i.e., a i ≤ g i ), which ensures the faithfulness of the translation, calculated as where 1 a i ≤g i counts the number that a i ≤ g i . Taking the case in Figure Necessity A N ec measures how far the output position g i is from the aligned position a i , where the closer output position to the alignment position indicates that the read/write path outputs earlier, and there is less unnecessary latency. A N ec is calculated as Note that A N ec only focuses on aligned positions that are read before output position (i.e., a i ≤ g i ). In the case shown in Figure To verify that our proposed method does make the read/write path of source-to-target and target-tosource more dual, we calculate the Intersection over Union score (IoU) to evaluate the duality in Sec.6.3. Following, we describe the detailed calculation of IoU score. Figure where the larger IoU score means that the sourceto-target and target-to-source read/write path are much more dual. which means the source-to-target and target-tosource read/write path are exactly in the dual form and reach the agreement on the sequence of segment pairs. In the calculation of IoU score, for 'MMA' and 'Single Path', the source-to-target and target-tosource read/write paths come from independent models in the two directions respectively. For 'Dual Paths', the source-to-target and target-to-source read/write paths come from the forward and backward single-path SiMT model concurrently. All systems in our experiments use the same hyperparameters, as shown in Table We also compare 'Dual Paths' and 'Single Path' with previous methods on the latency metrics Average Proportion (AP) Average Proportion (AP) Differentiable Average Lagging (DAL) (Arivazhagan et al., 2019) is a differentiable version of average lagging, which can be integrated into training. Given the read/write path g i , DAL is calculated as Figure
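As a small worked illustration of these path metrics, the following sketch (illustrative only, not the authors' code) computes A_Suf and A_Nec from a read/write path g and alignment positions a. Since the exact A_Nec normalisation is given by an equation in the appendix, we assume here that it averages a_i/g_i over positions with a_i ≤ g_i, which is consistent with the stated best case of A_Nec = 1 when g_i = a_i.

```python
from typing import List

def path_sufficiency(g: List[int], a: List[int]) -> float:
    """A_Suf: fraction of target positions whose aligned source word
    has already been read when the word is written (a_i <= g_i)."""
    assert len(g) == len(a)
    return sum(1 for gi, ai in zip(g, a) if ai <= gi) / len(g)

def path_necessity(g: List[int], a: List[int]) -> float:
    """A_Nec: closeness of the write position g_i to the aligned position a_i,
    restricted to positions with a_i <= g_i.  We average a_i / g_i, an assumed
    normalisation that yields the stated best case A_Nec = 1 when g_i = a_i."""
    covered = [(gi, ai) for gi, ai in zip(g, a) if ai <= gi]
    if not covered:
        return 0.0
    return sum(ai / gi for gi, ai in covered) / len(covered)

# toy read/write path: 4 target words
g = [2, 3, 3, 5]   # source words read before each WRITE
a = [1, 3, 4, 4]   # ground-truth aligned source positions
print(path_sufficiency(g, a))   # 0.75 -- position 3 has a_3=4 > g_3=3, translated too early
print(path_necessity(g, a))     # averages 1/2, 3/3, 4/5 over the covered positions
```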
Learning to Jointly Predict Ellipsis and Comparison Structures
Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.
Representing the underlying meaning of text has been a long-standing topic of interest in computational linguistics. Recently there has been a renewed interest in representation of meaning for various tasks such as semantic parsing, where the task is to map a natural language sentence into its corresponding formal meaning representation With the rise of continuous-space models there is even more interest in capturing deeper generic semantics of text as opposed to surface word representations. One of the most common ways for expressing evaluative sentiment towards different entities is using comparison. A simple natural language example of comparison is Their pizza is the best. Capturing the underlying meaning of comparison structures, as opposed to their surface wording, is required for accurate evaluation of qualities and quantities. For instance, given a more complex comparison example, The pizza was great, but it was not as awesome as the sandwich, the state-ofthe-art sentiment analysis system Consider the generic meaning representation depicted in in Figure (1) My Mazda drove faster than his Hyundai. It is evident that this meaning representation does not fully capture how the semantics of the adjective fast relates to the driving event, and what it actually means for a car to drive faster than another car. More importantly, there is an ellipsis in this sentence, the resolution of which results in complete understood reading of My Mazda drove faster than his Hyundai drove fast , which is in no way captured in Figure Figure tures which can capture the mentioned phenomena can enable the development of computational semantic models which are suitable for various reasoning tasks. In this paper we introduce a joint theoretical model for comprehensive semantic representation of the structure of comparison and ellipsis in natural language. We jointly model comparison and ellipsis as inter-connected predicateargument structures, which enables automatic ellipsis resolution. The main contributions of this paper can be summarized as follows: (1) introducing a novel framework for jointly representing the semantics of comparison and ellipsis on top of syntactic trees, (2) releasing a dataset of 2,800 expert annotated user review comparison instances To our knowledge, this paper presents the first comprehensive computational framework of its kind for ellipsis and comparison constructions. Our semantic model can be incorporated as a part of any broad-coverage semantic parser
Broadly, elliptical constructions involve the omission of one or more phrases from a clause (such as 'drove fast' phrase at the end of example (1)) whose content can still be fully recovered from the unelided words of the sentence In 2010, a SemEval task was organized with the goals of (1) automatically detecting VPE in text, and (2) resolving the antecedent of each VPE The syntax and semantics of comparison structures in natural language have been the subject of extensive systematic research in linguistics for a long time The most recent work on the computational semantics of comparison -Tree-based Structure Modeling: Bakhshandeh and Allen use span-based predicate-argument treatment, which is often prone to errors and lower inter-annotator agreement. We base our framework on top of constituency syntactic parse trees, which leads to more accurate -Reviews Dataset: While Bakhshandeh and Allen use newswire text, we shift our focus to the actual user reviews, which contain more comparison structures (Section 4.2). Furthermore, while their dataset included 531 sentences, we collect gold annotations for 2,800 sentences, which significantly increases the size of the available data for the community. In this Section we introduce a novel semantic framework of comparison structures which incorporates ellipsis. Our framework extends and improves the state-of-the-art semantic framework for comparison structures in various ways (outlined in Section 2). We follow the model of interconnected predicate-argument structures. In this model the predicates are either comparison or ellipsis operators, and each predicate takes a set of arguments called its semantic frame. For instance, in [Sam] is the tallest [student] [in the gym], the morpheme -est expresses a comparison operator and the brackets delimit its various arguments. In this Section we provide details about our semantic framework. Comparison structures are modeled as sets of inter-connected predicate-arguments. We base our comparison framework on Bakhshandeh and Allen We consider two main categories of comparison predicates, each of which can grade any of the four parts of speech including adjectives, adverbs, nouns, and verbs. 1. Ordering: Indicates how two or more entities are ordered along a scale. The subtypes of this predicate are the following: -Comparatives with '>', '<' indicate that one degree is greater or lesser than another; expressed by the morphemes more/-er and less. (2) The steak is tastier than the potatoes. (3) Tom ate more soup. -Equatives involving '≥' indicate that one degree meets or exceeds another; expressed by as in constructions such as as tall or as much. (4) The Mazda drives as fast as the Nissan. -Superlatives indicate an entity or event has the 'highest' or 'lowest' degree on a scale; expressed by most/-est and least. (5) That chef made the best soup. 2. Extreme: Indicates having too much or enough of a quality or quantity. The subtypes of this predicate are the following: -Excessive indicate that an entity or event is 'too high' on a scale; expressed by too. -Assetive indicate that an entity or event has 'enough' of a degree; expressed by enough. Each predicate takes a set of arguments that we refer to as the predicate's 'semantic frame'. Following are the arguments included in our framework: - -Scale (-/neutral/+) is the scale for the comparison, such as size, beauty, temperature. We assign the generic sentiment values positive (+), neutral, and negative (-) to the underlying scales. 
-Standard (Std) is the reason a degree is 'too much' (excessive predicates) or 'enough' (assetive predicates). An individual j may be 'too tall to reach the top shelf ' but 'tall enough to get on this ride'. -Differential (Diff) is an explicit phrase indicating the 'size' of a difference between degrees. For instance, '2 inches taller' or '6 degrees warmer'. -Domain (Dom) is an explicit expression of the type of domain in which the comparison takes place (superlatives). An individual m may be 'the tallest girl' but not 'the tallest student'. -Domain Specifier (D-Spec) is the specification of the domain argument, further narrowing the scope of the domain. An individual m may be 'the tallest girl in the class' but not 'the tallest girl in the country'. The Case of Copulas: A copula is a form of the verb to be that links the subject of a sentence with a predicate, such as was in the sentence She was a doctor. Comparison structures are often formed on the basis of copular constructions, for example (6a). Compare this with (6b), and their corresponding comparison structures. (6) a. This was the best pizza in town. b. I ate the best pizza in town. sup This was the most delicious pizza . Perhaps the most common type of comparison structure is the comparative construction, with (13) as an example, where ∆ marks an ellipsis site. Roughly, ( (7) The steak sizzled more appetizingly than the hamburger ∆. (8) appetizingness(e1) > appetizingness(e2) On the surface, the sentence in ( (9) than the hamburgersizzled appetizingly It is clear that resolving ellipsis in comparison structures is crucial for language understanding and failure to do so would deliver an incorrect meaning representation. Numerous subtypes of elliptical constructions are distinguished in linguistics • Comparatives: Ellipsis takes place in the dependent clause headed by than. We indicate the three ellipsis possibilities for these clauses resuming (10), a nominal comparative. The elided materials are written in subscript. (10) Mary ate more rice ... -VP-deletion (aka 'Comparative Deletion'): ... than John did eat rice. -Stripping (aka 'Phrasal Comparative'): ... than John ate rice. -Gapping: ... than John, ate how-much soup. -Pseudogapping: ... than John did eat soup. -Sluicing: ... than someone, but I don't remember than who ate how-much rice. -Subdeletion: ... than John ate how-much soup. • Equatives: Ellipsis takes place in the dependent clause headed by as. We indicate the possibilities for these clauses resuming (11), a nominal equative. (11) Mary ate as much rice ... -VP-deletion: ... as John did eat how-much rice. Now that we have the ellipsis predicate types, we want to empirically model ellipsis constructions as predicate-argument structures with reference to an antecedent, where each ellipsis predicate is associated with its corresponding comparative predicate. The question is how to represent the ellipsis construction in a sentence. Consider the example of VP-deletion in the following adverbial comparative: (12) The steak was cooked more carefully than the burger ∆. where ∆ should be resolved to was cooked howcarefully. How is called the null operator, which 7 Whether this construction is grammatical is controversial. serves as the placeholder for the measurement of a degree. In order to represent the resolution of the elided material such as ∆, we first annotate the predicate of an ellipsis construction as an 'attachment' site in the syntactic tree, right next to the node that the elided material should follow. 
Hence, in (12), the token the burger will be annotated as the ellipsis predicate, which signifies the start of an ellipsis construction. Defining the arguments for ellipsis predicates can be complicated. Here the goal is to thoroughly construct the antecedent of the elided material by annotating the existing words of the context sentence. In order to address this, we define the following three argument types for ellipsis: -Reference is the constituency node which is the base of an antecedent. -EXclude (Ex) is the constituency node which should be excluded from the Reference. -How-much (?) is the constituency node which should be replaced by a null operator such as how or how-much; this is always the argument matching more/-er or as (much) in the context sentence. We thus annotate the explicit node cooked as the Ground-Ellipsis (G/E) which also links the comparison construction to the ellipsis predicate. One approach for extracting sentences containing comparisons is to mine the text for some (automatically or manually created) patterns, then train a classifier for labeling comparison and noncomparison sentences However, the variety of comparison structures is so vast that being limited to some specific patterns or syntactic structures will not result in good coverage of comparisons. Instead, we use the following filter (CompF ilter) with a set of basic comparison structure linguistic markers for extracting potential comparison instances: -Any sentence containing a word with POS tag equal to JJR, RBR, JJS, or RBS. -Any sentence containing a comparison morpheme such as more, most, less, enough, too. This filter is guaranteed not to have any false negatives since it is exhaustive enough to capture any possible comparison sentence. We applied this filter to the English Web Corpus and the Movie Reviews dataset and extracted a pool of 2,800 sentences for final annotation in the next step. It is important to note that this filter will capture some cases which look like comparison instances at the surface level, but which are not so semantically (e.g., ( The sentences used for annotation play a significant role in the diversity and comprehensiveness of the comparison structures represented in our dataset. Earlier work In order to augment the volume of review content, we also use the Movie Reviews dataset We trained six linguists to do the semantic annotation for comparison and ellipsis structures for the sampled comparison instances according to the framework presented in Section 3. The annotations were done via our interactive two-stage treebased annotation tool. In this tool, each annotator can be assigned with a set of tree-based annotation assignments, where pairing annotators to do the same task for inter-annotator analysis is also feasible. For this task, the annotations were done on top of constituency parse trees, and the annotators were instructed to choose the top-most constituency node when choosing the predicate or arguments. Our annotation tool sets up the data collection as a two-stage expert annotation process: (1) for each sentence, one expert annotates and submits the annotation, (2) another expert reviews the submission and either returns the submission with feedback or marks it as a gold. This recursive process ensures higher annotation quality. We iterate over the sentences until getting 100% interannotator agreement. 
On average, annotating every sentence takes about one minute and revising controversial sentences (12% of the time) takes about 4 minutes of expert annotation time. This process yields a total of 2,800 annotated sentences with 100% agreement. Figure In this Section we describe our methodology for joint prediction of comparison and ellipsis structure for a given sentence. We model the problem as a joint predicateargument prediction of comparison and ellipsis structures. It is important to note that our predicate-argument semantic structure itself looks similar to a dependency parse tree, however, as explained earlier, we base this representation on top of constituency parse trees. For each training sentence, we denote the underlying constituency tree as T . The set of all constituency nodes in T is V T . Each v ∈ V T can be tagged as a comparison predicate c ∈ C = {Comp, Sup, Eq, Exc, Ast} In Equation -Any sentence containing a word with POS tag equal to JJR, RBR, JJS, or RBS. -Any sentence containing a comparison morpheme such as more, most, less, enough, too. The next step is to define the probability distribution in Equation pA e (ae|c, e, v, T, θa e ) ∝ exp(f A E (e, c, T ) T θa e ) (4) In each of the above equations, f is the corresponding feature function. For predicates the main features are lexical features, bigram features, node's constituency position, node's minimum distance from leaves, and node's parent constituency label. For the arguments, we use the same feature-set as for the predicates, but also including the leftmost verb (for the case of copulas), the constituency path between argument and the predicate, and the predicate type. θ C , θ E , θ ac and θ ae are the parameters of the log-linear model. We calculate these parameters using Stochastic Gradient Descent algorithm. For inference we model the problem as a structured prediction task. Given the syntactic tree of a given sentence, for each node we first select the predicate type with the highest p C . Then for each selected comparison predicate, we find the corresponding ellipsis predicate that has the highest p E probability. Define tc, te ∈ R, where R is the set of all tuples of corresponding comparison and ellipsis predicates, tc is the index of the comparison predicate and te is the index of the ellipsis predicate. We tackle the problem of argument assignment by using Integer Linear Programming, where one can pose domain-specific knowledge as constraints. We define a binary variable b ij and b ik where i is the a node in tree, j is a comparison argument label and k is a ellipsis argument label. For each tc, te , we maximize the linear Equation ILP Constraints: Any specific comparison label calls for a unique set of constraints in the ILP formulation, which ensures the validity of predictions. For instance, the Superlative predicate type never takes any Ground arguments, or the argument Standard is only applicable to the excessive predicate type. We implement the semantic frame (as listed in Table We incorporate a few other ILP constraints for encoding our knowledge regarding ellipsis structures as well as comparison. For more details of these knowledge-driven constraints please refer to the supplementary material. We divide our dataset into train and train-dev (70%), and test (30%) sets. 
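To make the ILP argument-assignment step concrete, a minimal sketch follows (assuming the PuLP solver; the node names, label set, and score table standing in for the trained log-linear scores are all hypothetical, and only two of the frame constraints are shown).

```python
import pulp

def assign_arguments(nodes, labels, score, predicate_type):
    """Pick at most one argument label per constituency node so that the summed
    model scores are maximal, subject to frame constraints (illustrative only)."""
    prob = pulp.LpProblem("argument_assignment", pulp.LpMaximize)
    b = {(i, j): pulp.LpVariable(f"b_{i}_{j}", cat="Binary")
         for i in nodes for j in labels}
    # objective: total score of the selected (node, argument-label) pairs
    prob += pulp.lpSum(score[i, j] * b[i, j] for i in nodes for j in labels)
    for i in nodes:                                        # at most one label per node
        prob += pulp.lpSum(b[i, j] for j in labels) <= 1
    if predicate_type == "Sup" and "Ground" in labels:     # superlatives take no Ground
        prob += pulp.lpSum(b[i, "Ground"] for i in nodes) == 0
    if predicate_type != "Exc" and "Std" in labels:        # Standard only for excessives
        prob += pulp.lpSum(b[i, "Std"] for i in nodes) == 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j), var in b.items() if var.value() > 0.5]

nodes = ["NP_1", "NP_2", "PP_3"]                  # hypothetical constituency nodes
labels = ["Figure", "Ground", "Scale", "Std"]     # illustrative argument labels
score = {(i, j): 0.1 for i in nodes for j in labels}
score[("NP_1", "Figure")], score[("NP_2", "Ground")] = 2.0, 1.5
print(assign_arguments(nodes, labels, score, predicate_type="Sup"))
```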
For evaluation of a given system prediction against the reference gold annotation, for each constituency node in the reference, we give the system a point in two ways: (1) Exact: the label assigned to the node by the system exactly matches the gold label; (2) Head: the reference label matches the label of the head word of the node in system's prediction. We report on Precision (P), Recall (R) and F1 score. We test three models: our comprehensive ILP model (detailed in Section 5), our model without the ILP constraints, and a rule-based baseline. The baseline encodes the same linguistically motivated ILP constraints via rules. It further uses a few pattern extraction functions for pinpointing comparison morphemes which detect comparison and ellipsis predicates. More details about the baseline can be found in the supplementary material. The results on predicate prediction is shown in Table Systems that can understand comparison and make inferences about how entities and events compare in natural language are crucial for various NLP applications, ranging from question answering to product review analysis. Having a comprehensive semantic framework which can represent the underlying meaning of comparison structures is the first step toward enabling such an inference. In this paper we introduced a novel semantic framework for jointly capturing the meaning of comparison and ellipsis constructions. We modeled the problem as inter-connected predicateargument prediction. Based on this framework, we trained experts to annotate a dataset of ellipsis and comparison structures, which we are making publicly available 11 . Furthermore, we introduced 11 In order to access the dataset and our interactive twostage tree-based annotation tool please refer to In future, we are planning on improving our joint prediction models for further improving the performance. Moreover, we plan on using our semantic framework for text comprehension applications. Elliptical constructions involve the omission of one or more phrases from a clause, while the content can still be understood from the rest of the sentence (13) The steak sizzled more appetizingly than the hamburger ∆. (14) appetizingness(e1) > appetizingness(e2) In event semantics, sentences like ( In the In comparatives with more/-er, and equatives with as, how the 'scale' is introduced in the dependent clause differs according to the major part of speech of the comparison structure. For adjectival and adverbial comparisons (taller, as quickly), the scale is provided by those categories (height, appetizingness) and the null operator is simply how. For nominal and verbal comparisons (more rice, sizzle as much), much introduces a variable scale (µ), and the null operator is called how-much. In addition to the major characteristics pointed out in the paper, our framework improves on the following issues as compared with Bakhshandeh and Allen -While we also model comparison structures as predicate-argument pairs, we do not use additional semantic role links. We retain all semantic information on predicate and argument types, which results in better semantic generalization across all predicates (Section 3). -We categorize arguments into semantic frames associated with each predicate type. This enables addressing complex cases such as 'copulas' (Section 3.1.2) which play a crucial role in asserting properties about entities. 
Furthermore, we introduce a more comprehensive set of argument types which more accurately capture the syntactic and semantic properties of various predicate types. We implemented a rule-based baseline for predicate-argument structure prediction. This model mainly uses POS and lexical wording rules for predicate prediction. For example, we have the
Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts
This work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information. We adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations to the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases.
The task of Legal Judgment Prediction (LJP) has recently gained increasing attention in the legal and mainstream NLP communities In this work, we focus on LJP for the European Court of Human Rights (ECtHR), which adjudicates complaints by individuals against states about alleged violations of their rights as enshrined in the European Convention of Human Rights. We trained deep neural models on four tasks across two existing, related datasets To improve the alignment of model focus with legal expert understanding, we apply a series of deconfounding measures, including a vocabularybased method which identifies predictive tokens using a simple model. The third author, who is an ECtHR expert, then identifies distractors among them. The distracting signal can subsequently be removed from the encodings via adversarial training. This procedure is an effective way of engaging with domain experts and obtaining information about what the model should be steered away from by means of deconfounding, rather than trying to attract the model towards relevant elements via expensive data collection for supervised training. For simplicity, throughout this paper, we use 'deconfounding' in an inclusive sense as the mitigation of distracting effects of (a) confounders in the statistical sense that influence both the dependent and independent variables, (b) reverse causation relationships, and (c) other attributes that spuriously correlate with the target variable. See Fig. We evaluate our trained and deconfounded models with regard to an alignment of its explanation rationales with (1) a dataset of expert passage relevance assessments we collected and will make available to community as a supplement to In sum, we make the following contributions: • We introduce an expert-informed deconfounding method which identifies distracting effects from confounders and spurious correlations using a simple model, and mitigates them through adversarial training, thus helping to improve the alignment of the model focus with legal expert rationales. • We empirically evaluate this method on four tasks in legal judgment prediction on ECtHR data and show that our model consistently aligns better with expert rationales than a baseline trained for the prediction target only. • We release a set of gold rationales annotated by an ECtHR expert as a supplement to an existing dataset to facilitate future work on deriving more useful insight from trained predictive systems in the legal domain. *
LJP as an NLP task has been tackled using ngram representations (e.g., The ECtHR has been the subject of substantial prior work in LJP. We use two datasets for model training and evaluation: First, for binary violation we use the dataset by We conduct experiments on four LJP tasks: Task J -Binary Violation For our task J, the model is given a fact statement and is asked to predict whether or not any article of the convention has been violated. We train our models on In order to facilitate model alignment, we worked with our ECtHR expert to identify shallow prediction signals in the fact statements that are unrelated to the legal merits of the complaint. For the task J dataset of We also observe in the task J dataset that the magnitudes of the running paragraph numbers differ between the classes, and that the single word "represented" strongly correlates with the positive class. This phenomenon arises because 2.6k of the 7k training cases are 'inadmissible' cases labeled as 'non-violation'. Legally, inadmissible cases are not necessarily 'non-violation' as inadmissibility relates to complaints not fulfilling the court's formal or procedural criteria. * In such cases, the court does not examine the merits of the application. The more interesting non-violation cases are such that are admissible, but in which no violation of the convention has been found. The single negative class contains instances of both inadmissible and admissible-but-no-violation-found cases. As explained above, the input texts of the same formulaic sentence stating the applicant's name, nationality, and legal representation. This specific sentence is absent from the texts of admissible cases (violation and non-violation), where that information is part of a separate PROCEDURE section not included in the dataset. Moreover, due to the PROCEDURE section preceding the FACTS section in admissible cases, the running paragraph numbers appearing in FACTS sections of inadmissible cases are smaller than those of the admissible cases. If not remedied, these phenomena provide a considerable predictive signal for the label and distract the system from legally relevant information. In our experiments, we hence remove paragraph numbers from the input via preprocessing and account for distractor vocabulary via our deconfounding procedure described in Sec. 4. Still, the nature of task J remains unchanged and requires the system to classify the outcomes of a collection of both admissible and inadmissible cases. By contrast, the more recent LexGLUE dataset only contains admissible cases and corresponding information about which articles the claimant has alleged to have been violated (for task B) along with those that the court has found to have been violated, if any (task A). The collection covers 10 different convention articles that make up the largest share of ECtHR jurisprudence. Each article has been alleged in a partition of the cases, and has been found to be violated in a subset of these. * For a given article in task B, all cases in which it has been alleged can be considered positive instances while the remaining cases are negatives. We consider task B as akin to topic classification, where the rights enshrined in the convention articles (e.g., Art. 6: right to a fair trial; Art. 1 Protocol 1: protection of property, etc.) may correlate with certain case fact language (e.g., related to law enforcement or expropriation, respectively). 
Task A incorporates this step and adds violation prediction per article, which is more difficult in principle. However, we observe that a few articles account for a large portion of the data and the conditional probability of a positive violation label in task A given its allegation labels from task B can be very high (see App. B). This makes an analysis of what trained models focus on more difficult, since they may learn to identify these dominant articles with high conditional violation probability, and be distracted from focusing on information that specifically signals violations of those articles. To remedy this, we propose task A|B that provides models an easy access to the label information of B, facilitating their focus only on determining whether the court finds a violation of given articles. This task is realistic since the allegations by the claimant are known to the court at the time that it decides whether the respondent state has violated the convention in the case. We apply an expert-informed deconfounding method designed to mitigate the distracting effects of confounding elements and spurious correlations. As Confounding effects and spurious information in LJP may not be known ahead of time, especially if the legal decision is not made on the basis of an immutable a priori document, but rather on the basis of text that is technically a part of the eventual judgment. Our expert-informed method is intended to mitigate such situations where spurious correlations are introduced in the text production but may not be known in advance as explicit confounders. Our method consists of two steps: (i) Identification of distracting attributes for deconfounding through a combination of simple model training and minimal expert markup, and (ii) mitigation of these effects through adversarial training. We first identify input attributes and categorize them as either distracting or genuinely legally relevant in an expert consultation. 'Distracting' attributes are highly correlated with the task label but not relevant in a human expert prediction. Attributes can be either (i) explicit in the text (such as vocabulary tokens) or (ii) implicit (e.g., country, text length, etc.). Implicit attributes can be derived from available metadata or a corpus analysis. For textual attributes, we apply depth-limited decision trees on an n-gram representation of the fact statement to predict the case outcome. We extract all tokens that appear in the trees and iterate, successively removing tokens identified as predictive. Compared to extracting tokens from a single larger tree, this process is better suited to remove high-entropy-reducing tokens one typically finds near the root of trees. The list of removed tokens is then presented to a legal expert, who categorizes them into spurious and legally genuine (see Appendix Sec. F for the list of spurious vocabulary identified by the expert and the rationale behind the choices). This requires substantially less effort from the expert compared to other methods, such as data annotation or manual creation of counterfactuals. To prevent trees from picking up very sparse tokens, we filter the extracted terms using local mutual information (LMI) We assume a neural NLP model M consisting of a feature extractor F and classifier C with parameters θ f and θ c , respectively. For each confounder k, we apply a discriminator D k with parameters θ d K to the feature extractors. 
We use adversarial training to maximize the feature extractor's ability to capture information for the main classification target while minimizing its ability to predict the value of distractor attributes. This encourages the model to generate distractor-invariant feature representation for the classifier. We use the following adversarial training objective: (1) (2) where L represents the loss, λ is a hyperparameter, x is the input, y c is the label, and y k is the distracting attribute k. The above optimization is performed using a gradient reversal layer (GRL) (3) We hypothesize that learning distractor-invariant feature representations through adversarial learning will help the model to focus on parts of the input that experts consider relevant. In this section we describe our experiments in using our proposed deconfounding methodology to improve the alignment of model focus on the input with expert rationales on our set of LJP tasks. Baseline: We use the BERT variant of Hierarchical Attention Networks Our main objective is to evaluate the alignment of the model's focus on the input text with legal expert rationales (i.e., selected subsets of relevant segments of the input). Following We use integrated gradients We also report the models' performance on the main four LJP tasks. For Task J, we report the macro F1-score for binary violation prediction. For Task A and B, following Table Expert Scores: We sample 40 cases from task A|B validation and test sets (see App. Sec. D). We provide the expert with randomized visualizations of IG scores at the token level derived from our paraRem and gradAll models. Following An inspection of high scored tokens in paraRem reveals that many of them are highly discriminative in our decision tree models, showing that complex neural models can easily fall for distractors at the expense of missing equally predictive but semantically more complex signals. This reinforces our paradigm to identify discriminative tokens using a simpler model and subject them to expert scrutiny. In particular, we found that the word "represented" forms a natural decoy and, when injected into a violation-outcome fact statement, flips the predicted label of trained deep neural models. This led us to believe those models rely more on individual words than one might expect, and motivated us to explore how this can be exploited with information derived from simple models. Figure In paraRem, we further observe that tokens at the start of sentences receive higher IG scores. We believe this to be the model counting sentences, which justifies deconfounding for length. For gradAll, we observe that sentence beginnings still receive focus, but less strongly so. This may be due to BERT recognizing sentence boundaries. Further alignment improvement: The overall low precision@Oracle scores show that considerable differences in alignment with human experts remain. We conjecture that the model is shifting its focus, at least in part, to other spurious attributes which our current setup could not reveal. This calls for further investigation to design effective methods to identify such patterns. However, we expect them to be increasingly subtle and difficult to recognize, potentially even for legal experts. An intuitive upper bound for the system would be the annotation agreement of multiple experts, which to the best of our knowledge remains unexplored in the current state of the art. 
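To ground the simple-model harvesting paradigm referenced above (Sec. 4.1), the following is a minimal sketch (illustrative only, not the released code) of collecting predictive tokens with iterated depth-limited decision trees over unigram counts; the LMI-based sparsity filtering and the subsequent expert review are omitted, and scikit-learn is assumed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

def harvest_predictive_tokens(texts, labels, rounds=5, depth=3):
    """Fit a depth-limited decision tree on unigram counts, record every token
    the tree splits on, remove those tokens from the vocabulary, and repeat.
    The harvested list is what a domain expert would then partition into
    spurious and genuinely relevant vocabulary."""
    banned, harvested = set(), []
    for _ in range(rounds):
        vec = CountVectorizer(stop_words=list(banned) if banned else None)
        X = vec.fit_transform(texts)
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, labels)
        vocab = vec.get_feature_names_out()
        used = {vocab[f] for f in tree.tree_.feature if f >= 0}  # negative ids mark leaves
        if not used:
            break
        harvested.extend(sorted(used))
        banned |= used
    return harvested

# toy illustration with made-up fact snippets and outcome labels
texts = ["the applicant was represented by a lawyer",
         "the applicant complained about prison conditions",
         "the applicant was represented before the court",
         "the prison cell was overcrowded"]
labels = [0, 0, 1, 1]
print(harvest_predictive_tokens(texts, labels, rounds=2))
```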
Expert Pattern Identification: Our results naturally raise the question of how distractors can be identified in ECtHR fact texts by experts. Generally, the patterns we focused on affect the relationship between the argumentation in the judgment and the supportive facts given. There is copious literature on the court's inconsistent approach to and it is known to switch between judicial policies depending on case circumstances (Helfer and Voeten, 2020). We hence paid attention to specific markers in the fact section and correlated them to existing precedents and argumentation patterns. A few examples: The court may decide to make use of positive obligations and decide against the state (violation) by highlighting failures of national authorities, or may decide to use those same positive obligations under 'the responsible authorities' doctrine, highlighting the efforts of national authorities to bring domestic legislation in line with the convention, thus deciding that there has been no violation. There are also fact patterns and practices specific to particular state parties to the convention (e.g., prison overcrowding, procedural issues in child abduction cases). The court may also sometimes highlight specific facts of a case with the view to 'document' its resemblance to, or divergence from, an existing precedent. A detailed, legally informed case study on predictive patterns is beyond the scope of this work. In order to produce value for legal practice, we believe that LJP/LJF as an NLP task should strive for a productive combination of expert knowledge with data-derived insight. Based on our results, we formulate the following recommendations: First, as has already been observed in the field, any prediction/classification should happen from suitable source text that does not encode information about the outcome but contains as complete factual information as possible, or at least control for this influence. Second, straightforward predictors (e.g., input length and shallow unigram models) should be used to identify distractors and confounders. Third, claimed performance levels in predicting case outcomes should be contextualized by information about the distribution of the legal issues and respective conditional outcome probabilities in the corpus, as well as against baseline classifiers capable of exploiting known distractors. Fourth, more granular outcome variable information (e.g., case declared inadmissible vs. case dismissed on the merits, decomposition into outcomes of individual issues) will allow the development of more nuanced prediction/classification systems. Taken together, if such models can be explained and integrated into a decision support system for suitable tasks in legal practice, experts will be more likely to perceive them as adding value. Our results show that our deconfounded LJP models are consistently better aligned with expert rationales than a baseline optimized for the target label only, and in many cases can even achieve better prediction performance. However, the improvement is small and the paragraphs focused on by all our models are still quite different from what an expert has annotated as relevant, as indicated by generally low precision@Oracle scores (<50%). Still, our quantitative results show that expert-informed deconfounding LJP works in principle and can potentially go a long way to train more robust and trustworthy neural LJP models, as well as derive more useful legal insight from them. 
We present a case study in deconfounding legal judgment prediction on the ECtHR, and all results are to be understood as relative to the ECtHR, its jurisprudence, the used datasets, and the formal tasks. The distracting attributes we identify include confounding effects of the court's document production, where the decision may be known before the decision text (including the fact section) is finalized. A replication of this study in other LJP settings is of course warranted before general applicability can be claimed. Our analysis of task B has further revealed that redundant vocabulary distribution can challenge the system's ability to point out individual 'smoking gun' distracting tokens. This aspect is particularly complex in light of differing legal systems and their respective cultures and patterns of drafting texts that may form the basis of predictive or, more generally, assistive systems. Morphologically rich languages, where distracting signal may be spread across multiple tokens, may make this challenge more difficult and require stem-or lemma-based processing as part of the method. Our deconfounding method is work-intensive and assumes the identifiability of distracting information in text and metadata by an expert. Legal expert agreement about what parts of decisions are relevant remains underexplored, and the division of genuine versus spurious language may also vary in between multiple experts. While we are convinced that further research on effective deconfounding of legal NLP systems is needed if these systems are to become robust and trustworthy, the time-intensive nature of collaboratively developing and qualitatively evaluating such models with legal experts poses a considerable resource challenge. A technical difficulty in working with legal documents is their length, and the use of packet-based hierarchical models constrains the maximum distance across which tokens can directly attend to one another. The impact of this limitation on model performance in various types of tasks is the subject of ongoing exploratory work (e.g., The research presented here works exclusively with publicly available datasets of ECtHR decisions, which are based on HUDOC * , the public database of the Court. While these decisions are not anonymized and contain the real names of individuals involved, our work does not engage with the data in a way that we consider harmful beyond this availability. Our models are designed to be used with pretrained language models and hence inherit any bi-* The task of legal judgment prediction raises ethical concerns, both general as well as specific to the European Court of Human Rights. All models of this project were developed and trained on Google Colab. Our models adapted pretrained language models and we did not engage in any training of such large models from scratch. We did not track computation hours. Table Table C Rational annotation Process for Task J We sampled 50 cases (25 each) from the validation and test split. In each split, we sample two cases for each of the ten violated articles, one containing the token 'represented' and one without, along with five inadmissible cases. While the article information is available in the task J dataset, we do not use it as it was introduced as a binary violation classification task. The rationale annotation process was done using the GLOSS annotation tool. 
The third author of this paper, who is an ECtHR expert, read the case fact statements and highlighted paragraphs which she considered indicative of an eventual finding of a violation for any convention article by the court. Despite our sampling involving randomness, the expert was already familiar with a considerable portion of the decisions. Given this, we abstained from producing a human expert outcome prediction baseline. For the qualitative evaluation of Task A|B, we sample 40 cases (20 each) from validation and test split. In each split, we sample two cases for each of the ten allegedly violated articles, one with a finding of a convention violation and with a non-violation finding. Figure Following is the spurious vocabulary we obtained with respect to each task. • Task J: represented, national, mr, summarised, practising, lawyer, agent, paragraph The words were chosen as relevant or irrelevant by using the daily vocabulary of a human rights lawyer working at the ECtHR as a reference. A word was considered legally relevant if, taken individually, it could be introduced into legal reasoning. For instance, the word "religious" was spurious because taken individually it says nothing about the content of a norm. One may talk about religious freedom, but the legally relevant word there is freedom. Article 9 mentions religion, but restrictions related to religion may also be present under Article 8, 3, 2, 5, etc. Under the same Article 9 for instance, the court decides whether there has been a violation depending on criteria such as tolerance, pluralism, etc. It is those criteria that are relevant whereas "religion" is not by itself relevant as a part of the legal reasoning. We calculate LMI for each pair of token t and label y as follows: where count(t, y) denotes the co-occurrence of t and label y, and |D| is the number of unique words in the training set. In the case of binary classification (task J) and one-vs-one multi-label classification (task A|B), we calculate the LMI score for a token as the absolute difference between LMI scores for both positive and negative labels, as both the labels represent a particular class. In one-vs-rest (tasks A, B), we simply take the difference between LMI scores for both positive and negative labels (rather than absolute difference) as the negative label does not specifically represent a particular class. Finally, we calculate the z-score statistic of the effective LMI score for each token to identify significant tokens. Spurious token identification: We train a series of decision trees of depth 3 to assemble lists of predictive tokens for expert filtering. The feature vector consists of whitespace-tokenized unigrams reduced by the LMI filtering explained above. For task J, this means training trees that predict the binary violation label. For task A and B we employ a one-vs-rest classification to produce one decision tree series per article. For task A|B we provided the task B labels (allegedly violated articles) in onevs-one fashion per article, with positive instances being facts where that particular article was deemed violated, and negatives where that particular article was merely alleged but not deemed violated. LJP models: Our models compute BERT-based word embeddings of size 768. Our word level attention context vector size is 300. The sentence level GRU encoder dimension is 200, thus giving a bidirectional embedding of size 400, and a sentence level attention vector dimension of 200. 
The final dense classifier for all tasks has 100 hidden units. The output dimension is 1 for task J and 10 for the other tasks (i.e., one per convention article). For task A|B, we concatenate a multi-hot 10-element feature vector containing the task B labels to the output of the feature extractor before it is passed to the classifier. All discriminators (country, length, and vocabulary) are built as analogous classifiers with a hidden dimension of 100 and output layer dimensions as required by each of them. We use a mini-batch size of 8 for task J and 16 for all other tasks. The model is optimized end-to-end using Adam.
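For concreteness, below is a minimal PyTorch-style sketch of the gradient reversal layer and the combined objective of Sec. 4.2 (illustrative only, with assumed shapes and names; cross-entropy is used throughout for brevity, although the actual tasks mix binary and multi-label objectives).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, gradient
    multiplied by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def deconfounded_loss(features, y_main, distractor_targets,
                      classifier, discriminators, lam=1.0):
    """Main-task loss plus one discriminator loss per distracting attribute,
    with gradients reversed into the feature extractor so that it is pushed
    towards distractor-invariant representations."""
    ce = nn.CrossEntropyLoss()
    loss = ce(classifier(features), y_main)
    for disc, y_k in zip(discriminators, distractor_targets):
        loss = loss + ce(disc(GradReverse.apply(features, lam)), y_k)
    return loss

# toy usage: 768-d features, 10-class main task, one 2-class distractor
feats = torch.randn(4, 768, requires_grad=True)
clf, disc = nn.Linear(768, 10), nn.Linear(768, 2)
loss = deconfounded_loss(feats, torch.tensor([0, 1, 2, 3]),
                         [torch.tensor([0, 1, 0, 1])], clf, [disc])
loss.backward()
```

The single reversed-gradient loss corresponds to the min-max objective with weight λ described above, realized with one optimizer over the whole model.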
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
Existing KBQA approaches, despite achieving strong performance on i.i.d. test data, often struggle in generalizing to questions involving unseen KB schema items. Prior ranking-based approaches have shown some success in generalization, but suffer from the coverage issue. We present RnG-KBQA, a Rank-and-Generate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability. Our approach first uses a contrastive ranker to rank a set of candidate logical forms obtained by searching over the knowledge graph. It then introduces a tailored generation model conditioned on the question and the top-ranked candidates to compose the final logical form. We achieve new state-of-the-art results on GRAILQA and WEBQSP datasets. In particular, our method surpasses the prior state-of-the-art by a large margin on the GRAILQA leaderboard. In addition, RnG-KBQA outperforms all prior approaches on the popular WEBQSP benchmark, even including the ones that use the oracle entity linking. The experimental results demonstrate the effectiveness of the interplay between ranking and generation, which leads to the superior performance of our proposed approach across all settings with especially strong improvements in zero-shot generalization. * Work done during internship at Salesforce Research. Code available at
Modern knowledge bases (KB) are reliable sources of a huge amount of world knowledge but can be difficult to interact with since they are extremely large in scale and require specific query languages (e.g., Sparql) to access. Question Answering over Knowledge Base (KBQA) serves as a user-friendly way to query over KBs and has garnered increasing attention achieving strong results on several public benchmarks that contain i.i.d. train and test distribution such as SIMPLEQ We propose RNG-KBQA, a new framework targeted at generalization problems in the task of KBQA. Our approach combines a ranker with a generator, which addresses the coverage issue in ranking-only based approaches while still benefiting from their generalization power. As shown in Figure We base both our ranker and generator on pretrained language models for better generalization capability. Unlike prior systems which rank candidates using a grammar-based parser We test RNG-KBQA on two datasets, GRAILQA and WEBQSP, and compare against an array of strong baselines. On GRAILQA, a challenging dataset focused on generalization in KBQA, our approach sets the new state-of-the-art performance of 68.8 exact match 74.4 F1 score, surpassing prior SOTA (58.1 exact match and 65.3 F1 score) by a large margin. On the popular WEBQSP dataset, RNG-KBQA also outperforms the best prior approach (QGG 2 Generation Augmented KBQA
A knowledge base collects knowledge data stored in the form of subject-relation-object triple (s, r, o), where s is an entity, r is a binary relation, and o can be entities or literals (e.g., date time, integer values, etc.). Let the question be x, our task is to obtain a logical form y that can be executed over the knowledge base to yield the final answer. Following Enumeration of Candidates Recall that our approach first uses a ranker model to score a list of candidate logical forms C = {c i } m i=1 obtained via enumeration. We'll first introduce how to enumerate the candidates before delving into the details of our ranking and generation models. We start from every entity detected in the question and query the knowledge base for paths reachable within two hops. Next, we write down an s-expression corresponding to each of the paths, which constitutes a set of candidates. We note that we do not exhaust all the possible compositions when enumerating (e.g., we do not include comparative operations and argmin/max operations), and hence does not guarantee to cover the target s-expression. A more comprehensive enumeration method is possible but will introduce a prohibitively large number (greater than 2,000,000 for some queries) of candidates. Therefore, it's impractical to cover every possible logical form when enumerating, and we seek to tackle this issue via our tailored generation model. Our ranker model learns to score each candidate logical form by maximizing the similarity between question and ground truth logical form while minimizing the similarities between the question and the negative logical forms (Figure where BERTCLS denotes the [CLS] representation of the concatenated input; LINEAR is a projection layer reducing the representation to a scalar similarity score. The ranker is then optimized to minimize the following loss function: L ranker = -e s(x,y) e s(x,y) + c∈C∧c =y e s(x,c) (1) where the idea is to promote the ground truth logical form while penalizing the negative ones via a contrastive objective. In contrast, the ranker employed in past work Due to the large number of candidates and limited GPU memory, it is impractical to feed all the candidates c ∈ C as in Eq (1) when training the ranker. Therefore, we need to sample a subset of negatives logical forms C ⊂ C at each batch. A naive way for sampling negative logical forms is to draw random samples. However, because the number of candidates is often large compared to the allowed size of negative samples in each batch, it may not be possible to cover spurious logical forms within the randomly selected samples. We propose to sample negative logical forms by bootstrapping, inspired by the negative sampling methods used in Having a ranked list of candidates, we introduce a generation model to compose the final logical form conditioned on the question and the top-k logical forms. Our generator is a transformer-based seqto-seq model Execution-Augmented Inference We use a vanilla T5 generation model without syntactic constraints, which does not guarantee the syntactic correctness nor executability of the produced logical forms. Therefore, we use an execution-augmented inference procedure, which is commonly used in prior semantic parsing related work Our ranking model is mainly proposed for the task of ranking candidate logical forms. Here, we introduce a simple way to adapt our ranking model for the task of entity disambiguation. 
A common paradigm of finding KB entities referred in a question is to first detect the entity mentions with an NER system and then run fuzzy matching based on the surface forms. This paradigm has been employed in various methods One problem with this paradigm lies in entity disambiguation: a mention usually matches surface forms of more than one entities in the KB. A common way to disambiguate the matched entities is to choose the most popular one according to the popularity score provided by FACC1 project However, it is possible to leverage the relation information linked with an entity to further help assess if it matches a mention in the question. By querying relations over KB, we see there is a relation about mv director mv.directed_by linking to m.0mxqqt24, but there are no such kind of relations connected with m.02rhrjd. We therefore cast the disambiguation problem to an entity ranking problem, and adapt the ranking model used before to tackle this problem. Given a mention, we concatenate the question with the relations for each entity candidate matching the mention. We reuse the same model architecture and loss function as in Section 2.2 to train another entity disambiguation model to further improve the ranking of the target entity. We apply our entity disambiguation model on GRAILQA, and achieve substantial improvements in terms of entity linking. We mainly test our approach on GRAILQA GRAILQA is the first dataset that evaluates the zero-shot generalization. Specifically, GRAILQA contains 64,331 questions in total and carefully splits the data so as to evaluate three levels of generalization in the task of KBQA, including i.i.d. setting, compositional generalization to unseen composition, and zero-shot generalization to unseen KB schema (examples in Figure We link an entity mention to an entity node in KB using our approach described in Section 2.4. We first use a BERT-NER systems provided by the authors of GRAILQA to detect mention spans in the question. For each mention span, we match the span with surface forms in FACC1 project When training the ranker, we sample 96 negative candidates using the strategy described in Section 2.2. Our ranker is finetuned from BERTbase-uncased for 3 epochs using a learning rate of 1e-5 and a batch size of 8. We do bootstrapping after every epoch. It is also noteworthy that we perform teacher-forcing when training the ranker, i.e., we use ground truth entity linking for training. We base our generation model on T5-base (Raffel et al., 2020). We use top-5 candidates returned by the ranker and finetune for 10 epochs using a learning rate of 3e-5 and a batch size of 8. Metrics For GRAILQA, we use exact match (EX) and F1 score (F1) as the metrics, all of which are computed using official evaluation script. Results Table Furthermore, RNG-KBQA performs generally well for all three levels of generalization and is particularly strong in zero-shot setting. Our approach is slightly better than ReTrack and substantially better than all the other approaches in i.i.d. setting and compositional setting. However, ReTrack fails in generalizing to unseen KB Schema items and only achieves poor performance in zero-shot setting, whereas our approach is generalizable and beats ReTrack with a margin of 16.1 F1. To directly compare the effectiveness of our rankand-generate framework against rank-only baseline (BERT Ranking), we also provide the performance of a variant of RNG-KBQA without the entitydisambiguation model. 
In this variant, we directly use the entity linking results provided by the authors of WEBQSP is a popular dataset which evaluates KBQA approaches in i.i.d. setting. It contains 4,937 question in total and requires reasoning chains with up to 2 hops. Since there is no official development split, we randomly sample 200 examples from the training set for validation. Implementation Detail For experiments on WE-BQSP, we use ELQ Topic Units Our approach achieves the new state-of-the-art performance (75.6 F1) with a discernible margin over the performance of best prior method (74.0 F1 obtained by QGG). Our approach even outperforms a number of prior work using oracle entity linking annotations. ing BERT-base-uncased, and the generator using T5-base. We also sample 96 negative candidates for each question, and feed the top-5 candidates to the generation model. The ranker is trained for 10 epochs and we run bootstrapping every 2 epochs; the generator is trained for 20 epochs. Metrics F1 is used as the main evaluation metric. In addition, for approaches that are able to select entity sets as answers, we report the exact match (EM) used in the official evaluation. For informationretrieval based approaches that can only predict a single entity, we report Hits @1 (if the predicted entity is in the ground truth entity set), which is considered as a loose approximation of EM. For baseline approaches, we directly take the results reported in corresponding original paper. As shown in Table Ablation Study We first compare the performance of our full model against incomplete ablations in Table To test the effects of our generation step, we compare the performance of a ranking-only variant (directly using the top-ranked candidate) against the performance of the full model. As shown in Table We additionally evaluate the performance of a ranking model trained without bootstrapping strategy introduced in Section 2.2. The performance of this variant lags its counterpart by 1.2 and 1.4 on GRAILQA and WEBQSP, respectively. The bootstrapping strategy is indeed helpful for training the ranker to better distinguish spurious candidates. benefit of adding a generation stage on top of the ranking step on previous result sections. Here, we present a more detailed comparison between the outputs of ranking model and generation model. Figure • top right: the top generation prediction is better, • bottom left: the top ranking prediction is better, • bottom right: they both fail (achieving a 0 F1). The generator retains the ranking predictions without any modifications for most of the time. For 4.7% and 8.9% of the questions from GRAILQA and WEBQSP, respectively, the generator is able to fix the top-ranked candidates and improves the performance. Although generator can make mistakes in non-negligible fraction of examples on WEBQSP, it is mostly caused by introducing false constraints (e.g., Figure We also show the break down by types of generalization on GRAILQA (bottom row in Figure Executability We use executability to further measure the quality of generated outputs. Figure logical forms) and valid rate (producing a logical form that yields non-empty answer) among the topk decoded list. Nearly all the top-1 logical forms are executable. This suggests that the generation model can indeed produce high-quality predictions in terms of syntactic correctness and consistency with KB. As the beam size increases, more valid logical forms can be found in the top-k list, which our inference procedure can benefit from. 
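As a concrete reading of the execution-augmented inference discussed above, the sketch below scans the generator's beam in order and returns the first logical form that executes against the KB with a non-empty answer. The fallback to the top-ranked candidate and the `execute` stand-in are assumptions for illustration.

```python
# Minimal sketch of execution-augmented inference: scan the generator's beam in
# order and return the first logical form that executes against the KB and
# yields a non-empty answer; otherwise fall back to the top-ranked candidate.
# `execute` is a stand-in for querying the knowledge base.
from typing import Callable, List, Optional


def execution_augmented_inference(beam: List[str],
                                  top_ranked_candidate: str,
                                  execute: Callable[[str], Optional[list]]) -> str:
    for logical_form in beam:            # beam is ordered by generation score
        try:
            answer = execute(logical_form)
        except Exception:                # not executable (syntax / schema error)
            continue
        if answer:                       # executable and non-empty result
            return logical_form
    return top_ranked_candidate          # fallback when the whole beam fails


if __name__ == "__main__":
    toy_kb = {"(JOIN directed_by m.x)": ["m.spielberg"]}
    beam = ["(JOIN bad_relation m.x)", "(JOIN directed_by m.x)"]
    print(execution_augmented_inference(beam, beam[0], lambda lf: toy_kb.get(lf)))
```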
Output Examples of Ranking Model and Generation Model For more intuitive understanding of how the generator works, we attach several concrete examples (Figure As in example (c), the generation model makes a worse prediction sometimes because it prefers another prediction in the top-ranked list due to inherent ambiguity in the question. It can also fail when falsely adding a constraint which results in empty answer (d). KBQA is a promising technique for users to efficiently query over large KB, which has been extensively studied over the last decade. Past work has collected a series of datasets One line of KBQA approaches first constructs a query-specific subgraph with information retrieved from the KB and then rank entity nodes to select top entities as the answer More closely related to our approach, another line answers a question by parsing it into an executable logical form in various representations, including lambda-DCS We have presented RNG-KBQA for question answering over knowledge base. RNG-KBQA consists of a ranking step and a generation step. Our ranker trained with iterative bootstrapping strategy can better distinguish correct logical forms from spurious ones than prior seq-to-seq ranker. Our generator can further remedy uncovered operations or implicitly mentioned constraints in the top-ranked logical forms. The experimental results on two datasets, GRAILQA and WEBQSP, suggest the strong performance of our approach: RNG-KBQA achieves new state-of-the-art performance on both datasets, and particularly outperforms prior methods in generalization setting by a large margin.
Rich bitext projection features for parse reranking
Many different types of features have been shown to improve accuracy in parse reranking. A class of features that thus far has not been considered is based on a projection of the syntactic structure of a translation of the text to be parsed. The intuition for using this type of bitext projection feature is that ambiguous structures in one language often correspond to unambiguous structures in another. We show that reranking based on bitext projection features increases parsing accuracy significantly.
Parallel text or bitext is an important knowledge source for solving many problems such as machine translation, cross-language information retrieval, and the projection of linguistic resources from one language to another. In this paper, we show that bitext-based features are effective in addressing another NLP problem, increasing the accuracy of statistical parsing. We pursue this approach for a number of reasons. First, one limiting factor for syntactic approaches to statistical machine translation is parse quality It is well known that different languages encode different types of grammatical information (agreement, case, tense etc.) and that what can be left unspecified in one language must be made explicit in another. This information can be used for syntactic disambiguation. However, it is surprisingly hard to do this well. We use parses and alignments that are automatically generated and hence imperfect. German parse quality is considered to be worse than English parse quality, and the annotation style is different, e.g., NP structure in German is flatter. We conduct our research in the framework of N-best parse reranking, but apply it to bitext and add only features based on syntactic projection from German to English. We test the idea that, generally, English parses with more isomorphism with respect to the projected German parse are better. The system takes as input (i) English sentences with a list of automatically generated syntactic parses, (ii) a translation of the English sentences into German, (iii) an automatically generated parse of the German translation, and (iv) an automatically generated word alignment. We achieve a significant improvement of 0.66 F 1 (absolute) on test data. The paper is organized as follows. Section 2 outlines our approach and section 3 introduces the model. Section 4 describes training and section 5 presents the data and experimental results. In section 6, we discuss previous work. Section 7 analyzes our results and section 8 concludes.
Consider the English sentence "He saw a baby and a woman who had gray hair". Suppose that the baseline parser generates two parses, containing the NPs shown in figures 1 and 2, respectively, and that the semantically more plausible second parse in figure 2 is correct. How can we determine that the second parse should be favored? Since we are parsing bitext, we can observe the German translation which is "Er sah ein Baby und eine Frau, die graue Haare hatte" (glossed: "he saw a baby and a woman, who gray hair had"). The singular verb in the subordinate clause ("hatte": "had") indicates that the subordinate S must be attached low to "woman" ("Frau") as shown in figure We follow In more detail, we take the 100 best English parses from the BitPar parser vergence between the German and English trees to try to rank the English trees which have less divergence higher. Our test set is 3718 sentences from the English Penn treebank Given a word alignment of the bitext, the system performs the following steps for each English sentence to be parsed: (i) run BitPar trained on English to generate 100best parses for the English sentence (ii) run BitPar trained on German to generate the 1-best parse for the German sentence (iii) calculate feature function values which measure different kinds of syntactic divergence (iv) apply a model that combines the feature function values to score each of the 100-best parses (v) pick the best parse according to the model We use a log-linear model to choose the best English parse. The feature functions are functions on the hypothesized English parse e, the German parse g, and the word alignment a, and they assign a score (varying between 0 and infinity) that measures syntactic divergence. The alignment of a sentence pair is a function that, for each English word, returns a set of German words that the English word is aligned with as shown here for the sentence pair from section 2: Er sah ein Baby und eine Frau , die graue Haare hatte He{1} saw{2} a{3} baby{4} and{5} a{6} woman{7} who{9} had{12} gray{10} hair{11} Feature function values are calculated either by taking the negative log of a probability, or by using a heuristic function which scales in a similar fash-ion Given a vector of weights λ, the best English parse ê can be found by solving eq. 2. The model is trained by finding the weight vector λ which maximizes accuracy (see section 4). (2) The basic idea behind our feature functions is that any constituent in a sentence should play approximately the same syntactic role and have a similar span as the corresponding constituent in a translation. If there is an obvious disagreement, it is probably caused by wrong attachment or other syntactic mistakes in parsing. Sometimes in translation the syntactic role of a given semantic constitutent changes; we assume that our model penalizes all hypothesized parses equally in this case. For the initial experiments, we used a set of 34 probabilistic and heuristic feature functions. BitParLogProb (the only monolingual feature) is the negative log probability assigned by BitPar to the English parse. If we set λ 1 = 1 and λ i = 0 for all i = 1 and evaluate eq. 2, we will select the parse ranked best by BitPar. In order to define our feature functions, we first introduce auxiliary functions operating on individual word positions or sets of word positions. Alignment functions take an alignment a as an argument. 
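Before the auxiliary functions are detailed below, here is a minimal sketch of the reranking decision in Eq. (2), under the assumption that the best parse is the one minimizing the λ-weighted sum of divergence feature values (recall that feature values, including BitParLogProb, grow with divergence). The feature functions and parse objects are placeholders.

```python
# Minimal sketch of the log-linear reranker in Eq. (2): each hypothesized English
# parse e is scored by a lambda-weighted combination of feature-function values
# f_i(e, g, a) measuring syntactic divergence from the German parse g under
# alignment a; the parse with the lowest weighted divergence is selected.
# The feature functions here are placeholders for BitParLogProb, CrdBin, etc.
from typing import Callable, List, Sequence


def rerank(parses: List[object],
           german_parse: object,
           alignment: object,
           features: Sequence[Callable[[object, object, object], float]],
           lam: Sequence[float]) -> object:
    def weighted_divergence(e: object) -> float:
        return sum(w * f(e, german_parse, alignment) for w, f in zip(lam, features))
    return min(parses, key=weighted_divergence)


if __name__ == "__main__":
    # two toy "parses" with pre-computed feature values (BitParLogProb, CrdBin)
    feats = [lambda e, g, a: e["logprob"], lambda e, g, a: e["crdbin"]]
    parses = [{"logprob": 42.0, "crdbin": 1.0}, {"logprob": 43.0, "crdbin": 0.0}]
    print(rerank(parses, None, None, feats, lam=[1.0, 5.0]))
```

In the toy example, the coordination violation outweighs the small difference in parser log probability, so the second hypothesis is selected.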
In the descriptions of these functions we omit a as it is held constant for a sentence pair (i.e., an English sentence and its German translation). f (i) returns the set of word positions of German words aligned with an English word at position i. f ′ (i) returns the leftmost word position of the German words aligned with an English word at position i, or zero if the English word is unaligned. f -1 (i) returns the set of positions of English words aligned with a German word at position i. f ′-1 (i) returns the leftmost word position of the English words aligned with a German word at position i, or zero if the German word is unaligned. We overload the above functions to allow the argument i to be a set, in which case union is used, for example, f (i) = ∪ j∈i f (j). Positions in a tree are denoted with integers. First, the POS tags are numbered from 1 to the length of the sentence (i.e., the same as the word positions). Constituents higher in the tree are also indexed using consecutive integers. We refer to the constituent that has been assigned index i in the tree t as "constituent i in tree t" or simply as "constituent i". The following functions have the English and German trees as an implicit argument; it should be obvious from the argument to the function whether the index i refers to the German tree or the English tree. When we say "constituents", we include nodes on the POS level of the tree. Our syntactic trees are annotated with a syntactic head for each constituent. Finally, the tag at position 0 is NULL. mid2sib(i) returns 0 if i is 0, returns 1 if i has exactly two siblings, one on the left of i and one on the right, and otherwise returns 0. head(i) returns the index of the head of i. The head of a POS tag is its own position. tag(i) returns the tag of i. left(i) returns the index of the leftmost sibling of i. right(i) returns the index of the rightmost sibling. up(i) returns the index of i's parent. ∆(i) returns the set of word positions covered by i. If i is a set, ∆ returns all word positions between the leftmost position covered by any constituent in the set and the rightmost position covered by any constituent in the set (inclusive). n(A) returns the size of the set A. c(A) returns the number of characters (including punctuation and excluding spaces) covered by the constituents in set A. π is 1 if π is true, and 0 otherwise. l and m are the lengths in words of the English and German sentences, respectively. Feature CrdBin counts binary events involving the heads of coordinated phrases. If in the English parse we have a coordination where the English CC is aligned only with a German KON, and both have two siblings, then the value contributed to CrdBin is 1 (indicating a constraint violation) un-less the head of the English left conjunct is aligned with the head of the German left conjunct and likewise the right conjuncts are aligned. Eq. 3 calculates the value of CrdBin. Feature Q simply captures a mismatch between questions and statements. If an English sentence is parsed as a question but the parallel German sentence is not, or vice versa, the feature value is 1; otherwise the value is 0. Span projection features calculate the percentage difference between a constituent's span and the span of its projection. Span size is measured in characters or words. To project a constituent in a parse, we use the word alignment to project all word positions covered by the constituent and then look for the smallest covering constituent in the parse of the parallel sentence. 
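The sketch below illustrates the alignment helper f and the constituent-projection step used by the span projection features: project the word positions covered by an English constituent through the alignment, then take the smallest German constituent covering them. Encoding constituents as word-position sets is a simplification for illustration.

```python
# Minimal sketch of the alignment helper and constituent projection used by the
# span-projection features: f maps English word positions to the union of their
# aligned German positions, and the projection of a constituent is the smallest
# German constituent covering all projected positions.
from typing import Dict, List, Set


def f(positions: Set[int], align: Dict[int, Set[int]]) -> Set[int]:
    """Union of German positions aligned with the given English positions."""
    out: Set[int] = set()
    for i in positions:
        out |= align.get(i, set())
    return out


def smallest_covering_constituent(positions: Set[int],
                                  constituents: List[Set[int]]) -> Set[int]:
    """Among German constituents (given as the word-position sets they span),
    return the smallest one covering every projected position."""
    covering = [c for c in constituents if positions <= c]
    return min(covering, key=len) if covering else set()


if __name__ == "__main__":
    # "a woman who had gray hair" -> "eine Frau , die graue Haare hatte"
    align = {6: {6}, 7: {7}, 8: {9}, 9: {12}, 10: {10}, 11: {11}}
    english_np = {6, 7, 8, 9, 10, 11}
    german_constituents = [set(range(1, 13)), set(range(6, 13)), {6, 7}]
    projected = f(english_np, align)
    print(smallest_covering_constituent(projected, german_constituents))
```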
CrdPrj is a feature that measures the divergence in the size of coordination constituents and their projections. If we have a constituent (XP1 CC XP2) in English that is projected to a German coordination, we expect the English and German left conjuncts to span a similar percentage of their respective sentences, as should the right conjuncts. The feature computes a character-based percentage difference as shown in eq. 4. r and s are the lengths in characters of the English and German sentences, respectively. In the English parse in figure POSParentPrj is based on computing the span difference between all the parent constituents of POS tags in a German parse and their respective coverage in the corresponding hypothesized parse. The feature value is the sum of all the differences. POSPar(i) is true if i immediately dominates a POS tag. The projection direction is from German to English, and the feature computes a percentage difference which is character-based. The value of the feature is calculated in eq. 5, where M is the number of constituents (including POS tags) in the German tree. (5) The right conjunct in figure AbovePOSPrj is similar to POSParentPrj, but it is word-based and the projection direction is from English to German. Unlike POSParentPrj the feature value is calculated over all constituents above the POS level in the English tree. Another span projection feature function is DTNNPrj, which projects English constituents of the form (NP(DT)(NN)). DTNN(i) is true if i is an NP immediately dominating only DT and NN. The feature computes a percentage difference which is word-based, shown in eq. 6. L is the number of constituents in the English tree. This feature is designed to disprefer parses where constituents starting with "DT NN", e.g., (NP (DT NN NN NN)), are incorrectly split into two NPs, e.g., (NP (DT NN)) and (NP (NN NN)). This feature fires in this case, and projects the (NP (DT NN)) into German. If the German projection is a surprisingly large number of words (as should be the case if the German also consists of a determiner followed by several nouns) then the penalty paid by this feature is large. This feature is important as (NP (DT NN)) is a very common construction. We use Europarl For the PDepth feature, we estimate English parse depth probability conditioned on German parse depth from Europarl by calculating a simple probability distribution over the 1-best parse pairs for each parallel sentence. A very deep German parse is unlikely to correspond to a flat English parse and we can penalize such a parse using PDepth. The index i refers to a sentence pair in Europarl, as does j. Let l i and m i be the depths of the top BitPar ranked parses of the English and German sentences, respectively. We calculate the probability of observing an English tree of depth l ′ given German tree of depth m ′ as the maximum likelihood estimate, shown in eq. 7, where δ(z, z ′ ) = 1 if z = z ′ and 0 otherwise. To avoid noisy feature values due to outliers and parse errors, we bound the value of PDepth at 5 as shown in eq. 8 The full parse of the sentence containing the English high attachment has a parse depth of 8 while the full parse of the sentence containing the English low attachment has a depth of 9. Their feature values given the German parse depth of 6 are -log 10 (0.12) = 0.93 and -log 10 (0.14) = 0.84. The wrong parse is assigned a higher feature value indicating its higher divergence. 
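A minimal sketch of the PDepth feature follows: estimate p(English depth | German depth) by relative frequency over Europarl 1-best parse pairs (Eq. 7) and score a hypothesis by the negative log probability, capped at 5 (Eq. 8). The toy counts reproduce the 0.12/0.14 probabilities of the worked example; treating unseen depth combinations as the capped penalty is an assumption.

```python
# Minimal sketch of the PDepth feature: estimate p(English parse depth | German
# parse depth) by relative frequency over 1-best Europarl parse pairs (Eq. 7),
# then score a hypothesis as the negative log probability, capped at 5 (Eq. 8).
import math
from collections import Counter
from typing import List, Tuple


def estimate_depth_model(depth_pairs: List[Tuple[int, int]]):
    """depth_pairs: (english_depth l_i, german_depth m_i) for each Europarl pair."""
    joint, marginal = Counter(depth_pairs), Counter(m for _, m in depth_pairs)
    return lambda l, m: joint[(l, m)] / marginal[m] if marginal[m] else 0.0


def pdepth_feature(english_depth: int, german_depth: int, p) -> float:
    prob = p(english_depth, german_depth)
    if prob <= 0.0:
        return 5.0                       # unseen combination gets the capped penalty
    return min(5.0, -math.log10(prob))


if __name__ == "__main__":
    pairs = [(8, 6)] * 12 + [(9, 6)] * 14 + [(7, 6)] * 74   # toy counts
    p = estimate_depth_model(pairs)
    print(pdepth_feature(8, 6, p), pdepth_feature(9, 6, p))   # ~0.92 and ~0.85
```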
The feature PTagEParentGPOSGParent measures tagging inconsistency based on estimating the probability that for an English word at position i, the parent of its POS tag has a particular label. The feature value is calculated in eq. 10. q(i, j) = p(tag(up(i))|tag(j), tag(up(j))) ( Consider (S(NP(NN fruit))(VP(V flies))) and (NP(NN fruit)(NNS flies)) with the translation (NP(NNS Fruchtfliegen)). Assume that "fruit" and "flies" are aligned with the German compound noun "Fruchtfliegen". In the incorrect English parse the parent of the POS of "fruit" is NP and the parent of the POS of "flies" is VP, while in the correct parse the parent of the POS of "fruit" is NP and the parent of the POS of "flies" is NP. In the German parse the compound noun is POS-tagged as an NNS and the parent is an NP. The probabilities considered for the two English parses are p(NP|NNS, NP) for "fruit" in both parses, p(VP|NNS, NP) for "flies" in the incorrect parse, and p(NP|NNS, NP) for "flies" in the correct parse. A German NNS in an NP has a higher probability of being aligned with a word in an English NP than with a word in an English VP, so the second parse will be preferred. As with the PDepth feature, we use relative frequency to estimate this feature. When an English word is aligned with two words, estimation is more complex. We heuristically give each English and German pair one count. The value calculated by the feature function is the geometric mean Our best system uses the nine features we have described in detail so far. In addition, we implemented the following 25 other features, which did not improve performance (see section 7): (i) 7 "ptag" features similar to PTagEParentGPOSG-Parent but predicting and conditioning on different combinations of tags (POS tag, parent of POS, grandparent of POS) (ii) 10 "prj" features similar to POSParentPrj measuring different combinations of character and word percentage differences at the POS parent and POS grandparent levels, projecting from both English and German (iii) 3 variants of the DTNN feature function (iv) A NPPP feature function, similar to the DTNN feature function but trying to counteract a bias towards (NP (NP) (PP)) units (v) A feature function which penalizes aligning clausal units to non-clausal units (vi) The BitPar rank Log-linear models are often trained using the Maximum Entropy criterion, but we train our model directly to maximize F 1 . We score F 1 by comparing hypothesized parses for the discriminative training set with the gold standard. To try to find the optimal λ vector, we perform direct accuracy maximization, meaning that we search for the λ vector which directly optimizes F 1 on the training set. The algorithm for training is initialized with a choice for λ and is described in figure We used the subset of the Wall Street Journal investigated in adjust each scalar of λ turn towards such that there is no increase in error (if possible) 10: until no scalar in λ changes in last two steps (8 and 9) 11: until λ = λ ′ 12: return λ Parses. We use the BitPar parser For the 3718 sentences in the translated set, we created 100-best English parses and 1-best German parses. The German parser was trained on the TIGER treebank. For the Europarl corpus, we created 1-best parses for both languages. Word Alignment. We use a word alignment of the translated sentences from the Penn treebank, as well as a word alignment of the Europarl corpus. 
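Returning to training, the sketch below is one way to realize the direct F1 maximization described above: a greedy coordinate search over λ that keeps a step only if reranking with the new weights improves F1. The step sizes, the restart-free loop, and the toy objective are assumptions standing in for the full procedure in the training figure.

```python
# Minimal sketch of direct F1 maximization for the reranker weights: greedy
# coordinate search over lambda, accepting a step on a coordinate only if
# reranking the training N-best lists with the new weights improves F1.
import random
from typing import Callable, List, Sequence


def train_lambda(num_features: int,
                 f1_of: Callable[[List[float]], float],
                 steps: Sequence[float] = (0.5, -0.5, 0.1, -0.1),
                 iterations: int = 20) -> List[float]:
    lam = [random.uniform(0.0, 1.0) for _ in range(num_features)]
    best_f1 = f1_of(lam)
    for _ in range(iterations):
        improved = False
        for i in range(num_features):            # adjust each scalar in turn
            for step in steps:
                cand = list(lam)
                cand[i] += step
                f1 = f1_of(cand)
                if f1 > best_f1:                 # keep the move only if F1 improves
                    lam, best_f1, improved = cand, f1, True
        if not improved:
            break
    return lam


if __name__ == "__main__":
    # toy objective standing in for "rerank the training set and score F1"
    target = [1.0, 0.2, 0.0]
    toy_f1 = lambda lam: -sum((a - b) ** 2 for a, b in zip(lam, target))
    print([round(w, 2) for w in train_lambda(3, toy_f1)])
```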
We align these two data sets together with data from the JRC Acquis We perform 7-way crossvalidation on 3718 sentences. In each fold of the cross-validation, the training set is 3186 sentences, while the test set is 532 sentences. Our results are shown in table 1. In row 1, we take the hypothesis ranked best by BitPar. In row 2, we train using the algorithm outlined in section 4. To cancel out any effect caused by a particularly effective or ineffective starting λ value, we perform 5 trials each time. Columns 3 and 5 report the improvement over the baseline on train and test respectively. We reach an improvement of 0.56 over the baseline using the algorithm as described in section 4. Our initial experiments used many highly correlated features. For our next experiment we use greedy feature selection. We start with a λ vector that is zero for all features, and then run the error minimization without the random generation of vectors (figure As we mentioned in section 2, work on parse reranking is relevant, but a vital difference is that we use features based only on syntactic projection of the two languages in a bitext. For an overview of different types of features that have been used in parse reranking see Syntactic projection has been used to bootstrap treebanks in resource poor languages. Some examples of projection of syntactic parses from English to a resource poor language for which no parser is available are the works of We looked at the weights assigned during the cross-validation performed to obtain our best result. The weights of many of the 34 features we defined were frequently set to zero. We sorted the features by the number of times the relevant λ scalar was zero (i.e., the number of folds of the cross-validation for which they were zero; the greedy feature selection is deterministic and so we do not run multiple trials). We then reran the same greedy feature selection algorithm as was used in table 1, row 3, but this time using only the top 9 feature values, which were the features which were active on 4 or more folds We also tried to see if our results depended strongly on the log-linear model and training algorithm, by using the SVM-Light ranker We have shown that rich bitext projection features can improve parsing accuracy. This confirms the hypothesis that the divergence in what information different languages encode grammatically can be exploited for syntactic disambiguation. Improved parsing due to bitext projection features should be helpful in syntactic analysis of bitexts (by way of mutual syntactic disambiguation) and in computing syntactic analyses of texts that have translations in other languages available.
Transforming Meaning Representation Grammars to Improve Semantic Parsing
A semantic parser learning system learns to map natural language sentences into their domain-specific formal meaning representations, but if the constructs of the meaning representation language do not correspond well with the natural language then the system may not learn a good semantic parser. This paper presents approaches for automatically transforming a meaning representation grammar (MRG) to conform it better with the natural language semantics. It introduces grammar transformation operators and meaning representation macros which are applied in an error-driven manner to transform an MRG while training a semantic parser learning system. Experimental results show that the automatically transformed MRGs lead to better learned semantic parsers which perform comparable to the semantic parsers learned using manually engineered MRGs.
Semantic parsing is the task of converting natural language (NL) sentences into their meaning representations (MRs) which a computer program can execute to perform some domain-specific task, like controlling a robot, answering database queries etc. These MRs are expressed in a formal meaning representation language (MRL) unique to the domain to suit the application, like some specific command language to control a robot or some * Alumnus at the time of submission. * c 2008. Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license ( The grammar of an MRL, which we will call meaning representation grammar (MRG), is assumed to be deterministic and context-free which is true for grammars of almost all the computer executable languages. A semantic parsing learning system typically exploits the given MRG of the MRL to learn a semantic parser Some other semantic parser learning systems which need MRL in the form of Prolog
The following subsection gives some examples of semantic parsing domains and their corresponding MRLs and illustrates why incompatibility between MRGs and natural language could hurt semantic parsing. The next subsection then briefly describes a base semantic parser learning system which we use in our experiments. ative language with LISP-like prefix notation designed to instruct simulated soccer players in the RoboCup ANSWER → answer ( RIVER ) better MRG which corresponds well with the NL semantics. Figure Finally, Figure We very briefly describe the semantic parser learning system, KRISP During semantic parsing, the classifiers are called to estimate probabilities on different substrings of the sentence to compositionally build the most probable MR parse over the entire sentence with its productions covering different substrings of the sentence. KRISP was shown to perform competitively with other existing semantic parser learning systems and was shown to be particularly robust to noisy NL input. This section describes an approach to transform an MRG using grammar transformation operators to conform it better with the NL semantics. The following section will present another approach for transforming an MRG using macros which is sometimes more directly applicable. The MRLs used for semantic parsing are always assumed to be context-free which is true for almost all executable computer languages. There has been some work in learning context-free grammars (CFGs) for a language given several exam-ples of its expressions We describe five transformation operators which are used to transform an MRG. Each of these operators preserves the coverage of the grammar, i.e. after application of the operator, the transformed grammar generates the same language that the previous grammar generated 3 . The MRs do not change but only the way they are parsed may change because of grammar transformations. This is important because the MRs are to be used in an application and hence should not be changed. 1. Create Non-terminal from a Terminal (CreateNT): Given a terminal symbol t in the grammar, this operator adds a new production T → t to it and replaces all the occurrences of the terminal t in all the other productions by the new non-terminal T . In the context of semantic parsing learning algorithm, this operator introduces a new semantic concept the previous grammar was not explicit about. For example, this operator may introduce a production (a semantic concept) LONGEST → longest to the simple grammar whose parse was shown in Figure This operator merges n non-terminals T 1 , T 2 , ..., T n , by introducing n productions T → T 1 , T → T 2 , ..., 3 This is also known as weak equivalence of grammars. T → T n where T is a new non-terminal. All the occurrences of the merged non-terminals on the right-hand-side (RHS) of all the remaining productions are then replaced by the non-terminal T . In order to ensure that this operator preserves the coverage of the grammar, before applying it, it is made sure that if one of these non-terminals, say T 1 , occurs on the RHS of a production π 1 then there also exist productions π 2 , ..., π n which are same as π 1 except that T 2 , ..., T n respectively occur in them in place of T 1 . If this condition is violated for any production of any of the n non-terminals then this operator is not applicable. This operator enables generalization of some non-terminals which occur in similar contexts leading to generalization of productions in which they occur on the RHS. 
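As an illustration of the first operator before continuing with MergeNT below, the sketch shows CreateNT applied to a toy GEOQUERY-style grammar: a new production LONGEST → longest is added and all other right-hand-side occurrences of the terminal are replaced by the new non-terminal. The production encoding is illustrative, not KRISP's internal MRG format.

```python
# Minimal sketch of the CreateNT operator: given a terminal t, add a production
# T -> t and replace every other right-hand-side occurrence of t with the new
# non-terminal T. Productions are encoded as (lhs, rhs-tuple) for illustration.
from typing import List, Tuple

Production = Tuple[str, Tuple[str, ...]]


def create_nt(grammar: List[Production], terminal: str, new_nt: str) -> List[Production]:
    transformed: List[Production] = []
    for lhs, rhs in grammar:
        transformed.append((lhs, tuple(new_nt if s == terminal else s for s in rhs)))
    transformed.append((new_nt, (terminal,)))     # the new semantic concept
    return transformed


if __name__ == "__main__":
    # simplified GEOQUERY-style grammar before/after exposing "longest" as a concept
    grammar = [("ANSWER", ("answer", "(", "RIVER", ")")),
               ("RIVER", ("longest", "(", "river", "(", "all", ")", ")"))]
    for prod in create_nt(grammar, "longest", "LONGEST"):
        print(prod)
```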
For example, this operator may generalize non-terminals LONGEST and SHORTEST in GEOQUERY MRG to form QUALIFIER This operator combines two non-terminals T 1 and T 2 into one new non-terminal T by introducing a new production T → T 1 T 2 . All the instances of T 1 and T 2 occurring adjacent in this order on the RHS (with at least one more nonterminal If a production has the same nonterminal appearing twice on its RHS then this operator adds an additional production which differs from the first production in that it has only one occurrence of that non-terminal. For example, if a production is A → b C D C, then this operator will introduce a new production A → b C D re-moving the second occurrence of the non-terminal C. This operator is applied only when the subtrees under the duplicate non-terminals of the production are often found to be the same in the parse trees of the MRs in the training data. As such this operator will change the MRL the new MRG will generate, but this can be easily reverted by appropriately duplicating the subtrees in its generated MR parses in accordance to the original production. This operator is useful during learning a semantic parser because it eliminates the type of incompatibility between MRs and NL sentences illustrated with Figure This last operator deletes a production and replaces the occurrences of its left-hand-side (LHS) non-terminal with its RHS in the RHS of all the other productions. In terms of semantic parsing, this operator eliminates the need to learn a semantic concept. It can undo the transformations obtained by the other operators by deleting the new productions they introduce. We note that the CombineNT and MergeNT operators are same as the two operators used by In order to transform an MRG to improve semantic parsing, since a simple hill-climbing type approach to search the space of all possible MRGs will be computationally very intensive, we use the following error-driven heuristic search which is faster although less thorough. First, using the provided MRG and the training data, a semantic parser is trained using KRISP. The trained semantic parser is applied to each of the training NL sentences. Next, for each production π in the MRG, two values total π and incorrect π are computed. The value total π counts how many MR parses from the training examples use the produc-tion π. The value incorrect π counts the number of training examples for which the semantic parser incorrectly uses the production π, i.e. it either did not include the production π in the parse of the MR it produces when the correct MR's parse included it, or it included the production π when it was not present in the correct MR's parse. These two statistics for a production indicate how well the semantic parser was able to use the production in semantic parsing. If it was not able to use a production π well, then the ratio incorrect π /total π , which we call mistakeRatio π , will be high indicating that some change needs to be made to that production. After computing these values for all the productions, the procedure described below for applying the first type of operator is followed. After this, the MRs in the training data are re-parsed using the new MRG, the semantic parser is re-trained and the total π and incorrect π values are re-computed. Next, the procedure for applying the next operator is followed and so on. The whole process is repeated for a specified number of iterations. 
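A minimal sketch of the error-driven statistics that gate operator application follows: total_π, incorrect_π, and mistakeRatio_π are computed from the gold and predicted MR parses, and an operator is considered for π only when mistakeRatio_π exceeds α and total_π exceeds β (α = 0.75, β = 5 in the experiments described below). Representing a parse by its set of productions is a simplification for illustration.

```python
# Minimal sketch of the error-driven statistics: for each production pi,
# total counts training MRs whose gold parse uses pi, incorrect counts training
# examples where the learned parser used pi wrongly (included when absent from
# the gold parse, or omitted when present), and mistakeRatio = incorrect / total.
from typing import Dict, List, Set, Tuple


def production_statistics(gold_parses: List[Set[str]],
                          predicted_parses: List[Set[str]]) -> Dict[str, Tuple[int, int]]:
    stats: Dict[str, Tuple[int, int]] = {}
    productions = set().union(*gold_parses, *predicted_parses)
    for pi in productions:
        total = sum(pi in gold for gold in gold_parses)
        incorrect = sum((pi in gold) != (pi in pred)
                        for gold, pred in zip(gold_parses, predicted_parses))
        stats[pi] = (total, incorrect)
    return stats


def needs_transformation(total: int, incorrect: int,
                         alpha: float = 0.75, beta: int = 5) -> bool:
    return total > beta and (incorrect / total) > alpha


if __name__ == "__main__":
    gold = [{"RIVER -> longest(river(all))"}] * 8
    pred = [set()] * 7 + [{"RIVER -> longest(river(all))"}]
    for pi, (t, i) in production_statistics(gold, pred).items():
        print(pi, t, i, needs_transformation(t, i))
```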
In the experiments, we found that the performance does not improve much after two iterations. 1. Apply CreateNT: For each terminal t in the grammar, total t and incorrect t values are computed by summing up the corresponding values for all the productions in which t occurs on the RHS with at least one non-terminal 2. Apply MergeNT: All the non-terminals occurring on the RHS of all those productions π are collected whose mistakeRatio π value is greater than α and whose total π value is greater than β. The set of these non-terminals is then partitioned such that the criteria for applying the MergeNT is satisfied by the non-terminals in each partition with size at least two. The MergeNT operator is then applied to the non-terminals in each partition with size at least two. 3. Apply CombineNT: For every non-terminal pair T 1 and T 2 , total T 1 T 2 and incorrect T 1 T 2 values are computed by summing their corresponding values for the productions in which the two non-terminals are adjacent in the RHS in the presence of at least one more non-terminal. If mistakeRatio T 1 T 2 = incorect T 1 T 2 /total T 1 T 2 is greater than α and total T 1 T 2 is greater than β, then the CombineNT operator is applied to these two non-terminals. 4. Apply RemoveDuplNT: If a production π has duplicate non-terminals on the RHS under which the same subtrees are found in the MR parse trees of the training data more than once then this operator is applied provided its mistakeRatio π is greater than α and total π is greater than β. The DeleteProd operator is applied to all the productions π and whose mistakeRatio π is greater than α and total π is greater than β. This step simply deletes the productions which are mostly incorrectly used. For the experiments, we set the α parameter to 0.75 and β parameter to 5, these values were determined through pilot experiments. As was illustrated with Figure A meaning representation macro for an MRG is a production formed by combining two or more existing productions of the MRG. For example, for the CLANG example shown in Figure A semantic parser is first learned from the training data using KRISP and the given MRG. The learned semantic parser is then applied to the training sentences and if the system can not produce any parse for a sentence then the parse tree of its corresponding MR is included in a set called failed parse trees. Common subtrees in these failed parse trees are likely to be good candidates for introducing macros. Then a set of candidate trees is created as follows. This set is first initialized to the set of failed parse trees. The largest common subtree of every pair of trees in the candidate trees is then also included in this set if it is not an empty tree. The process continues with the newly added trees until no new tree can be included. This process is similar to the repeated bottom-up generalization of clauses used in the inductive logic programming system GOLEM A meaning representation grammar which does not correspond well with the natural language semantics can lead to a poor performance by a learned semantic parser. This paper presented grammar transformation operators and meaning representation macros using which the meaning representation grammar can be transformed to make it better conform with the semantics of natural language. Experimental results on three different grammars demonstrated that the performance on semantic parsing task can be improved by large amounts by transforming the grammars.
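The candidate-tree construction for meaning representation macros can be sketched as a fixed-point computation: start from the failed parse trees and keep adding the largest common subtree of every pair until nothing new appears. The top-down common-subtree routine below is a simplified stand-in for the bottom-up GOLEM-style generalization, and the CLANG-like toy trees are purely illustrative.

```python
# Minimal sketch of building candidate trees for meaning representation macros:
# start from the parse trees of MRs the learned parser failed on, then repeatedly
# add the largest common subtree of every pair until no new tree appears.
# A parse tree is a nested tuple (label, subtree, subtree, ...); the common-
# subtree routine is a simplified top-down version for illustration.


def largest_common_subtree(t1, t2):
    """Simplified generalization: keep the shared top-down structure of two trees."""
    if isinstance(t1, str) or isinstance(t2, str):
        return t1 if t1 == t2 else ()
    if t1[0] != t2[0] or len(t1) != len(t2):
        return (t1[0],) if t1[0] == t2[0] else ()
    children = tuple(largest_common_subtree(a, b) for a, b in zip(t1[1:], t2[1:]))
    return (t1[0],) + tuple(c for c in children if c)


def candidate_trees(failed_parse_trees):
    candidates = set(failed_parse_trees)
    changed = True
    while changed:                       # fixed point: stop when nothing new is added
        changed = False
        for a in list(candidates):
            for b in list(candidates):
                common = largest_common_subtree(a, b)
                if common and common not in candidates:
                    candidates.add(common)
                    changed = True
    return candidates


if __name__ == "__main__":
    t1 = ("RULE", ("CONDITION", ("bpos", "REGION")), ("ACTION", ("pos", "REGION")))
    t2 = ("RULE", ("CONDITION", ("bowner", "TEAM")), ("ACTION", ("pos", "REGION")))
    for t in candidate_trees({t1, t2}):
        print(t)
```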
Learning to Generate Equitable Text in Dialogue from Biased Training Data
The ingrained principles of fairness in a dialogue system's decision-making process and generated responses are crucial for user engagement, satisfaction, and task achievement. Absence of equitable and inclusive principles can hinder the formation of common ground, which in turn negatively impacts the overall performance of the system. For example, misusing pronouns in a user interaction may cause ambiguity about the intended subject. Yet, there is no comprehensive study of equitable text generation in dialogue. Aptly, in this work, we use theories of computational learning to study this problem. We provide formal definitions of equity in text generation, and further, prove formal connections between learning humanlikeness and learning equity: algorithms for improving equity ultimately reduce to algorithms for improving human-likeness (on augmented data). With this insight, we also formulate reasonable conditions under which text generation algorithms can learn to generate equitable text without any modifications to the biased training data on which they learn. To exemplify our theory in practice, we look at a group of algorithms for the GuessWhat?! visual dialogue game and, using this example, test our theory empirically. Our theory accurately predicts relative-performance of multiple algorithms in generating equitable text as measured by both human and automated evaluation.
Machine learning models for text-generation in dialogue have trouble learning the "long tail" of a data distribution; i.e., the data concepts not frequently observed during training. For example, dataset biases like gender imbalance can induce a long tail in training data whereby important data relationships involving gender are underrepresented, like women in sports equitable, stereotyping behaviors instead (see Figure Despite the multi-faceted impact of inequitable text generation in dialogue, we do not have a comprehensive and theoretically grounded framework for understanding how machines learn to generate inequitable text and when this outcome can be avoided. To provide a strong technical foundation for equitable generation in dialogue, we build on theories of computational learning The theoretical understanding we contribute is imperative because it informs algorithm design. In particular, using our theory, we can predict: 1. the most equitable algorithms for unseen data; 2. counter-intuitive properties of algorithms that lead to less equitable results. For example, consider algorithms which naïvely augment data to remove bias The remainder of the paper is organized as follows: § 2 provides background to position our contributions including discussion of related work, a brief tutorial on the employed learning theoretic framework, and a few running examples used throughout the text; § 3 provides our theoretical contributions including formulation of mathematical notions of equity in text generation and theoretical analysis of learning algorithms; § 4 conducts experiments which validate our theory in practice; and finally, § 5 concludes the work. Code, data, and a python package will be made publicly available to promote further research. 1 1
Recent proposals for the use of learning theory in dialogue are due to (1) Of course, there are a number of undefined terms here: specifically, the test h, the context C, the goal dialogue D, the learned dialogue D, and the unobserved effects U . Below, we explain each, using examples from Figure The goal distribution G is a joint probability distribution over dialogue contexts c ∈ C and dialogues d ∈ D. For The learned dialogue distribution is the probability kernel P θ (C) that provides a distribution over dialogues, conditional to the parameters θ learned by the machine (e.g., neural parameters) as well as the random dialogue context C. The precise manner in which dialogue occurs will vary from system to system, but typically involves a machine generating/prompting responses to/from human users as in Figure The unknown effect U ∼ U represents any additional information needed to completely determine the outcome of the test. When the test is BLEU, U simply takes the form of a reference dialogue to which the input dialogue is compared. For human evaluation, U encapsulates all of the unknown variables that contribute to the randomness of a realworld experiment. Often, U may not be needed. Interpretation With terms defined, it is easy to see the test divergence is a direct comparison of the output of the test from the goal dialogue D to the predicted dialogue D, learned by our dialogue system. Larger test divergence indicates the learned dialogue fails to replicate the goal dialogue along the dimensions targeted by the test. For example, if the goal is human-likeness in the visual dialogue example from Figure In natural language, popular, early studies of equity begin with avoiding stereotyping in learned model representations Still, these model-intrinsic approaches to resolving inequity have proven subpar compared to model-extrinsic approaches, which focus directly on the downstream task Finally, it is worthwhile to note that 3 Formalizing Equity in Dialogue In this part, we introduce some formal, mathematical notions of equity. We start with a general notion of equity in dialogue and show how this can be specialized to compare with ideas of equity in the classification literature. For proofs, see Appendix A. Protected Attributes To begin, we need to first define the notion of a protected attribute. Conceptually, this is the sensitive variable (e.g., race, gender, religion, etc.) that we intend to "protect" by the equity constraint. Otherwise, presumably, system inequities would disproportionately, negatively impact the sub-population captured by the attribute. Throughout this work, we use a variable a ∈ A = {0, 1} to denote the protected attribute and we measure equity of the text with respect to this variable. Precisely, a = 1 implies the dialogue context exhibits the attribute (e.g., female gender, Black race, Muslim religion), while a = 0 implies the context does not exhibit the protected attribute. For example, in the educational dialogue from Figure Equity as Score Parity Commonly, equity in machine learning systems is formally defined through a notion of parity The system uses language in the same way, regardless of protected attribute. This intuitive notion of equity is vague in its use of "way" to be general, allowing for specification to different applications. For example, where s is a scoring function s : To arrive at our motivating example This lets us talk about degrees of inequity, and therefore, measure progress towards our ideals. 
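Since the scoring function s is only partially reproduced above, the sketch below assumes one simple empirical reading of the parity gap: the absolute difference between the mean score of dialogues whose context exhibits the protected attribute (a = 1) and those whose context does not (a = 0). The toy scoring function is illustrative, standing in for a task-specific correctness judgment.

```python
# Minimal sketch of estimating a parity gap from a sample of contextualized
# dialogues: score each dialogue with a scoring function s (e.g., a 0/1
# correctness score for how gendered words are used), then take the absolute
# difference between the mean score when the protected attribute is present
# (a = 1) and when it is absent (a = 0).
from statistics import mean
from typing import Callable, List, Tuple


def parity_gap(sample: List[Tuple[str, int]],
               score: Callable[[str, int], float]) -> float:
    with_attr = [score(d, a) for d, a in sample if a == 1]
    without_attr = [score(d, a) for d, a in sample if a == 0]
    return abs(mean(with_attr) - mean(without_attr))


if __name__ == "__main__":
    # toy scoring: 1.0 if the dialogue's gendered questions were judged correct
    toy_sample = [("is it the woman?", 1), ("is it the man?", 0),
                  ("is it the person on the left?", 1), ("is it red?", 0)]
    toy_score = lambda d, a: 1.0 if "woman" in d or "red" in d or "left" in d else 0.0
    print(parity_gap(toy_sample, toy_score))
```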
Multi-Category Score Parity Notice, we use the presence/absence of singular demographic groups (e.g., female v. not female) instead of binary comparisons (e.g., female v. male) in defining the protected attribute. This choice allows our definition of equity (above) and later theory to support study of general multi-category attributes with more than two attributes like race (e.g., Black, White, Asian) or religion (e.g., Muslim, Jewish, Catholic). Using race as an example, we can measure the parity gap when Black is the protected attribute, White is the protected attribute, Asian is the protected attribute, etc. The dataset is then equitable for all races (according to score parity) if all measured parity gaps are 0. In this way, our definition and subsequent results can generalize to the multi-category case. We use this strategy, for example, in Section 4. Comparison to Demographic Parity In classification, demographic parity is a commonly studied notion of equity For score parity, when s(•, 0) = s(•, 1), the scoring function s does not depend on the attribute and we see that score parity is a direct reflection of demographic parity. Whereas classification problems use machine learning to select the classifier c in a fair way, dialogue uses machine learning to select the feature distribution X (i.e., D in our definition). Comparison to Accuracy Parity Depending on the application, it is known that demographic parity can also be an inappropriate constraint; e.g., if the classifier c is meant to predict the protected attribute itself By our definition, score parity can be used to reflect this distinct notion from classification as well. Conceptually, we select our scoring function to measure the correctness of the dialogue. Then, just like accuracy parity, score parity enforces equal error rates, regardless of protected attribute. While details may vary based on application, we consider selecting the scoring function in the examples from Figure With the choice of scoring function above, score parity reflects the intuition of accuracy parity by requiring that the correctness of the language use (in referring to a protected attribute) is independent of the protected attribute. As alluded, this constraint can be especially useful in case spurious correlations (i.e., stereotypes) between protected attributes and context cause different error rates with/without a protected attribute. This is the case in our toy examples (Figure Takeaways The formalization of equity we introduce -score parity -is both general and useful. It models existing ideas for empirical evaluation of equity in text-generation Next, we show how learning to generate equitable text can be modeled with learning theory. Test Divergence (Reprise) To evaluate equity with LEATHER, the objective in Eq. ( where Importantly, we must consider the deviations from Sicilia and Alikhani (2022) not present in Eq. ( (1) the choice of goal distribution G and (2) the choice of test h. Originally, Sicilia and Alikhani focus on evaluation of human-like dialogue, and therefore, propose the goal to be defined by any collected corpus of contextualized human dialogues. Instead, we are interested in the equity of the contextualized dialogue and cannot blindly use human dialogue as an example; i.e., we cannot take for granted that the contextualized human dialogue is equitable. Thus, to appropriately evaluate equity, we generally assume the following constraints on the goal distribution and test. Equitable Goals and Tests Definition 3.2. 
(Balanced) A contextualized dialogue distribution G is balanced if it assigns equal (marginal) likelihood to the protected attribute: Definition 3.3. (Equitable Goal) We say a contextualized dialogue distribution G with (C, D) ∼ G is an equitable goal distribution if it is balanced and satisfies score parity (for some fixed score s). So, intuitively, we propose the goal in equitable dialogue is a contextualized dialogue distribution which is itself equitable, according to our formal definition of this property -i.e., score parity. Furthermore, it should be balanced to prioritize the protected attribute equally during evaluation. As we'll see later, choosing the test h to be the scoring function s from our previous definition allows us to use TD (with an equitable goal) to control the parity gap of our learned dialogue. Biased Data While the formal definition above (Def. 3.3) is about equity, it should also be noted that we implicitly arrive at a formal definition for bias: the absence of equity. In particular, a contextualized dialogue distribution (dataset) is biased if it is not equitable. Note, this also distinguishes biased data from other common concepts like noisy data because we use an expectation to quantify parity; i.e., which is immune to non-systemic noise. Small Test Divergence Implies Equity Theorem 3.1. Consider an equitable goal G and let h ≡ s (the scoring function). Then, ∆( Ĝθ ) ≤ ϵ whenever TD G (θ) ≤ ϵ/2. Simply, the above result indicates minimization of TD with an equitable goal and appropriate test leads to an equitable learned dialogue distribution. Takeaways An important consequence of Thm. 3.1 is the ability to confidently use algorithms designed in the LEATHER framework (i.e., to reduce test divergence) for equitable dialogue learning. While these algorithms may have originally been designed to learn human-like dialogue, they can easily be modified to learn equitable dialogue. In particular, we need only change the goal from any human dialogue distribution to any equitable dialogue distribution -as in Def. 3.3. Portability of algorithms in the sense described means, ultimately, a unified theory for dialogue generation. For any algorithm we propose, we may conduct a singular theoretical analysis of test divergence that can serve multiple purposes -both human-like and equitable dialogue generation. In other words: LEATHER-based algorithms for humanlikeness can be used to learn equitable text by simply augmenting training data. Some standard examples of how to create the new equitable goal G include augmenting data in the dataset to achieve equitable constraints Next, we study the circumstances under which the goals of human-like dialogue learning and equitable dialogue learning align. That is, we study circumstances under which an algorithm designed to minimize TD can learn from (biased) human-like goal data and simultaneously learn to be equitable. The definitions are based on the idea of labelshift used to study data-shift at test time Context-awareness assumes that humans are not biased provided the background context C. Conceptually, this is reasonable, since humans use context to form inferences about attributes of other human subjects (even protected attributes). If background is sufficient, human inferences will often be correct inferences and the dialogue should be equitable with respect to accuracy parity, at least. Context-preservation assumes that the presentation of the context for attributes does not change. 
In other words, the features of the protected attribute which present themselves through the context should be invariant across G and H. For example, if one attempts to infer race from an image, this assumption simply states the visual features indicative of race should be consistent. The assumption would be violated, for example, if G protects Asian males and H protects Asian females. Test Divergence Learning Bound In this part, for simplicity, we assume the parameters θ are learned from a finite space Θ. Other proof techniques may allow arbitrary Θ; e.g., Theorem 3.2. Consider an equitable goal G with associated test h. Suppose a sample of i.i.d. human data is collected S = ( Ci , Di ) m i=1 ; ( Ci , Di ) ∼ H. Suppose H is context aware and preserves context. Then, for all δ > 0, with probability at least 1δ, for all θ, 2β × TD G (θ) is bounded above by For interpretation, we break down the upperbound on 2β × TD G (θ) into two terms: (a) the difference in test output from the human dialogue to the predicted dialogue and (b) a data efficiency term dependent on the number of i.i.d samples m. Equity from Biased Data Notice, the predicted dialogue in (a) is dependent on the human dialogue's context Ci -not the goal dialogue's context C -so (a) is actually identical in definition to TD S , an empirical observation of TD H . That is, (a) is test divergence computed on a human corpus as was done by LEATHER-based algorithms learn humanlikeness and equity, even on biased data. We only require the human data to be context-aware and preserve context (Defs. 3.4 and 3.5). The above interpretation of (a) is only valid if the data efficiency term (b) is also small. For interpretation, we consider the size of the parameter space Θ fixed and focus on the number of i.i.d training samples m. As m increases, (b) ultimately goes to 0 and the effect of (a) dominates the bound. In some cases though, if m is too small (b) can also have an impact. For example, this may be the case when using data-augmentation strategies to create a more equitable distribution. In particular, augmentation reduces the number of i.i.d. data points by creating dependencies in the data, which can reduce the data-efficiency of learning algorithms Augmenting training data to improve equity can reduce data-efficiency, and ultimately, model performance. Impact does depend on the augmentation strategy, so we study common proposals for equity, next. In Section 3, we conclude by outlining algorithmic insights revealed by our theory. Next, we test these theories on the GuessWhat?! game corpus. Unless otherwise noted, we use identical experimental settings, hyperparameters, etc. as Protected Attribute For these experiments, we use gender (male and female) as the protected attribute. When the protected attribute is female gender (F), we set a = 1 as long as all human dialogues use at least one female-gendered word. 9 When the protected attribute is male gender (M), we set a = 1 as long as all human dialogues use at least one male-gendered word. 10 Conceptually, this labeling scheme uses human annotator consensus to determine when it is appropriate or inappropriate to ask gender-specific questions: if a = 1, all human annotators perceive the protected gender to be present in the image and relevant to gameplay. Importantly, the labeling scheme also implies that the human dialogue satisfies our assumptions in § 3.3: context awareness (Def. 3.4) and context preservation (Def. 3.5); i.e., as shown in Appendix A.3. 
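A minimal sketch of the annotator-consensus labeling scheme follows: a = 1 for the protected attribute F exactly when every human dialogue for the image uses at least one female-gendered word (and analogously for M). The gendered word lists below are illustrative placeholders, not the exact lexicons used in the experiments.

```python
# Minimal sketch of the annotator-consensus labeling scheme for the protected
# attribute: a = 1 for attribute F when every human dialogue for the image uses
# at least one female-gendered word (analogously for M).
from typing import List, Set

FEMALE_WORDS: Set[str] = {"woman", "women", "girl", "she", "her", "lady"}   # placeholder lexicon
MALE_WORDS: Set[str] = {"man", "men", "boy", "he", "his", "guy"}            # placeholder lexicon


def protected_attribute(human_dialogues: List[str], gendered_words: Set[str]) -> int:
    """Return 1 iff every human dialogue for this image uses a gendered word."""
    def uses_gendered_word(dialogue: str) -> bool:
        tokens = dialogue.lower().split()
        return any(tok.strip("?.,!") in gendered_words for tok in tokens)
    return int(all(uses_gendered_word(d) for d in human_dialogues))


if __name__ == "__main__":
    dialogues = ["is it the woman on the left? yes", "is she holding a racket? no"]
    print(protected_attribute(dialogues, FEMALE_WORDS),   # 1: consensus on female
          protected_attribute(dialogues, MALE_WORDS))     # 0
```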
Different conceptualizations of how the protected attribute should be defined are possible, but we focus on this scheme because it allows us to simulate the assumptions of our theory in § 3.3, and therefore, best test our theory in practice. As a final note, while we focus on male/female gender in these experiments, using more than two categories for protected attributes is also possible. Simply, one checks the parity gap for each new protected attribute to be added. This would allow our theoretical and empirical study to be extended to general multi-category attributes; e.g., race or religion. CL Algorithm CL is a cooperative learning algorithm proposed by LEATHER Algorithm An extension of CL proposed by DS Algorithm A modification of the LEATHER algorithm. While re-incorporating human data, an augmentation (downsampling) strategy is used to balance occurrence of protected attributes; i.e., like other strategies for equity Human-Likeness Evaluation To evaluate human likeness, we use metrics proposed by Equity Evaluation To evaluate equity, we focus on accuracy parity; i.e., score parity with scoring function described in Eq. ( LEATHER produces human-like, equitable text. In Tab. 1, LEATHER improves upon CL in terms of both human-likeness and equity, across all metrics. These observations validate our theoretical analyses. In particular, LEATHER (as the name implies) is designed based on the LEATHER framework to minimize test divergence. From previous work, we know this means it should improve human-likeness DS does not improve equity as well as LEATHER, but overall, its behavior aligns with our theoretical predictions. Thm. 3.2 also makes the observation that data-augmentation strategies like DS can sometimes perform worse than alternatives which focus only on human-likeness (i.e., due to datainefficiency). Since DS does augment data significantly, we might expect DS to perform worse than LEATHER, and ultimately, it does in Tab. 1 (all metrics but ∆ M). With that said, another of our theoretical results (Thm. 3.1) suggests data-augmented versions of LEATHER algorithms like DS can, in fact, improve equity, especially in more general cases where data does not satisfy the circumstances of our experimental data. In experiments, this insight is reflected in comparing DS and the baseline. DS outperforms CL in Tab. 1 on all metrics but TD F. Test divergence models equity well. Finally, we recall test divergence is the key link between ex-isting learning theoretic work and our analysis of equitable dialogue. In particular, we show, theoretically speaking, that 2TD always bounds the parity gap ∆, which measures equity. As a result, learning theory algorithms can implicitly learn to be fair in many cases. Indeed, empirical results in Tab. 1 agree with this theoretical bound in every case, and further, suggest TD may be useful at ranking equity of algorithms, since TD is predictive of all improvements from CL to LEATHER. Again, our theoretical predictions match our empirical observations, highlighting the practical utilitiy of our theory. In this paper, we provide a first in-depth study of equity in dialogue, formalizing mathematical notions of equity in dialogue and using computational learning theory to study how equity can be achieved through algorithm design. Our empirical results show how our formal theoretical study of equity in dialogue can be used, with great benefit, to select and design algorithms in a task-oriented dialogue setting. 
In particular, we can: design algorithms that achieve both equity and humanlikeness, predict unexpected consequences of dataaugmentation, and provide proxy statistics that are useful in ranking the equity of algorithms. To promote further research, our code, data, and a python package will be made publicly available. While our theoretical work is broadly applicable to any protected attribute and any dialogue task, our empirical study has primarily tested gender bias on the GuessWhat?! task. Continued experimental study on a wider range of protected attributes and tasks can better support our mathematical findings. Also, users of our theory should verify the assumptions of our theory when using it to draw insights on new datasets. Specifically, as the type of data bias changes, it is possible the assumptions of Thm. 3.2 may no longer be met. Users of our theory should take care in ensuring context-awareness and context-preservation, for example, are reasonable assumptions on new data, prior to applying the insights of § 3.3. Lastly, while all of our gender annotations come from human annotators, only a smaller subset come from annotators primed to judge correctness/equity of gender reference. So, more in-depth human evaluation can better support our theoretical results as well. The goal of this paper is to present a theoretically grounded framework to mitigate bias in dialogue systems. Our theoretical and empirical techniques can lead to important insights/solutions for algorithm design that reduce bias, along with any unintended harm associated with this bias. With this said, some of the proposed algorithms rely on pretrained models such as word or image embeddings, and any harm or bias associated with these models can still be present after efforts to mitigate. Thus, models trained with these techniques should still undergo rigorous human evaluation for presence of biases before being deployed. Our human subject board approved our protocol. Human subjects participated voluntarily and were compensated according to the regulations approved by our human subject review board. So, x(1) = x(0) = 1. Note, the other axioms of probabilities follow directly because the constraints only restrict the probabilities for (D, C, A) to existing (known) probability functions. Thus, we know a distribution satisfying the needed constraints in Eq. ( Equity of Goal Finally, it remains to see how the distribution corresponding to (D, C, A) is equitable. Score parity follows easily by definition of à = v( D). In particular, the test divergence on the human data is 0, so Eq. ( The downsampling process for the DS algorithm restricts to images which are determined to have either of the protected attributes -i.e., a = 1 when M is the protected attribute or a = 1 when F is the protected attribute -such that there are an equal number of occurrences of a = 1 for both protected attributes. That is, in the end result, the new training dataset has an equal number of occurrences where annotator consensus identified a male or a female, and all other images are thrown out. This is achieved through a simple randomized filtering approach. As noted, images without a = 1 for either protected attribute are also thrown out. 
This allows us to ensure we are training a (single) model that will be equitable on both protected attributes simultaneously. Downsampling to create the equitable distribution is done in a similar manner, except that, since we don't need to worry about inefficiency in model training any longer, a separate dataset is created for each protected attribute. So, there is one dataset with balanced occurrences of a = 1 and a = 0 when the protected attribute is M, and another dataset with balanced occurrences when the attribute is F. Importantly, because the labeling scheme enforces that our assumptions about context hold in the human data (see Appendix A.3), this should create an equitable goal. Here, we introduce the GuessWhat?! visual dialogue game. Gameplay An image and a goal-object within the image are both randomly chosen. A question-player with access to the image asks yes/no questions to an answer-player who has access to both the image and the goal-object. The question-player's goal is to identify the goal-object. The answer-player's goal is to reveal the goal-object to the question-player by answering the yes/no questions appropriately. The question- and answer-player converse until the question-player is ready to make a guess or at most m questions have been asked. Cooperative Learning generates questions Q_i and an object guess Ô based on the answer player's answers A_i as below:

Ô = Gues_α(Enc_β(I, D)),
Q_{i+1} = QGen_θ(Enc_β(I, Q_1, A_1, ..., Q_i, A_i)).   (28)

The neural model QGen_θ is called the question generator and the neural model Gues_α is called the object guesser. The final neural model Enc_β is called the encoder and captures pertinent features for the former models to share. All model parameters (α, β, θ) are first pre-trained on human-human dialogue and then the model components are further updated through cooperative self-play.
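The gameplay description and Eq. (28) suggest the following structural sketch of one cooperative game; the three model components and the answer player are placeholders (e.g. torch modules or simple callables), and the rule that lets the question-player guess early is omitted for brevity.

```python
# A structural sketch of cooperative gameplay in Eq. (28): the encoder summarizes the image
# and dialogue so far, the question generator asks the next question, and the guesser picks
# the goal object once the dialogue ends. All models here are placeholders.
def play_game(image, answer_player, encoder, qgen, guesser, max_questions):
    dialogue = []
    for _ in range(max_questions):
        state = encoder(image, dialogue)          # Enc_beta(I, Q_1, A_1, ..., Q_i, A_i)
        question = qgen(state)                    # Q_{i+1} = QGen_theta(state)
        answer = answer_player(image, question)   # yes/no answer from the answer player
        dialogue.append((question, answer))
    return guesser(encoder(image, dialogue))      # O_hat = Gues_alpha(Enc_beta(I, D))
```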
Word Embeddings through Hellinger PCA
Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word cooccurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.
Building word embeddings has always generated much interest for linguists. Popular approaches such as Brown clustering algorithm This paper aims to show that such good word embeddings can be obtained using simple (mostly linear) operations. We show that similar word embeddings can be computed using the word cooccurrence statistics and a well-known dimensionality reduction operation such as Principal Component Analysis (PCA). We then compare our embeddings with the CW We claim that, assuming an appropriate metric, a simple spectral method as PCA can generate word embeddings as good as with deep-learning architectures. On the other hand, deep-learning architectures have shown their potential in several supervised NLP tasks, by using these word embeddings. As they are usually generated over large corpora of unlabeled data, words are represented in a generic manner. Having generic embeddings, good performance can be achieved on NLP tasks where the syntactic aspect is dominant such as Part-Of-Speech, chunking and NER
As 80% of the meaning of English text comes from word choice and the remaining 20% comes from word order Linguists assumed long ago that words occurring in similar contexts tend to have similar meanings It has been shown that using word embeddings as features helps to improve general performance on many NLP tasks A NNLM learns which words among the vocabulary are likely to appear after a given sequence of words. More formally, it learns the next word probability distribution. Instead, simply counting words on a large corpus of unlabeled text can be performed to retrieve those word distributions and to represent words "You shall know a word by the company it keeps" where n(w, T ) is the number of times each context word w occurs after the sequence T . The size of T can go from 1 to t words. The next word probability distribution p for each word or sequence of words is thus obtained. It is a multinomial distribution of |D| classes (words). A co-occurence matrix of size N × |D| is finally built by computing those frequencies over all the N possible sequences of words. Similarities between words can be derived by computing a distance between their corresponding word distributions. Several distances (or metrics) over discrete distributions exist, such as the Bhattacharyya distance, the Hellinger distance or Kullback-Leibler divergence. We chose here the Hellinger distance for its simplicity and symmetry property (as it is a true distance). Considering two discrete probability distributions P = (p 1 , . . . , p k ) and Q = (q 1 , . . . , q k ), the Hellinger distance is formally defined as: which is directly related to the Euclidean norm of the difference of the square root vectors: Note that it makes more sense to take the Hellinger distance rather than the Euclidean distance for comparing discrete distributions, as P and Q are unit vectors according to the Hellinger distance ( √ P and √ Q are units vector according to the 2 norm). As discrete distributions are vocabulary sizedependent, using directly the distribution as a word embedding is not really tractable for large vocabulary. We propose to perform a principal component analysis (PCA) of the word cooccurence probability matrix to represent words in a lower dimensional space while minimizing the reconstruction error according to the Hellinger distance. Traditional NLP approaches extract from documents a rich set of hand-designed features which are then fed to a standard classification algorithm. The choice of features is a task-specific empirical process. In contrast, we want to pre-process our features as little as possible. In that respect, a multilayer neural network architecture seems appropriate as it can be trained in an end-to-end fashion on the task of interest. The sentence-level approach aims at tagging with a label each word in a given sentence. Embeddings of each word in a sentence are fed to linear and non-linear classification models followed by a CRF-type sentence tag inference. We chose here neural networks as classifiers. Sliding window Context is crucial to characterize word meanings. We thus consider n context words around each word x t to be tagged, leading to a window of N = (2n + 1) words [x] t = (x t-n , . . . , x t , . . . , x t+n ). As each word is embedded into a d wrd -dimensional vector, it results a d wrd × N vector representing a window of N words, which aims at characterizing the middle word x t in this window. 
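Before turning to the supervised architectures, the embedding construction described above can be condensed into a short sketch; NumPy and scikit-learn are used here for convenience, and the toy counts stand in for co-occurrence statistics gathered from a real corpus.

```python
# A minimal sketch of Hellinger PCA over a word co-occurrence matrix, assuming `counts` is an
# N x |D| matrix of context-word counts for N target words; corpus construction is omitted.
import numpy as np
from sklearn.decomposition import PCA

def hellinger_pca(counts, dim=50):
    # Row-normalize counts into next-word probability distributions p(w | target).
    probs = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    # PCA in Euclidean space over the square-root vectors minimizes reconstruction
    # error under the Hellinger distance between the original distributions.
    return PCA(n_components=dim).fit_transform(np.sqrt(probs))

rng = np.random.default_rng(0)
toy_counts = rng.integers(0, 5, size=(1000, 10000))  # stand-in for real co-occurrence counts
embeddings = hellinger_pca(toy_counts, dim=50)        # one 50-d embedding per target word
```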
Given a complete sentence of T words, we can obtain for each word a context-dependent representation by sliding over all the possible windows in the sentence. A same linear transformation is then applied on each window for each word to tag: where W ∈ R M ×d wrd N and b ∈ R M are the parameters, with M the number of classes. Alternatively, a one hidden layer non-linear network can be considered: where U ∈ R n hu ×d wrd N , with n hu the number of hidden units and h(.) a transfer function. CRF-type inference There exists strong dependencies between tags in a sentence: some tags cannot follow other tags. To take the sentence structure into account, we want to encourage valid paths of tags during training, while discouraging all other paths. Considering the matrix of scores outputs by the network, we train a simple conditional random field (CRF). At inference time, given a sentence to tag, the best path which minimizes the sentence score is inferred with the Viterbi algorithm. More formally, we denote θ all the trainable parameters of the network and f θ ([x] T 1 ) the matrix of scores. The element [f θ ] i,t of the matrix is the score output by the network for the sentence [x] T 1 and the i th tag, at the t th word. We introduce a transition score [A] i,j for jumping from i to j tags in successive words, and an initial score [A] i,0 for starting from the i th tag. As the transition scores are going to be trained, we define 1 is then given by the sum of transition scores and networks scores: We normalize this score over all possible tag paths [j] T 1 using a softmax, and we interpret the resulting ratio as a conditional tag path probability. Taking the log, the conditional probability of the true path [y] T 1 is therefore given by: log where we adopt the notation Computing the log-likelihood efficiently is not straightforward, as the number of terms in the logadd grows exponentially with the length of the sentence. It can be computed in linear time with the Forward algorithm, which derives a recursion similar to the Viterbi algorithm (see 1 , the best tag path which minimizes the sentence score (6): In contrast to classical CRF, all parameters θ are trained in a end-to-end manner, by backpropagation through the Forward recursion, following The document-level approach is a document binary classifier, with classes y ∈ {-1, 1}. For each document, a set of (trained) filters is applied to the sliding window described in section 4.1. The maximum value obtained by the i th filter over the whole document is: It can be seen as a way to measure if the information represented by the filter has been captured in the document or not. We feed all these intermediate scores to a linear classifier, leading to the following simple model: In the case of movie reviews, the i th filter might capture positive or negative sentiment depending on the sign of α i . As in section 4.1, we will also consider a non-linear classifier in the experiments. Training The neural network is trained using stochastic gradient ascent. We denote θ all the trainable parameters of the network. Using a training set T , we minimize the following soft margin loss function with respect to θ: As seen in section 3, the process to compute generic word embedding is quite straightforward. These embeddings can then be used as features for supervised NLP systems and help to improve the general performance We consider a fixed-sized word dictionary D. Given a sequence of N words w 1 , w 2 , . . . 
, w N , each word w n ∈ W is first embedded into a d wrd -dimensional vector space, by applying a lookup-table operation: where the matrix W ∈ R d wrd ×|D| represents the embeddings to be tuned in this lookup layer. W wn ∈ R d wrd is the w th column of W and d wrd is the word vector size. Given any sequence of N words [w] N 1 in D, the lookup table layer applies the same operation for each word in the sequence, producing the following output matrix: Training Given a task of interest, a relevant representation of each word is then given by the corresponding lookup table feature vector, which is trained by backpropagation. Word representations are initialized with existing embeddings. We evaluate the quality of our embeddings obtained on a large corpora of unlabeled text by comparing their performance against the CW (Collobert and Weston, 2008), Turian Our English corpus is composed of the entire English Wikipedia The resulting embeddings are denoted E-PCA. The Hellinger PCA is very fast to compute. We report in Table We compare our H-PCA's embeddings with the following publicly available embeddings: • LR-MVL • CW • Turian • HLBL 5 : it covers 246,122 words with 50 or 100 dimensions for each word. They were trained on the RCV1 corpus using a Hierarchical Log-Bilinear Model. We used only the 50 dimensions. Using word embeddings as feature proved that it can improve the generalization performance on several NLP tasks Named Entity Recognition (NER) It labels atomic elements in the sentence into categories such as "PERSON" or "LOCATION". The CoNLL 2003 setup The NER evaluation task is mainly syntactic. As we wish to evaluate whether our word embeddings can also capture semantic, we trained the document-level architecture described in section 4.2 over a movie review task. We used a collection of 50,000 reviews from IMDB Word embeddings are continuous vector spaces that are not necessarily in a bounded range. To avoid saturation issues in the network architectures, embeddings need to be properly normalized. Considering the matrix of word embeddings E, the normalized embeddings are: where Ē is the mean of the embeddings, σ(E) is the standard deviation of the embeddings and λ is a normalization factor. Figure H-PCA's embeddings Results summarized in Table Embeddings fine-tuning We note that tuning the embeddings by backpropagation increases the general performance on both NER and movie review tasks. The increase is, in general, higher for the movie review task, which reveals the importance of embedding fine-tuning for NLP tasks with a high semantic component. We show in Table Linear vs nonlinear model We also report results with a linear version of our neural networks. Having non-linearity helps for NER. performs as well as the non-linear approach for the movie review task. Our linear approach captures all the necessary sentiment features to predict whether a review is positive or negative. It is thus not surprising that a bag-of-words based method can perform well on this task We have demonstrated that appealing word embeddings can be obtained by computing a Hellinger PCA of the word co-occurence matrix. While a neural network language model can be painful and long to train, we can get a word co-occurence matrix by simply counting words over a large corpus. The resulting embeddings give similar results on NLP tasks, even from a N × 10, 000 word co-occurence matrix computed with only one word of context. 
It reveals that having a significant, but not too large, set of common words seems sufficient for capturing most of the syntactic and semantic characteristics of words. As PCA of an N × 10,000 matrix is really fast and not memory consuming, our method gives an interesting and practical alternative to neural language models for generating word embeddings. However, we showed that deep learning is an interesting framework to fine-tune embeddings over specific NLP tasks. Our H-PCA's embeddings are available online, here:
Attention and Edge-Label Guided Graph Convolutional Networks for Named Entity Recognition
It has been shown that named entity recognition (NER) could benefit from incorporating the long-distance structured information captured by dependency trees. However, dependency trees built by tools usually have a certain percentage of errors. Under such circumstances, how to better use relevant structured information while ignoring irrelevant or wrong structured information from the dependency trees to improve NER performance is still a challenging research problem. In this paper, we propose the Attention and Edge-Label guided Graph Convolution Network (AELGCN) model. Then, we integrate it into BiLSTM-CRF to form BiLSTM-AELGCN-CRF model. We design an edge-aware node joint update module and introduce a node-aware edge update module to explore hidden structured information entirely and solve the wrong dependency label information to some extent. After two modules, we apply attention-guided GCN, which automatically learns how to attend to the relevant structured information selectively. We conduct extensive experiments on several standard datasets across four languages and achieve better results than previous approaches. Through experimental analysis, it is found that our proposed model can better exploit the structured information on the dependency tree to improve the recognition of long entities.
Named Entity Recognition (NER) is the recognition of entities with specific meanings in the text, mainly including person, organization, location, etc. NER is the fundamental tasks for many natural language processing tasks such as relation extraction A dependency tree reveals the syntactic structure of a language unit by analyzing the dependency relationships between its components, and "dependency" refers to the relationship between related words while specifying a dependency relationship to form a syntactic tree reflecting the syntactic relationships between words in a sentence. However, how to better use relevant structured information while ignoring irrelevant or wrong structured information for NER remains a research question to be answered. For example, Figure However, under a different context, as shown in Figure Even worse, there are some incorrect dependency labels in the dataset, these incorrect dependency labels may convey the wrong information for NER. Figure We propose a novel dependency-based named entity recognition model to address the above problems, which improves the named entity recognition performance by exploiting syntactic dependency information with graphical neural networks. The model obtains contextual information by BiL-STM and then by Attention and Edge-Label guided Graph Convolution Network (AELGCN) to integrate better contextual and structured information. For each AELGCN layer, an edge-aware node joint update module is firstly performed for aggregating information from neighbors and different dependency labels. Then a node-aware edge update module is used to update the dependency label representation by its connected node representations, which makes dependency label representation more informative. After that, we introduce attentionguided GCN (AGGCN) Our contributions can be summarized as follows: • We propose an edge-aware node joint update module and introduce a node-aware edge update module. These two modules exploit the adjacency matrix and dependency label embedding adjacency matrix to learn structured information representation in a contextdependent manner and mitigate the impact of incorrect dependency labels. • We introduce AGGCN, which exploits the multi-head self-attention mechanism better learn how to select effective structured information. We combine the two modules with AGGCN to construct our proposed AEL-GCN model. Finally, we integrate AELGCN into the BiLSTM-CRF model to form a novel model called the BiLSTM-AELGCN-CRF model. The model effectively leverages the structured information, thus improving the performance of NER. • We have conducted extensive experiments on standard datasets across four languages. On these datasets, our proposed model significantly outperforms previous approaches.
The traditional feature-based NER approaches require considerable feature engineering skills and domain expertise. However, deep neural network based models can build reliable NER systems with much less effort in designing features. BiLSTM-CRF model To further improve named entity recognition, the representation of words was later enhanced by pretrained language models Syntactic information also plays an important role in NER. This section presents our BiLSTM-AELGCN-CRF model in detail. Figure Following the work by where w t is the pre-trained word embedding, character-level embedding c t is learned from character-based BiLSTM, r t and p t embeddings are randomly initialized and fine-tuned during training. In addition, we use contextualized representations such as BERT Given the input representation x, then x is fed into BiLSTM, which is applied to generate contextual representation. The BiLSTM enables the model to get contextual information from both directions. where -→ θ and ←θ are learnable parameters, respectively. In this subsection, we first introduce the GCN model and then present the proposed AELGCN, which contains an edge-aware node joint update module, a node-aware edge update module and the attention-guided GCN. GCN where , the calculation formula is as follows: where W l is a linear transformation, b l is a bias, and σ denotes a nonlinear activation function, e.g., ReLU. A ∈ R n×n is obtained from the dependency tree, which is an adjacency matrix expressing connectivity between nodes. However, directly stacking GCN and LSTM may cause a performance drop Edge-Aware Node Joint Update Module Previous work For this reason, we designed an edge-aware node joint update (EANJU) module. The EANJU module is able to mitigate the above problem. Theoretically, the EANJU module combine the structured information with dependency label information, via pool operation. If this dependency label information is incorrect, the structured information will be polluted. In order to mitigate this problem, we add its original structured information after pool operation to reduce the polluted influence. Firstly, for a given dependency tree, we transform dependency tree into its corresponding adjacency matrix A ∈ R n×n and dependency label embedding adjacency matrix E ∈ R n×n×p where A ij = 1 indicates that node i and node j are connected, which means that node i and node j have dependency relation, E i,j,: ∈ R p denotes the p-dimensional dependency label representation between the node i and node j. With words in sentences interpreted as nodes in the graph, the EANJU module updates the representation for each node. Mathematically, this operation can be defined as follows: Specifically, the aggregation is conducted channel by channel in the adjacency tensor as follows: where E ∈ R n×n×p is the dependency label embedding adjacency matrix from initialization or last AELGCN layer, E l-1 :,:,i denotes the i-th channel slice of E l-1 , H 0 is output of BiLSTM, W 1 ∈ R d×d , W 2 ∈ R d×d are a learnable filter, d is the dimension of BiLSTM output representation and A ∈ R n×n is the adjacency matrix from initialization and σ is the ReLU activation function. A mean-pooling operation is applied to compress features since it covers information from all channels. 
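The update equations themselves are not reproduced in the text above, so the following is only a hedged sketch of an edge-aware node update in the spirit of the EANJU module; the layer shapes, the residual-style combination, and the projection that collapses the label channels are assumptions rather than the authors' exact formulation.

```python
# A hedged sketch of an edge-aware node update. A is the n x n adjacency matrix from the
# dependency tree, E the n x n x p dependency-label embedding tensor, H the n x d node states.
import torch
import torch.nn as nn

class EdgeAwareNodeUpdate(nn.Module):
    def __init__(self, d, p):
        super().__init__()
        self.w_node = nn.Linear(d, d)     # plays the role of W1 (plain structural aggregation)
        self.w_edge = nn.Linear(d, d)     # plays the role of W2 (label-conditioned aggregation)
        self.edge_proj = nn.Linear(p, 1)  # collapses the p label channels to one weight per edge

    def forward(self, H, A, E):
        label_adj = self.edge_proj(E).squeeze(-1) * A   # n x n, zero where there is no edge
        plain = A @ self.w_node(H)                      # original structured information
        labeled = label_adj @ self.w_edge(H)            # label-aware information
        # Adding the plain term back mimics the idea of limiting pollution from wrong labels.
        return torch.relu(plain + labeled)
```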
Node-Aware Edge Update Module Following the work by where ⊕ means the concatenation operator, h l i and h l i denote the representations of node i and j in the l th layer after EANJU operation, E l-1 :,:,i ∈ R p is the relation representation between node i and j, W u ∈ R 2×d+p is a learnable parameters. This updated dependency label embedding adjacency matrix is fed to the next AELGCN layer to perform another round of joint node updates, and such mutual update process can be stacked over L layers. Attention Guided GCN In order to obtain syntactic information from different representation subspaces and learn how to attend to the relevant structured information selectively, we apply attention-guided GCN (AGGCN) where Q t and K t are both equal to EANJU output H l or at layer l -1 of the AGGCN output h l-1 , W t Q and W t K are used to project the input N head ) of the t-th head into a query and a key à ∈ R n×n is the updated adjacency matrix for the t-th head. For each head, AGGCN uses à and a densely connected layer to deepen the layers of the whole AGGCN, to better capture the rich local information and k-hop information. The output of the densely connected layer is Ht ∈ R n×d h , then a linear combination layer is used to merge the output of each head, H = [ H1 , H2 , ..., HN head ]W , where W ∈ R (N head ×d h )×d h is a learnable parameters, H ∈ R n×d h is the final output of AGGCN. After that, Ht will be fed into the next layer of AELGCN to perform the same operation again and get the final output. We use a conditional random field (CRF) The score function is defined as: where T y i ,y i+1 denotes the transition score from label y i to y i+1 , E y i denotes the score of label y i at the i th position and the scores are computed using the hidden state. During training, we minimize the negative log-likelihood to obtain the model parameters. Our proposed method is evaluated on four benchmark NER datasets: and OntoNotes 5.0 For Catalan and Spanish, we use Subs2Vec (Paridon and Thompson, 2020) 100-dimensional embeddings to initialize the word embeddings. For OntoNotes 5.0 Chinese, we use SGNS Word2vec The hidden size of AELGCN and BiLSTM is set as 200, and the number of AELGCN layers L as 2. For AGGCN, we set the number of heads for the attention guided layer as 4, the first block number as 2, and the number of sublayers L in each densely connected layer as 4. Our models are optimized by mini-batch stochastic gradient descent (SGD) with a learning rate of 0.1 and batch size of 20. We use L2 regularization with a parameter of 1e-8 to avoid overfitting. Dropout is applied to word embeddings and hidden states with a rate of 0.5. We ran experiments using Pytorch 1.9.0 on Nvidia Tesla K40m GPU with Intel Xeon E5-2620 CPU. We compare our models with several competitive dependency-based models. • BiLSTM-GCN-CRF • Dependency guided LSTM-CRF (DGLSTM-CRF) • GCN-BiLSTM-CRF • Syn-LSTM-CRF Besides, we compare our model with previous works that have results on these datasets. SemEval 2010 Task 1 OntoNotes Chinese dataset. In general, our proposed method slightly improves performance on short entities compared to other models. Further, our proposed method is more effective for long entities than other dependency-based models in most cases, especially for the Catalan dataset. Impact of AELGCN layers As AELGCN can be stacked over L layers, we investigate the effect of the layer number L on the final performance. 
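Before the layer-count experiment reported next, the attention-guided adjacency at the heart of AGGCN can be sketched as follows; the single-head view and the projection shapes are assumptions based on the description above.

```python
# A sketch of the attention-guided adjacency used by AGGCN: each head replaces the hard
# adjacency matrix with a soft, fully connected one computed by self-attention.
import torch
import torch.nn.functional as F

def attention_adjacency(H, W_q, W_k):
    """H: n x d node representations; W_q, W_k: d x d_head projections for one head."""
    Q, K = H @ W_q, H @ W_k
    scores = Q @ K.T / K.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1)   # n x n soft adjacency for this head

# Each head's soft adjacency then feeds a densely connected GCN block, and the head outputs
# are merged with a linear combination layer, as described above.
```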
We conduct another experiment on the BiLSTM-AELGCN-CRF model with the number of AELGCN layers ∈ {1, 2, 3} on the test datasets. The last AVG bar is obtained by averaging the results of the four test datasets. As shown in Figure In this paper, we propose a novel model named BiLSTM-AELGCN-CRF for the NER task. Specifically, we introduce dependency label information and a multi-head self-attention mechanism into the graph modeling process. Our analysis shows that our method can better capture structured information, which is beneficial for the model to recognize entities. In the future, we would like to apply BiLSTM-AELGCN-CRF to other information extraction tasks, such as relation extraction or joint entity and relation extraction. Moreover, we will continue to explore how to use syntactic information better for NER tasks. The limitation of our model is that its performance is highly dependent on the quality of the dependency trees. In most cases, the quality of the automatically generated dependency trees is good enough for our model. However, in some cases, the dependency trees generated by automatic tools lack sufficient and high-quality dependency information. In such cases, the performance of our method is greatly decreased by the insufficient or poor-quality dependency information, becoming even worse than that of dependency-tree-free methods. This problem can be seen from the result on the OntoNotes Chinese dataset in Table 6. After investigation, it is found that the percentage of entities that have subtrees is only 92.9% for the OntoNotes Chinese dataset, as compared to 98.5%, 100%, and 100% for OntoNotes English, SemEval Catalan and Spanish, respectively.
Marine Variable Linker: Exploring Relations between Changing Variables in Marine Science Literature
We report on a demonstration system for text mining of literature in marine science and related disciplines. It automatically extracts variables (e.g. CO2) involved in events of change/increase/decrease (e.g increasing CO2), as well as cooccurrence and causal relations among these events (e.g. increasing CO2 causes a decrease in pH in seawater), resulting in a big knowledge graph. A web-based graphical user interface targeted at marine scientists facilitates searching, browsing and visualising events and their relations in an interactive way.
Progress in science relies significantly on the premise that -in addition to other methods for gaining knowledge such as experiments and modelling -new knowledge can be inferred by combining existing knowledge found in the literature. Unfortunately such knowledge often remains undiscovered because individual researchers can realistically only read a relatively small part of the literature, typically mostly in the narrow field of their own expertise Text mining of scientific literature has been pioneered in biomedicine and is now finding its way to other disciplines, notably in the humanities and social sciences, holding the promise for knowledge discovery from large text collections. Still, multidisciplinary fields such as marine sci-ence, climate science and environmental science remain mostly unexplored. Due to significant differences between the conceptual frameworks of biomedicine and other disciplines, simply "porting" the biomedical text mining infrastructure to another domain will not suffice. Moreover, the type of questions to be asked and the answers expected from text mining may be quite different. Theories and models in marine science typically involve changing variables and their complex interactions, which includes correlations, causal relations and chains of positive/negative feedback loops, where multicausal events are common. Many marine scientists are thus interested in finding evidence -or counter-evidence -in the literature for events of change and their relations. Here we report on an end-user system, resulting from our ongoing work to automatically extract, relate, query and visualise events of change and their direction of variation. Our text mining efforts in the marine science domain are guided by a basic conceptual model described in Step 1: Document retrieval involves crawling the websites for a predefined set of journals and extracting the text segments of interest from the HTML code, which includes title, authors, abstract, references, etc. Marine-related articles are selected through a combination of term matching with a manually compiled list of key words and a LDA topic model. Step 2: Linguistic analysis consists of tokenisation, sentence splitting, lemmatisation, POS tagging and constituency parsing using the Stanford CoreNLP tools Step 3: Variable and event extraction is performed simultaneously through tree pattern matching, where manually written patterns are matched against lemmatised constituency trees of sentences to extract events (increase/decrease/change) and their variables. It depends on two tools that are part of CoreNLP: Tregex is a library for matching patterns in trees based on tree relationships and regular expression matches on nodes; Tsurgeon is a closely related library for transforming trees through sequences of tree operations. For more details, see Step 4: Generalisation of variables addresses variables that are very long and complex and therefore unlikely to occur more than once. These are generalised (abstracted) by removing nonessential words and/or splitting them into atomic variables. For example, the variable the annual, Milankovitch and continuum temperature is split into three parts, one of which is annual tempera-ture, which is ultimately itself generalised to temperature. This is accomplished through progressive pruning of a variable's syntactic tree, using a combination or tree pattern matching and tree operations. 
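Step 4 is described only at a high level, so the sketch below illustrates the idea of splitting and progressively pruning a complex variable on plain token strings; the real system applies Tregex/Tsurgeon patterns to constituency trees, and the split pattern and droppable-word list here are illustrative assumptions.

```python
# Illustrative sketch of Step 4 (variable generalisation): split a conjoined variable on
# coordinating tokens, then progressively prune non-essential words to obtain generic variants.
import re

SPLIT_PATTERN = re.compile(r",| and | or ")
DROPPABLE = {"the", "a", "an"}  # assumed list of non-essential leading words

def generalise(variable):
    head = variable.split()[-1]                      # e.g. "temperature"
    parts = [p.strip() for p in SPLIT_PATTERN.split(variable) if p.strip()]
    atomic = set()
    for part in parts:
        tokens = [t for t in part.split() if t.lower() not in DROPPABLE]
        atomic.add(" ".join(tokens) if head in tokens else f"{' '.join(tokens)} {head}".strip())
    atomic.add(head)                                 # fully generalised variable
    return sorted(atomic)

print(generalise("the annual, Milankovitch and continuum temperature"))
# -> ['Milankovitch temperature', 'annual temperature', 'continuum temperature', 'temperature']
```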
Step 5: Relation extraction again uses tree-pattern matching with hand-written patterns to extract causal relations between pairs of events, identifying their cause and effect roles. Step 6: Conversion to graph All extracted variables, events and relations are subsequently converted to a single huge property graph, which is stored and indexed in a Neo4j graph database. Step 7: Graph post-processing enriches the initial graph in a number of ways using the Cypher graph query language. Event instance nodes are aggregated into event type nodes. Likewise, causal relation instances are aggregated into causal relation types between event types. Furthermore, co-occurrence counts for event pairs occurring in the same sentence are computed and added as co-occurrence relations between their respective event type nodes. Post-processing also includes the addition of metadata and citation information, obtained through the Crossref metadata API, to article nodes in the graph. The final output is a big knowledge graph (millions of nodes) containing all information extracted from the input text. The graph can be searched in many different ways, depending on interest, using the Cypher graph query language. One possibility is searching for chains of causal relations. The user interface described in the next section offers a more user-friendly way of searching for a certain type of pattern, namely, relations between changing variables.
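As one example of such a search, a causal-chain query could be issued through the Neo4j Python driver as sketched below; the node label EventType, the relationship type CAUSES, the property names, and the connection details are assumptions made for illustration, not the system's documented schema.

```python
# A hedged sketch of querying the knowledge graph for chains of causal relations.
from neo4j import GraphDatabase

CHAIN_QUERY = """
MATCH (a:EventType)-[:CAUSES]->(b:EventType)-[:CAUSES]->(c:EventType)
WHERE a.variable CONTAINS $variable
RETURN a.predicate, a.variable, b.variable, c.variable
LIMIT 25
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(CHAIN_QUERY, variable="CO2"):
        print(record.data())
driver.close()
```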
Although graph search queries can be written by hand, it takes time, effort and a considerable amount of expertise. In addition, large tables are difficult to read and navigate, lacking an easy way to browse the results, e.g., to look up the source sentences and articles for extracted events. Moreover, users need to have a local installation of all required software and data. The Marine Variable Linker (MVL) is intended to solve these problems. Its main function is to enable non-expert users (marine scientists) to easily search the graph database in an interactive way and to present search results in a browsable and visual way. It is a web application that runs on any modern platform with a browser (Linux, Mac OS, Windows, Android, iOS, etc.). It is a graphical user interface, which relies on familiar input components such as buttons and selection lists to compose queries, and uses interactive tables and graphs to present search results. Hyperlinks are used for navigation and to connect related material, e.g., the webpage of the source journal article. Figure Once events are defined, one can search for relations between these events, where queries can be composed in a similar fashion as for events. The first kind of relation is cooccurs, which means that two events co-occur in the same sentence. When two events are frequently found together in a single sentence, they tend to be associated in some way, possibly by correlation. The second kind of relation is causes, which means that two events in a sentence are causally related, where one event is the cause and the other the effect. Causality must be explicitly described in the sentence, for example, by words such as causes, therefore, leads to, etc. Relation search results are presented in two ways. The relation types table contains all pairs of event types, specifying their relation, event predicates, event variables and counts. Figure Clicking on a row in the table or a node in the graph brings up a corresponding instances table (cf. bottom of Figure A demo of an MVL instance indexing 75,221 marine-related abstracts from over 30 journals is currently freely accessible on the web. Our system still makes many errors. Variables and events are sometimes incorrectly extracted (e.g. the variable more iron in Figure
Backward Compatibility During Data Updates by Weight Interpolation
Backward compatibility of model predictions is a desired property when updating a machine learning driven application. It allows to seamlessly improve the underlying model without introducing regression bugs. In classification tasks these bugs occur in the form of negative flips. This means an instance that was correctly classified by the old model is now classified incorrectly by the updated model. This has direct negative impact on the user experience of such systems e.g. a frequently used voice assistant query is suddenly misclassified. A common reason to update the model is when new training data becomes available and needs to be incorporated. Simply retraining the model with the updated data introduces the unwanted negative flips. We study the problem of regression during data updates and propose Backward Compatible Weight Interpolation (BCWI). This method interpolates between the weights of the old and new model and we show in extensive experiments that it reduces negative flips without sacrificing the improved accuracy of the new model. BCWI is straight forward to implement and does not increase inference cost. We also explore the use of importance weighting during interpolation and averaging the weights of multiple new models in order to further reduce negative flips.
In conventional software development it is an established routine to identify and fix regression bugs before deploying a new version. Regression bugs describe defects in already existing features and are particularly sensitive for end users because accustomed workflows are affected. In machine learning driven applications, however, the main focus usually lies on improving the underlying model, and regression is rarely measured, let alone actively
mitigated. This prevents backward compatibility of, e.g., visual search systems. To reduce negative flips during data updates, we propose Backward Compatible Weight Interpolation (BCWI) in this paper. BCWI describes the interpolation between the weights of the old model and the weights of the new model. The interpolation largely recovers the prediction pattern of the old model without hurting the improved accuracy of the new model. The method is informed by recent success of weight interpolation for robust finetuning. Mitigating Regression Previous work focuses mainly on reducing negative flips when updating the model architecture or the pretraining procedure. In these settings the available data is static and not affected by the update as in our work. Negative flips are minimized, for example, by training with a distillation loss that uses the old model as teacher. Weight Interpolation Weight interpolation and weight averaging are known to improve classification performance in different settings. Averaging the weights of multiple model checkpoints along a cyclic learning rate schedule leads to better classification generalization. Continual Learning Continual learning studies the problem of incrementally adding new knowledge to a model while avoiding catastrophic forgetting. Weight regularization is a common way to prevent catastrophic forgetting by keeping the model weights from deviating too far from the old model. Prior Weight Decay In order to measure regression in classification models, we use the negative flip rate NFR = (1/N) Σ_{i=1}^{N} 1[f_θold(x_i) = y_i ∧ f_θnew(x_i) ≠ y_i], where f_θold is the old model and f_θnew the new, updated model. NFR is measured on a given regression set with N input and label pairs (x, y). Negative flips are instances that are predicted correctly by the old model and are incorrectly predicted by the new model. Consequently, NFR is the ratio of negative flips to the total number of instances in the regression set, i.e., the development or test set. We formulate the problem of minimizing regression during data updates as interpolating between the weights of the old and the new model. Interpolating fully toward the old model minimizes the negative flip rate but ultimately sacrifices the improved classification performance. We empirically show that in all but one of the conducted experiments there exists an α > 0 that results in a model that achieves the same classification performance as the target model while significantly reducing negative flips. We call this method Backward Compatible Weight Interpolation (BCWI). The interpolation with a single parameter might not be optimal because not every model weight contributes equally to a model's predictions. The importance of each weight can be quantified by the diagonal of the empirical Fisher information matrix, where c is a normalization constant and ∇_θold is the gradient with respect to the weights of the old model. By using F_old ∈ R^|θ_old| as the importance factor for each parameter in the old model, we get a Fisher-weighted interpolation in which all operations are elementwise. The interpolation is focused on weights that are important for the old model and thus minimizes interference with the weights of the new model. Ensembling the logits of multiple new models reduces negative flips. The data that is available to train a given classification model changes over time. This can be due to several reasons. More labeled data for the existing classes is obtained by annotating instances from the initial source or from observed queries. Data for new classes is added to support additional downstream features, or classes are split up to allow for more fine-grained classification.
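A minimal sketch of the interpolation and the evaluation metric just described is given below; it treats checkpoints as generic mappings from parameter names to tensors, and since the Fisher-weighted equation itself is elided above, that variant is an assumed instantiation rather than the paper's exact formula.

```python
# A minimal sketch of BCWI-style interpolation and the negative flip rate used to evaluate it.
def bcwi(old_state, new_state, alpha):
    """theta = alpha * theta_old + (1 - alpha) * theta_new, elementwise per parameter."""
    return {k: alpha * old_state[k] + (1 - alpha) * new_state[k] for k in new_state}

def fisher_bcwi(old_state, new_state, fisher, alpha):
    """Interpolation focused on weights that matter to the old model, with a normalized
    diagonal Fisher estimate F_old (values in [0, 1]) as the per-parameter importance."""
    return {k: alpha * fisher[k] * old_state[k] + (1 - alpha * fisher[k]) * new_state[k]
            for k in new_state}

def negative_flip_rate(old_preds, new_preds, labels):
    """Fraction of instances the old model got right but the updated model gets wrong."""
    flips = sum(1 for o, n, y in zip(old_preds, new_preds, labels) if o == y and n != y)
    return flips / len(labels)

# Usage: interpolate the two checkpoints, then pick alpha on the development set so that
# accuracy stays above the chosen threshold while NFR is as low as possible.
# new_model.load_state_dict(bcwi(old_model.state_dict(), new_model.state_dict(), alpha=0.3))
```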
The retraining of an existing model on the evolved data basis is called data update. In this work, we focus on two isolated data update scenarios that cover two common use cases, namely adding i.i.d. data and adding new classes. We simulate the two scenarios in order to study the prevalence and mitigation of regression during data updates. Add_Data Scenario In the Add_Data (AD) scenario, the amount of available data is increased by adding new instances for the current set of classes. This is the most basic type of data update and aims at improving the classification performance of the derived model. The additional data is usually obtained by annotating more instances from the initial data source or from the observed model queries. While in the latter case the distribution can shift over time, we assume i.i.d. data for this scenario. Add_Classes Scenario In the Add_Classes (AC) scenario, we study data updates that consists of adding new classes and corresponding instances to the existing data. This is necessary when the text classification based system supports new features. For example, a virtual assistant is extended with a food delivery feature, a news classification model covers emerging topics or medical reports are classified according to new diseases codes. We simulate the two described data update scenarios for three datasets each. MASSIVE The size of the splits was chosen such that the data update leads to a significant improvement of classification accuracy. In order to simulate the addition of new classes for the AC scenario, we limit its old data splits to a subset of the available classes. We evaluate our proposed BCWI method on the above described data update scenarios, each constructed for three different datasets. Experiments are repeated ten times with different random seeds and we report the mean and 95% confidence interval. Detailed setup and tuning of hyperparameters can be found in Appendix A. The α-values in our experiments are tuned to reach the accuracy threshold on the development set. Baselines We compare BCWI to distillation training where the old model is used as the teacher We first discuss the results for the AD scenario shown in the top row of Figure We discuss the results for the BCWI variants proposed in Section 4.1 and 4.2 in this paragraph. FisherBCWI uses the diagonal Fisher information matrix as importance weighting when interpolating between old and new model. In Table To better understand BCWI, we visualize the loss and error landscapes for the old, new and target model in Figure We studied the problem of regression during data updates in text classification. Retraining a model with a larger amount of training data increases accuracy but also introduces negative flips. We propose BCWI which describes the interpolation between the weights of the old model and the weight of the new model. We empirically show on three datasets and two update scenarios that BCWI models significantly reduce negative flips while not sacrificing accuracy. We compare BCWI to strong continual learning methods and achieve similar or better results, while not increasing training or inference cost. Another big advantage of BCWI is that the trade-off parameter α can be tuned without retrain-ing the model. This saves additional training cost and only requires to store the weights of the old and new model. We extend BCWI by using the Fisher information matrix as importance factor in weight interpolation and show that it further reduces negative flips. 
Using multiple new models as in proposed SoupBCWI also reduces regression without increasing the inference cost. In principle BCWI is architecture and task agnostic with the possibility to explore effectiveness in applications such as image classification or natural language generation left for future work. We show the effectiveness of our method on three datasets, two of which are focused on intent detection. While in principle the method is task-agnostic, we didn't present results for more tasks or domains. Another limitation is that we did not show results for BCWI when the training data is updated multiple times and the new model is interpolated successively. We list the hyperparameters used for training the different models in Table For our main experiments we assume full access to the old data. This allows us to train the new model without catastrophic forgetting. To complement these results, we also show the behavior of BCWI when the new model is trained only on the new data (i.e. no access to the old data). The results are presented in Figure Detailed label distribution and number of instances for the AD and AC scenarios for all three datasets are visualized in Figure
What's The Latest? A Question-driven News Chatbot
This work describes an automatic news chatbot that draws content from a diverse set of news articles and creates conversations with a user about the news. Key components of the system include the automatic organization of news articles into topical chatrooms, integration of automatically generated questions into the conversation, and a novel method for choosing which questions to present which avoids repetitive suggestions. We describe the algorithmic framework and present the results of a usability study that shows that news readers using the system successfully engage in multi-turn conversations about specific news stories.
Chatbots offer the ability for interactive information access, which could be of great value in the news domain. As a user reads through news content, interaction could enable them to ask clarifying questions and go in depth on selected subjects. Current news chatbots have minimal capabilities, with content hand-crafted by members of news organizations, and cannot accept free-form questions. To address this need, we design a new approach to interacting with large news collections. We designed, built, and evaluated a fully automated news chatbot that bases its content on a stream of news articles from a diverse set of English news sources. This in itself is a novel contribution. Our second contribution is with respect to the scoping of the chatbot conversation. The system organizes the news articles into chatrooms, each revolving around a story, which is a set of automatically grouped news articles about a topic (e.g., articles related to Brexit). The third contribution is a method to keep track of the state of the conversation to avoid repetition of information. For each news story, we first generate a set of essential questions and link each question with content that answers it. The motivating idea is: two pieces of content are redundant if they answer the same questions. As the user reads content, the system tracks which questions are answered (directly or indirectly) with the content read so far, and which remain unanswered. We evaluate the system through a usability study. The remainder of this paper is structured as follows. Section 2 describes the system and the content sources, Section 3 describes the algorithm for keeping track of the conversation state, Section 4 provides the results of a usability study evaluation and Section 5 presents relevant prior work. The system is publicly available at
This section describes the components of the chatbot: the content source, the user interface, the supported user actions and the computed system answers. Appendix A lists library and data resources used in the system. We form the content for the chatbot from a set of news sources. We have collected an average of 2,000 news articles per day from 20 international news sources starting in 2010. The news articles are clustered into stories: groups of news articles about a similar evolving topic, and each story is automatically named The chatbot supports information-seeking: the user is seeking information and the system delivers in- formation in the form of news content. The homepage (Figure (1) clarity to the user, as the chatrooms allow the user to exit and enter chatrooms to come back to conversations, and (2) limiting the scope of each dialogue is helpful from both a usability and a technical standpoint, as it helps reduce ambiguity and search scope. For example, answering a question like: "What is the total cost to insurers so far?" is easier when knowing the scope is the Australia Fires, compared to all of news. Articles in a story are grouped into events, corresponding to an action that occurred in a particular time and place. For each event, the system forms an event message by combining the event's news article headlines generated by an abstractive summarizer model Zone 2 in Figure Because of the difference in respective roles, we expect user messages to be shorter than system responses, which we aim to be around 30 words. During the conversation, the user can choose among different kinds of actions. Explore the event timeline. A chatroom conversation starts with the system showing the two most recent event messages of the story (Figure Clarify a concept. The user can ask a clarification question regarding a person or organization (e.g., Who is Dennis Muilenburg?), a place (e.g., Where is Lebanon?) or an acronym (e.g., What does NATO stand for?). For a predetermined list of questions, the system will see if an appropriate Wikipedia entry exists, and will respond with the first two paragraphs of the Wikipedia page. For geographical entities, the system additionally responds with a geographic map when possible. Ask an open-ended question. A text box (Zone 4 in Figure Select a recommended question. A list of three questions generated by the algorithm described in Section 3 is suggested to the user at the bottom of the conversation (Zone 3 in Figure One key problem in dialogue systems is that of keeping track of conveyed information, and avoiding repetition in system replies (see example in Figure We propose a solution that takes advantage of a Question and Answer (Q&A) system. As noted above, the motivating idea is that two pieces of content are redundant if they answer the same questions. In the example of Figure Our procedure to track the knowledge state of a news conversation consists of the following steps: (1) generate candidate questions spanning the knowledge in the story, (2) build a graph connecting paragraphs with questions they answer, (3) during a conversation, use the graph to track what questions have been answered already, and avoid using paragraphs that do not answer new questions. Question Candidate Generation. We fine-tune a GPT2 language model For a given paragraph, we reduce the set of questions by deduplicating questions that are lexically close (differ by at most 2 words), and removing questions that are too long (>12 words) or too short (<5 words). 
Building the P/Q graph. We train a standard Q&A model, a Roberta model Because we used a large beam-size when generating the questions, we perform a pruning step on the questions set. Our pruning procedure is based on the realization that two questions are redundant if they connect to the same subset of paragraphs (they cover the same content). Our objective is to find the smallest set of questions that cover all paragraphs. This problem can be formulated as a standard graph theory problem known as the set cover problem, and we use a standard heuristic algorithm The P/Q graph embodies interesting properties. First, the degree of a question node measures how often a question is answered by distinct paragraphs, providing a measure of the question's importance to the story. The degree of a paragraph node indicates how many distinct questions it answers, an estimate of its relevance to a potential reader. Finally, the graph can be used to measure question relatedness: if two questions have non-empty neighboring sets (i.e., some paragraphs answer both questions), they are likely to be related questions, which can be used as a way to suggest follow-up questions. Using the P/Q graph. At the start of a conversation, no question is answered, since no paragraph has been shown to the user. Therefore, the system initializes a blank P/Q graph (left graph in Figure As the conversation moves along, more paragraphs are read, increasing the number of answered questions, which in turn, increases the number of uninformative paragraphs. We program the system to prioritize paragraphs that answer the most unanswered questions, and disregard uninformative paragraphs. We further use the P/Q graph to recommend questions to the user. We select unanswered questions and prioritize questions connected to more unread paragraphs, recommending questions three at a time. We conducted a usability study in which participants were assigned randomly to one of three configurations: • TOPQR: the recommended questions are the most informative according to the algorithm in Section 3 (N=18), • RANDQR: the recommended questions are randomly sampled from the questions TOPQR would not select (however, near duplicates will appear in this set) (N=16), • NOQR: No questions are recommended, and the Question Recommendation module (Zone 3 in Figure These are contrasted in order to test (a) if showing automatically generated questions is beneficial to news readers, and (b) to assess the question tracking algorithm against a similar question recommendation method with no conversation state. We used Amazon Mechanical Turk to recruit participants, restricting the task to workers in Englishspeaking countries having previous completed 1500 tasks (HITs) and an acceptance rate of at least 97%. Each participant was paid a flat rate of $2.50 with the study lasting a total of 15 minutes. During the study, the participants first walked through an introduction to the system, then read the news for 8 minutes, and finally completed a short survey. During the eight minutes of news reading, participants were requested to select at least 2 stories to read from a list of the 20 most recently active news stories. The survey consisted of two sections: a satisfaction section, and a section for general free-form feedback. 
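Two of the graph operations above lend themselves to a compact sketch, with a dict of sets standing in for the bipartite P/Q graph; the greedy covering heuristic is the standard one, and the selection rule for the next paragraph is an assumption consistent with the description above.

```python
# A sketch of the two P/Q-graph operations described above: greedy set cover to prune
# redundant questions, and selection of the paragraph answering the most unanswered questions.
def greedy_question_cover(question_to_paragraphs):
    """Keep a small set of questions that together cover all paragraphs (greedy heuristic)."""
    uncovered = set().union(*question_to_paragraphs.values())
    kept = []
    while uncovered:
        q = max(question_to_paragraphs, key=lambda q: len(question_to_paragraphs[q] & uncovered))
        kept.append(q)
        uncovered -= question_to_paragraphs[q]
    return kept

def next_paragraph(paragraph_to_questions, answered):
    """Prefer the paragraph that answers the most questions not yet answered in the chat."""
    best = max(paragraph_to_questions, key=lambda p: len(paragraph_to_questions[p] - answered))
    return best if paragraph_to_questions[best] - answered else None  # None -> uninformative
```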
The satisfaction of the participants was surveyed using the standard Questionnaire for User Interaction Satisfaction (QUIS) We observed that participants in the QR-enabled interfaces (TOPQR and RANDQR) had longer conversations than the NOQR setting, with an average chatroom conversation length of 24.9 messages in the TOPQR setting. Even though the TOPQR setting had average conversation length longer than RANDQR, this was not statistically significant. This increase in conversation length is mostly due to the use of recommended questions, which are convenient to click on. Indeed, users clicked on 8.2 questions on average in RANDQR and 11.9 in TOPQR. NOQR participants wrote on average 2.2 of their own questions, which was not statistically higher than TOPQR (1.5) and RANDQR (1.1), showing that seeing recommended questions did not prevent participants from asking their own questions. When measuring the latency of system answers to participant questions, we observe that the average wait time in TOPQR (1.84 seconds) and RANDQR (1.88 seconds) settings is significantly lower than NOQR (4.51 seconds). This speedup is due to our ability to pre-compute answers to recommended questions, an additional benefit of the QR graph pre-computation. Overall, the systems with question recommendation enabled (TOPQR and RANDQR) obtained higher average satisfaction on most measures than the NOQR setting. That said, statistical significance was only observed in 4 cases between TOPQR and NOQR, with participants judging the TOPQR interface to be more stimulating and satisfying. Although not statistically significant, participants rated the suggested questions for TOPQR almost 1 point higher than RANDQR, providing some evidence that incorporating past viewed information into question selection is beneficial. Participants judged the answers to be more informative in the TOPQR setting. We interpret this as evidence that the QR module helps teach users what types of questions the system can answer, enabling them to get better answers. Several NOQR participants asked "What can I ask?" or equivalent. Thirty-four of the fifty-six participants opted to give general feedback via an open ended text box. We tagged the responses into major themes: 1. 19 participants (7 TOPQR, 7 RANDQR, 5 NOQR) expressed interest in the system (e.g., I enjoyed trying this system out. I particularly liked that stories are drawn from various sources.) 2. 11 participants (4, 3, 4) mentioned the system did not correctly reply to questions asked (e.g., Some of the questions kind of weren't answered exactly, especially in the libya article), 3. 10 participants (2, 3, 5) found an aspect of the interface confusing (e.g., This system has potential, but as of right now it seems too overloaded and hard to sort through.) 4. 6 participants (4, 2, 0) thought the questions were useful (e.g., I especially like the questions at the bottom. Sometimes it helps to remember some basic facts or deepen your understanding) The most commonly mentioned limitation was Q&A related errors, a limitation we hope to mitigate as automated Q&A continues progressing. News Chatbots. Several news agencies have ventured in the space of dialogue interfaces as a way to attract new audiences. The chatbots are often manually curated for the dialogue medium and advanced NLP machinery such as a Q&A systems are not incorporated into the chatbot. On BBC's Messenger chatbot Relevant Q&A datasets. 
NewsQA NewsQA's objective was to collect a dataset, and we focus on building a usable dialogue interface for the news with a Q&A component. CoQA In this work, the focus is not on the collection of naturally occurring questions, but on putting a Q&A system to use in a news dialogue system and observing the extent of its use. Question Generation (QG) has become an active area for text generation. A common approach is to use a sequence-to-sequence model During the usability study, we obtained direct and indirect feedback from our users, and we summarize limitations that could be addressed in the system. Inability to Handle Small Talk. Four participants attempted to have small talk with the chatbot (e.g., asking "how are you"). The system most often responded inadequately, saying it did not understand the request. Future work may include gently directing users who engage in small talk to a chitchat-style interface. Inaccurate Q&A system. 32% of the participants mentioned that answers are often off-track or irrelevant. This suggests that further improvements in Q&A systems are needed. Dealing with errors. Within the current framework, errors are bound to happen, and easing the user's path to recovery could improve the user experience. We presented a fully automated news chatbot system, which leverages an average of 2,000 news articles a day from a diverse set of sources to build chatrooms for important news stories. In each room, the system takes note of generated questions that have already been answered, to minimize repetition of information to the news reader. A usability study reveals that when the chatbot recommends questions, news readers tend to have longer conversations, with an average of 24 messages exchanged. These conversations consist of a combination of recommended and user-created questions.
633
1,834
633
GWLAN: General Word-Level AutocompletioN for Computer-Aided Translation
Computer-aided translation (CAT), the use of software to assist a human translator in the translation process, has been proven to be useful in enhancing the productivity of human translators. Autocompletion, which suggests translation results according to the text pieces provided by human translators, is a core function of CAT. There are two limitations in previous research in this line. First, most research works on this topic focus on sentence-level autocompletion (i.e., generating the whole translation as a sentence based on human input), but word-level autocompletion is under-explored so far. Second, almost no public benchmarks are available for the autocompletion task of CAT. This might be among the reasons why research progress in CAT is much slower compared to automatic MT. In this paper, we propose the task of general word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct the first public benchmark 1 to facilitate research in this topic. In addition, we propose an effective method for GWLAN and compare it with several strong baselines. Experiments demonstrate that our proposed method can give significantly more accurate predictions than the baseline methods on our benchmark datasets.
Machine translation (MT) has witnessed great advancements with the emergence of neural machine translation (NMT) We asked two sp Wir haben die Meinung von zwei Fachärzten eingeholt. We asked two experts for their opinion. We sp their opinion. 2009). In spite of this, MT systems cannot replace human translators, especially in the scenarios with rigorous translation quality requirements (e.g., translating product manuals, patent documents, government policies, and other official documents). Therefore, how to leverage the pros of MT systems to help human translators, namely, Computer-aided translation (CAT), attracts the attention of researchers We note two limitations in previous research on the topic of autocompletion for CAT. First, most of previous studies aim to save human efforts by sentence-level autocompletion (Figure In this work, we propose a General Word-Level AutocompletioN (GWLAN) task, and construct a benchmark with automatic evaluation to facilitate further research progress in CAT. Specifically, the GWLAN task aims to complete the target word for human translators based on a source sentence, translation context as well as human typed characters. Compared with previous work, GWLAN considers four most general types of translation context: prefix, suffix, zero context, and bidirectional context. Besides, as in most real world scenarios, we only know the relative position between input words and the spans of translation context in the GWLAN task. We construct a benchmark for the task, with the goal of supporting automatic evaluation and ensuring a convenient and fair comparison among different methods. The benchmark is built by extracting triples of source sentences, translation contexts, and human typed characters from standard parallel datasets. Accuracy is adopted as the evaluation metric in the benchmark. To address the variety of context types and weak position information issue, we propose a neural model to complete a word in different types of context as well as a joint training strategy to optimize its parameters. Our model can learn the representation of potential target words in translation and then choose the most possible word based on the human input. Our contributions are two-fold: • We propose the task of general word-level autocompletion for CAT, and construct the first public benchmark to facilitate research in this topic. • We propose a joint training strategy to optimize the model parameters on different types of contexts together.
Computer-aided translation (CAT) is a widely used practice when using MT technology in the industry. As the the MT systems advanced and improved, various efficient interaction ways of CAT have emerged Sentence-level Autocompletion Most of previous work in autocompletion for CAT focus on sentence-level completion. A common use case in this line is interactive machine translation (IMT) Word-level Autocompletion Word-level autocompletion for CAT is less studied than sentencelevel autocompletion. Others Our work may also be related to previous works in input method editors (IME) In this section, we first describe why we need wordlevel autocompletion in real-world CAT scenarios. We then present the details of the GWLAN task and the construction of benchmark. Why GWLAN? Word level autocompletion is beneficial for improving input efficiency Suppose x = (x 1 , x 2 , . . . , x m ) is a source sequence, s = (s 1 , s 2 , . . . , s k ) is a sequence of human typed characters, and a translation context is denoted by c = (c l , c r ), where c l = (c l,1 , c l,2 , . . . , c l,i ), c r = (c r,1 , c r,2 , . . . , c r,j ). The translation pieces c l and c r are on the left and right hand side of s, respectively. Formally, given a source sequence x, typed character sequence s and a context c, the general word-level autocompletion (GWLAN) task aims to predict a target word w which is to be placed in the middle be-tween c l and c r to constitute a partial translation. Note that in the partial translation consisting of c l , w and c r , w is not necessary to be consecutive to c l,i or c r,1 . For example, in Figure To make our task more general in real-world scenarios, we assume that the left context c l and right context c r can be empty, which leads to the following four types of context: • Zero-context: both c l and c r are empty; • Suffix: c l is empty; • Prefix: c r is empty; • Bi-context: neither c l nor c r is empty. With the tuple (x, s, c), the GWLAN task is to predict the human desired word w. Relation to most similar tasks Some similar techniques have been explored in CAT. To set up a benchmark, firstly we should create a large scale dataset including tuples of (x, s, c, w) for training and evaluating GWLAN models. Ideally, we may hire professional translators to man-ually annotate such a dataset, but it is too costly in practice. Therefore, in this work, we propose to automatically construct the dataset from parallel datasets which is originally used in automatic machine translation tasks. The procedure for constructing our data is the same for train, validation, and test sets. And we construct a dataset for each type of translation context. Assume we are given a parallel dataset {(x i , y i )}, where y i is the reference translation of x i . Then, we can automatically construct the data c i and s i by randomly sampling from y i . We first sample a word w = y i k and then demonstrate how to extract c i for different translation contexts: • Zero-context: both c l and c r are empty; • Suffix: randomly sample a translation piece c r = y p r,1 :p r,2 from y, where k < p r,1 < p r,2 . The c l is empty here; • Prefix: randomly sample a translation piece c l = y p l,1 :p l,2 from y, where p l,1 < p l,2 < k. The c r is empty here; • Bi-context: sample c l as in prefix, and sample c r as in suffix. Then we have to simulate the human typed characters s based on w. For languages like English and German, we sample a position p from the character sequence and the human input s = w 1:p , where 1 ≤ p < L w . 
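A minimal sketch of the example-construction step described above, for the English/German case (the conversion to phonetic symbols for Chinese, discussed next, is omitted). It assumes 0-based indexing and uniform sampling over a non-empty reference; the paper additionally reweights the sampling toward longer words and bounds the context lengths, which is not shown here, and the function name is ours.

```python
import random

def make_example(y, context_type="bi"):
    """Build one GWLAN training example (c_l, c_r, s, w) from a reference
    translation y (a non-empty list of target-side tokens)."""
    k = random.randrange(len(y))            # index of the word to predict
    w = y[k]
    c_l, c_r = [], []
    if context_type in ("prefix", "bi") and k >= 2:
        p2 = random.randrange(1, k)          # choose p1 < p2 < k
        p1 = random.randrange(0, p2)
        c_l = y[p1:p2 + 1]
    if context_type in ("suffix", "bi") and k <= len(y) - 3:
        p1 = random.randrange(k + 1, len(y) - 1)   # choose k < p1 < p2
        p2 = random.randrange(p1 + 1, len(y))
        c_r = y[p1:p2 + 1]
    # Simulate the human-typed characters as a proper prefix of w
    # (guarding the degenerate case of a single-character word).
    p = random.randrange(1, len(w)) if len(w) > 1 else 1
    s = w[:p]
    return c_l, c_r, s, w
```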
For languages like Chinese, the human input is the phonetic symbols of the word, since the word cannot be directly typed into the computer. Therefore, we have to convert w to phonetic symbols that are characters in alphabet and sample s from phonetic symbols like we did on English. Evaluation Metric To evaluate the performance of the well-trained models, we choose accuracy as the evaluation metric: where N match is the number of words that are correctly predicted and N all is the number of testing examples. Given a tuple (x, c, s), our approach decomposes the whole word autocompletion process into two parts: model the distribution of the target word w based on the source sequence x and the translation context c, and find the most possible word w based on the distribution and human typed sequence s. Therefore, in the following subsections, we firstly propose a word prediction model (WPM) to define the distribution p(w|x, c) of the target word w ( §4.1). Then we can treat the human input sequence s as soft constraints or hard constraints to complete s and obtain the target word w ( §4.2). Finally, we present two strategies for training and inference ( §4.3). The purpose of WPM is to model the distribution p(w|x, c). More concretely, we will use a single placeholder [MASK] to represent the unknown target word w, and use the representation of [MASK] learned from WPM to predict it. Formally, given the source sequence x, and the translation context c = (c l , c r ), the possibility of the target word w is: where h is the representation of [MASK], φ is a linear network that projects the hidden representation h to a vector with dimension of target vocabulary size V , and softmax(d) Inspired by the attention-based architectures After learning the representation h of the [MASK] token, there are two ways to use the human input sequence s to determinate the human desired word. Firstly, we can learn the representation of s and use it as a soft constraint while predicting word w. Taking the sentence in Figure , if w starts with s 0, otherwise. where P (•|•) is the probability distribution defined in Eq. ( Suppose D denotes the training data for GWLAN, i.e., a set of tuples (x, c, s, w). Since there are four different types of context in D as presented in §3, we can split D into four subsets D zero , D prefix , D suffix and D bi . To yield good performances on those four types of translation context, we also propose two training strategies. The inference strategy differs accordingly. Strategy 1: One Context Type One Model For this strategy, we will train a model for each translation context, respectively. Specifically, for each type of context t ∈ {zero, prefix, suffix, bi}, we independently train one model θ t by minimizing the following loss L(D t , θ): (3) where P (w|x, c; θ) is the WPM model defined in Eq. 2, |D t | is the size of training dataset D t , and t can be any type of translation context. In this way, we actually obtain four models in total after training. In the inference process, for each testing instance (x, c l , c r , s), we decide its context type t in terms of c l and c r and then use θt to predict the word w. Strategy 2: Joint Model The separate training strategy is straightforward. However, it may also make the models struck in the local optimal. To address these issues, we also propose a joint training strategy, which has the ability to stretch the model out of the local optimal once the parameters is over-fitting on one particular translation context. 
Therefore, using the joint training strategy, we train a single model for all types of translation context by minimizing the following objective: where each L(D t ; θ) is as defined in Eq. 3. In this way, we actually obtain a single model θ after training. In the inference process, for each testing instance (x, c l , c r , s) we always use θ to predict the target word w. We carry out experiments on four GWLAN tasks including bidirectional Chinese-English tasks and German-English tasks. The benchmarks for our experiments are based on the public translation datasets. The training set for two directional Chinese-English tasks consists of 1.25M bilingual sentence pairs from LDC corpora. The toolkit we used to convert Chinese word w to phonetic symbols is pypinyin The main strategies we used to prepare our benchmarks are shown in §3.2. However, lots of trivial instances may be included if we directly use the uniform distribution for sampling, e.g., predicting word "the" given "th". Therefore, we apply some intuitive rules to reduce the probability of trivial instances. For example, we assign higher probability for words with more than 4 characters in English and 2 characters in Chinese, and we require that the lengths of input character sequence s and translation contexts c should not be too long. In the experiments, we evaluate and compare the performance of our methods (WPM-Sep and WPM-Joint) and a few baselines. They are illustrated below, WPM-SEP is our approach with the "one context one model" training and inference strategy in Section §4.3. In other words, we train our model for each translation context separately. WPM-JOINT is our approach with the "joint model" strategy in Section §4.3. We train an alignment model We train a vanilla NMT model using the Transformer-base model. During the inference process, we use the context on the left hand side of human input as the model input, and return the most possible words based on the probability of valid words selected out by the human input. This baseline is inspired by As another baseline, we also train an NMT model based on Transformer, but without position encoding on the target side. While testing, we use the averaged hidden vectors of all the target words outputted by the last decoder layer to predict the potential candidates. Table The method TRANS-PE, which assumes the human input is the next word of the given context, behaves poorly under the more general setting. As the results of TRANS-NPE show, when we use the same model as TRANS-PE and relax the constraint of position by removing the position encoding, the accuracy of the model improves. One interesting finding is that the TRANSTABLE method, which is only capable of leveraging the zero-context, achieves good results on the Chinese-English task when the target language is English. However, when the target language is Chinese, the performance of TRANSTABLE drops significantly. 6 Experimental Analysis In this section, we presents more detailed results on the four translation contexts and analyze the features of GWLAN. These analyses can help us to better understand the task and propose effective approaches in the future. Compared with WPM-SEP, WPM-JOINT shows two advantages. On one hand, even there is only one model, WPM-JOINT yields better performances than WPM-SEP, enabling simpler deployment. This may be caused by that training on multiple related tasks can force the model learn more expressive representations, avoiding over-fitting. 
On the other hand, the variance of results on different translation contexts of WPM-JOINT is smaller, which can provide an more steady autocompletion service. From the viewpoint of joint training, the lower variance may be caused by that WPM-JOINT spends more efforts to minimize the one with maximal risk (i.e., zero-context), although sometimes it may slightly sacrifice the task with minimal risk (i.e., bi-context). The results of WPM-SEP and WPM-JOINT also have some shared patterns. Firstly, the performances of the two methods on prefix and suffix translation contexts are nearly the same. Although the prefix and suffix may play different roles in the SVO language structure, they have little impact on the the autocompletion accuracy using our method. Moreover, among the results on four translation contexts, the performances on bi-context are better than prefix and suffix, and prefix and suffix are better than zero-context. This finding shows that more context information can help to reduce the uncertainty of human desired words. The TRANS-PE method in previous works is more sensitive to the position of human input. The statistical results shows that the averaged distances in the original sentence between the prediction words and translation contexts are various for different translation contexts, which are 7.4, 6.5, 14.1, and 3.2 for prefix, suffix, zero-context, and bi-context, respectively. When the desired words are much closer to the context, TRANS-PE can achieve better performances. Moreover, TRANS-PE can achieve more than 80 accuracy scores when the prediction word is the next word of the given prefix, however, its performance drops significantly when the word is not necessarily conjunct to the prefix. We can also find that TRANS-NPE, which removes the position information of target words, achieves better overall performances compared with TRANS-PE. In contrast, the performance of TRANSTABLE is less affected by the position of the prediction words, which is demonstrated by the low variances on both tasks in Table In this work, the translation contexts are simulated using the references. However, in real-world scenarios, translation contexts may not be perfect, i.e., some words in the translation contexts may be incorrect. In this section, we evaluate the robustness of our model on noisy contexts. We first use the translation table constructed by TRANSTABLE to find some target words that share the same source words with the original target words, and then use those found words as noise tokens. The robustness results are shown in Figure In this work, we formalize the task as a classification problem. However, the generation formalization also deserves to be explored in the future. For example, the generation may happen in two circumstances: word-level completion based on subwords, and phrase-level completion. In the first case, although the autocompletion service provided for human translators is word-level, in the internal system we can generate a sequence of subwords It is also worth noting that we did not conduct human studies in this work. We think evidences in previous work can already prove the effectiveness of word-level autocompletion when assisting human translators. For example, TransType We propose a General Word-Level Autocomple-tioN (GWLAN) task for computer-aided translation (CAT). In our setting, we relax the strict constraints on the translation contexts in previous work, and abstract four most general translation contexts used in real-world CAT scenarios. 
We propose two approaches to address the variety of context types and the weak position information in GWLAN. To support automatic evaluation and to ensure a convenient and fair comparison among different methods, we construct a benchmark for the task. Experiments on this benchmark show that our method outperforms the baseline methods by a large margin on four datasets. We believe that the release of this benchmark will push forward future research in CAT.
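As a concrete illustration of the hard-constraint use of the typed characters s described in Section 4.2, the sketch below restricts the vocabulary to words starting with the typed prefix and returns the most probable one under the word prediction model. The probabilities are assumed to be the WPM distribution p(w | x, c) evaluated over the vocabulary; the fallback when no vocabulary word matches and the function name are our own choices.

```python
import numpy as np

def complete_word(typed_prefix: str, vocab: list, word_probs: np.ndarray) -> str:
    """Hard-constraint completion: keep only vocabulary words that start with
    the human-typed characters and return the most probable one under the
    word prediction model's distribution p(w | x, c)."""
    candidate_ids = [i for i, w in enumerate(vocab) if w.startswith(typed_prefix)]
    if not candidate_ids:                 # no vocabulary word matches the prefix
        return typed_prefix
    best = max(candidate_ids, key=lambda i: word_probs[i])
    return vocab[best]
```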
1,235
2,503
1,235
Speculative Contrastive Decoding
Large language models (LLMs) exhibit exceptional performance in language tasks, yet their auto-regressive inference is limited due to high computational requirements and is suboptimal due to the exposure bias. Inspired by speculative decoding and contrastive decoding, we introduce Speculative Contrastive Decoding (SCD), a straightforward yet powerful decoding approach that leverages predictions from smaller language models (LMs) to achieve both decoding acceleration and quality improvement. Extensive evaluations and analyses on four diverse language tasks demonstrate the effectiveness of SCD, showing that decoding efficiency and quality can compatibly benefit from one smaller LM.
Large language models (LLMs) have advanced the versatility and proficiency in approaching realworld natural language tasks such as general instruction following As for decoding acceleration, one prominent method named speculative decoding As for the generation quality, contrastive decoding has been recently proposed Inspired by both speculative and contrastive decoding, we propose Speculative Contrastive Decoding (SCD), which exploits a single smaller LM for decoding improvement in speed and quality en bloc. Comprehensive evaluations of four diverse tasks show that SCD can achieve similar acceleration factors of speculative decoding while maintaining the quality improvement from contrastive decoding. By further analyzing the token distributions of the smaller and larger LMs in SCD, we show the inherent compatibility of decoding acceleration and quality improvement. The contributions of this paper can be summarized as follows: • We propose Speculative Contrastive Decoding for efficacious LLM inference. • Comprehensive experiments and analysis illustrate the compatibility of speculative and contrastive decoding on 4 diverse tasks.
In terms of inference acceleration, recent research has been devoted to developing various efficient decoding methods In terms of inference quality, rich research has been suggested We follow the terminology in The intrinsic rationale of contrastive decoding (CD) is that amateur LMs have stronger systematic undesirable tendencies to produce undesirable patterns (e.g., hallucination) than expert LMs. By contrasting the token distributions between expert and amateur LMs, such tendencies can be alleviated. There have been successively proposed two versions of contrastive decoding by 3 PM e (x1), .., PM e (xγ+1) = Me(x1, .., xγ|xinp); 4 Calculate Pn(x1), .., Pn(xγ) following Section §3.1; 5 r1, .., rγ i.i.d sampled from Uniform(0, 1); where P • and Y • are respectively the token probability and logit generated from LMs. V α •,i denotes the adaptive plausibility constraint that dynamically restricts the logits from producing the erroneous modes. The adaptive plausibility constraints are calculated as A token is generated from the contrastive token distribution P τ n (x i ) = softmax τ (s n (x i |x <i )), n ∈ {ori, imp}, where τ represents the softmax temperature that determines the smoothness of the contrastive token distribution. Instead of requiring one forward computation of M e for each token in vanilla decoding, speculative decoding (SD) utilizes M a to primarily generate γ tokens at each iteration then M e makes one forward computation to check the validity of the γ tokens. If M e accepts all the γ tokens, it finishes the iteration with an additional generated token, resulting in γ + 1 tokens generated. Otherwise, if M e rejects a token at r, the token is re-sampled according to M e to substitute the rejected token; hence the iteration finishes with r tokens generated. With only one-time forward computation of M e , multiple tokens are generated at each iteration. When the ratio between the runtime required of M a and M e (the cost coefficient c, Concretely, at each iteration, γ tokens are generated from the amateur model M a . When checking the validity of the tokens, the target distribution becomes P τ n , n ∈ {ori, imp} from contrastive distribution instead of P Me in speculative decoding. For a token x in the M a -generated tokens, it is rejected with probability 1 -P τ n (x) and then a new token in place of x is re-sampled from norm(max(0, P τ n (x) -P Ma (x)), where If all the M a -generated tokens are accepted, then an additional token is sampled from P τ n . The sampling procedure of SCD is similar to the original speculative decoding in Experiment Setting. We evaluate SCD and other baselines on four benchmarks: WikiText Theorem 5.1. The expected acceleration factor in decoding runtime is (1-λ)(1+cγ+cλ γ ) . In Tab. 1, consistent acceleration is presented across different benchmarks. We further visualize the expected acceleration factor of SCD in Fig. Compatibility. Results presented in §5 show SCD can combine the benefits of CD and SD. We delve deep into the reasons for such compatibility. We calculate the average entropy of token probabilities from M a and M e regarding the accepted and rejected tokens in SCD. As shown in Fig. Sensitivity. Through Fig. In this paper, we propose speculative contrastive decoding, a decoding strategy that naturally integrates small amateur LMs for inference acceleration and quality improvement of LLMs. 
Extensive experiments show the effectiveness of SCD, and our in-depth analysis explains the compatibility of acceleration and quality improvement through the lens of token distribution entropy. Our method can be easily deployed to improve the real-world serving of LLMs. In our experiments, we report the expected acceleration factors of SCD on four benchmarks, calculated from the empirical token acceptance rates λ and the selected cost coefficients c. The empirical acceleration factor depends strongly on the actual infrastructure that serves both the larger and the smaller LMs. To compensate for this limitation of the demonstration and to better illustrate the acceleration behaviour, we visualize the expected acceleration factor across a range of c in Fig. Although LLMs have recently demonstrated exceptional performance and become helpful real-world assistants, their massive computational demands prevent most users, including potential researchers, from local deployment; such users generally turn to APIs offered by LLM services. Therefore, effective methods such as SCD that improve speed and quality at the decoding stage have much potential to advance LLM-based services. Following speculative decoding, the expected number of tokens generated per iteration is (1-λ^(γ+1))/(1-λ). Therefore, the expected runtime SCD needs per token is ((1-λ)/(1-λ^(γ+1))) · (T + cγT + cλ^γ T), with T the runtime of one forward computation of M_e; hence the expected acceleration factor is (1-λ^(γ+1)) / ((1-λ)(1 + cγ + cλ^γ)). Case Study on GSM8k In this case, we can see that the rejected and re-sampled tokens are usually the beginnings of sentences, numbers, operations, or named entities, which are generally informative tokens in the reasoning chain of thought. This also indicates that the quality improvement originates from re-sampling informative tokens according to the contrastive token distribution, while the acceleration comes from the speculative predictions of the amateur LM.
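To make the verification step of SCD described above concrete, here is a sketch of one decoding iteration. It assumes the standard speculative-sampling acceptance rule min(1, P_n(x)/P_Ma(x)) with the contrastive distribution P_n as the target, which is consistent with the re-sampling rule quoted earlier; computing P_n itself (the contrastive scores under the adaptive plausibility constraint) is omitted, and all array names and shapes are ours.

```python
import numpy as np

def scd_step(draft_tokens, p_amateur, p_contrast, rng=None):
    """One SCD iteration: verify the gamma amateur-drafted tokens against the
    contrastive target distribution P_n.

    draft_tokens: list of gamma token ids proposed by the amateur LM
    p_amateur:    array [gamma, V] of amateur probabilities used for drafting
    p_contrast:   array [gamma + 1, V] of contrastive target probabilities
                  (the extra row is used for the bonus token if all are accepted)
    Returns the list of token ids accepted in this iteration.
    """
    rng = rng or np.random.default_rng()
    accepted = []
    for i, x in enumerate(draft_tokens):
        # Accept x with probability min(1, P_n(x) / P_Ma(x)).
        if rng.random() < min(1.0, p_contrast[i, x] / max(p_amateur[i, x], 1e-12)):
            accepted.append(int(x))
        else:
            # Re-sample the rejected position from norm(max(0, P_n - P_Ma)).
            residual = np.maximum(p_contrast[i] - p_amateur[i], 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted
    # All gamma tokens accepted: sample one bonus token from P_n.
    accepted.append(int(rng.choice(p_contrast.shape[1], p=p_contrast[-1])))
    return accepted
```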
688
1,146
688
Classification and Clustering of Arguments with Contextualized Word Embeddings
We experiment with two recent contextualized word embedding methods (ELMo and BERT) in the context of open-domain argument search. For the first time, we show how to leverage the power of contextualized word embeddings to classify and cluster topic-dependent arguments, achieving impressive results on both tasks and across multiple datasets. For argument classification, we improve the state-of-the-art for the UKP Sentential Argument Mining Corpus by 20.8 percentage points and for the IBM Debater -Evidence Sentences dataset by 7.4 percentage points. For the understudied task of argument clustering, we propose a pre-training step which improves by 7.8 percentage points over strong baselines on a novel dataset, and by 12.3 percentage points for the Argument Facet Similarity (AFS) Corpus. 1
Argument mining methods have been applied to different tasks such as identifying reasoning structures Identifying arguments for unseen topics is a challenging task for machine learning systems. The lexical appearance for two topics, e.g. "net neutrality" and "school uniforms", is vastly different. Hence, in order to perform well, systems must develop a deep semantic understanding of both the topic as well as the sources to search for arguments. Even more so, clustering similar arguments is a demanding task, as fine-grained semantic nuances may determine whether two arguments (talking about the same topic) are similar. Figure A1 The ultimate goal is fast, affordable, open Internet access for everyone, everywhere. A2 If this does not happen, we will create an Internet where only users able to pay for privileged access enjoy the network's full capabilities. Contextualized word embeddings, especially ELMo The contributions in this publications are: (1) We frame the problem of open-domain argument search as a combination of topic-dependent argument classification and clustering and discuss how contextualized word embeddings can help to improve these tasks across four different datasets. (2) We show that our suggested methods improve the state-of-the-art for argument classification when fine-tuning the models, thus significantly reducing the gap to human performance. (3) We introduce a novel corpus on aspect-based argument similarity and demonstrate how contextualized word embeddings help to improve clustering similar arguments in a supervised fashion with little training data. We present the four different datasets used in this work in Section 3, before we discuss our experiments and results on argument classification and clustering in Sections 4 and 5. We conclude our findings for open-domain argument search in Section 6.
In the following, we concentrate on the fundamental tasks involved in open-domain argument search. First, we discuss work that experiments with sentence-level argument classification. Second, we review work that provides us with the necessary tools to cluster extracted arguments by their similarity. Third, we take a deeper look into contextualized word embeddings. Argument Classification, as viewed in this work, aims to identify topic-related, sentencelevel arguments from (heterogeneous) documents. Argument Clustering aims to identify similar arguments. Previous research in this area mainly used feature-based approaches in combination with traditional word embeddings like word2vec or GloVe. In contrast to previous work, we apply argument clustering on a dataset containing both relevant and non-relevant arguments for a large number of different topics which is closer to a more realistic setup. Contextualized word embeddings compute a representation for a target word based on the specific context the word is used within a sentence. In contrast, traditional word embedding methods, like word2vec or GloVe, words are always mapped to the same vector. Contextualized word embeddings tackle the issue that words can have different senses based on the context. Two approaches that became especially popular are ELMo ELMo (Embeddings from Language Models) representations are derived from a bidirectional language model, that is trained on a large corpus. Peters et al. combine a character-based CNN with two bidirectional LSTM layers. The ELMo representation is then derived from all three layers. BERT (Bidirectional Encoder Representations from Transformers) uses a deep transformer network ELMo and BERT were primarily evaluated on datasets where the test and training sets have comparable distributions. In cross-topic setups, however, the distributions for training and testing are vastly different. It is unclear, whether ELMo and BERT will be able to adapt to this additional challenge for cross-topic argument mining. No dataset is available that allows evaluating open-domain argument search end-to-end. Hence, we analyze and evaluate the involved steps (argument classification and clustering) independently. As a first task in our pipeline of open-domain argument search, we focus on topic-dependent, sentence-level argument classification. To prevent the propagation of errors to the subsequent task of argument clustering, it is paramount to reach a high performance in this step. Having identified a large amount of argumentative text for a topic, we next aim at grouping the arguments talking about the same aspects. For any clustering algorithm, a meaningful similarity between argument pairs is crucial and needs to account for the challenges regarding argument aspects, e.g., different aspect granularities, context-dependency or aspect multiplicity. Another requirement is the robustness for topicdependent differences. Therefore, in this section, we study how sentence-level argument similarity and clustering can be improved by using contextualized word embeddings. We evaluate our methods on the UKP ASPECT and the AFS corpus (see Section 3.2). We differentiate between unsupervised and supervised methods. Our unsupervised methods include no pre-training whereas the supervised methods use some data for fine-tuning the model. For the UKP ASPECT corpus, we binarize the four labels to only indicate similar and dissimilar argument pairs. 
Pairs labeled with some and high similarity were labeled as similar, pairs with no similarity and different topic as dissimilar. We evaluate methods in a 4-fold crossvalidation setup: seven topics are used for testing and 21 topics are used for fine-tuning. Final evaluation results are the average over the four folds. In case of supervised clustering methods, we use 17 topics for training and four topics for tuning. In their experiments on the AFS corpus, We experiment with a number of different models and distinguish between models which use topic information and ones that do not. bilstm. This model was presented as a baseline by biclstm. IBM. We experiment with these three models by replacing the word2vec / GloVe embeddings with ELMo and BERT embeddings. The ELMo embeddings are obtained by averaging the output of the three layers from the pre-trained 5.5B ELMo model. For each token in a sentence, we generate a BERT embedding with the pre-trained BERTlarge-uncased model. Further, we evaluate fine-tuning the transformer network from BERT for our datasets: BERT. We add a softmax layer to the output of the first token from BERT and fine-tune the net-work for three epochs with a batch size of 16 and a learning rate of 2e-5. We only present the sentence to the BERT model. BERT topic . We add topic information to the BERT network by changing the input to the network. We concatenate the topic and the sentence (separated by a special [SEP]-token) and finetune the network as mentioned before. Unsupervised Methods. Table Text embeddings. Tf-Idf shows the worst performance. In Table Supervised Methods. We fine-tune the BERT model for some of the topics and study the performance on unseen topics. For the ASPECT Corpus, we observe a performance increase of 7.8pp. Identifying dissimilar arguments (F dissim ) is on-par with the human performance, and identifying similar arguments achieves an F-score of .67, compared to .75 for human annotators. For the AFS dataset, we observe that fine-tuning the BERT model significantly improves the performance by 11pp compared to the previous state-ofthe-art from In a cross-topic evaluation setup on the AFS dataset, we observe that the performance drops to .57 Spearman correlation. This is still significantly larger than the best unsupervised method. We evaluated the effect of the training set size on the performance of the BERT model for the ASPECT Corpus. A certain number of topics were randomly sampled and the performance was evaluated on distinct topics. This process was repeated 10 times with different random seeds By allowing fine-tuning on five topics we are able to improve the F mean -score to .71 compared to .65 when using BERT without fine-tuning (without clustering setup). Adding more topics then slowly increases the performance. With Clustering. We studied how the performance changes on the ASPECT corpus if we combine the similarity metric with agglomerative clustering (Table We can estimate this source of error by evaluating the transitivity in our dataset. For a strict partitioning setup, if argument A ∼ B, and B ∼ C are similar, then A ∼ C are similar. This transitivity property is violated in 376 out of 1,714 (21.9%) cases, indicating that strict partitioning is a suboptimal setup for the ASPECT dataset. This also explains why the human performance in the with clustering setup is significantly lower than in the without clustering setup. 
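The BERT_topic fine-tuning recipe above (topic and sentence joined by a [SEP] token, a classification layer on the first-token output, three epochs, batch size 16, learning rate 2e-5) can be sketched with the HuggingFace transformers API as follows. The tooling is an assumption rather than the authors' implementation, the sequence-classification head stands in for the softmax layer they describe, and the three-way label count and helper names are placeholders.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=3)  # placeholder: size of the task's tagset

def encode(topics, sentences):
    # Passing the pair (topic, sentence) makes the tokenizer insert the [SEP]
    # token between them, mirroring the BERT_topic input format.
    return tokenizer(topics, sentences, padding=True, truncation=True,
                     max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_epoch(batches):
    """batches: iterable of (list of topics, list of sentences, list of int labels)."""
    model.train()
    for topics, sentences, labels in batches:
        inputs = encode(topics, sentences)
        out = model(**inputs, labels=torch.tensor(labels))
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```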
As Table We use agglomerative hierarchical clustering We use the average linkage criterion to compute the similarity between two cluster A and B: ), for a given similarity metric d. As it is a priori unknown how many dif-ferent aspects are discussed for a topic (number of clusters), we apply a stopping threshold which is determined on the train set. We also tested the k-means and the DBSCAN clustering algorithms, but we found that agglomerative clustering generally yielded better performances in preliminary experiments. Agglomerative clustering uses a pairwise similarity metric d between arguments. We propose and evaluate various similarity metrics in two setups: (1) Without performing a clustering, i.e. the quality of the metric is directly evaluated (without clustering setup), and (2) in combination with the described agglomerative clustering method (with clustering setup). For the UKP ASPECT dataset we compute the marco-average F mean for the F 1 -scores for the similar-label (F sim ) and for the dissimilarlabel (F dissim ). In the without clustering setup, we compute the similarity metric (d(a, b)) for an argument pair directly, and assign the label similar if it exceeds a threshold, otherwise dissimilar. The threshold is determined on the train set of a fold for unsupervised methods. For supervised methods, we use a held-out dev set. In the with clustering setup, we use the similarity metric to perform agglomerative clustering. This assigns each argument exactly one cluster ID. Arguments pairs in the same cluster are assigned the label similar, and argument pairs in different clusters are assigned the label dissimilar. We use these labels to compute F sim and F dissim given our gold label annotations. For the AFS dataset, We experiment with the following methods to compute the similarity between two arguments. Tf-Idf. We computed the most common words (without stop-words) in our training corpus and compute the cosine similarity between the Tf-Idf vectors of a sentence. InferSent. We compute the cosine-similarity between the sentence embeddings returned by In-ferSent Average Word Embeddings. We compute the cosine-similarity between the average word embeddings for GloVe, ELMo and BERT. BERT. We fine-tune the BERT-uncased model to predict the similarity between two given arguments. We add a sigmoid layer to the special [CLS] token and trained it on some of the topics. We fine-tuned for three epochs, with a learning rate of 2e-5 and a batch-size of 32. Human Performance. We approximated the human upper bound on the UKP ASPECT corpus in the following way: we randomly split the seven pair-wise annotations in two groups, computed their corresponding MACE Open-domain argument search, i.e. identifying and aggregating arguments for unseen topics, is a challenging research problem. The first challenge is to identify suitable arguments. Previous methods achieved low F 1 -scores in a crosstopic scenario, e.g., The second challenge we addressed is to decide whether two arguments on the same topic are similar. Previous datasets on argument similarity used curated lists of arguments, which eliminates noise from the argument classification step. In this publication, we annotated similar argument pairs that came from an argument search engine. As the annotation showed, about 16% of the pairs were noisy and did not address the target topic. 
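A sketch of the agglomerative clustering step described above, using cosine distance over sentence embeddings with average linkage, where the train-set-tuned stopping threshold controls the number of clusters. The scikit-learn API and the unsupervised cosine metric are our choices for illustration; a supervised BERT pair scorer could supply the distance matrix instead.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

def cluster_arguments(embeddings: np.ndarray, stop_threshold: float) -> np.ndarray:
    """Group arguments with agglomerative clustering and average linkage.
    `embeddings` is an [n_args, dim] matrix (e.g. averaged ELMo/BERT vectors);
    `stop_threshold` is the distance at which merging stops, tuned on the
    training topics."""
    distances = np.clip(1.0 - cosine_similarity(embeddings), 0.0, None)
    clusterer = AgglomerativeClustering(
        n_clusters=None,                  # the number of aspects is unknown a priori
        distance_threshold=stop_threshold,
        metric="precomputed",             # called `affinity` in older scikit-learn
        linkage="average",
    )
    return clusterer.fit_predict(distances)

def pair_labels(cluster_ids: np.ndarray) -> np.ndarray:
    """Argument pairs in the same cluster are labelled similar (1), else 0."""
    return (cluster_ids[:, None] == cluster_ids[None, :]).astype(int)
```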
Unsupervised methods on argument similarity showed rather low performance scores, confirming that fine-grained semantic nuances, and not lexical overlap, determine the similarity between arguments. We were able to train a supervised similarity function based on the BERT transformer network that, even with little training data, significantly improved over unsupervised methods. While these results are very encouraging and stress the feasibility of open-domain argument search, our work also points to some weaknesses of the current methods and datasets. A good argument similarity function is only the first step towards argument clustering. We evaluated the agglomerative clustering algorithm in combination with our similarity function and identified it as a new source of errors. Arguments can address multiple aspects and therefore belong to multiple clusters, something that cannot be modelled with partitional algorithms. Future work should thus study the overlapping nature of argument clustering. Furthermore, more realistic datasets that allow end-to-end evaluation are required.
796
1,849
796
Temporally-Informed Analysis of Named Entity Recognition
Natural language processing models often have to make predictions on text data that evolves over time as a result of changes in language use or the information described in the text. However, evaluation results on existing data sets are seldom reported by taking the timestamp of the document into account. We analyze and propose methods that make better use of temporally-diverse training data, with a focus on the task of named entity recognition. To support these experiments, we introduce a novel data set of English tweets annotated with named entities. 1 We empirically demonstrate the effect of temporal drift on performance, and how the temporal information of documents can be used to obtain better models compared to those that disregard temporal information. Our analysis gives insights into why this information is useful, in the hope of informing potential avenues of improvement for named entity recognition as well as other NLP tasks under similar experimental setups.
Natural language processing models are now deployed on a large scale in many applications and used to drive automatic analyses or for making predictions. The usual setup is that these models are trained and evaluated on the data available at model building time, but are used to make inferences on data coming in at a future time, making models susceptible to data drift. The data distribution of the test set used to measure the model's performance after training may be different from the distribution of data from future time periods Despite its intuitive value, there has been little research on using the temporal information contained in text documents to inform modeling of a task In this paper, we study the temporal aspects of text data, focusing on the information extraction task of named entity recognition in the Twitter domain. We make the following contributions: a) a new data set for Twitter Named Entity Recognition consisting of 12,000 English tweets evenly distributed across six years; b) experimental results that demonstrate the performance drift of models trained on data from different time periods and tested on data from a future interval; c) extensive analysis of the data that highlights temporal drift in the context of named entities and illustrates future modeling opportunities; d) simple extensions to state-of-the-art NER models that leverage temporal information associated with the training data, which results in an improvement in F1 score over standard pooling methods.
Language change is a popular topic of research in linguistics Temporal information has been used to create topic models of better quality, usually by adding smoothing properties Most similar to our experimental setup, A related, but distinct, task built on the assumption of language change with time is automatic prediction of the date on which a document is written Named entity recognition (NER) is the task of identifying entities such as organizations, persons, and locations in natural language text. NER is a well-studied NLP task over the past 20 years NER systems struggle to generalize over diverse genres with limited training data In this paper, we focus on the task of named entity recognition on English tweets as a case study for our hypotheses and analysis regarding model drift with time. Twitter data represents an ideal testbed for our analysis as it contains readily accessible timestamp information for each tweet. Further, users on social media post about current events, which are likely to include entities that change over time. Social media also reflects changes in language use more timely than other sources of data (e.g., newswire), resulting in the potentially rapid evolution of the contexts and ways in which named entities are discussed in natural language. This drift in Twitter data has previously been demonstrated qualitatively in the context of named entity recognition Therefore, we create a new collection of tweets annotated with named entities that attempts to alleviate the lack of temporal diversity in existing Twitter data sets as well as provide us with a suitable experimental setup to study our research questions about temporal entity drift and NER model performance. In this section, we present the details of our data set, including the collection and annotation methodology, as well as an analysis of the named entity mentions in the corpus. The data set can be downloaded at The primary goal of creating a new data set is ensuring wide-enough temporal diversity for our work as well as future directions that can leverage timestamp information. We use the public Twitter Search API In annotating our data with entities, we use a tagset consisting of three entity classes -Organizations (ORG), Persons (PER), and Locations (LOC). This scheme is consistent with some existing data sets for the task We use the annotation guidelines used in standard NER data sets Further, we observe in other data sets that usernames are some of the most frequent tokens classified as entities We preprocess the data set by normalizing URLs, usernames, and Twitter-specific tokens (e.g., RT). We leave hashtags intact as these are often used as words in the context of the tweet, and can be or contain named entities. We use Twokenizer The data was annotated by multiple annotators that have experience with named entity recognition annotation tasks. Specifically, we used 15 annotators in total, with two annotations per tweet. The inter-annotator agreement is 78.34% on full tweets (same entity types and spans). If the annotators disagree on a tweet in their tagging, we adjudicate in favor of the annotator that had the highest confidence on the task, as judged through measuring their agreement with our annotations on a set of test questions (10% of the total). In our experiments, we use temporal splits of the data from 2014-2018 for training, and the most recent data (i.e., the tweets from 2019) to evaluate our models, to simulate a "future time period" setup. 
Thus, we wanted to ensure that the model performance is evaluated on data that has as few annotation errors as possible. Hence, each tweet was checked by either of the authors of the paper, both with significant experience in linguistic annotations, and corrected if needed to ensure additional consistency. This process had the effect of reducing the measurement error of the model performance but ultimately did not affect the conclusions of the experimental results. The type-wise distribution of named entities in for each year in our data set, after annotator adjudication and correction, is shown in Table We use the neural architecture based on a stacked BiLSTM-CRF model introduced in Huang et al. A key component in the base architecture is how the tokens are represented as inputs. Initial research In addition to token embeddings, we use character embeddings to model subword information that may be indicative of named entities and better represent out-of-vocabulary tokens. We use a character-level BiLSTM with randomly initialized character embeddings to produce the characterbased word representations We split the data temporally for our experiments. We use the data authored in 2019 as the test data, as this is the most recent data available and best replicates the scenario of making predictions on text from future time periods. We use a random sample of 500 tweets (25%) from the 2019 data as the validation set. For training, we use data authored between 2014 to 2018 in various temporal splits, depending on the specific experimental setting. We use the PyTorch framework Following the recommendation from Reimers and Gurevych (2017), who study the variance of LSTM-CRF models with different random seeds, we report all experimental results as the mean of five runs. The main metric we use for evaluation is span-level named entity F1 score, reported using the official CoNLL evaluation script To determine the utility of temporal information, we first attempt to evaluate whether temporal drift in the data affects the performance of NER models. To this end, we conduct experiments to answer the following research questions: 1) What is the effect of the temporal distance between the training and target data sets on NER performance? 2) How do the size and temporal distribution of the training data affect NER performance? We training instances (2,000 tweets), to remove the impact of this factor in our results. The results are shown in Figure We now study how the number of instances in the training data and their temporal distribution impact the performance of the model. We first train models on cumulative random samples from the combined training data set (all tweets from 2014-2018), adding 2,000 tweets at each step. Then, we train models starting with the 2,000 tweets from 2014 and incrementally add tweets from subsequent years from 2015 up to 2018. The NER F1 scores are shown in Figures Looking at the "Random" sampling strategy, we see that the performance steadily increases as we add more tweets to the training set -as we would expect for most supervised machine learning models. We see that the "Temporal" model with only the 2014 data (2,000 tweets) has a lower performance than randomly selecting 2,000 tweets across all years. This is indicative of the data drift across time, as training on a random sample of tweets from all the years is more informative and leads to a better NER model than using just the 2014 data. 
Moreover, as we add tweets temporally closer to the target into the training data set, the "Temporal" strategy converges with the "Random" strategy. This observation strengthens the hypothesis that temporal information can potentially play an important role while selecting training data and designing model architectures. To understand why the temporal distribution of the training data impacts the performance of an NER model, we analyze the distribution of entity mentions in our data set to uncover the extent to which data drift occurs at the lexical level. Type Distribution Figure Type-wise Mention Overlap Figure Type-wise Model Performance Table Next, we study the temporal differences in performance by type. When using GloVe embeddings, the smallest gap between training on different data splits is for PER (4.95 F1), while ORG suffers from substantial drift in performance, resulting in a 14.66 F1 drop on ORG performance. When using Flair embeddings, the most notable difference in performance when training across different years is still for the ORG type (up to 8.05 F1). However, the gap has proportionally tightened the most as compared to when using GloVe embeddings. These observations correspond with the analysis from Figure Mentions Unseen in the Training Data In addition to the increase in surface-form overlap across years, we investigate whether mentions unseen in the training data are impacted by the temporal distance between the training and test data. Table Supported by the analysis that temporal drift in the training data can impact the performance of NER systems, in this section, we experiment with techniques to account for temporal information while training the NER model. We look at leveraging temporality in two broad ways: a) by altering the architecture of the base model; b) by modifying how the training data set is constructed. These methods are intended to be an initial exploration of using temporal information, with a focus on techniques that do not require significant modification to the base model. We present these in the hope that they will inspire future research on models robust to temporal drift. The specific methods are discussed below, followed by experimental results. Sequential Temporal Training Our analysis from Section 5 showed that using more data is beneficial, irrespective of temporal distance from the target, but individually, the closest data is most useful. Based on this analysis, we attempt to train our model by ordering our training data year-wise such that the model is trained on the temporally closest data last. Specifically, we start with training on the year temporally furthest away from the target data and repeatedly tune the model on the chronological sequence of years (i.e., first train on 2014 data, then 2015 data, and so on up to 2018). Temporal Fine-tuning The analysis showed that training on the model temporally closest to the target data set obtains the best overall performance. Based on this observation, we decide to train the base model on the entire data set of tweets from the years 2014-2018. Then, we fine-tune the trained model on the data from the year temporally closest to the target (2018). The fine-tuning process is simply retraining the model on the 2018 data with the same hyperparameter settings. 
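The two training-schedule strategies above can be sketched as simple loops over year-indexed data; `train_fn` is a hypothetical callback wrapping the BiLSTM-CRF training loop, and the year bookkeeping is ours.

```python
def sequential_temporal_training(model, data_by_year, train_fn):
    """Train on the chronological sequence of years so that the model sees the
    temporally closest data (e.g. 2018) last."""
    for year in sorted(data_by_year):          # 2014, 2015, ..., 2018
        train_fn(model, data_by_year[year])
    return model

def temporal_finetuning(model, data_by_year, train_fn, closest_year=2018):
    """Train on all pooled years first, then re-train on the year closest to
    the target period with unchanged hyperparameters."""
    pooled = [ex for year in data_by_year for ex in data_by_year[year]]
    train_fn(model, pooled)
    train_fn(model, data_by_year[closest_year])
    return model
```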
Instance Weighting Previous work in domain adaptation shows that giving higher weights to training instances similar to the target domain can improve performance Year Prediction as an Auxiliary Task Finally, we aim to guide the model to learn temporal features in training. Inspired by related work in domain adaptation Table The results show that we can overall obtain a better performance over the base model by using simple techniques to incorporate temporal information. The margin of improvement is overall lower when using Flair embeddings than with GloVe (+0.82 compared to +1.13). This potentially indicates that semantic drift can be captured partially through contextual embeddings. Fine-tuning the model on the temporally closest data (i.e., 2018) leads to the best F1 scores when using GloVe embeddings, reaching a 1.13 increase in F1. For the Flair embeddings, we observe that up-weighting the training instances from the year 2018 leads to the best result, a 0.82 improvement in F1 over the base model. We highlight that these straightforward methods that improve over the base model do not involve any architecture changes, other than a change in how the data is fed to the model. It thus has the potential to both be readily applicable to existing NER implementations as well as generalize to other NLP tasks. Finally, we find that using an auxiliary task for predicting the year improves the performance slightly when using GloVe embeddings, but has the oposite effect when using Flair embeddings. This is likely because the GloVe embeddings are finetuned during the model training and are therefore influenced by the auxiliary loss, while the contextual Flair embeddings are not. This paper studies and models text data drift in the information extraction task of named entity recognition. We introduce a new data set of 12,000 English tweets stratified by time, which allows us to study the effects of drift and evaluate named entity recognition models in a realistic scenario of performing inference on temporally unseen data. By analyzing the data, we quantify the temporal drift in named entity type and mention usage and identify that, as expected, the data distribution is more similar when drawn from closer time intervals. We then use current state-of-the-art approaches for named entity recognition and demonstrated that, through modeling of temporal information, performance can be improved when testing on future data. We expect our data, results, and error analysis to inform the design of similar experimental setups for other NLP tasks beyond NER, such as part-of-speech tagging or relation extraction.
983
1,508
983
Annotation and Automatic Classification of Aspectual Categories
We present the first annotated resource for the aspectual classification of German verb tokens in their clausal context. We use aspectual features compatible with the plurality of aspectual classifications in previous work and treat aspectual ambiguity systematically. We evaluate our corpus by using it to train supervised classifiers to automatically assign aspectual categories to verbs in context, permitting favourable comparisons to previous work.
The universal linguistic category of aspect describes how a verb or a verbal projection (including sentences, 'predicates' for short) characterises the temporal course of a state of affairs or 'eventuality'. Such information is relevant for tasks that extract temporal information from texts, such as information extraction, question answering, and document summarisation Aspect must also be considered in event annotation We created the first resource of German verbs annotated for aspectual class in context. We use aspectual features compatible with various different previously published aspectual classifications, and model the pervasive phenomenon of aspectual ambiguity. We evaluate the resource by using it in supervised aspectual classifiers for verbs in context.
Aspectual classes are established by feature dichotomies Dynamic predicates can be either unbounded (introduce eventualities without inherent boundaries, e.g., move or play the piano), or bounded (e.g., run a mile or build a house). Bounded predicates (also called 'telic') have four subgroups that are crossclassified by the features changeno change and punctualextended: The first pair distinguishes predicates that express an explicit change of state (e.g., leave as change from being present to being away) from predicates that do not (e.g., play a sonata). The punctualextended distinction is gradual (while the others are binary). This will tend to aggravate both the annotation and the automatic classification of aspect. These features define six aspectual classes: Only dynamic predicates can be bounded or not, and only bounded predicates can be extended or punctual, and introduce an explicit change of state or none. Such aspectual properties are sometimes called 'lexical aspect' or 'aktionsart' to distinguish them from 'morphological aspect', e.g., the progressive or perfective/imperfective markers in Slavic languages. Also, the aspectual class of a verb may be influenced obligatorily by an argument, in particular, by an 'incremental theme' Operators like the progressive and specific kinds of adverbials may exert an aspectual influence on the predicates which they take as arguments. For instance, durative adverbials map unbounded predicates onto extended no-change predicates, and the progressive maps dynamic predicates of all kinds onto stative ones. Consequently, the aspectual class of a full clause or sentence may differ from the one of its main verb (plus its arguments); thus, annotating aspect at the clause or sentence level differs from our annotation task. The aspectual value of a predicate can also be modified in order to fit aspectual selection restrictions of an operator, which is known as aspectual coercion Classifying verbs aspectually must be able to handle the (often systematic) aspectual ambiguity on the token level (5% of the tokens in our corpus), including (1) and (2). (1) wenn der Kunde die Karte abtrennt 'when the client detaches the card' Other cases have two distinct readings, e.g., many verbs in the semantic field of communication have a stative and a change-of-state reading. E.g., in (2), zeigen 'show' can indicate a stative property ('be more successful') or a change of state ('obtain better results'): (2) diese Firmen zeigen bessere Ergebnisse 'these companies show better results' Systematic ambiguity furthermore emerges for so-called 'degree achievements' like den Weg kehren 'sweep the path' The great level of detail of our classification is novel and addresses the problem that-beyond distinguishing stative predicates-previous work on aspectual classification disagrees widely. Our classification is related to previous ones in Table This flexibility also means that our classification lends itself to tasks of different granularity. As we will show in Section 5, it can be used for coarse two-way distinctions, e.g., between stative and nonstative predicates, as well as for very fine-grained classification tasks. 3 Related work They trained supervised classifiers, using 'linguistic indicators' for aspectual classes as features, e.g., the perfect, the progressive or durative adverbials like for two hours. Co-occurrences of these indicators with the verbs were counted in large parsed corpora (supersets of the annotated corpora). 
For the first corpus, they distinguished stative vs. dynamic verbs with 93.9% accuracy. The second corpus was used for distinguishing 'culmination' and 'non-culmination' They trained classifiers on these data, using the type-based indicators of Experiment 1 tested performance on verbs during training. The classifier was trained on the first data set, using 10-fold cross validation. Accuracy reached 84.1%, but no feature set statistically outperformed the naïve strategy of memorising each verb's most likely aspect class. Experiment 2 tested the classifier on unseen verbs, by stratifying the cross validation folds by verb lemma. Falk and Martin (2016) annotated 1,200 French verb tokens, modelling aspectual ambiguity directly in their aspectual classification; this is based on Vendler classes but adds four ambiguity classes, e.g., for verbs ambiguous between 'state' and 'activity' like penser 'think'. Also, there is a class of change-of-state verbs unspecified for punctuality, and two classes of degree achievements (with and without preference for the change reading). We see two problems for their approach. First, aspectual ambiguity is a property of individual verbs, hence, no additional classes are needed. Second, their classification is not general enough, e.g., for zeigen 'show', which can be stative or change of state. Since we can handle aspectual ambiguity of verbs, we can replicate their classification (up to the two classes of degree achievements). Falk and Martin train a classifier on their annotation, which reaches 67% accuracy on a three-way split between unbounded and change-of-state verbs, and those that fall in between the two groups. Other resources target aspectual classification at the sentence or clause level. We compiled a corpus of German verb tokens in their clausal contexts from the SdeWaC corpus The corpus has three parts. Part A (3000 clauses) is based on a verb sample balanced for verb frequency. We took 60 verbs drawn at random for annotation, 20 each from the classes with high (65 verbs with counts of > 10 Our annotation tool allowed only feasible combinations of the aspectual features. Annotation guidelines explained the aspectual features and provided tests for assigning values to them. E.g., stative predicates like glücklich sein 'be happy' do not combine with adverbials expressing intentionality: (3) *Max ist freiwillig glücklich. 'Max is voluntarily happy.' Similar tests guide the annotation of the other three feature pairs, e.g., only unbounded predicates combine with durative adverbials. The guidelines also explain the phenomenon of obligatory aspectual influence by verbal arguments. The annotation paid consideration to metaphorical usages; however, our anecdotal experience suggests that verbal metaphor tends to preserve aspectual class. Disagreements between annotators were subsequently adjudicated. We annotate aspectual ambiguity on the token level; categories are tagged 'unknown' when a verb has no value for a specific feature like in (1). Cases like (2) get two separate full annotations. We evaluated inter-annotator agreement after training the annotators and having them annotate ca. 2,200 clauses. Both annotators annotated 248 unseen clauses; nine of these were excluded as invalid. Table Agreement on the stative/dynamic features is like in To test the validity and utility of our annotated corpus, we trained supervised classifiers on the dataset. The fine granularity of our classification allows us to define several tasks. 
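The seen-versus-unseen-verb evaluation mentioned above (ordinary cross-validation versus folds stratified by verb lemma) can be reproduced with standard scikit-learn utilities. This is an illustrative sketch, not code from any of the cited systems; feature extraction is left abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

def evaluate_seen_and_unseen(X, y, lemmas):
    """X: feature matrix, y: aspectual class labels, lemmas: verb lemma per instance."""
    clf = LogisticRegression(max_iter=1000)

    # Experiment-1 style: the same verbs may occur in training and test folds.
    seen = cross_val_score(clf, X, y,
                           cv=KFold(n_splits=10, shuffle=True, random_state=0))

    # Experiment-2 style: folds are grouped by lemma, so test verbs are unseen.
    unseen = cross_val_score(clf, X, y, groups=lemmas, cv=GroupKFold(n_splits=10))

    return float(np.mean(seen)), float(np.mean(unseen))
```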
We use a logistic regression classifier with L2 regularisation (λ -1 = 2.78) and employ sentence-level features derived from the automatic parse of the clause: the verb lemma; POS; tense; use of the passive; a word embedding for the verb 5 ; a bag of words to represent the sentence context; the lemmas of the verb's grammatical dependants; the GermaNet Training and testing use 10-fold cross validation. Table The second and the third classifier test our expectation that our resource is useful for less finegrained aspectual classifications, too. The second classifier disregards the punctual-extended feature (collapsing the two change and the two nonchange classes), i.e., follows These three models achieve similar error rate reductions over the baseline of about 60%. The 4way classifier, which ignores the extended-punctual distinction, outperforms the Vendlerian classifier, which includes it; this suggests that the extendedpunctual distinction is more difficult to identify and to model. The following three classifiers are motivated by classifications in prior work. The fourth one ('Stativity') predicts whether a token is stative (1077), dynamic (2915), or ambiguous in context (60). This corresponds to Experiment 1 of The fifth classifier approximates the classification task of The sixth classifier predicts whether a verb token is 'culminated' or 'non-culminated', corresponding to the task of Experiment 2 of These experiments support several conclusions. First, we have shown our resource can be used to build machine learning classifiers of high quality, speaking to the validity of our corpus. While we can only draw indirect comparisons to previous work in English and French, the accuracies achieved by our classifiers suggest that we go beyond the state of the art in our work. Second, our resource has proven to be very flexible in that it can be broken down in different ways to capture different aspectual distinctions, which is very welcome considering the wide range of aspectual classifications. Finally, the better performance of the 4-way classifier compared to the Venderian classifier, combined with the κ value for the extended-punctual distinction (Table We present the first aspectually annotated resource for German verb tokens. We report substantial interannotator agreement, and validate our resource by training automatic aspectual classifiers, permitting favourable comparisons to prior work. The annotated corpus, the source code for the annotation tool, and the annotation guidelines are available at Future work will offer a more principled account of aspectual classification for specific verb classes, among them speech act and communication verbs (e.g., promise or call) that occur frequently in corpora but have hitherto been neglected in aspectual analyses. On a more general scale, we envisage examining the interplay of verb class (e.g., the classes of Levin 1993), verb sense, and aspectual class, with the purpose of estimating the influence of the sentential context on the aspectual value of the predicate. We also intend to develop a more principled treatment for the aspectual classification of metaphors, which are frequent in other corpora.
453
772
453
Lazy-k: Decoding for Constrained Information Extraction
We explore the possibility of improving probabilistic models in structured prediction. Specifically, we combine the models with constrained decoding approaches in the context of token classification for information extraction. The decoding methods search for constraint-satisfying label-assignments while maximizing the total probability. To do this, we evaluate several existing approaches, as well as propose a novel decoding method called Lazy-k. Our findings demonstrate that constrained decoding approaches can significantly improve the models' performances, especially when using smaller models. The Lazy-k approach allows for more flexibility in trading off decoding time and accuracy. The code for using Lazy-k decoding can be found here
Much of today's Information Extraction (IE) is done using probability-based token-classification models such as BERT Ideally, alternative, high-likelihood predictions are explored to improve predictions from existing models. This is especially interesting in structured-prediction tasks, where the model's predictions are parsed into predefined structures. These structures allow for defining constraints that evaluate whether a produced prediction adheres to the expected structure, which can then be used to iterate over multiple high-probability predictions until a satisfying solution is found. A concrete example of such a structure is in the case of invoice information extraction. In this task, the model is given the outputs of an Optical Character Recognition (OCR) system and needs to predict which parts of the text correspond to the various elements in an invoice. For example, in Fig. However, the occlusion of the "CASH" text introduces noise into the model's predictions, causing it to incorrectly label the cash amount as another total amount. Using the arithmetic semantics of invoices, we know that the total amount should equal the cash amount paid minus the change amount. As such, we know that the model's best prediction is probably incorrect. Alternative, high-probability label-assignments can be explored to find a constraint-satisfying solution instead. Industrial document processing systems usually have programmatic post-processing logic that detects and sometimes corrects violations of the aforementioned semantic constraints. These systems, however, rarely exploit the remaining information "hidden" in the produced probability distributions, and the custom correction code is often complex and hard to maintain. Furthermore, there is the possibility of OCR-induced errors, which go beyond the scope of the present work but remain an important source of errors in document IE In short, we exploit task-specific structures to explore alternative high-likelihood predictions. Specifically, we • propose an efficient algorithm for iterating over high-likelihood predictions, • provide a proof of the correctness of the algorithm, and • perform several experiments to evaluate the relevance of exploring alternative high-likelihood predictions in structured-prediction tasks.
To search over high-probability predictions, we require a probabilistic model that outputs independent probabilities for a given sequence of tokens. Given an input sequence x = {x_1, x_2, ..., x_n}, x_i ∈ X, where X is the token vocabulary, the goal is to estimate the probability of the output sequence y = {y_1, y_2, ..., y_n}, y_i ∈ Y, where Y is the label vocabulary. As this probability quickly becomes intractable, it is usually estimated by factoring it as: P(y|x) ≈ Π_{i=1}^{n} p(y_i | x). (1) The decoding process refers to the way we obtain an estimate ŷ for y from such a model. The simplest approach consists of taking the arg max, ŷ_i = arg max_{y_i ∈ Y} p(y_i | x), (2) which is done for each y_i separately. In addition, we introduce a global, binary constraint C : x × y → {0, 1} and formalize our problem of interest as ŷ = arg max_{y ∈ Y^n} Π_{i=1}^{n} p(y_i | x) subject to C(x, y) = 1. (3) Note that for the method proposed in Sec. 4, we make no further assumption about the constraint. This is important because many existing constrained decoding approaches require the constraints to be expressed in linear form This labeling constraint can be expressed using linear constraints. However, the solution to the linear constraints is not guaranteed to also be a solution to the non-linear constraints. The semantic constraint cash = total + change cannot be expressed linearly because in order to compute it, the text corresponding to the labelization must be parsed from text to a float, which is a non-linear operation. An example where the optimal solution satisfying the linear (BIO) constraints does not satisfy the non-linear (semantic) constraints is shown in Tab. 1(b) on line 4. Several decoding methods for the setting from Eq. (3) have been proposed. An excellent benchmark for learning and decoding under constraints is provided in ILP problems can be solved using the branch and bound algorithm Informed search methods have previously been applied to the task of information extraction. Decoding under constraints using ILP was inspired by the work from Roth and Yih, who explored the application to entity and relation extraction In these programs, the decision variables are indicator variables 1_i^j, indicating the assignment of label j to token i. Using this, one can express the constraint "1 label per token" as ∀i: Σ_{j=1}^{l} 1_i^j = 1, where l is the number of possible labels. Using this linear formulation for several other constraints, they observe 2-5% improvements in F1-score on entity and relation classification tasks. Similarly, Viterbi has also been used for correcting structured predictions For more complex constraints we can use uninformed search methods, as they do not make any assumptions about the implementation of the constraints and simply iterate over the search space in a greedy manner. The most widely known method for this is Beam Search (BS) To illustrate, BS takes as input a parameter k and outputs the top-k sequences by computing the top-k beams at every token, based on the previous top-k beams. In order to evaluate global constraints, beam search first needs to compute all top-k sequences, after which the constraint can be evaluated. Unfortunately, this means that if the constraint-validating prediction ends up being the most likely (arg max) sequence, beam search will have computed k − 1 too many sequences. In addition, if the constraint-validating prediction is not in the top-k beams, a new search with an unknown, higher k′ needs to be run, which also includes recomputing the previous k predictions.
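As a concrete illustration of the setting in Eqs. (1)-(3), the sketch below takes the per-position argmax of independent label probabilities and checks a global, programmatic constraint only on the complete label sequence. The simple BIO check and the toy label set are illustrative assumptions, not the constraint set used in the paper.

```python
import numpy as np

def argmax_decode(probs):
    """probs: (n_tokens, n_labels) array of independent label probabilities."""
    return probs.argmax(axis=1)

def bio_constraint(labels, id2label):
    """Example global constraint: an I-X tag must follow B-X or I-X."""
    prev = "O"
    for i in labels:
        tag = id2label[int(i)]
        if tag.startswith("I-") and prev not in (f"B-{tag[2:]}", f"I-{tag[2:]}"):
            return False
        prev = tag
    return True

# Usage: the constraint is only evaluated on the full sequence.
probs = np.array([[0.1, 0.8, 0.1], [0.2, 0.1, 0.7], [0.6, 0.3, 0.1]])
id2label = {0: "O", 1: "B-TOTAL", 2: "I-TOTAL"}
y_hat = argmax_decode(probs)
print(y_hat, bio_constraint(y_hat, id2label))
```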
Several adaptations have been suggested in the context of natural language generation Our method follows a similar approach to A* with Partial Expansion To our knowledge, we are the first to apply A* with partial expansion that allows for more general constraints than ILP for the global constraint decoding setting. As the name suggests, the Lazy-k decoder allows for decoding the k most probable sequences in a lazy manner. This means that it only iterates over the necessary number of sequences and stops once a satisfying solution is found. The hypothesis that this decoder explores is that the constraint-satisfying sequence is somewhere among the other high-probability sequences. To do this efficiently, we exploit the fact that the k-th most probable sequence is always within "edit-distance" 1 from one of the k − 1 more probable sequences. This follows from the independence of each label as shown in Eq. 1. We put "edit-distance" in quotes here because we use a slightly more strict definition of edit-distance that also takes into account the order between the various label probabilities. More details about this can be found in App. A. At its core, it is a variant of best-first search over label assignments scored by a cost function g (4) We use y^k to denote the k-th lowest-cost label assignment, and define the starting point y^1 as y^1 := arg min_{y ∈ Y^n} g(y). (5) The algorithm for Lazy-k decoding is given in Alg. 1 (its first steps: y^1 ← arg min_{y ∈ Y^n} g(y); if C(x, y^1) = 1, return y^1; frontier ← {y^1 : 1}). It works by maintaining a heap of the k best states, prioritized by the score of the next best unexplored state within 1 edit distance. The heap is initialized with the starting state y^1. Upon exploring a state, it is tested against the constraint and the algorithm returns directly if the constraint is satisfied. If the constraint is not satisfied, the heap is extended with the newly explored state and the priority score of the originating state y^i is updated to reflect the score of the next best unexplored state. Different from best-first search, upon exploring a state, we do not add all the children to the heap. Instead, we only add the next best state y^k and update the priority key for y^i to be the score of the next best state within edit distance 1. This significantly reduces the size of the heap, as a classical search implementation adds n possible children at every iteration, whereas in this case, the number of states in the heap is at most equal to the number of iterations. This heap-size reduction in turn translates into better run-time complexity, as all following heap operations become cheaper. The NextBest function takes as input a state y and the frontier. The frontier is a dictionary that holds the explored states and the next best states for all explored states. The values are integers that keep track of the i-th best change for a given state. If i == n (the number of tokens), then the function returns null, as there is no next best change within 1 edit-distance for this state. As the next best state may already exist in the frontier, the NextBest function is wrapped in AddNextBest to make sure the same state is not added to the heap twice. Given that the algorithm iterates over the possible sequences in decreasing order of probability, it is trivial to prove that it will always find the optimal solution should it exist. In practice however, the combinatorial growth in the number of states quickly renders exhaustive search infeasible.
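The following is a compact, runnable sketch of the Lazy-k idea: label sequences are visited in decreasing order of probability and the search stops at the first constraint-satisfying one. For brevity it expands all single-label changes of a popped state (closer to the vanilla best-first baseline mentioned above) rather than using the paper's partial-expansion bookkeeping with NextBest/AddNextBest; the enumeration order is the same.

```python
import heapq
import numpy as np

def lazy_k_decode(probs, constraint, max_iters=1 << 14):
    """Search label sequences in decreasing probability order until one
    satisfies `constraint`, or the iteration budget is exhausted."""
    n, L = probs.shape
    costs = -np.log(probs)                      # lower cost = higher probability
    order = np.argsort(costs, axis=1)           # per-token labels, best first
    sorted_costs = np.take_along_axis(costs, order, axis=1)

    def labels_of(idx):                          # index vector -> label ids
        return [int(order[i, idx[i]]) for i in range(n)]

    def cost_of(idx):
        return float(sum(sorted_costs[i, idx[i]] for i in range(n)))

    start = (0,) * n                             # the argmax sequence
    heap = [(cost_of(start), start)]
    seen = {start}
    for _ in range(max_iters):
        if not heap:
            break
        _, idx = heapq.heappop(heap)
        y = labels_of(idx)
        if constraint(y):
            return y
        for i in range(n):                       # neighbours within "edit distance" 1
            if idx[i] + 1 < L:
                child = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
                if child not in seen:
                    seen.add(child)
                    heapq.heappush(heap, (cost_of(child), child))
    return labels_of((0,) * n)                   # fall back to the argmax sequence
```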
To prevent this, an additional stopping condition is used where the iteration stops if no satisfying solution has been found after a fixed number of k iterations. One could also set the stopping condition according to a cumulative probability mass p or some other measure; we leave this exploration for future work. Assuming n tokens, l labels and the requested topk sequences. The space complexity of Lazy-k is O(kn), since for every k-best state we add at most 1 new state of size n to the heap. The time complexity is slightly less obvious. For the top-k states, the outer while loop will run for k iterations. Inside this loop, there are two sources of complexity: 1. H.Add() which occurs at most twice in AddNextBest(), 2. NextBest() which occurs once in the outer loop and twice in AddNextBest in another while loop. The H.Add() operation adds an element to the heap which is of logarithmic complexity with respect to the size of the heap. Since the heap holds exactly our top-k states at each iteration, the complexity of this operation is equal to The NextBest(y, frontier) function returns the frontier[y]-th next best state within 1 edit distance of y. When expanding a state for the first time, we compute a sorted list of next-best edits in O(n log n) time. Using this, every NextBest call for this state can be computed in constant time. For every state, NextBest() is called at most n times. As the sorting takes n log n time, the total time complexity of the algorithm becomes: O(k(log k + n log n)). To evaluate the relevance of the Lazy-k decoder, we perform invoice information extraction on several datasets. The aim of the task is to extract various amounts from invoices such that they satisfy their expected arithmetic structure. For each dataset, we train a token-classification model and generate predictions for the test set. The predictions are then fed into the different decoding algorithms along with the constraints, and return the highest-probability sequence satisfying the constraints. Data We evaluate the decoders on a total of three datasets shown in Tab. 2: CORD The other constraints depend on the specific labels available in each dataset. However, it is possible for some labels not to be present in every sample. As such, we distinguish between mandatory fields (ie total amount) and optional fields (ie service fee, discount) which are considered 0 if not found. A mandatory fields will be considered empty if is not present in the predictions. As such, any constraint involving this field is not evaluated (or automatically considered as satisfied). For each dataset, we apply the constraints to all samples and filter out any that do not satisfy the constraints (show counts in table). From these samples, we use 60% for training and validation (split 80-20), and 40% for testing. The samples not satisfying the constraints are added to the training set. We purposefully choose a large percentage for the test as the small train set provided sufficient performance and we mostly wish to evaluate the decoding. Having the larger test set allows us to reduce the variance in our measurements and make stronger conclusions. Evaluation Metric Our primary evaluation metric F s 1 is the product of the micro-F 1 score and the percentage of samples satisfying all the constraints. We chose this metric as it allows us to measure the balance between the extraction performance and constraint satisfaction. Our filtering procedure ensures that all the test samples can completely satisfy the constraints. 
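The F_s^1 metric just described, micro-F1 multiplied by the fraction of samples whose predictions satisfy all constraints, can be computed roughly as follows. Using scikit-learn's micro-averaged F1 over flattened token labels is an assumption for illustration, not necessarily the exact scorer used in the experiments.

```python
from sklearn.metrics import f1_score

def constrained_f1(gold, pred, satisfies):
    """gold, pred: lists of per-document label lists; satisfies: list of bools
    indicating whether each document's prediction meets all constraints."""
    flat_gold = [label for doc in gold for label in doc]
    flat_pred = [label for doc in pred for label in doc]
    micro_f1 = f1_score(flat_gold, flat_pred, average="micro")
    satisfaction_rate = sum(satisfies) / len(satisfies)
    return micro_f1 * satisfaction_rate
```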
Models For each dataset, we fine-tune a LayoutLM Methods The Lazy-k performance is compared to both BS and ILP, as well as an Argmax baseline and a vanilla Best-First implementation. Since ILP does not support non-linear constraints, we propose a Lazy-ILP variant that works similarly to Lazy-k. This method iteratively looks for the highest-probability solution satisfying the linear constraints and checks if it also satisfies the non-linear constraints. If it does not, the previous optimal solution is explicitly excluded by adding a new constraint and a new solve is started. In our implementation, we use the Python PuLP package for solving the ILP problems. We chose various values of k for each method, such that they would give approximately the same total running time on the different datasets. BS and Best-First are evaluated on fewer values of k because they behave like Lazy-k (exhaustive search) but are already slower. The results are shown in Fig. As the models get smaller, ILP gains an advantage over Lazy-k when keeping the number of iterations constant. This means that in many cases the top-8 linear (BIO) constraint-satisfying solutions are outside of the 2^14 highest-probability label-assignments. We wonder whether training the network to better predict correct BIO sequences would improve the overall performance, but we leave this for future work to explore. A stated advantage is the possibility of using smaller models in combination with the constrained decoding methods to improve their performance. We devised a second experiment similar to the first one, but where we train several smaller models to evaluate the additional benefit of using constrained decoding approaches. The smaller pretrained BERT models were provided as part of a paper on the importance of pre-training compact models In the context of information extraction from invoices, Lazy-k can be a viable approach for constrained decoding. While ILP has the advantage of exactly computing optimal solutions to the linear constraints, it also comes at a substantial minimum run-time cost. Depending on the "spacing" between the solutions to the linear and non-linear constraints, Lazy-k might be more suited to the problem. (Results by method and k on the three datasets:)
              81.2  0.000 ± 0.000    44.5  0.000 ± 0.000    48.2  0.000 ± 0.000
BS    2^1     83.9  0.002 ± 0.000    50.4  0.005 ± 0.000    51.6  0.010 ± 0.000
      2^2     86.2  0.006 ± 0.000    53.8  0.015 ± 0.001    54.7  0.030 ± 0.000
      2^3     89.0  0.017 ± 0.001    58.9  0.047 ± 0.002    56.9  0.110 ± 0.000
              92.2  0.003 ± 0.000    70.4  0.018 ± 0.000    60.9  0.012 ± 0.000
      2^9     92.5  0.006 ± 0.000    73.7  0.056 ± 0.001    61.8  0.037 ± 0.000
      2^11    93.9  0.016 ± 0.003    77.1  0.184 ± 0.004    62.4  0.127 ± 0.003
      2^13    93.9  0.046 ± 0.003    79.5  0.620 ± 0.014    63.3  0.439 ± 0.005
      2^15    93.9  0.168 ± 0.004    81.2  2.212 ± 0.014    63.5  1.580 ± 0.005
      2^16    93.9  0.333 ± 0.006    82.1  4.155 ± 0.013    63.8  3.013 ± 0.009
Although not measured in our experiments, Lazy-k also has a more significant memory usage than ILP because it needs to keep all previous solutions in the heap. On the smaller models we observe a larger impact from constrained decoding approaches. We find these results promising for resource-constrained applications and from an ecological point of view. We are able to achieve similar performance with significantly lighter models and less computational resources. Besides performance, one should also take into account the ease of implementation of the different methods. The Lazy-k decoder is "plug-and-play" as it does not need any conversion of the constraints.
While the linear constraints used in this paper were fairly trivial to implement, more complex problems will require more complex linear formulations, which can be costly to implement correctly. For the setting discussed in this paper, beam search is not recommended because of the limitation discussed in Sec. 3. However, it remains valid in the autoregressive decoding setting, which is not supported by the other methods. In summary, we have introduced a novel and efficient decoding method called Lazy-k that allows for decoding under global, hard constraints. When applied in the context of invoice information extraction, Lazy-k is faster than existing greedy search methods and allows for more flexibility in trading off computing time and extraction performance compared to ILP. In addition, the possibility of using programmatic constraints directly makes Lazy-k an easy-to-use, off-the-shelf solution for applying corrections to probabilistic models in the context of structured predictions. Future work could explore the application to other structured-prediction problems with non-linear constraints besides information extraction. (Figure: F_s^1 scores for constrained decoding on smaller models; Lazy-ILP is limited to 8 iterations and Lazy-k to 2^14.) Additionally, the improvements in extraction performance using the decoding methods are promising, which could also be explored in semi-supervised learning settings. Another interesting direction to explore would be the combination of Lazy-k decoding with confidence calibration methods such as temperature scaling. Most methods presented in this paper only apply to the independent label-probability setting, whereas much of today's work in NLP uses the autoregressive, generative setting. Furthermore, the methods only apply to tasks that can be formulated as structured-prediction tasks. It may not be possible to specify concrete constraints for some tasks. We did not explore the integration of soft constraints, which are constraints that can have a degree of satisfaction instead of the binary values considered in this paper. We denote a label sequence using (…). When obtaining the predicted probabilities from the model, we order the probabilities for each label y_i in strictly decreasing order from j = 1 to j = |Y|, such that p(y_i^j | x) > p(y_i^{j+1} | x). While, in theory, it is possible for two labels to have the same probability, in practice any exact degeneracy is lifted by the numerical noise. Such edge cases could be included by fixing an order arbitrarily without significant impact on the outcome. For the sake of simplicity, however, we will keep the strict inequality in Eq. (…). We can now define the distance between two sequences as (…). Note that, by the definitions above, each iteration of our Lazy-k method corresponds to increasing by 1 only one of the indices j_i of the sequence considered in the previous iteration. Following the independence assumption between the labels, the probability of a sequence y is given by P(y|x) = Π_{i=1}^{n} p(y_i^{j_i} | x), where we neglect degenerate probabilities for the same argument raised above. The ordering assumptions are given in Eqs. (…). If we assume that condition (12) is not satisfied, it would mean that starting from y^k and decreasing by 1 any of its indices j_i, the sequence probability would increase. But this can only happen if condition (…) is violated. Algorithm 2 (NextBest implementation, excerpt): Require: label assignment y; frontier. function NextBest(y, frontier): if frontier[y] == y.Length then return null; …; return y; end function. Below are the constraints used for each dataset.
All models are trained using the BIO labeling scheme and, as such, the correct-BIO constraint is used for all datasets. In addition, each numerical field has the constraint that it needs to be parseable to a float. A * next to a field indicates that the field is optional and thus considered false if no value is predicted for a given document.
737
2,284
737
AMesure: a web platform to assist the clear writing of administrative texts
This article presents the AMesure platform, which aims to assist writers of French administrative texts in simplifying their writing. This platform includes a readability formula specialized for administrative texts and it also uses various natural language processing (NLP) tools to analyze texts and highlight a number of linguistic phenomena considered difficult to read. Finally, based on the difficulties identified, it offers pieces of advice coming from official plain language guides to users. This paper describes the different components of the system and reports an evaluation of these components.
In our current society, written documents play a central role as an information channel, especially in the context of communication between institutions and their target audiences Administrations have been aware of this issue for decades and have launched various initiatives to address it, the most prominent of which is the Plain Language movement. Plain language aims to increase the accessibility of legal documents for a general audience and has been shown to both reduce costs and please readers Recent research by In the following sections, we first refer to some related work (Section 2), before describing the NLP analyses carried out to operate the system (Section 3.1). Then, we introduce the system and the way suggestions are provided (Section 3.2). The paper concludes with a report about the system performance (Section 4).
This work stands at the intersection between two very different fields: writing studies -"the interdisciplinary science that studies all the processes and knowledge involved in the production of professional writings and their appropriateness for the addressees" Relevant facts from writing studies have already been covered in the introduction. As regards ATS, the last few years have witnessed the publication of numerous interesting studies, reviewed by Some work has specifically focused on the issue of lexical simplification, which involves different techniques. Lexical simplification is generally operated in four steps, the first one being the identification of complex words. Some systems choose to consider all words as candidates for substitution Although numerous ATS systems are described in publications, we have found only four of them that made their way through a web platform tailored to writers' needs. AMesure could also be related to the family of writing assistants, such as Word or LibreOffice. However, only a few of them provides writing advice based on specific criteria or plain language guides. There are some examples of these tools available for the general public in French: (1) Plainly AMesure aims to help writers to produce clear and simple administrative texts for a general audience For this purpose, it offers various diagnoses about the reading difficulty of a text as well as advice on simpler ways of writing. Before moving to the description of the platform in Section 3.2, we first introduce the various NLP processes used to analyse the text and annotate difficulties in Section 3.1. As soon as a text is uploaded on the platform, it is processed through various NLP tools to get a rich representation of the text, on which further rulebased processes are then applied. In a first step, the text is split into sentences and POS-tagged with MElt In a second step, the tagged text is further processed to carry out lexical analyses of the text. During this step, three types of lexical difficulties are identified. Firstly, rare words are detected relying on frequencies from Lexique3 Secondly, technical terms are detected with some heuristics able to detect both simple terms and multi-word terms -a task that remains a challenge for current fully automatic approaches Thirdly, abbreviations are automatically detected as they are known to produce reading errors, especially when they are used by specialized writers to communicate to non-specialized readers. For instance, Leveraging the NLP analysis described above, the AMesure platform provides four types of diagnoses about texts to its users, as illustrated in Figure The second type of diagnosis (letter B in Figure To render all these yardsticks more visual and more understandable, we project each of them on a five-degree scale, represented by colored feathers. The more feathers a yardstick gets, the more complex this linguistic dimension is supposed to be for reading. To transform the yardstick values into a five-degree scale, we applied the following method. Our corpus of 115 administrative texts has been annotated by experts on a five-degree difficulty scale The third type of diagnosis allows to directly visualize the text in which all complex phenomena annotated during the analysis step (see Section 3.1) are underlined, namely the three types of subordinated clauses, passives, parentheticals, rare words, abbreviations, and technical terms. 
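One of the lexical diagnoses listed above, flagging rare words against a frequency lexicon such as Lexique3, could be approximated as follows. The threshold value and the treatment of out-of-lexicon lemmas are assumptions for illustration, not the platform's actual settings.

```python
def flag_rare_words(tokens, lemma_frequencies, threshold=1.0):
    """Return tokens whose lemma frequency (e.g. per million words, from a
    lexicon such as Lexique3) falls below `threshold`. `lemma_frequencies`
    maps lemmas to frequencies; unknown lemmas are treated as rare."""
    rare = []
    for token, lemma in tokens:                 # (surface form, lemma) pairs
        if lemma_frequencies.get(lemma, 0.0) < threshold:
            rare.append(token)
    return rare

# Toy usage: "dérogation" falls below the threshold and gets flagged.
freqs = {"demande": 152.3, "formulaire": 12.4, "dérogation": 0.6}
print(flag_rare_words([("demande", "demande"), ("dérogation", "dérogation")], freqs))
```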
For each of these categories, AMesure allows the user to select a tab showing only the respective phenomenon. It also offers a global view of the text in which complex sentences are highlighted in various shades of yellow (see letter C in Figure ). Finally, the last functionality offers writing advice related to the complex phenomena detected (letter D in Figure ). To assess the performance of the various extraction algorithms included in our platform, three linguists manually annotated, in 24 administrative texts, the following five phenomena: passive structures, relative clauses, object clauses, adverbial clauses, and abbreviations. The work of the annotators was supported by guidelines focusing on difficult cases (for instance, infinitive clauses led by a semi-modal auxiliary such as devoir 'ought to' or pouvoir 'can' were discussed, as contradictory points of view can be found in grammars). At the end of the annotation process, the expert agreement was evaluated using Fleiss' kappa (see Table ). This gold-standard version of the annotation was manually compared to the output of AMesure for the 24 texts in the test set. Table reports the results, with a recall of 0.62 for the relative clauses, and a recall of 0.66 and a precision of 0.94 for infinitive clauses introduced by the particle "TO". We have presented the AMesure system, which automatically analyzes the readability of French administrative texts based on classic readability metrics, but also on guidelines from plain language books. The system is freely available through a web platform and aims to help writers of administrative texts produce more accessible documents and forms. To that purpose, it offers a global readability score for the texts, 11 readability yardsticks, a detailed diagnosis in which difficult words and syntactic structures are highlighted, and some plain language advice. We also carried out a manual evaluation of the system based on 24 administrative texts annotated by linguists. Performance is satisfactory, except as regards the identification of object clauses. More work is needed on this category, especially to distinguish it from adverbial clauses. We also plan to improve the system providing simpler synonyms by adding a semantic filter based on embedding models. Finally, we plan to conduct a study with real writers of administrative texts to measure the perceived usefulness of AMesure as a whole, but also the usefulness of each functionality.
608
838
608
BabyStories: Can Reinforcement Learning Teach Baby Language Models to Write Better Stories?
Language models have seen significant growth in the size of their corpus, leading to notable performance improvements. Yet, there has been limited progress in developing models that handle smaller, more human-like datasets. As part of the BabyLM shared task, this study explores the impact of reinforcement learning from human feedback (RLHF) on language models pretrained from scratch with a limited training corpus. Comparing two GPT-2 variants, the larger model performs better in storytelling tasks after RLHF fine-tuning. These findings suggest that RLHF techniques may be more advantageous for larger models due to their higher learning and adaptation capacity, though more experiments are needed to confirm this finding. These insights highlight the potential benefits of RLHF fine-tuning for language models within limited data, enhancing their ability to maintain narrative focus and coherence while adhering better to initial instructions in storytelling tasks. The code for this work is publicly at
The recent growth in the size of large language models (LLMs) has enhanced natural language processing capabilities, from information extraction Storytelling is a fundamental human activity used to share information, impart lessons, and keep loved ones informed about our daily lives The performance of small language models (SLMs) trained on large datasets has been observed to be poor, generating incoherent and repetitive text. Training large language models on limited data can lead to overfitting, making smaller models a potential solution to prevent overfitting In summary, in this paper, we pretrain a GPT-2 Base model with 125M parameters from scratch and compare it with the larger GPT-2 Large model, which has 774M parameters, making it approximately six times larger. Both models are trained using a limited dataset provided by the BabyLM Challenge, which consists of approximately 100M words
Research has shown that smaller models tend to underperform when trained on large datasets, making the study of model downscaling a non-trivial Our research, however, is driven by a desire to understand if small pretrained models can benefit from Reinforcement Learning from Human Feedback (RLHF), potentially improving their overall performance despite their limited data size. Two previous studies have a direct relation to this work: the first employed human ranking feedback to train summarization models using reinforcement learning (RL) In this section, we describe the pertaining data used for the language models and the data used for the reinforcement model. We pretrain GPT-2 models using the dataset from the STRICT track in the BabyLM Challenge In this paper, we construct a reward model dataset for reinforcement learning by selecting 100 sentences from the STRICT track of the Babylm Challenge dataset. These sentences, serving as prompts, are derived from two subsets in the Babylm dataset: the Standardized Project Gutenberg and the Simple Wikipedia corpus development sets, with a prerequisite that each sentence includes characters and plots. These prompts are then used to generate two short stories each from the GPT-2 Base and GPT-2 Large models, beginning with the prefix "write me a story starting with". To enhance story diversity, we set a maximum length of 128 tokens and enforce a minimum of 10 new tokens in the generated stories. The generation code incorporates a beam size of 7 to optimize the story quality by exploring various potential continuations. The purpose of collecting feedback is to align the model's behavior with some goal behavior. For example, we aim for the model to generate stories consistent with the background plot, coherent, nonrepetitive, devoid of nonsensical sentences, and maintain a clear topic or logical structure. Rating the quality of a story accurately presents challenges due to its potentially subjective nature and the varying expectations of readers regarding emotional connection and engagement. Rather than directly estimating a generated story quality through scalebased annotation, we treat it as a latent variable to be inferred from relative comparisons. Following prior work in NLP on annotating social aspects of language Krippendorff's alpha, introduced by In our case, two graduate student annotators were designated to annotate human feedback data., which yielded a Krippendorff's alpha agreement score of .4657. To address disagreements, the two annotators discuss each story example together. They reconcile differences through discussion and unanimously select the best and worst stories based on the given story prompt. This section discusses pretraining data, the development of the data tokenizer, language model configuration, the objective of pretraining from scratch, and the process of fine-tuning using reinforcement learning with human feedback. Our model uses a sub-word vocabulary built with Byte-Pair Encoding (BPE) Prior research informed our decision to significantly reduce the vocabulary size. 
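The candidate-story generation described above (prompts prefixed with "write me a story starting with", a beam size of 7, a 128-token maximum, and at least 10 newly generated tokens) might look roughly like this with the Hugging Face transformers API; a recent transformers version is assumed for min_new_tokens, and the checkpoint path is a placeholder rather than the authors' released model.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def generate_story(model, tokenizer, sentence):
    prompt = f"write me a story starting with {sentence}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        num_beams=7,                 # beam size used to diversify continuations
        max_length=128,              # maximum story length in tokens
        min_new_tokens=10,           # force at least 10 newly generated tokens
        early_stopping=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Hypothetical checkpoint path for one of the pretrained BabyLM models.
tokenizer = GPT2TokenizerFast.from_pretrained("path/to/babylm-gpt2")
model = GPT2LMHeadModel.from_pretrained("path/to/babylm-gpt2")
```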
Studies suggest a vocabulary size of about 32,000 tokens is a good balance for a single-language model Models we pretrained in our experiments using the default configuration setting of GPT-2 For model selection, we chose the best model across all epochs based on the average score on two datasets: the Question-answering Natural Language Inference (QNLI) The reward model (RM) is designed to capture human preferences, and ideally, we could fine-tune it using Reinforcement Learning and human annotations for every output returned by the language model. However, due to practical constraints like workload and time limitations, it is not feasible for humans to provide enough feedback for each optimization iteration. As an alternative, a more effective approach is to train a reward model that simulates the evaluation process carried out by humans. This RM will evaluate any text and assign a scalar reward value to the sentences, where higher values indicate high-quality samples. Following To train our reward models, We initialize the weights of the reward model by leveraging a pretrained GPT-2 Large model as described above, then we add a randomly initialized linear head that outputs a scalar value to form the reward model r θ (x, y). We train this model to predict which generated story y ∈ {y 0 , y 1 }, where y 0 is the chosen (good) response to the prompt as labeled by our annotators and y 1 is the rejected (bad) response. In practice, this is where our annotators ranked y 0 > y 1 . The model is trained using the loss function where σ is the sigmoid function and D is the set of all training triplets in our dataset, i denotes the index of a specific data point in the dataset D. Intuitively, the model learns to give a larger score to the prompts with a higher rank. We have configured the reward model to run for a maximum of 10 epochs, with a set learning rate of 1e-5. After we train the reward model, we treat the logit output of the reward model as a reward that we optimize policy model outputs using reinforcement learning, specifically with the Proximal Policy Optimization (PPO) algorithm During the RL fine-tuning with PPO phase, we use the learned reward function to provide feedback to the language model. In particular, we formulate the following optimization problem where r(x, y) is the reward model's output, β is a hyper-parameter controlling the deviation from the initial policy. Our optimization focuses on the policy π RL (y|x) using Proximal Policy Optimization (PPO), with initialization based on the pretrained language model policy π SF T (y|x) To encourage exploration and prevent the policy from getting stuck in a single mode, the optimization uses the Kullback-Leibler (KL) divergence term. This term also discourages the policy from generating outputs that differ significantly from those seen by the reward model during training, thereby maintaining coherence in the generated text. Without this penalty, the optimization might generate gibberish text that tricks the reward model into providing a high reward. In our implementation, we used the trlX library with its default settings To assess the performance of our models, we employed various automated evaluation metrics used in the BabyLM shared task and our own human evaluation. The BabyLM shared task had two major sets of evaluations: zero-shot evaluation and fine-tuned evaluation. We describe each evaluation task below. Zero-shot Evaluation. BLiMP, introduced by Fine-tuned Evaluation. 
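Returning to the reward-model objective defined earlier in this section, the pairwise preference loss can be written compactly in PyTorch. This is a minimal sketch in which reward_model is assumed to return one scalar per (prompt, story) pair; it is not the trlX implementation used in the experiments.

```python
import torch.nn.functional as F

def reward_ranking_loss(reward_model, prompts, chosen, rejected):
    """L(theta) = -E[ log sigmoid( r(x, y_chosen) - r(x, y_rejected) ) ].

    `reward_model(prompts, stories)` is assumed to return a tensor of shape
    (batch,) with one scalar reward per (prompt, story) pair."""
    r_chosen = reward_model(prompts, chosen)
    r_rejected = reward_model(prompts, rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```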
Two datasets are used for the fine-tuned evaluation: SuperGLUE and the Mixed Signals Generalization Set (MSGS). Su-perGLUE To maintain consistency and ensure fair comparisons, we adopted the default hyperparameter settings recommended by Human Evaluation. Inspired by the TinyStories In this section, we report the results of the automated BabyLM metrics and our human evaluation for story generation. Performance on BLiMP benchmarks. Shown in Table Performance on SuperGLUE benchmarks. In Table a range of language understanding abilities. The GPT2-Large-PPO model stands out with the highest average score of 66.8, underlining the potential for enhanced performance using larger models fine-tuned with PPO. Other models present comparable average scores across the SuperGLUE tasks. Compared to the Majority Label baseline, the GPT-2 models exhibit varied levels of performance enhancement across different tasks. Specifically, the GPT2-Base model outperforms the baseline in SST-2, QQP (F1), MNLI, MNLImm, QNLI, and BoolQ. Similarly, the GPT2-Base-PPO model surpasses the baseline in the same tasks: SST-2, QQP (F1), MNLI, MNLI-mm, QNLI, and BoolQ. The GPT2-Large model demonstrates superior performance over the baseline in SST-2, MRPC (F1), MNLI, MNLI-mm, QNLI, BoolQ, and WSC. While, the GPT2-Large-PPO model outperforms the majority baseline in all tasks except for CoLA and MultiRC, marking significant performance improvement in SST-2, MNLI-mm, and QNLI, with an increase of 34.1, 28.3, and 44.2 respectively. The performance across various models and tasks exhibits considerable variability, showing that different models may excel in distinct language understanding domains. The superior scores of the GPT2-Large-PPO model suggest that larger models fine-tuned with PPO could enhance performance, yet further examination reveals inconsistencies. Finally, we note that the PPO training only improves the performance of the GPT2-Large model, suggesting that PPO training may require a model with a minimum number of parameters to work in the limited data setting. However, more experiments are needed to confirm this finding. Performance on MSGS benchmarks Table Performance on Age-of-acquisition benchmarks According to Performance on Human Evaluation. In Table We also find significant differences in Consistency (Const.) and Plot Coherence (PCoh) between GPT-Large and GPT2-Large-PPO. Intuitively, these metrics evaluate generative models' capability in following the beginning of the story background rather than just content creation. Our findings indicate that the performance scores for GPT2-Base and GPT2-Base-PPO models are fairly similar, but both are lower than those of the GPT2-Large model variants. Again, this indicates that the large models outperform the smaller models, even though we trained on a relatively small dataset. Moreover, the GPT2-Large-PPO model significantly improves consistency and plot coherence scores compared to the standard GPT2-Large model. This suggests that large models (at least GPT2-Large in our case) can integrate the reward model to generate better outputs than the GPT2base (smaller model). We analyze the large model outputs in Table Summary of Findings and Limitations. Overall, we found that the GPT-2-Large generally works better than GPT-2-base with and without PPO. Also, PPO made significant improvements to the model's consistency and plot coherence on the storytelling task when used with the large model. However, PPO generally hurts performance with the smaller GPT-2-Base model. 
There were several limitations to our study. First, a major limitation of this work is the lack of comparison with architectures beyond GPT-2. Moreover, comparisons to even larger models should be made in the future. We were limited by the computational resources required for large-scale testing during the BabyLM shared task timeline. Next, we had a limited-size reward model dataset. Future work should explore the impact of reward model dataset size and variety. Additionally, the study did not explore the hyperparameter tuning for the reward model and the loss function in depth. Exploring different settings for hyperparameters and examining alternative methods for reward training, such as varying the weighting of the loss terms, could yield different results and improve the model's performance in the storytelling task. Finally, we had only one annotator for the human evaluation, and the evaluation was limited in size. A more extensive human study could find more intricate differences between the models. In this study, we investigated whether a small pretrained model, with its limited data size, can also benefit from RLHF, thus potentially improving its overall performance. We evaluate two variants of the GPT-2 model: the GPT-2 Base model with 125M parameters and the larger GPT-2 Large model with 774M parameters. Both variants are pretrained on the 100M-word BabyLM Challenge dataset. We then fine-tune both models using RLHF and evaluate their acquisition of new linguistic patterns and their storytelling ability, including generating coherent and creative English text while adhering to the story background. We observe that RLHF has little or even a negative effect on the smaller model. However, a substantial increase in model parameters noticeably enhances the larger model's performance in storytelling tasks. In summary, our experiments shed light on the behavior of small language models fine-tuned using RLHF to perform storytelling tasks in a limited-dataset setting.
1,009
910
1,009
Exploiting Multi-Word Units in History-Based Probabilistic Generation
We present a simple history-based model for sentence generation from LFG f-structures, which improves on the accuracy of previous models by breaking down PCFG independence assumptions so that more f-structure conditioning context is used in the prediction of grammar rule expansions. In addition, we present work on experiments with named entities and other multi-word units, showing a statistically significant improvement of generation accuracy. Tested on section 23 of the Penn Wall Street Journal Treebank, the techniques described in this paper improve BLEU scores from 66.52 to 68.82, and coverage from 98.18% to 99.96%.
Sentence generation, or surface realisation, is the task of generating meaningful, grammatically correct and fluent text from some abstract semantic or syntactic representation of the sentence. It is an important and growing field of natural language processing with applications in areas such as transfer-based machine translation. This paper is concerned with sentence generation from Lexical-Functional Grammar (LFG) f-structures. We also present work on utilising named entities and other multi-word units to improve generation results for both accuracy and coverage. There has been a limited amount of exploration into the use of multi-word units in probabilistic parsing, for example in We take the generator of The remainder of the paper is structured as follows: in Section 2 we review related work on statistical sentence generation. Section 3 describes the baseline generation model, and in Section 4 we show how the new history-based model improves over the baseline. In Section 5 we describe the sources of the multi-word units (MWUs) used in our experiments and the various techniques we employ to make use of these MWUs in the generation process. Section 6 gives experimental details and results.
In (statistical) generators, sentences are generated from an abstract linguistic encoding via the application of grammar rules. These rules can be handcrafted grammar rules, such as those of Insofar as it is a broad coverage generator, which has been trained and tested on sections of the WSJ corpus, our generator is closer to the generators of Another feature which characterises statistical generators is the probability model used to select the most probable sentence from among the space of all possible sentences licensed by the grammar. One generation technique is to first generate all possible sentences, storing them in a word lattice The probability model described in this paper also incorporates syntactic information, however, unlike the discriminative HPSG models just described, it is a generative history-and PCFG-based model. While C-structures and f-structures are related in a projection architecture in terms of a piecewise correspondence φ. Figure terms of the curvy arrows pointing from c-structure nodes to f-structure components in Figure The up-arrows and down-arrows are shorthand for φ(M(n i )) = φ(n i ) where n i is the c-structure node annotated with the equation. T ree best := argmax Tree P (Tree|F-Str) (1) The generation model of 2). Table Cahill and van Genabith ( From Figures Table Given the input f-structure (for She accepted) in Figure Figure 2: C-and f-structures with φ links for the sentence She hired her. Table The automatic generation grammar transform presented in There is another option available to us, and that is the option we will explore in this paper: instead of applying a generation grammar transform, we will improve the f-structure-based conditioning of the generation rule probabilities. In the original model, rules are conditioned on purely local f-structure context: the set of features/attributes φ-linked to the LHS of a grammar rule. As a direct consequence of this, the conditioning (and hence the model) cannot not distinguish between NP, PRP and NNP rules appropriate to e.g. subject (SUBJ) or object contexts (OBJ) in a given input f-structure. However, the required information can easily be incorporated into the generation model by uniformly conditioning generation rules on their parent (mother) grammatical function, in addition to the local φ-linked feature set. This additional conditioning has the effect of making the choice of generation rules sensitive to the history of the generation process, and, we argue, provides a simpler, more uniform, general, intuitive and natural probabilistic generation model obviating the need for CFG-grammar transforms in the original proposal of In the new model, each generation rule is now conditioned on the LHS rule CFG category, the set of features φ-linked to LHS and the parent grammatical function of the f-structure φ-linked to LHS. In a given c-/f-structure pair, for a CFG node n, the parent grammatical function of the f-structure φ-linked to n is that grammatical function GF, which, if we take the f-structure φ-linked to the mother M(n), and apply it to GF, returns the f-structure φ-linked to n: The basic idea is best explained by way of an example. Consider again Figure Given Figures Note that the new conditioning feature, the fstructure mother grammatical function, GF, is available from structure previously generated in the cstructure tree. As such, it is part of the history of the tree, i.e. it has already been generated in the topdown derivation of the tree. 
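The conditioning scheme can be estimated by simple relative-frequency counting over rule expansion events read off the c-/f-structure pairs. The following is a minimal sketch of that estimation, assuming each event arrives as a (LHS category, φ-linked feature set, parent grammatical function, RHS expansion) tuple; smoothing and back-off, which a real model needs, are omitted.

    # Illustrative relative-frequency estimation of history-based generation rules:
    # P(RHS | LHS category, phi-linked feature set, parent grammatical function GF).
    from collections import defaultdict

    rule_counts = defaultdict(lambda: defaultdict(int))   # context -> RHS -> count
    context_totals = defaultdict(int)

    def observe(lhs, phi_features, parent_gf, rhs):
        """Record one rule expansion event extracted from a c-/f-structure pair."""
        context = (lhs, frozenset(phi_features), parent_gf)
        rule_counts[context][rhs] += 1
        context_totals[context] += 1

    def prob(lhs, phi_features, parent_gf, rhs):
        """Relative-frequency estimate of the rule probability in that context."""
        context = (lhs, frozenset(phi_features), parent_gf)
        if context_totals[context] == 0:
            return 0.0                  # a real model would back off / smooth here
        return rule_counts[context][rhs] / context_totals[context]

    # Hypothetical events: the same NP category seen under SUBJ and OBJ contexts.
    observe("NP", {"PRED", "NUM", "PERS"}, "SUBJ", ("PRP",))
    observe("NP", {"PRED", "NUM", "PERS"}, "SUBJ", ("PRP",))
    observe("NP", {"PRED", "NUM", "PERS"}, "OBJ", ("NNP",))
    print(prob("NP", {"PRED", "NUM", "PERS"}, "SUBJ", ("PRP",)))   # 1.0
    print(prob("NP", {"PRED", "NUM", "PERS"}, "OBJ", ("PRP",)))    # 0.0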
In this way, the generation model resembles history-based models for parsing Section 6 provides evaluation results for the new model on section 23 of the Penn treebank. In another effort to improve generator accuracy over the baseline model we explored the use of multiword units in generation. We expect that the identification of MWUs may be useful in imposing wordorder constraints and reducing the complexity of the generation task. Take, for example, the following The gold version of the sentence contains a multiword unit, New York, which appears fragmented in the generator output. If multi-word units were either treated as one token throughout the generation process, or, alternatively, if a constraint were imposed on the generator such that multi-word units were always generated in the correct order, then this should help improve generation accuracy. In Section 5.1 we describe the various techniques that were used to incorporate multi-word units into the generation process and in 5.2 we detail the different types and sources of multi-word unit used in the experiments. Section 6 provides evaluation results on test and development sets from the WSJ treebank. We carried out three types of experiment which, in different ways, enabled the generation process to respect the restrictions on word-order provided by multi-word units. For the first experiments (type 1), the WSJ treebank training and test data were altered so that multi-word units are concatenated into single words (for example, New York becomes New York). As in In the second experiment (type 2) only the test data was altered with no concatenation of MWUs carried out on the training data. In the final experiments (type 3), instead of concatenating named entities, a constraint is introduced to the generation algorithm which penalises the generation of sequences of words which violate the internal word order of named entities. The input is marked-up in such a way that, although named entities are no longer chunked together to form single words, the algorithm can read which items are part of named entities. See the rightmost f-structure in Figure We carry out experiments with multi-word units from three different sources. First, we use the output of the maximum entropy-based named entity recognition system of For our purposes we are not concerned with the distinctions between different types of named entities; we are merely exploiting the fact that they may be treated as atomic units in the generation model. In all cases we disregard multi-word units that cross the original syntactic bracketing of the WSJ treebank. An overview of the various types of multi-word units used in our experiments is presented in Table All experiments were carried out on the WSJ treebank with sections 02-21 for training, section 24 for development and section 23 for final test results. The LFG annotation algorithm of In Table +MWU Best Automatic displays our best results using automatically identified named entities. These were achieved using experiment type 2, described in Section 5, with the MWUs produced by We now discuss the various MWU experiments in more detail. See Table Our first set of experiments (type 1), where both training data and development set data were MWUchunked, produced the worst results for the automatically chunked MWUs. BLEU score accuracy actually decreased for the automatically chunked MWU experiments. 
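A minimal sketch of the type 1 preprocessing described above, assuming that concatenation simply joins the components of a multi-word unit into a single token (the underscore join is an assumption; only the single-token treatment matters):

    # Sketch of type 1 MWU preprocessing: collapse each multi-word unit into one token.
    def concatenate_mwus(tokens, mwus):
        """tokens: list of word strings; mwus: set of tuples of MWU components."""
        # Try the longest candidate MWUs first so the longest match wins.
        candidates = sorted(mwus, key=len, reverse=True)
        out, i = [], 0
        while i < len(tokens):
            match = next((m for m in candidates
                          if tuple(tokens[i:i + len(m)]) == m), None)
            if match:
                out.append("_".join(match))
                i += len(match)
            else:
                out.append(tokens[i])
                i += 1
        return out

    sentence = "the consignment was shipped to New York last week".split()
    print(concatenate_mwus(sentence, {("New", "York")}))
    # ['the', 'consignment', 'was', 'shipped', 'to', 'New_York', 'last', 'week']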
In an error analysis of type 1 experiments with (Chieu and Ng, 2003) concatenated MWUs, we inspected those sentences where accuracy had decreased from the baseline. We found that for over half (51.5%) of these sentences, the input f-structures contained no multi-word units at all. The problem for these sentences therefore lay with the probabilistic grammar extracted from the MWUchunked training data. When the source of MWU for the type 1 experiments was the BBN, however, accuracy improved significantly over the baseline and the result is the highest accuracy achieved over all experiment types. One possible reason for the low accuracy scores in the type 1 experiments with the (Chieu and Ng, 2003) MWU chunked data could be noisy MWUs which negatively affect the grammar. For example, the named entity recogniser of In order to avoid changing the grammar through concatenation of MWU components (as in experiment type 1) and thus risking side-effects which cause some heretofore likely constructions become less likely and vice versa, we ran the next set of experiments (type 2) which leave the original grammar intact and alter the input f-structures only. These experiments were more successful overall and we achieved an improvement over the baseline for both BLEU and String Edit Distance scores with all MWU types. As can be seen from Table It is difficult to compare sentence generators since the information contained in the input varies greatly between systems, systems are evaluated on different test sets and coverage also varies considerably. In order to compare our system with those of We have presented techniques which improve the accuracy of an already state-of-art surface generation model. We found that a history-based model that increases conditioning context in PCFG style rules by simply including the grammatical function of the f-structure parent, improves generator accuracy. In the future we will experiment with increasing conditioning context further and using more sophisticated smoothing techniques to avoid sparse data problems when conditioning is increased. We have also demonstrated that automatically acquired multi-word units can bring about moderate, but significant, improvements in generator accuracy. For automatically acquired MWUs, we found that this could best be achieved by concatenating input items when generating the f-structure input to the generator, while training the input generation grammar on the original (i.e. non-MWU concatenated) sections of the treebank. Relying on the BBN corpus as a source of multi-word units, we gave an upper bound to the potential usefulness of multi-word units in generation and showed that automatically acquired multi-word units, encouragingly, give results not far below the upper bound.
626
1,203
626
LVP-M 3 : Language-aware Visual Prompt for Multilingual Multimodal Machine Translation
Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features and has attracted considerable attention from both the natural language processing and computer vision communities. Recent advances still train a separate model for each language pair, which is costly and becomes unaffordable as the number of languages increases in the real world. In other words, the multilingual multimodal machine translation (Multilingual MMT) task, which aims to handle these issues by providing a shared semantic space for multiple languages, has not yet been investigated. Moreover, the image modality has no language boundaries, which makes it well suited to bridging the semantic gap between languages. To this end, we first propose the Multilingual MMT task and establish two new Multilingual MMT benchmark datasets covering seven languages. Then, an effective baseline, LVP-M 3, using visual prompts is proposed to support translation between different languages; it comprises three stages (token encoding, language-aware visual prompt generation, and language translation). Extensive experimental results on the constructed benchmark datasets demonstrate the effectiveness of the LVP-M 3 method for Multilingual MMT.
Multimodal Machine Translation (MMT) extends the conventional text-based machine translation by taking corresponding images as additional inputs However, as shown in Fig. To eliminate the above limitations, we propose a simple and effective LVP-M 3 method, including Token Encoding, Language-aware Visual Prompt Generation (LVPG), and Language Translation. Specifically, in the token encoding stage, we use the pre-trained vision encoder to extract the visual tokens. Then, we follow Extensive experiments are conducted on our proposed benchmark datasets for LVP-M 3 . Results show that our model achieves the state-of-the-art performance in all translation directions, especially outperforming the text-only multilingual model by 4.3 BLEU scores on average. The contributions of this work are summarized as follows: • We first propose the Multilingual Multimodal Machine Translation (Multilingual MMT) to handle the translations for multiple language pairs, which investigates the effect of vision modality for multilingual translation and reduces the computation costs of existing MMT methods for multiple languages. • For Multilingual MMT, we propose an effective language-aware visual prompt generation strategy to produce different visual prompts for different target languages based on the vision modality and type of the target language. • We establish two Multilingual MMT benchmark datasets to nourish the further research on Multilingual MMT, and extensive experiments on these datasets demonstrate the effectiveness of our proposed LVP-M 3 method.
Multimodal Machine Translation. The multimodal context plays a key role in Multimodal Machine Translation (MMT). Recent MMT methods can be divided into three categories: (1) Using global visual features directly (3) Combining other vision tasks with the translation task by multitask learning Vision-Language Models. The success of visionlanguage models can be credited to the following three important reasons: Transformers We introduce two Multilingual MMT benchmark datasets (i.e., M 3 -Multi30K, M 3 -AmbigCaps) using Multi30K Supposing we have M languages {L m } M m=1 and N bilingual corpora {D n } N n=1 under the multilingual setting, the dataset D n consists of K parallel sentences {(x k L i , x k L j )} K k=1 between language L i and L j , where K is the number of training instances and each instance has the corresponding image z k . Given the corpora, we can train a Multilingual MMT model that enables the translation among different languages with the help of image modality. The training objective of the Multilingual MMT is learnt with a combination of different languages: where the Multilingual MMT model uses a complete shared model for all translation directions. In this work, we adopt Transformer as the backbone model for language encoding and pre-trained vision branch of the CLIP model As shown in Fig. as the input of the Transformer decoder to predict the translation results and compute the loss in Eq. 1 using the predicted translation results and the ground-truth target language x k L j . For each image z k , we directly use the vision backbone (e.g., the pre-trained vision branch of the widely-used CLIP model where H denotes the vision encoder and M is the number of visual tokens. Similarly, given the source language x k L i , based on the Transformer encoder E, the source language tokens {s f } F f =1 are extracted as follows: where F is defined as the number of source language tokens. In language-aware visual prompt generation stage of Fig. After that, we generate the language-aware visual prompt {p m } M m=1 as follows: θ is the generated parameters in Eq. 4, which is assigned to the mapping network M. In this way, when translating source language into different target languages, the θ will be generated according to type of target language tokens, and the visual tokens {v m } M m=1 can be mapped into different visual prompts according to the type of the target language. In Fig. Specifically, we utilize the Transformer module implemented by self-attention to fuse the information from other tokens within each modality for {s f } F f =1 and {p m } M m=1 , respectively, and we represent the updated source language tokens and visual prompt as S and P, respectively. Then, we take S as the query, and the P as the key and value in the co-attention module to generate the vision-guided source language tokens {q f } F f =1 as follows: where ∥ H h=1 is the concatenation of the H attentive features along the channel dimension. SF represents the softmax operation. ϕ h Q (•), ϕ h K (•) and ϕ h V (•) are the corresponding linear projection operations of the h-th head for the query, the key and the value, respectively. C denotes the number of feature channels. After the operation of Eq. 6, other operations (e.g., FFN, layer normalization (Ba et al., 2016)) of standard attention scheme Finally, at inference, based on {q f } F f =1 , we use the Transformer decoder to predict the target language sequence in our LVP-M 3 . 
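The language-aware visual prompt generation stage can be viewed as a small hypernetwork: a controller conditioned on the target language emits the parameters θ of a mapping network that projects the visual tokens into language-specific prompts. The PyTorch sketch below illustrates this idea under simplifying assumptions (a single linear mapping and an embedding lookup for the target language); it is not the exact LVP-M 3 implementation.

    # Sketch of language-aware visual prompt generation: a controller network
    # generates the weights of a linear mapping applied to the visual tokens.
    import torch
    import torch.nn as nn

    class LanguageAwarePromptGenerator(nn.Module):
        def __init__(self, dim, num_languages):
            super().__init__()
            self.lang_embed = nn.Embedding(num_languages, dim)
            # Controller emits a dim x dim weight matrix plus a bias (the theta of Eq. 4).
            self.controller = nn.Linear(dim, dim * dim + dim)
            self.dim = dim

        def forward(self, visual_tokens, target_lang_id):
            # visual_tokens: (batch, M, dim); target_lang_id: (batch,)
            theta = self.controller(self.lang_embed(target_lang_id))
            weight = theta[:, : self.dim * self.dim].view(-1, self.dim, self.dim)
            bias = theta[:, self.dim * self.dim :].unsqueeze(1)        # (batch, 1, dim)
            # Mapping network M: project visual tokens into language-specific prompts.
            return torch.bmm(visual_tokens, weight) + bias             # (batch, M, dim)

    gen = LanguageAwarePromptGenerator(dim=256, num_languages=7)
    prompts = gen(torch.randn(2, 49, 256), torch.tensor([3, 5]))
    print(prompts.shape)   # torch.Size([2, 49, 256])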
We evaluate our proposed LVP-M 3 method on the multilingual dataset including 7 languages and 6 translation directions. In all experiments, English (En) is treated as the pivot language for Multilingual MMT setting. Implementation Details. Our implementation is based on the Fairseq Evaluation. We compute the cumulative 4-gram BLEU scores to evaluate the quality of translation. During inference, the beam search strategy is performed with a beam size of 5 for the target sentence generation. We set the length penalty as 1.0. Baseline Methods. As we are the first multilingual method in this area, we reproduce methods including Text Transformer To demonstrate the effectiveness of LVP-M 3 , we compare our method with baseline methods on M 3 -Multi30K under the multilingual MMT setting in Table Results of M 3 -AmbigCaps are presented in Table In this section, we conduct comprehensive ablation study to demonstrate the effectiveness of different components in our proposed LVP-M 3 method on the test set of M 3 -Multi30K. In Table Effect of Different Vision Backbones. In Table 5, we compare the results of LVP-M 3 by using the visual tokens extracted by different vision backbones Visualization of Different Masking Ratios. As shown in Fig. Qualitative Analysis. To further explore the necessity of visual modality for machine translation, we compare the predictions results (i.e., De and Fr) of a sample source language (i.e., En) with the ground truth of these target languages in Fig. In our proposed LVP-M 3 method, first, both encoders (vision and text) and decoder are shared for all language pairs, while previous methods on MMT usually adopt different models for different language pairs. Second, to generate different visual prompts for different language pairs with minimal additional parameters, we just use controller network to generate the parameters of mapping network to map the vision embeddings. Third, different language translation directions are used in training, where the target language token is also prefixed to each source sentence for denoting the translation direction. Last, training separated models will result in huge training costs when compared with the multilingual models as discussed in many multilingual methods. In our work, we first propose the Multilingual MMT task to support the multilingual multimodal machine translations between different language pairs using one single model. Then, we propose an effective LVP-M 3 baseline method for the Multilingual MMT task, where a language-aware prompt generation module is proposed to generate visual prompts for different target languages dynamically. Comprehensive experimental results on our established Multilingual MMT benchmark datasets demonstrate the effectiveness of our proposed LVP-M 3 method for Multilingual MMT. Although our proposed LVP-M 3 method has achieved substantial improvements for Multilingual MMT, we find that there still exists some hyper-parameters (e.g., the number of encoder and decoder layers,) to tune for better results, which may be time-consuming. Besides, in our established datasets, we support seven languages currently, and we will extend to support more languages and more translation directions for Multilingual MMT in the future work.
1,248
1,558
1,248
The economic trade-offs of large language models: A case study
Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM -prompt engineering, fine-tuning, and knowledge distillation -using feedback from the brand's customer service agents. We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.
Amidst increased automation, human agents continue to play an important role in providing excellent customer service. While many conversations are automated in text-based customer support, others are routed to human agents who can handle certain customer concerns more effectively. Agents often handle multiple conversations at once, consulting customer account information and brand policies while maintaining these conversations. As agents are expensive to staff, many companies are seeking ways to make their work more efficient. LivePerson's Conversation Assist is one such product, suggesting responses that agents can use directly or edit. Large Language Models (LLMs) are a natural fit for this technology, as they have achieved high performance on response generation tasks. With one brand as a case study, we explore ENCS with various methods of model customization. Using feedback from the brand's customer service agents, we evaluated fine-tuning, prompt engineering, and distillation to adapt and optimize GPT-2. We generalize this case study to a broader range of brands and models. We find that low perplexity correlates with the probability that an agent will use a response, and we extrapolate from this finding to use perplexity to estimate the ENCS for additional model customization strategies. We apply ENCS to each configuration, and while models, prices, and use cases will change over time, we expect that this framework can be continuously leveraged for decision making as technology evolves.
Transformers The size of these LLMs plays a significant role in their high performance Response generation is difficult to evaluate holistically. Some have focused on relevance and level of detail We calculate the ENCS for each model using equation (2), repeated here in (5). ( Using the RU scores in Tables The factor with the largest impact on AR's cost savings is the usefulness of the predictions, as the best annotated model (GPT-3 PE)'s predictions are used or edited only 5% more often than the fastest (GPT-2 BFT BD), while its cost was almost 100 times higher (¢1.09 vs ¢0.0011). Despite this, the difference in ENCS between these two models is minimal and only amounts to about $3k per year. In general, the RU and ENCS are higher for the extrapolated results, which are somewhat less reliable, but they lead to one important insight: in this case, the inference cost for a fine-tuned GPT-3 model is too high for the customer to realize savings. This model makes a number of simplifying assumptions. We assume that agents always have conversations to respond to or some other work to do. We exclude the problem of workforce optimization from our framework, noting that when fewer agents are needed to handle the conversational traffic, workforce can be reduced. We also exclude R&D cost, but return to this factor in section 5. Furthermore, we omit any discussion of the cost of an agent using an inappropriate or factually incorrect response. For the purposes of this model, we assume that agents read all suggestions carefully, but a deeper analysis of the risk and cost of these errors is a critical area for further study. We focus on a single brand to evaluate the use of LLMs for Conversation Assist and explore the application of ENCS for making product decisions. We evaluate three model customization strategies using manual ratings from brand agents. We then evaluate how well these ratings relate to perplexity and use this to assess a larger set of models. Finally, we estimate ENCS and discuss the implications. We partnered with a single brand, who we will refer to as Anonymous Retailer (AR), for this case study. AR's customer base includes both consumers and sellers who consign items through AR's platform. Because AR's agents are trained across different customer concern categories, they can provide expert feedback on a wide range of data. At the time of writing, AR has about 350 human agents who use LivePerson's chat platform. AR supports about 15,000 conversations per month, and uses chat bots for simple tasks and routing, while their human agents send 100,000 messages per month on average. In comparison, the average number of conversations per month for brands on LivePerson's platform is 34,000, with a median of 900 monthly conversations per brand and a standard deviation of 160. We constructed three datasets: brand-specific training, brand-specific test, and general training. We de-identified data, replacing each entity with a random replacement. For the test set, we manually ensured that the de-identification was internally consistent across the conversation for agent and consumer names, addresses, and order numbers. The brand-specific data comprises English customer service conversations from 2022 that include human agent and bot messages. We filtered these conversations to ensure that they had at least two agent turns, more human agent than bot messages, and a positive Meaningful Conversation Score. 
BFT = fine-tuned on AR brand data, GFT = fine-tuned on the general dataset, BD = distilled using AR brand data, GD = distilled using the general dataset, PE = prompt engineered. From this filtered data, we randomly sampled 100,059 conversations to make up our training set. From the remainder, we curated a brand-specific test set by manually selecting 287 conversations where the customer's goal could be clearly established from the context of the conversation. We constructed the general training set from five additional retail brands whose product lines fall into similar categories as AR. We filtered and processed the data using the method described above and selected 70,000 conversations per brand, or used the entirety of the brand's data if there were fewer than 70,000 conversations. The total size of the general training set is 236,769 conversations. For more details on these datasets, see Appendix D. We explored three standard model customization strategies: prompt engineering, fine-tuning, and knowledge distillation. Using these strategies, we tested eleven configurations (Table GPT-3 We prompted the text-davinci-003 GPT-3 model (OpenAI, 2023a), following OpenAI's best practices for prompt engineering GPT-2 We fine-tuned GPT-2 We fine-tuned GPT-3 with promptcompletion pairs using the OpenAI API. We trained for 4 epochs using a total of 50 examples that were selected and split at random human-agent turns to append the preceding conversation to the prompt and the human-agent turn as the completion. Additionally, the prompt included a brief summary of the context before giving the conversational context, which includes a separator sequence to delineate the summary and the conversation. An example of a prompt-completion pair is given below: Prompt: Summary: The following is a conversation between a CONSUMER and a polite, helpful, customer service AGENT from <BRAND_NAME>. CONSUMER: <consumer_turn> AGENT[non-human]: <agent_turn> ... To reduce latency and cost to serve by almost half, we distilled our fine-tuned GPT-2 models using the Transformers library While previous work has assessed the helpfulness or usability of a response with crowd-sourced judgments Table As the use rate increases, the edit rate and ignore rates both decrease, indicating that conversations resulting in editable prompts for some models can result in usable prompts for another model. We also note, that while the use rate was similar for GPT-2 BFT BD and Cohere PE, the edit rate was much higher for cohere, highlighting the importance of assessing the cost savings of an editable response vs. ignoring the response entirely. We also annotated these conversations for the Foundation Metrics in Adiwardana et al. ( (4) P P (W ) = N 1 P (w 1 ,w 2 ,...,w N ) Using all annotated LLMs' suggested responses across all conversations in the evaluation set, we fit a set of linear regression models using the perplexity of the generated agent turn as our independent variable, and the probability of use, edit, and ignore as our dependent variables. Individual linear models trained on the output of a single LLM did not show statistical significance; however, models trained on the output of all LLMs did show significance in the F-statistic (p < 0.05 for P(edit), p < 0.001 for P(use) and P(ignore)). Extrapolating from these linear models allows us to illustrate potential cost savings for more models than we were able to annotate. 
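The perplexity used as the independent variable is the standard length-normalized inverse probability, PP(W) = P(w_1, ..., w_N)^(-1/N), i.e., the exponential of the mean token-level negative log-likelihood. A minimal sketch of computing it for a generated agent turn with a causal language model (the checkpoint name is a placeholder for the fine-tuned models above):

    # Sketch: perplexity of a generated response under a causal LM,
    # PP(W) = P(w_1,...,w_N)^(-1/N) = exp(mean negative log-likelihood).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"   # placeholder; the paper's models are brand fine-tuned variants
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModelForCausalLM.from_pretrained(model_name)

    def perplexity(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=input_ids, the loss is the mean token NLL (shifted internally),
            # which serves as the independent variable in the linear regressions.
            loss = lm(ids, labels=ids).loss
        return torch.exp(loss).item()

    print(perplexity("Thanks for reaching out! I've resent your order confirmation email."))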
These linear models predict the RU scores in Table To decide which of these models will lead to the greatest ROI for a brand, we must consider the break-even point for each model based on the ENCS (which includes agent labor and model inference costs) as well as R&D cost and message volume. This can be visualized with Figure (6 Given that the difference in ENCS per message across the models explored in this paper is not large, low R&D cost is the main consideration to reach the fastest ROI. For a small brand sending 500,000 agent messages per year and saving about $24,000 per year with any of the models, reducing the upfront R&D cost would be critical. On the other hand, a large enterprise brand who will save $950,000 per year over 20 million messages, will break-even on any R&D cost fairly quickly. As a model with lower inference cost will offset high R&D cost more quickly and lead to more savings over a longer period of time, inference cost is a much more important factor for a brand with high traffic. In Appendix K, we provide a detailed example of the impacts of these costs. It is also worth noting that when choosing between in-house and third-party models, the difference in R&D and maintenance cost may not be as significant as one might expect. While an in-house model requires up-front investment to train and serve, OpenAI and Cohere's LLMs at the time of writing require a fair amount of effort to prompt engineer for the best performance and these prompts should be customized to some degree for different brands and scenarios. From a maintenance perspective, we similarly find that while an in-house model must be refreshed, prompts must also be redesigned as third-party providers update and release new models. Brands might also wish to consider factors that are not accounted for in this framework. Some brands would prefer to use an in-house model so that they can retain control over their data and protect their customer privacy by limiting access of their data to third-party vendors. An in-house model also provides more control over the model's suggestions, as well as control over when the model is updated or deprecated. Especially as technology develops, models become less expensive to train, and the performance of open-source models improves, these factors may carry even more weight. In this case study, we demonstrated the utility of LLMs for agent assistance products, exploring 3 model adaptation strategies across 11 model configurations. Based on feedback from real customer service agents, we found that bigger is not always better, as the distilled GPT-2 model resulted in greater cost-savings than GPT-3, despite lower quality responses, because, at the time of writing, its inference cost is so much lower. These results empower near-term decision-making for integrating models like these into production. However, with the rapidly shifting NLP landscape, a framework to assess the cost benefits of new technologies is critical to facilitate decisions about integrating them into products. The flexible framework presented in this paper, ENCS, enables NLP practitioners to invest in innovations that lead to tangible business benefits. We found that for this product, the impact of model quality far outweighs inference cost, pointing to the importance of continuing to push the state of the art, while considering practical expense. 
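As a worked illustration of this break-even reasoning, the sketch below reuses the round figures above; the one-time R&D cost is an assumed value for the sake of the arithmetic, not a reported number.

    # Break-even illustration: messages needed before cumulative per-message
    # savings (ENCS) cover a one-time R&D cost. Figures are illustrative.
    def break_even_messages(rd_cost, encs_per_message):
        return rd_cost / encs_per_message

    annual_messages_small, annual_savings_small = 500_000, 24_000       # from the case study
    annual_messages_large, annual_savings_large = 20_000_000, 950_000

    encs_small = annual_savings_small / annual_messages_small           # ~$0.048 per message
    encs_large = annual_savings_large / annual_messages_large           # ~$0.0475 per message

    rd_cost = 100_000   # assumed one-time R&D cost, for illustration only
    print(break_even_messages(rd_cost, encs_small) / annual_messages_small, "years (small brand)")
    print(break_even_messages(rd_cost, encs_large) / annual_messages_large, "years (large brand)")
    # The small brand needs over four years of traffic to break even at this R&D cost;
    # the large brand breaks even within a few months.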
This framework empowers the NLP community to invest in the most cost-effective technology for their specific needs, even as that technology, its capability, and its pricing evolve. To protect customer and agent privacy, the data used to train and evaluate models was fully anonymized by replacing all customer or agent names, addresses, phone numbers, or other personal identifiers with a random name or string. We also compensated agents for annotations in line with their standard rate as agents at AR. While the tools described in this paper have the explicit goal of making agents' jobs easier, theyand specifically the lens of a cost savings analysishave the potential to be used to motivate reductions in workforce, and we acknowledge the impact that this can have on the agents themselves. We also note that these tools can also improve the customer experience by reducing wait times, which can lead to fewer frustrated customers when they do interact with agents. In this study, we collected feedback on the usefulness of model responses from customer service agents at AR. These agents were recommended based on their availability and experience with Conversation Assist; however, we did not receive details about the agents such as their level of training or experience, which may have an impact on their preferences using the suggested responses. Furthermore, while agents in our study received a flat rate per judgment with no bonus or penalties to how they judged the response, some businesses have existing agent metrics (e.g. actual handle time, AHT targets, etc.) that could incentivize the agents to behave differently while performing their jobs. These metrics have the potential to exert pressure on agents in real-life situations to accept responses at a higher rate than in this study. The linear models in section 4.4.2 are based on the judgments of 5 agents on 3 LMM model outputs for 287 conversations. While they have shown a statistically significant relationship between usage rates and perplexity, this is a small pilot analysis. Additional data will be necessary to determine how well this generalizes. Our cost savings framework also makes a number of simplifying assumptions about workforce optimization. We've noted some of these assumptions in section 3.1, and they should be considered when leveraging this framework for different types of products. In addition, while the explicit goal of these models is to make agents' jobs easier, we expect from previous work studying vigilance tasks We started with a learning rate of 0.00008 with a linear scheduler and no warm up steps. The model was trained for 34000 steps across 4 Nvidia Tesla V100 GPUs, which equates to roughly 3 epochs for the AR dataset and 5 epochs for the general dataset. Completion: Cohere To fine-tune the Cohere model, we experimented with different configurations for preprocessing the input data that varied the input prompts and whether or not to use an end-ofsequence token between conversation turns. These selections were all motivated by the Cohere guide for prompt-engineering, which applies to both training and inference. The first prompt we experimented with was longer and more verbose, using sequences to indicate which part of the prompt was the instruction and which was the conversation to complete. The second prompt we used was shorter and did not have clear delimiters between the instructions and conversation. The full prompts can be seen in in the prompt engineering appendix (Appendix C). 
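For reference, the GPT-2 fine-tuning recipe above (learning rate 8e-5, linear schedule, no warmup, roughly three epochs) maps onto a standard Hugging Face training configuration. The sketch below is a hedged reconstruction rather than the actual training script; the toy dataset, output path, and batch size are assumptions.

    # Sketch of the described GPT-2 fine-tuning setup: lr 8e-5, linear schedule,
    # no warmup, roughly 3 epochs over the de-identified conversation data.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    texts = ["CONSUMER: Where is my order? AGENT: Let me check on that for you."]  # toy stand-in
    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=256)
    train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, remove_columns=["text"])

    args = TrainingArguments(
        output_dir="gpt2-brand-ft",          # hypothetical output path
        learning_rate=8e-5,                  # 0.00008, as reported
        lr_scheduler_type="linear",
        warmup_steps=0,
        num_train_epochs=3,
        per_device_train_batch_size=4,       # assumed; batch size is not reported
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()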
GPT-2 We distilled our fine-tuned GPT-2 models using the distillation code provided by Huggingface. The dataset was preprocessed with the same beginning and ending tokens as in the fine-tuning stage. The resulting model has 81M parameters across 6 layers, reduced from 117M parameters across 12 layers with the same vocabulary size. Training started with a learning rate of 0.0005 using a linear scheduler and ran for a maximum of 3 epochs on 1 Nvidia Tesla V100 GPU. This resulted in 67,014 and 164,352 steps for distilling on the AR and general datasets, respectively. GPT-3 We experimented with several prompts before choosing one that gave adequate results without consuming too much of the token limit. That is, we wanted to provide enough information to get the best results in the most concise way. First, we varied the verbosity of the framing of the request, changing factors such as whether the brand name was provided or whether there was a description of the product line: The quality of the responses did not vary based on the amount of detail given here, nor did they change when this was omitted, so we chose to omit it. The next aspect we varied was the amount of detail given in the description of the examples: Here are examples of good interactions Here are examples of good interactions between a consumer and an agent Here are examples of good interactions between a consumer and an agent where the agent is able to address the consumer's question Here is an example of a good consumer agent interaction where the agent is able to address the consumer's question. Consumer turns start with "CONSUMER:", customer service representative turns start with "AGENT:" And finally, we varied the description of the task we requested: Your job is to generate the next agent turn for the following conversation Your job is to generate the next agent turn for the following conversation to properly address the consumer's question. Results were best when the words "to properly address the consumer's question" were provided, but it did not matter whether they appeared in describing the examples or in the final instruction. Based on these findings, we selected the following prompt framing to use in the GPT-3 experiments: Here are examples of good interactions between a consumer and an agent. <sample conversation> Generate the next agent turn for the following conversation to properly address the consumer's issue <conversation> The next task was to find an exemplar conversation to use in the prompt. The prompt used with the example conversation (few shot, n = 1) and without (zero shot) did not differ in the quality of the responses, though it did differ in the exact wording (we also found that 2 runs in a row, same conditions, had similar differences in wording), showing that in these cases, the example we give it did not greatly affect the appropriateness of the response. Therefore, we went with a generic, hand-curated example based on observing trends in the data: Cohere For Cohere prompt engineering, we experimented with two separate prompts based on the instructions given in the Cohere prompt engineering documentation and the efforts that were made towards GPT-3 prompt engineering. The first prompt we used was a shorter prompt that did not include delimiting to indicate which part was instruction and which was the conversation to complete. The second prompt was more verbose and used the Cohere prompt engineering guidelines to indicate instruction and conversation. 
In both cases, we followed Cohere's recommendation on using stop-sequences by inserting <EOS> at the end of every turn. Without the stop sequence, Cohere would continue to generate multiple agent and consumer turns until it hit the maximum token count. With the stop-sequence, the Cohere model would only generate a single agent turn. Additionally, both prompts end with "AGENT[human]:" to prompt the model to generate the human-agent turn every time. The shorter prompt ultimately performed better so we only report the results for prompt engineering using the shorter prompt, however, both prompts used are given below: We constructed our brand-specific dataset using conversational data from our case-study brand, Anonymous Retailer (AR), from every month of the year 2022. From the year's data, we removed conversations that did not meet the following criteria: • 2 or more agent turns • an automated conversational quality score of neutral or higher 9 • proportionally more human agent then bot turns From the remaining data, we randomly sampled 100 conversations per month for a development and test set. The final test set contains 287 conversations that were chosen to represent a variety of common scenarios where the agent's response was not always dependent on a database-style lookup, and therefore could be reliably generated without a database integrated on the back-end. The development set was used to experiment with different prompt engineering configurations. The remaining data, not sampled for the development or test sets, was used for fine-tuning. Specific dataset sizes are given in the number of conversations, messages, and the average count of agent turns per conversation. All data was de-identified using an internal Personally Identifiable Information (PII) masker that replaces personal names, locations, and digit strings with a random stand-in. The evaluation set, which would undergo a round of human annotation, was reviewed to ensure that agent and consumer names, order numbers, addresses, etc, were internally consistent within a conversation. For the general dataset, we chose five retail brands whose product lines were a close match to AR's. These were filtered using the same method that was applied to the AR data. We then sampled 70,000 conversations from each brand, or used all the data available if the brand had less than 70,000, resulting in 236,769 conversations, as shown in Table As described in 4.4.1, to evaluate the usefulness of suggestions to agents, we asked nine agents from AR to look at turns in a conversation and tell us, based on their experiences as an agent for AR, whether the suggestion was one that they would use, edit, or ignore. Agents were given access to an internal annotation tool where they viewed conversations one at a time, with names and numbers replaced with random stand-ins to protect personally identifiable information, so that they could decide with the correct context what they would do in a given suggestion. They were given the following guidelines: What we're building: We want to build a tool that will offer agents suggestions for what to say next in conversations with customers. The tool would be like a powered-up Conversation Assist, where custom recommendations would be based on the entire conversation. We are investigating different techniques to train machine learning models so they can offer responses that are specific to a brand, and we want to understand how well they work. 
We want your guidance: We want to directly use your expertise as agents to evaluate how good these models are at giving you useful suggestions. We'll show you snippets of real conversations between a customer and a human agent, one at a time, as well as a suggestion for the next agent message. We would like you to consider the suggestion in the context of the conversation and decide whether you would use it, edit it, or ignore it. Our goal for this task is to evaluate the models that will be responsible for suggesting possible agent responses. This helps us understand exactly how useful they would be to agents like you, and gives us data to improve our models. The real conversations you'll see are specific to AR, with names and numbers replaced with a random stand-in to protect personal identifiable info. We have also replaced references to AR with a made-up brand, Republic of Fashion. We'll ask you to look at turns in a conversation and tell us, based on your experiences as an agent for AR whether the suggestion is one that you would use, edit, or ignore. The quality of these suggestions will be widely varied. Please make your decisions both on the content of the suggestions, and whether they match the appropriate tone for AR. We encourage you to go with your instincts here on what you would prefer to do in a real conversation. For example, the line between editing a response vs. ignoring it is often flexible, depending on how much editing you think it needs. We want to build tools that are the most useful to you, so feel free to go with your gut. It's possible that you could see the same conversation shown with an alternate suggestion at another point. That's fine -we don't need to compare differences in the suggestions. Our goal for this evaluation is to understand: would an experienced agent use the suggestion or not? The data from this will help us improve our suggestion models. You can find more details on the labels below. We won't be asking you to provide reasons for your responses. In the future, we might ask to do focus groups, or interviews to learn more about your thought process and why you selected answers, but it's not required for this task. Use suggestion: Select this label if you would use this message as-is if you were the agent handling this conversation. This includes: if you would make a formatting change (for example, splitting the turn into multiple messages) and if you would use the message suggested, and also send additional messages afterwards Edit suggestion: Select this label if you would choose this message, and then make edits before sending. Edits in this case include instances where personal or factual information (consumer names, agent names, discount percentages, etc.) would need to be verified and changed. The amount of editing needed does not matter; if you would change the message at all before sending it, please select this label. Ignore suggestion: Select this label if you would not use or edit the suggestion, but would rather type your own message. There are many valid reasons not to use a suggestion (it's irrelevant, repetitive, inappropriate, etc). In any case where the annotation tool does not properly display a suggestion, choose the fourth option, "No suggestion displayed". To better understand the Response Usability results, we annotated each response following a variation of the Foundation Metrics from Thoppilan et al. We omit Interestingness, as we found it irrelevant in a customer support setting. 
Additionally, because the models in this case study are not connected to the back-end system that the agents use to look up account details, we do not consider the accuracy of entities and therefore omit Groundedness and made adjustments to our understanding of Informative. The metrics used and their guidelines for annotation are below: Sensible: A suggestion is sensible if it is a logical continuation of the conversation, or a logical follow-up question or request. It also does not contradict earlier information given by the Agent in the conversation. A suggestion can be sensible or not sensible regardless of whether or not it is Specific or Informative. Specific: A suggestion is specific if it shows understanding of the context of the conversation. This may be shown in a reflection of something mentioned earlier in the conversation, a reflection of the question the consumer is trying to answer, etc. Whether or not a suggestion is Specific was considered only if Sensible = true. Informative: A suggestion is Informative if it provides factual information that would be able to be shown to be correct or incorrect. Smalltalk or opinions would not be Informative; statements about order numbers, general policies or available time slots would be. Agent actions taken that could be true or untrue (I've forwarded your inquiry/I've resent your package) would also be Informative. As mentioned, because the nature of the suggested responses was often specific to AR and these annotations were not done with access to the AR knowledge base, we had no basis on which to judge Groundedness as outlined in LaMDA Helpful: A suggestion is Helpful if it is first Informative (i.e., could be judged on correctness, as above). Then, given a presumption that the information provided is correct, it is Helpful if it fits the standard definition of "helpful" as judged by the annotator. Helpful should only be considered if Informative = true. Safety: A suggestion is considered Safe if it does not contain content that: could cause users mental or physical harm; may be misinformation about public figures or events; could be construed as financial advice or an unsubstantiated health and safety claim; has obscene (violent/gory, sexual, profane, or bigoted) material; reveals personal information that appears to be outside the context of the conversation (not related to the consumer or company). Safety was considered independent of other metrics. The response looks like something a consumer-facing agent might say, consistent with the role of an agent for AR. This consistency does not rely on being consistent with other infor-mation in the conversation and is considered independent of other metrics; that information is captured in Sensibility. Figure The Foundation Metrics label frequencies for each model are shown in Figure For Foundation Metrics, the GPT-3 PE responses were rated on par with the HUMAN responses, and it was considered even more specific and helpful than the human. To better understand the relationship between Response Usability and Foundation Metrics, we calculated the Pearson correlation coefficient (Table 6). The strongest positive correlations are between "sensible", "specific" and "role-consistent" and "use", while the strongest negative correlations are between those labels and "ignore". 
"Edit" does not correlate strongly with any labels, which we take as an indication that there are a wide range of reasons to edit messages, from the presence of information to the inclusion of non-sensible phrases amidst more useful text. It should be noted that very few of the generated responses were judged not "safe", hence the low correlations to all Response Usability measures. GPT-2 BFT BD outputs were labeled "ignore" by the agents much more often relative to "edit" than they were for the other models. The Foundation Metrics shed light on this, as GPT-2 BFT BD has the lowest score for each of these metrics, with the exception of "safe", which did not correlate with usability As mentioned in Section 4.4.1, nine different annotators annotated the usability of different models' suggested responses, and we gathered five annotations per response. We calculate the agreement level using Fleiss Kappa. The overall agreement level and the agreement level for each model are shown in Table In addition, we show the average suggested response length (as number of tokens) for tasks with high agreement rates. These averages are shown in Figure Figure These linear models show significance in the Fstatistic (p < 0.05 to p < 0.01). This indicates that the null hypothesis is rejected and that there is a relationship between the perplexity of the generated output and the agent's choice to use, ignore, or edit the suggestion. Foundation Metrics We used same method to calculate the relationship between the humanannotated foundation metrics and the perplexity of the generated output, except that we did not need to convert the annotations into a probability distribution, because the foundation metrics were a binary judgment. Perplexity outliers were removed from this data using the IQR method, and the R (R Core Team, 2021) base lm function was used to fit a linear equation to the data. Figure These linear models also show significance in the F-statistic (p < 0.05), indicating that there is a relationship between the perplexity of the generated model and the human judgments of the foundation metrics. Discussion This is a pilot study where 861 generated responses were judged by 5 annotators for the Response Usability metrics, and 3 annotators reached consensus on the Foundation Metrics. These models show that there is a significant relationship (p < 0.05) between perplexity and human judgments of Response Usability and the Foundation Metrics. These models, however, are considered only a starting point from which to build. Below we have the cost-saving models that we developed as the basis of ENCS (section 3). Table Table
1,101
1,442
1,101
From Raw Text to Universal Dependencies -Look, No Tags!
We present the Uppsala submission to the CoNLL 2017 shared task on parsing from raw text to universal dependencies. Our system is a simple pipeline consisting of two components. The first performs joint word and sentence segmentation on raw text; the second predicts dependency trees from raw words. The parser bypasses the need for part-of-speech tagging, but uses word embeddings based on universal tag distributions. We achieved a macro-averaged LAS F1 of 65.11 in the official test run, which improved to 70.49 after bug fixes. We obtained the second-best result for sentence segmentation with a score of 89.03.
The CoNLL 2017 shared task differs from most previous multilingual dependency parsing tasks not only by using cross-linguistically consistent syntactic representations from the UD project The Uppsala team has adopted a minimalistic stance in this respect and developed a system that does not predict any linguistic structure over and above a segmentation into sentences and words and a dependency structure over the words of each sentence. In particular, the system makes no use of part-of-speech tags, morphological features, or lemmas, despite the fact that these annotations are available in the training and development data. In this way, we go against a strong tradition in dependency parsing, which has generally favored pipeline systems with part-of-speech tagging as a crucial component, a tendency that has probably been reinforced by the widespread use of data sets with gold tags from the early CoNLL tasks The Uppsala system is a very simple pipeline consisting of two main components. The first is a model for joint sentence and word segmentation, which uses the BiRNN-CRF framework of The second main component of our system is a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence. The starting point for this model is the transitionbased parser described in Our original plans included training a single universal model on data from all languages, with cross-lingual word embeddings, but in the limited time available we could only start exploring two simple enhancements. First, we constructed word embeddings based on the RSV model Our system was trained only on the training sets provided by the organizers However, after the test phase was concluded, we discovered two bugs that had affected the results negatively. For comparison, we therefore also include post-evaluation results obtained after eliminating the bugs but without changing anything else. This gives us a macro-average LAS F1 of 70.49 and a top ten position in the post-evaluation ranking. We discuss our results in Section 6 and refer to the shared task overview paper
We model joint sentence and word segmentation as a character-level sequence labeling problem in a Bi-RNN-CRF model In the BiRNN-CRF architecture, charactersregardless of writing system -are represented as dense vectors and fed into the bidirectional recurrent layers. We employ the gated recurrent unit (GRU) As illustrated in Figure Multi-word tokens are transcribed without considering contextual information. For most languages, the number of unique multi-word tokens is rather limited and can be covered by dictionaries built from the training data. However, if there are more than 200 unique multi-word tokens contained in the training data, we employ an attention-based encoder-decoder Table The network is trained using back-propagation, and all embeddings are fine-tuned during training by back-propagating gradients. Adagrad The general segmentation model is applied to all languages with small variations for Chinese and Vietnamese. For Chinese, the concatenated trigram model introduced in Shao et al. ( Bug in test results: After the official evaluation, we discovered a bug in the segmenter, which affected words and punctuation marks immediately before sentence boundaries. After fixing the bugs, both word segmentation and sentence segmentation results improved, as seen from our post-evaluation results included in Section 6. The transition-based parser from The LEFT-ARC d transition removes the first item on top of the stack (i) and attaches it as a modifier to the first item of the buffer j with label d, adding the arc (j, d, i). The RIGHT-ARC d transition removes the first item on top of the stack (j) and attaches it as a modifier to the next item on the stack (i), adding the arc (i, d, j). The SHIFT transition moves the first item of the buffer i to the stack. To conform to the constraints of UD representations, we have added a new precondition to the LEFT-ARC d transition to ensure that the special root node has exactly one dependent. Thus, if the potential head i is the root node, LEFT-ARC d is only permissible if the stack contains exactly one element (in which case the transition will lead to a terminal configuration). This precondition is applied only at parsing time and not during training. A configuration c is represented by a feature function φ(•) over a subset of its elements and for each configuration, transitions are scored by a classifier. In this case, the classifier is a multi-layer perceptron (MLP) and φ(•) is a concatenation of BiLSTM vectors on top of the stack and the beginning of the buffer. The MLP scores transitions together with the arc labels for transitions that involve adding an arc (LEFT-ARC d and RIGHT-ARC d ). For more details, see Kiperwasser and Figure 2: Transitions for the arc-hybrid transition system with an artificial root node (0) at the end of the sentence. The stack Σ is represented as a list with its head to the right (and tail σ) and the buffer B as a list with its head to the left (and tail β). The main modification of the parser for the shared task concerns the construction of the BiLSTM vectors, where we remove the reliance on part-of-speech tags and instead add characterbased representations. For an input sentence of length n with words w 1 , . . . , w n , we create a sequence of vectors x 1:n , where the vector x i representing w i is the concatenation of a word embedding, a pretrained embedding, and a character vector. 
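The arc-hybrid transitions with the extra root precondition described above are easy to prototype. The following is a minimal, dependency-free sketch; scoring, labels, and the BiLSTM features are omitted, and the toy sentence is an invented example rather than anything from the shared task setup.

```python
class Config:
    def __init__(self, n_words):
        # Words are 1..n; 0 is the artificial root, placed at the end of the buffer.
        self.stack = []
        self.buffer = list(range(1, n_words + 1)) + [0]
        self.arcs = []  # (head, dependent)

def can_left_arc(c):
    # Extra precondition: if the potential head is the root (0),
    # only allow LEFT-ARC when the stack holds exactly one element.
    return bool(c.stack) and bool(c.buffer) and (c.buffer[0] != 0 or len(c.stack) == 1)

def left_arc(c):
    dep = c.stack.pop()
    c.arcs.append((c.buffer[0], dep))   # attach to the first buffer item

def right_arc(c):
    dep = c.stack.pop()
    c.arcs.append((c.stack[-1], dep))   # attach to the next stack item

def shift(c):
    c.stack.append(c.buffer.pop(0))

# Tiny usage example on a 2-word sentence: SHIFT, SHIFT, RIGHT-ARC, LEFT-ARC.
c = Config(2)
shift(c); shift(c); right_arc(c)
if can_left_arc(c):
    left_arc(c)
print(c.arcs)   # [(1, 2), (0, 1)]
```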
We construct a character vector ch e (w i ) for each w i by running a BiLSTM over the characters ch j (1 ≤ j ≤ m) of w i : As in the original parser, we also concatenate these vectors with pretrained word embeddings pe(w i ). The input vectors x i are therefore: Our pretrained word embeddings are further described in Section 4. A variant of word dropout is applied to the word embeddings, as described in Finally, each input element is represented by a BiLSTM vector, v i : For each configuration c, the feature extractor concatenates the BiLSTM representations of core elements from the stack and buffer. Both the embeddings and the BiLSTMs are trained together with the model. The model is represented in Figure With the aim of training a multilingual parser, we additionally created a variant of the parser The final change we made to the parser was to use pseudo-projective parsing to deal with nonprojective dependencies. Pseudo-projective parsing, as described in In order for information about nonprojectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached. We used MaltParser (Nivre transitions in a given configuration. The configuration (stack and buffer) is depicted in the top left corner. Each transition is scored using an MLP that is fed the vectors of the first word in the buffer and the three words at the top of the stack, and a transition is picked greedily. Each vector is a BiLSTM encoding of the word. Each xi is a concatenation of a word vector, a character vector, and an additional external embedding vector for the word. Character vectors are obtained using a BiLSTM over the characters of the word. An example is given at the bottom left of the figure. The figure depicts a single-layer BiLSTM, while in practice we use two layers. When parsing a sentence, we iteratively compute scores for all possible transitions and apply the best scoring action until the final configuration is reached. We did no hyper-parameter tuning for the parser component but instead mostly used the values that had been found to work well in The code is available at Our word embedding method is based on the RSV method introduced by Basirat and Nivre (2017). RSV extracts a set of word vectors in three main steps. First it builds a co-occurrence matrix for words that appear in certain contexts. Then, it normalizes the data distribution in the co-occurrence matrix by a power transformation. Finally, it builds a set of word vectors from the singular vectors of the transformed co-occurrence matrix. We propose to restrict the contexts used in RSV to a set of universal features provided by the UD corpora. The universal features can be any combination of universal POS tags, dependency relations, and other universal tags associated with words. Given the set of universal features, each word is associated with a high-dimensional vector whose dimensions correspond to the universal features. The space formed by these vectors can be seen as a multi-lingual syntactic space which captures the universal syntactic properties provided by the UD corpora. We define the set of universal features as {t w , t h , (t w , d, t h )}, where t w and t h are the universal POS tags of the word of interest and its parent in a dependency tree, and d is the dependency relation between them. It results in a set of universal word vectors with fairly large dimensions, 13 794. 
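A compact PyTorch sketch of the input representation just described: a character BiLSTM vector concatenated with a word embedding and a stand-in "pretrained" embedding, fed to a two-layer sentence BiLSTM. All dimensions and the random toy inputs are illustrative assumptions, not the submission's actual hyper-parameters.

```python
import torch
import torch.nn as nn

class WordEncoder(nn.Module):
    def __init__(self, n_words, n_chars, wdim=100, pdim=50, cdim=32, hdim=125):
        super().__init__()
        self.wemb = nn.Embedding(n_words, wdim)
        self.pemb = nn.Embedding(n_words, pdim)      # stand-in for pretrained vectors
        self.cemb = nn.Embedding(n_chars, cdim)
        self.char_lstm = nn.LSTM(cdim, cdim, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(wdim + pdim + 2 * cdim, hdim,
                                 num_layers=2, bidirectional=True, batch_first=True)

    def char_vector(self, char_ids):
        # char_ids: (n_chars_in_word,) -> concatenation of the two final states.
        _, (h, _) = self.char_lstm(self.cemb(char_ids).unsqueeze(0))
        return torch.cat([h[0, 0], h[1, 0]], dim=-1)

    def forward(self, word_ids, char_id_lists):
        ch = torch.stack([self.char_vector(c) for c in char_id_lists])
        x = torch.cat([self.wemb(word_ids), self.pemb(word_ids), ch], dim=-1)
        v, _ = self.sent_lstm(x.unsqueeze(0))        # v_i for each word
        return v.squeeze(0)

# Toy usage: a 3-word "sentence" with arbitrary word and character ids.
enc = WordEncoder(n_words=10, n_chars=30)
words = torch.tensor([1, 4, 7])
chars = [torch.tensor([2, 3]), torch.tensor([5]), torch.tensor([6, 8, 9])]
print(enc(words, chars).shape)   # torch.Size([3, 250])
```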
The values of the vector elements are set with the probability of seeing each universal feature given the word. These vectors are then centered around their mean and the final word vectors are built from the top k right singular vectors of the matrix formed by the high-dimensional universal word vectors: where v is the size of vocabulary, V is the matrix of right singular vectors, λ is the scaling factor that controls the variance of the data. The word vectors are extracted from the training part of the UD corpora for all words whose frequencies exceed 5, resulting in 204, 024 unique words. The number of dimensions, k, is set to 50 and the scaling parameter λ is set to 0.1 as suggested by The shared task contained four surprise languages, Buryat, Kurmanji, North Sami, and Upper Sorbian, for which there was no data available until the last week, when we had a few sample sentences for each language. Two of the ordinary languages, Kazakh and Uyghur, had a similar situation, since they had less than 50 sentences in their training data. We therefore decided to treat those two languages like the surprise languages. For segmentation, we utilized the small amount of available annotated data as development sets. We applied all the segmentation models trained on larger treebanks and adopted the one that achieved the highest F1-score as the segmentation model for the surprise language. We thus selected Bulgarian for Buryat, Slovenian for North Sami, Czech for Upper Sorbian, Turkish for Kurmanji, Russian for Kazakh as well as Persian for Uyghur. For parsing, we trained our parser on a small set of languages. For each surprise language, we used the little data we had for that language, and in addition a set of other languages, which we will call support languages. In this setting we took advantage of the language embedding implemented in the parser. Since the treebanks for the support languages have very different sizes, we limited the number of sentences from each treebank used per epoch to 2263 for North Sami and 2500 for the other languages, in order ot use a more balanced sample. For each epoch we randomly picked a new sample of sentences for each treebank larger than this ceiling. We chose the support languages for each surprise language based on four criteria: • Language relatedness, by including the languages that were most closely related to each surprise language. • Script, by choosing at least one language sharing the same script as each surprise language, which might help our character embeddings. • Geographical closeness to the surprise language, since geographically close languages often influence each other and can share many traits and have loan words. • Performance of single models, by evaluating individual models for all other languages on each surprise language, and choosing support languages from the set of best performing languages. We used a single multi-lingual model for Kazakh and Uyghur, since they are related. 
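The construction of the universal word vectors can be sketched with plain NumPy: centre the word-by-feature probability matrix, take its top-k right singular vectors, and scale. The random toy matrix, the projection step, and the way λ is applied are assumptions for illustration; the paper's exact scaling may differ.

```python
import numpy as np

def universal_word_vectors(feature_probs, k=50, lam=0.1):
    """feature_probs: (vocab, n_features) matrix of P(feature | word).

    Centre the matrix, take its top-k right singular vectors and project onto
    them; lam is applied here as a simple variance-controlling factor.
    """
    m = feature_probs - feature_probs.mean(axis=0, keepdims=True)
    # Economy SVD: m = U @ diag(s) @ vt, where the rows of vt are the right singular vectors.
    _, _, vt = np.linalg.svd(m, full_matrices=False)
    return lam * (m @ vt[:k].T)          # (vocab, k) word vectors

# Toy example with a random "co-occurrence" matrix of 200 words and 64 features.
rng = np.random.default_rng(1)
probs = rng.random((200, 64))
probs /= probs.sum(axis=1, keepdims=True)
vecs = universal_word_vectors(probs, k=50, lam=0.1)
print(vecs.shape)   # (200, 50)
```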
Table
Table: Support languages
• Buryat: Russian-SynTagRus (gps), Russian (gs), Japanese (pr), Kazakh (ps), Bulgarian (s)
• Kurmanji: Turkish (gs), Persian (r), Finnish-FTB (ps), German (ps), Slovenian-SST (ps)
• North Sami: Finnish (rs), Finnish-FTB (prs), Estonian (rs), Hungarian (prs), Norwegian-Nynorsk (gps)
• Upper Sorbian: Czech (prs), Slovak (prs), Slovenian (prs), Polish (prs), German (gs)
• Kazakh+Uyghur: Russian-SynTagRus (gps), Hungarian (p), Turkish (pr), Persian (s), Arabic (s)
Table
Looking first at the LAS scores, we see that our system improves over the baseline in most cases and by a comfortable margin. In addition, we think we can distinguish three clear patterns:
• Our results are substantially worse than the baseline (only) on the six low-resource languages. This indicates that our cross-lingual models perform poorly without the help of part-of-speech tags when they have little training data. It should, however, also be kept in mind that the baseline had a special advantage here as it was allowed to train segmenters and taggers using jack-knifing on the test sets.
• Our results are substantially better than the baseline on languages with writing systems that differ (more or less) from European style alphabetic scripts, including Arabic, Chinese, Hebrew, Japanese, Korean, and Vietnamese. For all languages except Korean, this can be partly (but not wholly) explained by more accurate word segmentation results.
• Our results are substantially better than the baseline for a number of morphologically rich languages, including Ancient Greek, Arabic, Basque, Czech, Finnish, German, Latin, Polish, Russian, and Slovenian. This shows that character-based representations are effective in capturing morphological regularities and compensate for the lack of explicit morphological features.
To further investigate the efficiency of our cross-lingual models, we ran them for two of the support languages with medium-sized training data that were not affected by the capping of data. Table For word segmentation, we have already noted that our universal model works well on some of the most challenging languages, such as Chinese, Japanese and Vietnamese, and also on the Semitic languages Arabic and Hebrew. This is not surprising, given that the model was first developed for Chinese word segmentation, but it is interesting to see that it generalizes well and gives competitive results also on European style alphabetic scripts, where it is mostly above or very close to the baseline. After fixing the bug mentioned in Section 2, our word segmentation results are in fact second best overall, only 0.02 below the best system. The sentence segmentation results are generally harder to interpret, with much greater variance and really low scores especially for some of the classical languages that lack modern punctuation. Nevertheless, we can conclude that performing sentence segmentation jointly with word segmentation is a viable approach, as our system achieved the second highest score of all systems on sentence segmentation in the official test results. After bug fixing, it is the best of all. All in all, we are pleased to see that a bare-bones model, which does not make use of part-of-speech tags, morphological features or lemmas, can give reasonable performance on a wide range of languages. At the time of writing, our corrected test results put us in the top ten on the list of post-evaluation results for LAS F1, with some of the best scores for word and sentence segmentation.
We have described the Uppsala submission to the CoNLL 2017 shared task on parsing from raw text to universal dependencies. The system consists of a segmenter, which extracts words and sentences from a raw text, and a parser, which builds a dependency tree over the words of each sentence, without relying on part-of-speech tags or any other explicit morphological analysis. Our parsing results (after correcting two bugs) are on average 2.14 points above the baseline, despite very poor performance on surprise languages, and the system has competitive results especially on languages with rich morphology and/or non-European writing systems. Given the simplicity of our system, we find the results very encouraging. There are many different lines of future research that we want to pursue. First of all, we want to explore the use of multilingual models with language embeddings, trained on much larger data sets than was practically possible for the shared task. In this context, we also want to investigate the effectiveness of our multilingual word embeddings based on universal part-of-speech tags, deriving them from large parsed corpora instead of the small training sets that were used for the shared task. Finally, we want to extend the parser so that it can jointly predict part-of-speech tags and (selected) morphological features. This will allow us to systematically study the effect of using explicit linguistic categories, as opposed to just relying on inference from raw words and characters. For segmentation, we want to investigate how our model deals with multiword tokens across languages.
611
2,101
611
FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows
Despite recent progress in open-domain dialogue evaluation, how to develop automatic metrics remains an open problem. We explore the potential of dialogue evaluation featuring dialog act information, which was hardly explicitly modeled in previous methods. However, defined at the utterance level in general, dialog act is of coarse granularity, as an utterance can contain multiple segments possessing different functions. Hence, we propose segment act, an extension of dialog act from utterance level to segment level, and crowdsource a large-scale dataset for it. To utilize segment act flows, sequences of segment acts, for evaluation, we develop the first consensus-based dialogue evaluation framework, FlowEval. This framework provides a reference-free approach for dialog evaluation by finding pseudo-references. Extensive experiments against strong baselines on three benchmark datasets demonstrate the effectiveness and other desirable characteristics of our FlowEval, pointing out a potential path for better dialogue evaluation. * Equal contributions. Wanyu participated in building the dataset, while doing her internship with Prof. Liwei Wang.
Dialogue evaluation plays a crucial role in the recent advancement of dialogue research. While human evaluation is often considered as a universal and reliable method by the community Traditional word-overlap metrics, like BLEU
Segment act flow (example):
• Speaker1: "How are you? May I have a cup of coffee?" (greeting, directive)
• Speaker2: "Hmm. Certainly. What kind of coffee do you like? We have espresso and latte." (backchannel-success, commissive, question, inform)
Table 2021) and harnessing the power of large models One difficulty of using segment act for open-domain dialogue evaluation is the lack of related data. Since there is no dataset for segment act, we follow the ISO 24617-2 annotation criteria Another challenge of incorporating segment act into open-domain dialogue evaluation lies in finding a suitable way to assess dialogues with the segment act feature. Modeling segment act flow is not trivial. On the one hand, dialogues have different numbers of turns and, thus, have varying lengths of segment act sequences. On the other hand, defining and finding the ground-truth segment act flow for a dialogue are almost infeasible, discouraging the development of any reference-based methods. To overcome this challenge, we design the first consensus-based reference-free open-domain dialogue evaluation framework, FlowEval. For a dialogue to be evaluated, our FlowEval first obtains the segment act flow, e.g., from a trained classifier. Then, we harvest segment act features, from a dedicated BERT-like Extensive experiments are carried out against the state-of-the-art baselines on Controllable Dialogue dataset In summary, the contributions of this work are three-fold:
1. We propose to model the segment-level act as the dialog flow information for open-domain dialogue evaluation.
2. We are the first to propose a consensus-based framework for open-domain dialogue evaluation. Our studies show that the consensus approach can work efficiently even when the size of the search set, i.e., the number of dialogues in the training set, is around ten thousand. This attainable size shows the promise of our consensus approach for dialogue evaluation and other natural language evaluation tasks.
3. Our method can reach the best or comparable performance when compared with state-of-the-art baselines. Additional experiments are conducted to examine detailed properties of our method and consensus process. We will release all code and data once the paper is made public.
RUBER Different from Flow score and other related works, our method explicitly models the segment acts of a dialog, which deliver clear and interpretable functions for each utterance segment, rather than dense representations. Dialog act In this work, we propose segment act, an extension of dialog act to the utterance segment level, and design its corresponding tagset. Our segment-focused arrangement can not only cover the diverse scenarios of open-domain dialogues, but also provide finer-grained information for dialogue evaluation than prevailing dialog act designs. Consensus-based methods have been adopted in image captioning We propose the new concept of segment act, extracting the core function of each segment in an utterance. We then crowdsource a large-scale open-domain dialogue dataset with our proposed segment act labels, called ActDial.
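Splitting an utterance into segments before labeling, as described above, can be done with NLTK's sentence tokenizer; the tokenizer choice and the example labels in the comment below are taken from the description and example above, and the snippet is only a sketch of that preprocessing step.

```python
import nltk

nltk.download("punkt", quiet=True)

def split_into_segments(utterance):
    """Split one utterance into clause-like segments with NLTK's sentence tokenizer."""
    return nltk.sent_tokenize(utterance)

utterance = "Hmm. Certainly. What kind of coffee do you like? We have espresso and latte."
for seg in split_into_segments(utterance):
    print(seg)
# Each resulting segment would then receive one segment act label,
# e.g. backchannel-success, commissive, question, inform.
```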
We design an open-domain segment act tagset based on the ISO 24617-2 annotation criteria We crowdsourced segment act annotations on Con-vAI2 The ConvAI2 dataset is based on the Per-sonaChat dataset The DailyDialog dataset Following our definition of segment acts, we split each utterance into multiple segments using NLTK 4 FlowEval: A Segment-Act-Flow Aware Evaluation Metric In this section, we describe the details of our proposed dialogue evaluation framework, FlowEval. FlowEval is implemented in three stages: segment act harvesting, retrieval, and assessment. In order to utilize the segment act flow, we first need to harvest the segment act labels for an unseen raw dialogue U . In our experiments unless specified, the segment act labels are acquired by a text classification model, based on RoBERTa-large In the end, we will have the annotated segment act flow , where a i is the segment act label for i-th segment and n is the number of segments in U . For the retrieval process, FlowEval retrieves two sets of dialogues based on segment act features and content features respectively. The search space for FlowEval is our ActDial dataset and the unseen raw dialogue U serves as query. FlowEval first extracts segment act features from a masked segment act model, and retrieves k a nearest neighbors for U based on our defined similarity function. Then, FlowEval extracts content features from a RoBERTa-large model, and retrieves k c nearest neighbours for U based on another similarity function. The final outcome of this retrieval stage is k = k a + k c relevant dialogues for the unseen dialogue U . Figure Segment Act Flow Features. To extract segment act flow features, we treat every segment act label as a word and a segment act flow of a dialogue as a sequence. We then train a masked language model We further employ TF-IDF features to constrain the retrieved dialogues to have a similar topic as U . We collect the word count statistics from our ActDial dataset and compute the TF-IDF feature Having the feature set { Hh U , T U } of U and { Hh R , T R } of a human dialogue R in ActDial, we define an segment-act-based similarity metric S a to retrieve k a nearest neighbors (1) where cos is the cosine similarity. S a in Eq. 1 will only score high if R has a segment act flow as well as a topic closed to U . Content Features. Retrieval with segment act features only might miss dialogues that discussed similar contents as U but speakers communicated in a different way to U . Therefore, we retrieve from ActDial again but using features with regard to the content of U . We use RoBERTa-large S c in Eq. 2 will output a high score if R's content is closed to U . The final retrieved set of dialogues will be We define a metric to find the closest R * ∈ {R i } k to U by treating this small retrieved set {R i } k as pseudo-references. The distance between R * and U will be the final score of U . Concretely, we have the following scoring function F : where w is a hyper-parameter between 0 and 1. Eq. 3 assess U from two aspects: F a , computed by Eq. 4, indicates whether speakers in U interact naturally and is evaluated by ActBERT in Eq. 1 and BLEU score 5 Experiments and Analysis Controllable Dialogue Dataset contains the human-to-bot conversation data collected by We describe all the baselines used for comparison and the implementation details of our method. 
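A simplified sketch of the retrieval stage under stated assumptions: TF-IDF vectors stand in for the ActBERT and RoBERTa feature extractors, and the product of cosine similarities plays the role of the similarity functions above. Only the nearest-neighbour mechanics are illustrated, not the actual FlowEval features; the pool dialogues and query are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_neighbors(query_flow, query_text, pool_flows, pool_texts, k=3):
    """Return indices of the k pool dialogues most similar to the query."""
    flow_vec = TfidfVectorizer(token_pattern=r"[^ ]+").fit(pool_flows + [query_flow])
    text_vec = TfidfVectorizer().fit(pool_texts + [query_text])

    flow_sim = cosine_similarity(flow_vec.transform([query_flow]),
                                 flow_vec.transform(pool_flows))[0]
    text_sim = cosine_similarity(text_vec.transform([query_text]),
                                 text_vec.transform(pool_texts))[0]

    scores = flow_sim * text_sim          # both the flow and the topic must be close
    return np.argsort(-scores)[:k], scores

# Toy pool: segment act flows are space-separated label sequences.
pool_flows = ["greeting question inform", "inform inform commissive", "question inform"]
pool_texts = ["hi what coffee do you like", "we serve espresso and latte", "where is the station"]
idx, scores = retrieve_neighbors("greeting directive question", "may i have a coffee",
                                 pool_flows, pool_texts)
print(idx, scores[idx])
```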
FED metric DynaEval The common practice to show the effectiveness of a dialogue evaluation metric is to calculate the Pearson, Spearman's, and Kendall correlation between human evaluation and the automatic evaluation FlowEval Reaches Comparable Performance. Across three datasets, our FlowEval achieves the best or comparable correlations with human evaluation. On Controllable Dialogue dataset, all baseline metrics fail to reach meaningful correlation , while FlowEval becomes the top performer. On the other two datasets, the results of FlowEval are comparable with most baselines, though the gap to the best method is obvious. We perform an ablation study on Controllable Dialogues to further demonstrate the effectiveness of segment acts and our consensus-based framework. Detailed description and results are documented in the Appendix B. We also list one success case and one failure case in the Appendix F to enable a closer observation of our approach. Automatic Evaluation Metrics Lack Transferability. We can observe that the best method on one dataset becomes mediocre on the other datasets, including our FlowEval. FlowEval outperforms all other methods on Controllable Dialogue dataset, but can only get to the second tier on the other two datasets. DynaEval, the best method on FED dataset, loses its advantage when tested on other datasets. The same story also happens for Flow score, a state-of-the-art metric in the DSTC9 dataset. This observation is consistent with study from previous work One reason for the brittleness of these methods is that their calculations rely on large models. The data used to train these large models plays an decisive role, as we can see from the performance difference between DynaEval_emp and DynaEval_daily. In addition, FlowEval depends on the segment act labels and these labels on FED dataset and DSTC9 dataset are annotated by a trained classifier. Even though the classifier has relatively high accuracy (90%), it still injects some errors to the segment act flow, which hinders the application of FlowEval on new datasets. These observations indicate that how to construct a robust dialogue evaluation metric remains a problem for the community. FlowEval Can Provide Complementary Information to Other Methods. Similar to Here we experiment with BERTScore, as it is the best performing reference-based metric on Controllable Dialogue. The reference-free form of BERTScore, called Consensus BERTScore, is similar to our FlowEval, except that we do not employ segment act features in the retrieval step and we exclude the segment act score, i.e., Eq. 4, in the assessment step. As shown in the third row of Table This promising result shows the potential of our consensus-based framework. It leads a new way to rethink the usability of reference-based metrics in dialogue evaluation. Dialogue Evaluation? Compared with semantic-meaning-focused metrics, what does segment act bring to dialogue evaluation? We hypothesize the explicit involvement of segment acts can bring useful information, complementary to semantic-meaning-focused metrics. We illustrate our hypothesis in Figure We conduct experiments on the test set of Controllable Dialogue dataset to validate our hypothesis. Two of the popular semantic-meaning-focused metrics are BERTScore As we could observe from the first three rows of Table We investigate why consensus-based framework can perform well in dialogue evaluation by visualizing the segment act feature space, an essential aspect in the retrieval process of FlowEval. 
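Computing the three correlation coefficients against human ratings is direct with SciPy; the two arrays below are invented placeholders for metric scores and mean human scores, shown only to illustrate the evaluation protocol described above.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

metric_scores = [0.42, 0.55, 0.31, 0.77, 0.60, 0.48]   # hypothetical FlowEval scores
human_scores = [2.0, 3.5, 2.5, 4.5, 4.0, 3.0]           # hypothetical mean overall ratings

for name, fn in [("Pearson", pearsonr), ("Spearman", spearmanr), ("Kendall", kendalltau)]:
    stat, p = fn(metric_scores, human_scores)
    print(f"{name}: {stat:.3f} (p={p:.3f})")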
We compare the segment act feature distribution between the three test sets and their corresponding retrieval sets, projecting these features to 2dimensional space by t-SNE (van der Maaten and Hinton, 2008) as shown in Figure The core idea of consensus lies on using the nearest neighbors as references to measure a newcomer. Only if the suitable nearest neighbors consistently exist, will the consensus of them have meaningful indication to evaluate a new subject. We can observe from Figure In this work, we propose a consensus-based reference-free framework for open-domain dialog evaluation with segment act flows. From extensive experiments against the state-of-the-art baselines, our method can reach the best or comparable correlation with human evaluation. Our segmentact-based methods complement well to previous semantic-meaning-focused methods, pushing the ceiling of correlations. Moreover, the promise of our consensus-based framework encourages us to step further in the direction of dialog evaluation. Our segment act dataset, ActDial, is constructed based on two widely-adopted open-domain dialogue datasets, ConvAI2 This work also brings the consensus-based framework into open-domain dialogue evaluation. We show the effectiveness of this framework when incorporating segment act flow and content information. Yet, the full potential of the consensusbased framework still needs more exploration. We will leave this as future work. A big part of this work contains (1) the data annotation on two existing benchmark datasets of conversation modeling: the ConvAI2 dataset and the DailyDialog dataset and (2) human evaluation on the overall quality of generated conversations. As our ActDial is built upon the existing datasets, we follow the original copyright statements of these two datasets and will further release our segment act annotations to the research community. During annotation, we only collected the segment act annotations, and no demographic or annotator's identity information was collected. In addition, we provide a detail description of human evaluation design in Appendix C. For the formal definitions and examples of segment act, please refer to Table We segmented all the dialogue utterances using the NLTK sentence punctuation tokenizer We crowdsourced segment act annotation from annotators whose native language is Mandarin Chinese (zh-cmn), but more importantly, they are proficient in English (en-US). More than 50 annotators participated after rigorous training to ensure data quality. Each segment is annotated by three different annotators. If the initial three annotations are all different, further round(s) of annotation on this segment would be conducted until it got a majority vote (at least two annotations are the same). Besides Fleiss' kappa Since the segment act distribution is unbalanced, we calculated another Fleiss' kappa excluding all the annotations with the most dominant segment act, i.e., inform, to eliminate potential bias. In this setting, the new kappa is 0.768 for DailyDialog and 0.775 for ConvAI2, staying roughly the same as the overall ones. These results prove the robustness of our annotations. Although it is impossible to check the correctness of every single annotation, we do perform sampling inspection when collecting the annotations everyday. In total, We sampled 8,000 segments randomly and annotated these segments by ourselves. 
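Fleiss' kappa, used above to report agreement, can be computed directly from the item-by-category count table with the standard formula; the table below is a hypothetical toy example, not the ActDial counts.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) matrix; each row sums to the number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / counts.sum()                       # category proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical table: 5 segments, 3 raters, 3 segment act categories.
table = [[3, 0, 0],
         [2, 1, 0],
         [0, 3, 0],
         [1, 1, 1],
         [0, 0, 3]]
print(round(fleiss_kappa(table), 3))
```

Excluding the dominant category, as done above for inform, simply means dropping the corresponding rows (or annotations) before building this table and recomputing kappa.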
Since we have a deeper understanding than our annotators and our annotations are examined multiple times by ourselves, our annotations on these 8,000 segments can be considered as ground truth. The majority votes of crowdsourced annotations are later compared with the ground truth labels to obtain sample accuracy. The sample accuracy in DailyDialog annotation is 0.90 and that in ConvAI2 is 0.93. The small gap of the accuracy is due to the difference in dialogue complexity. For the ConvAI2 dataset, we collected 481,937 segment acts on the training set, and 29,232 segment acts on the validation set. Since the testing set is not publicly available, we did not annotate it. For the DailyDialog dataset, we gathered 178,604 segment acts on the training set, 16,500 segment acts on the validation set, and 16,028 segment acts on the testing set. Note that even though ConvAI2 and DailyDialog split their data for training, validation, and testing purpose, it is not always necessary to mechanically follow the splits. Our annotations on ConvAI2 and DailyDialog can be used as a unity, ActDial, depending on the research problems. Table We perform ablation study on Controllable Dialogues and obtained positive results. This experiment is designed to reveal the effectiveness of segment act, so content-related information and features are excluded from the whole process. Specifically, we remove the content feature and only used the segment act flow feature during the retrieval (Section 4.2). We later assessed each dialogue on this shrunk retrieval set. The Pearson, Spearman's, and Kendall correlations in this setting are 0.298, 0.252, and 0.189 respectively. These results de-crease slightly from our full version of FlowEval (0.301, 0.256, and 0.193) but remain higher than the previous SOTA (0.282, 0.214, and 0.162). This ablation study strengthens our claim on the effectiveness of segment acts and our consensus-based framework. We collected human judgements from Amazon Mechanical Turk (AMT). The crowd-workers are provided with the full multi-turn conversation for evaluation. We ask crowd-workers to evaluate the relevancy, avoiding contradiction, avoiding repetition, persona consistency and overall quality of the conversation. The reason for designing the human evaluation on different aspects is that we assume a good conversation between human and a dialogue system should satisfy the following properties: (1) generating relevant and non-repetitive responses (relevancy and avoiding repetition), (2) memorizing the dialogue history and generating non-contradictory information (avoiding contradiction), (3) maintaining a consistent persona/topic (persona/topic consistency), (4) formulating a natural conversation (overall quality). The first four aspects are formulated as binarychoice questions, and the overall quality is formulated as Likert question on a 1-5 scale, where higher is better. During evaluation, we did not distinguish whether an utterance is generated by human or by dialogue model, because we want the evaluation is about the full conversation, rather than just utterances generated by the dialogue model. To ensure better data quality, Turkers are selected by their job success rate and geographic location (only admits turkers from English speaking countries). Before starting our evaluation job, turkers must read through our detailed guideline. For each dialogue, a turker is asked to evaluate the dialogue from the following perspectives: 1. 
Irrelevant response (binary): Whether or not the speaker generates a response which seems to come out of nowhere according to the conversation history. Binary score.
2. Contradictory response (binary): Whether or not the speaker generates a response which contradicts common sense or what he himself just said in the previous conversation. Binary score.
3. Repetitive response (binary): Whether or not the speaker generates a response which has the same meaning as his previous utterance(s). Binary score.
4. Persona inconsistency (binary): Whether or not the speaker generates a response which is not consistent with his persona profile. Only used if the dialogues-to-evaluate follow the ConvAI2 setting and are generated with personas. Binary score.
5. Topic shifts (binary): Whether or not the speaker generates a response which belongs to a completely different topic compared with the previous conversation history. Only used if the dialogues-to-evaluate follow the DailyDialog setting and are not generated with personas. Binary score.
6. Overall score (1-5): An overall impression of the dialogue quality, not necessarily related to the aspects above. The score is an integer between 1 and 5 inclusive; the higher the better.
The evaluation results are examined by ourselves. Incorrect annotations would be rejected and re-evaluated by another turker. The final evaluation results are shown as Table ActBERT follows the architecture of RoBERTa Controllable Dialogue We take dialogues, from the testing set of ConvAI2, that have the most overlapping personas as the references for a dialogue. Although not as convincing as references in the machine translation task, references obtained in this way prove to be helpful to dialogue evaluation. Both BLEU and BERTScore reach relatively high correlations on Controllable Dialogue. The smoothing function of the BLEU score is NIST geometric sequence smoothing
1,155
227
1,155
Look Harder: A Neural Machine Translation Model with Hard Attention
Soft-attention based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks. These models attend all the words in the source sequence for each target token, which makes them ineffective for long sequence translation. In this work, we propose a hard-attention based NMT model which selects a subset of source tokens for each target token to effectively handle long sequence translation. Due to the discrete nature of the hard-attention mechanism, we design a reinforcement learning algorithm coupled with reward shaping strategy to efficiently train it. Experimental results show that the proposed model performs better on long sequences and thereby achieves significant BLEU score improvement on English-German (EN-DE) and English-French (EN-FR) translation tasks compared to the soft-attention based NMT.
In recent years, soft-attention based neural machine translation models Different attention mechanisms have been proposed to improve the quality of the context vector. For example, To overcome the shortcomings of the above approaches, we propose a hard-attention mechanism for a deep NMT model
A typical NMT model based on encoder-decoder architecture generates a target sequence y = {y 1 , • • • , y n } given a source sequence x = {x 1 , • • • , x m } by modeling the conditional probability p(y|x, θ). The encoder (θ e ) computes a set of representations Z = {z 1 , • • • , z m } ∈ R m×d corresponding to x and the decoder (θ d ) generates one target word at a time using the context vector computed using Z. It is trained on a set of D parallel sequences to maximize the log likelihood: where θ = {θ e , θ d }. In recent years, among all the encoder-decoder architectures for NMT, Transformer For each target word ŷt , the second sub-layer in the decoder computes encoder-decoder attention a t based on the encoder representations, Z. In practice we compute the attention vectors simultaneously for all the time steps by packing ŷt 's and z i 's in to matrices. The soft attention of the encoder-decoder, A i , for all the decoding steps is computed as follows: (2) where d is the dimension and Ŷ i-1 ∈ R n×d is the decoder output from the previous layer. Section 3.1 introduces our proposed hard-attention mechanism to compute the context vector for each target token. We train the proposed model by designing a RL algorithm with reward shaping strategy -described in Section 3.2. Instead of computing the weighted average over all the encoder output as shown in Eq. 2, we specifically select a subset of encoder outputs (z i 's) for the last layer (N ) of the decoder using the hard-attention mechanism as shown in We then compute the attention scores S as, We apply the hard-attention mechanism on attention scores (S) to dynamically choose multiple relevant encoder tokens for each decoding token. Given S, this mechanism generates an equal length of binary random-variables, β = {β 1 , • • • , β m } for each target token, where β i = 1 indicates that z i is relevant whereas β i = 0 indicates that z i is irrelevant. The relevant tokens are sampled using bernoulli distribution over each β i for all the target tokens. This hard selection of encoder outputs introduces discrete latent variables and estimating them requires RL algorithms. Hence, we design the following reinforcement learner policy for the hard-attention for each decoding step t. where β t i ∈ β represents the probability of a encoder output (agent's action) being selected at time t, and s t ∈ S is the state of the environment. Now, the hard encoder-decoder attention, Ã, is calculated as, follows: Unlike the soft encoder-decoder attention A in Eq. 2, which contains the weighted average of entire encoder outputs, the hard encoder-decoder attention à in Eq. 6 contains information from only relevant encoder outputs for each decoding step. The model parameters come from the encoder, decoder blocks and reinforcement learning agent, which are denoted as θ e , θ d and θ h respectively. Estimation of θ e and θ d is done by using the objective J 1 in Eq. 1 and gradient descent algorithm. However, estimating θ h is difficult given their discrete nature. Therefore, we formulate the estimation of θ h as a reinforcement learning problem and design a reward function over it. An overveiw of the proposed RL training is given in Algorithm 1. We use BLEU using Eq. 1 and Eq. 9 11 Update the parameters θ with gradient descent: θ = θ -α∇J(θ) 12 end 13 Return: θ Reward Shaping To generate the complete target sentence, the agent needs to take actions at each target word, but only one reward is available for all these tens of thousands of actions. 
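A minimal PyTorch sketch of the hard selection step described above: turn the attention scores into Bernoulli probabilities, sample the gates β, and mask out the unselected encoder states before computing the context. The sigmoid parameterisation and all shapes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch

def hard_select(scores):
    """scores: (n_tgt, m_src) real-valued attention scores.
    Returns sampled gates beta in {0, 1} and their selection probabilities."""
    probs = torch.sigmoid(scores)              # policy pi(beta_i = 1 | state)
    beta = torch.bernoulli(probs)              # sample relevant source positions
    return beta, probs

def hard_context(beta, scores, enc_out):
    """Mask out unselected encoder states and renormalise the attention weights."""
    masked = scores.masked_fill(beta == 0, float("-inf"))
    weights = torch.softmax(masked, dim=-1)
    weights = torch.nan_to_num(weights)        # rows where nothing was selected
    return weights @ enc_out                   # (n_tgt, d) hard context vectors

# Toy example: 4 target steps attending over 6 encoder states of size 8.
torch.manual_seed(0)
scores = torch.randn(4, 6)
enc_out = torch.randn(6, 8)
beta, probs = hard_select(scores)
context = hard_context(beta, scores, enc_out)
print(beta, context.shape)
```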
This makes RL training inefficient since the same terminal reward is applied to all the intermediate actions. To overcome this issue we adopt the reward shaping strategy of During training, we use the cumulative reward Σ_{t=1}^{n} r_t(y_t, y) accumulated from the decoding step t to update the agent's policy. Entropy Bonus: We add an entropy bonus to prevent the policy from collapsing too quickly. The entropy bonus encourages an agent to take actions more unpredictably, rather than less so. The RL objective function in Eq. 7 becomes, We approximate the gradient ∇_{θ_h} Ĵ2(θ_h) by using REINFORCE We conduct experiments on WMT 2014 English-German (EN-DE) and English-French (EN-FR) translation tasks. The approximate numbers of training pairs in the EN-DE and EN-FR datasets are 4.5M and 36M, respectively; newstest2013 and newstest2014 are used as the dev and test sets. We follow similar preprocessing steps to those described in We adopt the implementation of the Transformer We compare the proposed model with the soft-attention based Transformer model Table Analysis: To see the effect of the hard-attention mechanism for longer sequences, we group the sequences in the test set based on their length and compute the BLEU score for each group. Table Even though RL based models are difficult to train, in recent years, multiple works Recently, several innovations have been proposed on top of the Transformer model to improve performance and training speed. For example, In this work, we proposed a hard-attention based NMT model which focuses solely on a few relevant source sequence tokens for each target token to effectively handle long sequence translation. We train our model by designing an RL algorithm with a reward shaping strategy. Our model sets new state-of-the-art results on EN-DE and EN-FR translation tasks.
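The training signal can be sketched as shaped per-step rewards (the gain in sentence-level BLEU as the hypothesis prefix grows, following the standard reward-shaping definition), a REINFORCE term over the sampled gates, and an entropy bonus. The reward-to-go accumulation, the toy sentences, and the gate probabilities are assumptions for illustration; sentence_bleu and SmoothingFunction come from NLTK.

```python
import torch
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def shaped_rewards(hyp_tokens, ref_tokens):
    """r_t = BLEU(y_1..t, y) - BLEU(y_1..t-1, y): per-step reward increments."""
    smooth = SmoothingFunction().method3
    bleus = [sentence_bleu([ref_tokens], hyp_tokens[: t + 1], smoothing_function=smooth)
             for t in range(len(hyp_tokens))]
    return torch.tensor([bleus[0]] + [bleus[t] - bleus[t - 1] for t in range(1, len(bleus))])

def reinforce_loss(gate_probs, gates, rewards, entropy_coef=0.01):
    """gate_probs, gates: (n_tgt, m_src); rewards: (n_tgt,). Returns a scalar loss."""
    # Reward-to-go per target step (one standard accumulation choice).
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    log_p = gates * torch.log(gate_probs + 1e-8) + (1 - gates) * torch.log(1 - gate_probs + 1e-8)
    entropy = -(gate_probs * torch.log(gate_probs + 1e-8)
                + (1 - gate_probs) * torch.log(1 - gate_probs + 1e-8))
    # Negative sign: minimising this loss maximises expected shaped reward plus entropy.
    return -(log_p.sum(dim=-1) * returns).mean() - entropy_coef * entropy.mean()

hyp = "the cat sits on the mat".split()
ref = "the cat sat on the mat".split()
rewards = shaped_rewards(hyp, ref)
probs = torch.full((len(hyp), 5), 0.6)
gates = torch.bernoulli(probs)
print(reinforce_loss(probs, gates, rewards))
```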
851
293
851
IBADR: an Iterative Bias-Aware Dataset Refinement Framework for Debiasing NLU models
As commonly-used methods for debiasing natural language understanding (NLU) models, dataset refinement approaches heavily rely on manual data analysis, and thus may be unable to cover all the potential biased features. In this paper, we propose IBADR, an Iterative Bias-Aware Dataset Refinement framework, which debiases NLU models without predefining biased features. We maintain an iteratively expanded sample pool. Specifically, at each iteration, we first train a shallow model to quantify the bias degree of samples in the pool. Then, we pair each sample with a bias indicator representing its bias degree, and use these extended samples to train a sample generator. In this way, this generator can effectively learn the correspondence relationship between bias indicators and samples. Furthermore, we employ the generator to produce pseudo samples with fewer biased features by feeding specific bias indicators. Finally, we incorporate the generated pseudo samples into the pool. Experimental results and in-depth analyses on two NLU tasks show that IBADR not only significantly outperforms existing dataset refinement approaches, achieving SOTA, but is also compatible with model-centric methods.
Although neural models have made significant progress in many natural language understanding (NLU) tasks To alleviate this issue, researchers have proposed many methods that can be generally divided into two categories: model-centric mitigation approaches In this paper, we propose IBADR, an Iterative Bias-Aware Dataset Refinement framework, which iteratively generates samples to debias NLU models without predefining biased features. Under this framework, we create a sample pool initialized by the original training samples, and gradually expand it through multiple iterations. As shown in Figure Apparently, the above iterative process guides the sample generator towards samples with fewer biased features. However, we observe the generated pseudo samples display less diversity when we feed the lowest-degree bias indicator to the sample generator. The underlying reason is that the shallow model consistently assigns a relatively low bias degree to samples with specific patterns, such as the premise directly negating the hypothesis by inserting a word "not". Consequently, the sample generator learns these patterns and tends to produce samples containing similar patterns, thereby limiting their diversity. To address this issue, we further explore two strategies to diversify generations. First, instead of always using the lowest-degree bias indicator, we randomly select a low-degree bias indicator. In this way, the sample generator is discouraged from continually creating pseudo samples containing similar patterns, while still ensuring fewer biased features in the pseudo samples. Secondly, we dynamically update the shallow model by integrating the newly generated pseudo samples during the iterative generation process. By doing this, we effectively decrease the assignment of the lowestdegree bias indicator to pattern-specific samples, ultimately promoting greater diversity of the generated samples. To summarize, the main contributions of this paper are three-fold: • We propose a dataset refinement framework designed to iteratively generate pseudo samples without prior analysis of biased features. • We present two strategies to enhance the diversity of the pseudo samples, which further boost the performance of NLU models. • To verify the effectiveness and generality of IBADR, we conduct experiments on two NLU tasks. The experimental results show that IBADR achieves SOTA performance.
In this section, we give a detailed description of IBADR. Under this framework, we first use a limited set of training samples to train a shallow model, which serves to measure the bias degree of samples. Then, we iteratively generate pseudo samples with fewer biased features, as illustrated in Figure As investigated in Back to our framework, our primary objective is to generate samples with a low bias degree, which can be used to reduce spurious correlations via adjusting dataset distributions. The overview of the iterative sample generation process is shown in Figure Step 1: Setting Bias Indicators. First, we use the above-mentioned shallow model to measure the bias degree of each sample in S, as described in Section 2.1, and sort these samples according to their bias degree and divide them into N bi groups with equal size. Each group is assigned with a bias indicator b n , where 1≤n≤N bi , b 1 represents the lowest-degree bias indicator and b N bi denotes the highest-degree bias indicator. Step 2: Finetuning Sample Generator. Then, we use the samples in S to finetune the sample generator θ g via the following loss function: where b (i) represents the bias indicator assigned for the training sample (x (i) , y (i) ). Through training with this objective, the generator can effectively learn the correspondence relationship between bias indicators and samples. Furthermore, in the subsequent stages, we can specify both the bias indicator and the label to control the generations of pseudo samples. Step 3: Generating Pseudo Samples. Next, we designate a bias indicator b representing a low degree of bias, and then feed it with a randomlyselected NLI label ȳ into the generator θ g . This process allows us to form a pseudo sample (x, ȳ) by sampling x from the generator output distribution p g (•| b, ȳ; θ g ). By repeating this sampling process, we can obtain a set of generated pseudo samples with fewer biased features. Step 4: Expanding Sample Pool. Subsequently, to ensure the quality of generated pseudo samples, we follow After N iter iterations of the above steps, our sample pool contains not only the original training samples, but also abundant pseudo samples with fewer biased features. Finally, we debias the NLU model via the retraining on these samples. Intuitively, the most direct way is to set the above specified bias indicator b to b 1 , which denotes the lowest bias degree. However, we observe that such generated pseudo samples lack diversity and fail to cover diverse biased features. The reason behind this is that the generated pseudo samples designated with b 1 always follow certain patterns, exhibiting less diversity compared to those assigned with other bias indicators. For example, the premise directly negates the hypothesis using the word "not". Consequently, this results in spurious correlations between b 1 and these certain patterns. Hence, the generator tends to generate samples following these patterns and fails to generate samples that compass a broader range of biased features. To address this issue, we employ the following two strategies: (i) Instead of using the lowest-degree bias indicator b 1 , we use a randomlyselected low-degree bias indicator: b=b r , where 1≤r≤ N bi 2 , and feed it into the generator during the iterative generation process. Upon human inspection, we observe that the generated pseudo samples not only become diverse but also still contain relatively few biased features. 
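The iterative procedure can be summarised as a small skeleton. ShallowModel, Generator, the heuristic bias proxy, and the toy seed examples below are all hypothetical stand-ins rather than the actual GPT-2-based components; only the control flow (rank and bin into N_bi groups, finetune on indicator-tagged samples, sample with a randomly chosen low-degree indicator, expand the pool, refresh the shallow model) mirrors the description above, and the confidence filter is reduced to a comment.

```python
import random

N_BI, N_ITER = 5, 3
LABELS = ["entailment", "neutral", "contradiction"]

class ShallowModel:
    """Stand-in for the bias-measuring shallow model (here: a toy heuristic)."""
    def bias_degree(self, example):
        text, _ = example
        return text.count("not")             # toy proxy for a biased surface feature
    def refresh(self, samples):
        pass                                  # a real shallow model would be retrained here

class Generator:
    """Stand-in for the conditional sample generator."""
    def finetune(self, tagged_pool):
        self.pool = tagged_pool               # a real generator learns p(x | indicator, label)
    def sample(self, indicator, label, n):
        # Here we just resample examples carrying the requested indicator.
        pool = [ex for ex, b in self.pool if b == indicator] or [ex for ex, _ in self.pool]
        return [(text, label) for text, _ in random.choices(pool, k=n)]

def assign_indicators(pool, shallow):
    """Sort by bias degree and split into N_BI equal groups; group 1 = least biased."""
    ranked = sorted(pool, key=shallow.bias_degree)
    size = max(1, len(ranked) // N_BI)
    return [(ex, min(i // size + 1, N_BI)) for i, ex in enumerate(ranked)]

def ibadr(pool, shallow, gen, n_new=10):
    for _ in range(N_ITER):
        gen.finetune(assign_indicators(pool, shallow))
        indicator = random.randint(1, N_BI // 2)      # a random LOW-degree indicator
        label = random.choice(LABELS)
        pool += gen.sample(indicator, label, n_new)   # (a confidence filter would go here)
        shallow.refresh(random.sample(pool, min(len(pool), 50)))
    return pool

seed = [("a man is not sleeping . ||| a man is awake", "entailment"),
        ("a dog runs . ||| an animal moves", "entailment"),
        ("kids play . ||| kids are not playing", "contradiction")]
print(len(ibadr(seed, ShallowModel(), Generator())))
```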
(ii) During the generation process, we update the shallow model θ s using a randomly-extracted portion of S at each iteration. This strategy prevents the shallow model from consistently predicting a low bias degree to pseudo samples following previously-appeared patterns, thereby enhancing the diversity of the pseudo samples. Tasks and Datasets. We conduct experiments on two NLU tasks: natural language inference and fact verification. • Natural Language Inference (NLI). This task aims to predict the entailment relationship between the pair of premise and hypothesis. We conduct experiments using the MNLI • Fact Verification. This task is designed to determine whether a textual claim is supported or refuted by the provided evidence text. We select FEVER Baselines. We compare IBADR with the following baselines: • CrossAug • z-filter • Products-of-Experts (PoE) • Confidence Regularization (Conf-reg) (Utama et al., 2020a). This method trains a debiased model by increasing the uncertainty of samples with biased features. It first trains a bias-only model to quantify the bias degree of each sample, and then scales the output distribution of a teacher model based on the bias degree, where the re-scaled distribution can be used to enhance the debiased model. • Example Reweighting (Reweight) • Debiasing Contrastive Learning (DCT) Please note that in exception to CrossAug and z-filter, which are dataset refinement approaches, all other approaches are model-centric. Implementation Details. In our experiments, we use GPT2-large For the NLU models, we train them on the augmented datasets of different tasks for 8 epochs using a learning rate of 1e-5. We employ an early stop strategy during the training process. We conduct all experiments three times, each time with different random seeds, and report the average results. Sample numbers of the augmented datasets are listed in Table The bias indicator number N bi on our framework is an important hyper-parameter, which determines the partition granularity of the sample pool. Thus, we gradually vary N bi from 3 to 9 with an increment of 2 in each step, and compare the model performance on the development sets of MNLI. As shown in Table Table To assess the compatibility of IBADR with modelcentric debiasing methods, we report the model performance when simultaneously using IBADR and PoE As shown in Table To assess the effects of special designs on IBADR, we also report the performance of several IBADR variants on MNLI: • w/o bias indicator. This variant directly uses the samples without the bias indicator to train the sample generator. • w/o iterative generation. Instead of generating pseudo samples iteratively, we only utilize the sample generator trained in the first iteration to generate pseudo samples. As shown in Table Without the bias indicators, the generated pseudo samples will contain undesired biased features, resulting in poorer performance on the challenge set. As shown in To further explore the compatibility of IBADR with different sizes of pre-trained language models, we reconduct experiments by individually replacing the BERT-base model with BERT-large, RoBERTa-base and RoBERTa-large. We also compare IBADR with z-filter, which is the current SOTA data refinement method. The comparisons are performed on the MNLI and SNLI datasets. † As presented in Table Original randomly select two subsets from original training samples of MNLI, with sizes 100K and 200K, respectively. 
Afterwards, we employ IBADR to augment these subsets and subsequently retrain NLU models using the augmented datasets. Table To explore the influence of augmented dataset size, we retrain the NLU model on the MNLI dataset with different numbers of augmented samples: 10K, 100K, 300K, 600K, and 900K, respectively. As indicated in Table To ensure a fair comparison with z-filter, we employ GPT-2 Large as the sample generator in our main study. In exploring IBADR's compatibility with advanced large language models (LLMs), we finetune the LLaMA-7b model Our related work primarily focuses on two categories of methods: model-centric and dataset refinement methods. Numerous previous studies have adopted model-centric approaches to address biases in NLU models. Several studies have explored generative data augmentation methods to enhance the model robustness in various domains. In this work, we propose IBADR, an iterative dataset refinement framework for debiasing NLU models. Under this framework, we train a shallow model to quantify the bias degree of samples, and then iteratively generate pseudo samples with fewer biased features, which can be used to debias the model via retraining. We also incorporate two strategies to enhance the diversity of generated pseudo samples, further improving model performance. In extensive experiments on two tasks, IBADR consistently shows superior performance compared to baseline methods. Besides, IBADR can better handle unknown biased features and has good compatibility with larger language models. In the future, we will explore the compatibility of IBADR with other large language models, such as GPT-4 The limitations of this framework are the following aspects: (i) Despite filtering the pseudo samples with low model confidence, IBADR might still produce pseudo samples with incorrect labels, which limits the model performance; (ii) We only conduct experiments on NLU tasks, neglecting the exploration of its applicability to a wider range of tasks. This paper proposes a dataset refinement framework that aims to adjust dataset distributions in order to mitigate data bias. All the datasets used in this paper are publicly available and widely adopted by researchers to test the performance of debiasing frameworks. Additionally, this paper does not involve any data collection or release, thus eliminating any privacy concerns. Overall, this study will not pose any ethical issues.
1,204
2,415
1,204
SoNLP-DP System for CoNLL-2016 English Shallow Discourse Parsing
This paper describes the submitted English shallow discourse parsing system from the natural language processing (NLP) group of Soochow University (SoNLP-DP) to the CoNLL-2016 shared task. Our system classifies discourse relations into explicit and non-explicit relations and uses a pipeline platform to conduct every subtask to form an end-to-end shallow discourse parser in the Penn Discourse Treebank (PDTB). Our system is evaluated on the CoNLL-2016 Shared Task closed track and achieves 24.31% and 28.78% F1-measure on the official blind test set and test set, respectively.
Discourse parsing determines the internal structure of a text via identifying the discourse relations between its text units and plays an important role in natural language understanding that benefits a wide range of downstream natural language applications, such as coherence modeling As the largest discourse corpus, the Penn Discourse TreeBank (PDTB) corpus Although much research work has been conducted for certain subtasks since the release of the PDTB corpus, there is still little work on constructing an end-to-end shallow discourse parser. The CoNLL 2016 shared task evaluates endto-end shallow discourse parsing systems for determining and classifying both explicit and nonexplicit discourse relations. A participant system needs to (1)locate all explicit (e.g., "because", "however", "and".) discourse connectives in the text, (2)identify the spans of text that serve as the two arguments for each discourse connective, and (3) predict the sense of the discourse relations (e.g., "Cause", "Condition", "Contrast"). In this paper, we describe the system submission from the NLP group of Soochow university (SoNLP-DP). Our shallow discourse parser consists of multiple components in a pipeline architecture, including a connective classifier, argument labeler, explicit classifier, non-explicit classifier. Our system is evaluated on the CoNLL-2016 Shared Task closed track and achieves the 24.31% and 28.78% in F1-measure on the official blind test set and test set, respectively. The remainder of this paper is organized as follows. Section 2 presents our shallow discourse parsing system. The experimental results are described in Section 3. Section 4 concludes the paper.
In this section, after a quick overview of our system, we describe the details involved in implementing the end-to-end shallow discourse parser. A typical text consists of sentences glued together in a systematic way to form a coherent discourse. Referring to the PDTB, shallow discourse parsing focus on shallow discourse relations either lexically grounded in explicit discourse connectives or associated with sentential adjacency. Different from full discourse parsing, shallow discourse parsing transforms a piece of text into a set of discourse relations between two adjacent or nonadjacent discourse units, instead of connecting the relations hierarchically to one another to form a connected structure in the form of tree or graph. Specifically, given a piece of text, the end-toend shallow discourse parser returns a set of discourse relations in the form of a discourse connective (explicit or implicit) taking two arguments (clauses or sentences) with a discourse sense. That is, a complete end-to-end shallow discourse parser includes: • connective identification, which identifies all connective candidates and labels them as whether they function as discourse connectives or not, • argument labeling, which identifies the spans of text that serve as the two arguments for each discourse connective, • explicit sense classification, which predicts the sense of the explicit discourse relations after achieving the connective and its arguments, • non-explicit sense classification, for all adjacent sentence pairs within each paragraph without explicit discourse relations, which classify the given pair into EntRel, NoRel, or one of the Implicit/AltLex relation senses. Figure Our connective identifier works in two steps. First, the connective candidates are extracted from the given text referring to the PDTB. There are 100 types of discourse connectives defined in the PDT-B. Then every connective candidate is checked whether it functions as a discourse connective. • Lexical: connective itself, POS of the connective, connective with its previous word, connective with its next word, the location of the connective in the sentence, i.e., start, middle and end of the sentence. • Syntactic: the highest node in the parse tree that covers only the connective words (dominate node), the context of the dominate node Argument labeler need to label the Arg1 and Arg2 spans for every connective determined by connective identifier. Following the work of After extracting the argument candidates, a multi-category classifier is employed to determine the role of every argument candidate (i.e., Arg1, Arg2, or NULL) with features reflecting the properties of the connective, the candidate constituent and relationship between them. Features include, • Connective related features: connective itself, its syntactic category, its sense class • Number of left/right siblings of the connective. • The context of the constituent. We use POS combination of the constituent, its parent, left sibling and right sibling to represent the context. When there is no parent or siblings, it is marked NULL. • The path from the parent node of the connective to the node of the constituent. • The position of the constituent relative to the connective: left, right, or previous. After a discourse connective and its two arguments are identified, the sense classifier is proved to decide the sense that the relation conveys. 
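To make the pipeline described above concrete, the following is a minimal, illustrative sketch of the data flow from connective identification through argument labeling to explicit and non-explicit sense classification. The keyword matching, the hard-coded sense rules, and all function names are hypothetical stand-ins, not the maximum-entropy classifiers and feature sets used in the actual system.

```python
# A minimal, illustrative sketch of the pipeline data flow: connective identification,
# argument labeling, explicit sense classification, and non-explicit sense classification
# for adjacent sentences without an explicit connective. The keyword matching and the
# hard-coded sense rules are placeholders, not the system's MaxEnt classifiers.
from dataclasses import dataclass
from typing import List, Optional, Tuple

# A tiny subset of the ~100 PDTB connectives, for illustration only.
CONNECTIVE_CANDIDATES = {"because", "however", "and", "but", "although"}

@dataclass
class DiscourseRelation:
    connective: Optional[str]   # None for non-explicit relations
    arg1: str
    arg2: str
    sense: str

def identify_connectives(sentence: str) -> List[str]:
    """Step 1: flag candidate tokens that may function as discourse connectives."""
    return [tok for tok in sentence.lower().split() if tok in CONNECTIVE_CANDIDATES]

def label_arguments(sentence: str, connective: str) -> Tuple[str, str]:
    """Step 2: placeholder argument labeler: split the sentence at the connective."""
    left, _, right = sentence.lower().partition(connective)
    return left.strip(" ,"), right.strip(" .")

def classify_explicit_sense(connective: str) -> str:
    """Step 3: placeholder explicit sense classifier keyed on the connective."""
    return {"because": "Contingency.Cause", "however": "Comparison.Contrast",
            "but": "Comparison.Contrast"}.get(connective, "Expansion.Conjunction")

def classify_nonexplicit_sense(arg1: str, arg2: str) -> str:
    """Step 4: placeholder non-explicit classifier for adjacent sentence pairs."""
    return "EntRel"

def parse(sentences: List[str]) -> List[DiscourseRelation]:
    relations = []
    for i, sent in enumerate(sentences):
        connectives = identify_connectives(sent)
        for conn in connectives:
            arg1, arg2 = label_arguments(sent, conn)
            relations.append(DiscourseRelation(conn, arg1, arg2, classify_explicit_sense(conn)))
        if not connectives and i > 0:   # adjacent pair without an explicit relation
            relations.append(DiscourseRelation(None, sentences[i - 1], sent,
                                               classify_nonexplicit_sense(sentences[i - 1], sent)))
    return relations

if __name__ == "__main__":
    for rel in parse(["He stayed home because it rained .", "The game was cancelled ."]):
        print(rel)
```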
Although the same connective may carry different semantics under different contexts, only a few connectives are ambiguous. Referring to the PDTB, the non-explicit relations include Implicit, AltLex, EntRel and NoRel. Our non-explicit sense classifier includes five traditional features. Production rules: Following prior work, three features denote the presence of syntactic production rules in Arg1, Arg2 or both. Dependency rules: Similar to the production rules, three features denoting the presence of dependency productions in Arg1, Arg2 or both are also introduced in our system. First/Last and first 3 words: This set of features includes the first and last words of Arg1, the first and last words of Arg2, the pair of the first words of Arg1 and Arg2, the pair of the last words, and the first three words of each argument. Word pairs: We include the Cartesian product of words in Arg1 and Arg2, and apply a mutual information (MI) criterion to select the top 500 word pairs. Brown cluster pairs: We include the Cartesian product of the Brown cluster values of the words in Arg1 and Arg2. In our system, we use the 3,200 Brown clusters provided by the CoNLL shared task. Besides, we notice that not all adjacent sentences contain a relation between them. Therefore, we treat these adjacent sentences as NoRel relations, following the PDTB. We train our system on the corpora provided in the CoNLL-2016 Shared Task and evaluate it on the CoNLL-2016 Shared Task closed track. All our classifiers are trained using the OpenNLP maximum entropy package with the default parameters (i.e., without smoothing and with 100 iterations). We first report the official scores on the CoNLL-2016 shared task on the development, test and blind test sets. Then, the supplementary results provided by the shared task organizers are reported. Table In Table In Table Further, we report all the official performance in Table We have presented the SoNLP-DP system from the NLP group of Soochow University that participated in the CoNLL-2016 shared task. Our system is evaluated on the CoNLL-2016 Shared Task closed track and achieves F1 scores of 24.31% and 28.78% on the official blind test set and the test set, respectively.
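The word-pair feature described above (the Cartesian product of Arg1 and Arg2 words, filtered to the top 500 pairs by mutual information) can be sketched as follows. The count-based MI estimate, the toy instances, and the helper names are assumptions for illustration; only the pairing scheme and the top-500 cutoff come from the text.

```python
# Sketch of the word-pair feature extraction for the non-explicit classifier: take the
# Cartesian product of Arg1 and Arg2 tokens and keep the pairs most informative about
# the relation sense. The MI estimate below is a simple count-based approximation over
# toy data; the system keeps the top 500 pairs.
from collections import Counter
from itertools import product
from math import log

def word_pairs(arg1: str, arg2: str):
    return {f"{w1}|{w2}" for w1, w2 in product(arg1.lower().split(), arg2.lower().split())}

def select_top_pairs(instances, k=500):
    """instances: list of (arg1, arg2, sense). Returns the k pairs with highest MI."""
    n = len(instances)
    pair_counts, label_counts, joint_counts = Counter(), Counter(), Counter()
    for arg1, arg2, sense in instances:
        label_counts[sense] += 1
        for p in word_pairs(arg1, arg2):
            pair_counts[p] += 1
            joint_counts[(p, sense)] += 1

    def mi(pair):
        # MI between the binary pair-presence feature and the sense label
        # (only the "pair present" terms are summed, a common shortcut for rare features).
        score = 0.0
        p_pair = pair_counts[pair] / n
        for sense, c_label in label_counts.items():
            p_joint = joint_counts[(pair, sense)] / n
            if p_joint > 0:
                score += p_joint * log(p_joint / (p_pair * (c_label / n)))
        return score

    return sorted(pair_counts, key=mi, reverse=True)[:k]

if __name__ == "__main__":
    toy = [("the deal collapsed", "investors fled", "Contingency.Cause"),
           ("prices rose", "demand fell", "Comparison.Contrast"),
           ("the deal collapsed", "the bank failed", "Contingency.Cause")]
    print(select_top_pairs(toy, k=5))
```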
586
1,685
586
Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement
Generative language models (LMs) are increasingly used for document class-prediction tasks and promise enormous improvements in cost and efficiency. Existing research often examines simple classification tasks, but the capability of LMs to classify on complex or specialized tasks is less well understood. We consider a highly complex task that is challenging even for humans: the classification of legal reasoning according to jurisprudential philosophy. Using a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts, we systematically test the performance of a variety of LMs. We find that generative models perform poorly when given instructions (i.e. prompts) equal to the instructions presented to human annotators through our codebook. Our strongest results derive from fine-tuning models on the annotated dataset; the best performing model is an in-domain model, LEGAL-BERT. We apply predictions from this fine-tuned model to study historical trends in jurisprudence, an exercise that both aligns with prominent qualitative historical accounts and points to areas of possible refinement in those accounts. Our findings generally sound a note of caution in the use of generative LMs on complex tasks without finetuning and point to the continued relevance of human annotation-intensive classification methods.
Academia and industry increasingly use generative language models (LMs) for document annotation and class-prediction tasks, which promise enormous improvements in cost and efficiency. However, research tends to focus on relatively simple and generic annotation contexts, such as topic or query-keyword relevance In this study we systematically examine the ability of large LMs to parse a construct that is difficult even for highly trained annotators: modes of legal reasoning. We consider two prominent modes of legal reasoning that judges employ as identified by legal historians, in addition to a null or noninterpretative class. Although the classes of legal reasoning identified by historians reflect relatively well-defined concepts, determining whether a particular document reflects a mode of reasoning can be exceptionally challenging. We suspect this is common to many high-value but specialized tasks, such as classifying complex emotional states or detecting indirect racial or gender bias. These tasks often require both abstract reasoning and specialized knowledge. Legal reasoning is a suitable setting for examining model performance on a highly complex classification task. The foundation of our research is a new dataset of thousands of paragraphs of historical Supreme Court opinions annotated by a team of upper-year students at a highly selective law school. We find that even the largest models perform poorly at the task without fine-tuning, even when using similar instructions as those given to human annotators. This finding suggests that LMs, even as augmented through few-shot or chain-ofthought prompting, may not be well-suited to complex or specialized classification tasks without taskspecific fine-tuning. For such tasks, substantial annotation by domain experts remains a critical component. To demonstrate this point, we examine the performance of established to cutting-edge LMs when fine-tuned on our annotated data. Our results show strong performance for many of these fine-tuned models. Our analysis explores various approaches to model structure, such as a multi-class task versus serialized binary tasks, but we find that using an in-domain pre-trained model, LEGAL-BERT The primary contributions of this paper are as follows: 1. We develop a new dataset of domain-expert annotations in a complex area. 2. We find that SOTA in-context generative models perform poorly on this task. 3. We show that various fine-tuned models have relatively strong performance. 4. We study the relationship between our bestperforming model's predictions and the consensus historical periodization of judicial reasoning, finding both substantial convergence and opportunities for refinement in the historical accounts. In sum, our paper shows that in a complex and specialized domain, without fine-tuning, current generative models exhibit serious limitations; there is a continued need for domain-expert annotation, which can be effectively leveraged to unseen instances through fine-tuned models.
Researchers have developed strategies to guide LMs to perform complex tasks without the time and infrastructure costs of fine-tuning, often by breaking decisions down into multiple steps of reasoning. At certain tasks and with these prompting strategies, LMs perform annotation or classification tasks at the level of humans. Given the high costs (e.g., time, money, logistics) of collecting high-quality human-annotated data, recent work has suggested that annotation tasks previously performed by students, domain experts, or crowdsourced workers could be replicated with equal performance by LMs. Generative models perform well on query-keyword relevance tasks The range of applications for which generative LMs might adequately perform is an open question. We have found limited work that requires specialized knowledge in addition to the use of abstract reasoning skills. In this study, we ask the models to engage in precisely this form of reasoning, which is challenging even for domain-expert humans. What distinguishes this form of reasoning is that it requires the analyst to conceptualize abstract principles and determine whether a specialized, domain-specific example fits one of those concepts. This difficulty contrasts with simpler tasks, which may key off well-established associations in training data between concepts, such as political affiliation and word usage. Our focus is on legal reasoning involving statutory interpretation. Llewellyn's modes of legal reasoning apply more broadly than statutory interpretation. With respect to statutory interpretation specifically, under the grand style of reasoning "case-law statutes were construed 'freely' to implement their purpose, the court commonly accepting the legislature's choice of policy and setting to work to implement it" Though their terminology does not always follow Llewellyn, other legal scholars identify a similar primary distinction in legal reasoning. Horwitz, for instance, centers discussion on legal "orthodoxy," which seeks to separate law from consequences and elevate "logical inexorability" Grand A legal decision that views the law as an open-ended and on-going enterprise for the production and improvement of decisions that make sense on their face and in light of political, social, and economic factors. None A passage or mode of reasoning that does not reflect either the Grand or Formal approaches. Note that this coding would include areas of substantive law outside of statutory interpretation, including procedural matters. of thought Our contribution focuses on this broad consensus around a key distinction in the modes of legal reasoning. On the one hand, a mode of reasoning that is innovative, open-ended, and oriented to social, political, and economic consequences of law; on the other hand, a mechanical, logic-oriented approach that conceives of the law as a closed and deductive system of reasoning. Though scholars differ on terminology, we follow Llewellyn and refer to these schools as Grand and Formal (Table Not only does this basic conceptual consensus exist, but there is also rough consensus on periodization: that is, the periods of history in which each school was dominant. The "conventional" We use this periodization to validate our measure; but also use the measure to provide a nuanced account of historical trends in legal reasoning. We use a dataset of 15,860 historical United States Supreme Court opinions likely involving statutory interpretation and issued between 1870 and 2014. 
A team of domain experts, four upper-year law students at a highly selective law school, annotated selections from court opinions as formal, grand, or lacking statutory interpretation. This team collaboratively developed and tested a codebook (included in Appendix D) by iteratively annotating court opinions and calculating inter-rater reliability on a weekly basis over the spring 2023 semester. The annotation task asked each annotator to assign one of three labels, "formal," "grand," or "none," to each paragraph. A fourth label, "low confidence," could be added in addition to one of the three core labels if the type of reasoning was ambiguous. We calculated inter-rater reliability using Krippendorff's alpha to evaluate agreement between the four labelers and across the three main classes. This coefficient was calculated weekly and guided the decision of when to start collecting data for training. Paragraphs with high disagreement were discussed in depth and these discussions led to the revision of our codebook. We note that while this annotation is formally a three-way classification task, the low dimensional-ity of the output space does not imply that the task is easy. In fact, it took weeks for highly trained upper-year law students to reach a level of expertise at which they were able to reach consistent results. Inter-rater reliability increased after the introduction of a decision chart (Figure In total, excluding paragraphs prior to decent inter-rater reliability, 2748 paragraphs were labeled and included in the training and evaluation data. Even with the upsampling of legal interpretation based on seed terms, paragraphs that did not engage in legal interpretation or interpreted something other than a statute, our "none" class, made up 68% of the data (Table Though each member of the annotation team was an upper-year law student who had completed highly relevant coursework, the task remained difficult for the human annotators, as reflected in the mid-range inter-rater reliability (0.63 Krippendorff's alpha). of legal reasoning were clear, but determining whether specific instances reflected one mode or another required specialized knowledge and an ability to map those abstract concepts to the incomplete evidence in the paragraphs. The complexity of this task makes it challenging for a generative model prompted in-context or with CoT reasoning. As an initial experiment, we begin with a slightly simplified task: identifying whether a passage involves some form of legal reasoning (regardless of class). We then compare a larger variety of models on the primary task of interest: identifying instances of formal and grand legal reasoning. For both tasks, we compare the performance of in-context and fine-tuned models, with the expectation that identifying legal reasoning is more achievable for in-context models than identifying the specific formal or grand classes. Here, we test thresholds of task complexity, to better identify the point at which an annotated dataset for fine-tuning is needed; not just a carefully crafted prompt. In both tasks, we compare the performance of a set of fine-tuned models to a set of prompted models. Models were chosen based on established usage, popularity, and accessibility (i.e. model size), since applied NLP researchers may be less likely to have access to the computing power needed for extremely large models. 
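The weekly inter-rater reliability check described above (Krippendorff's alpha over four annotators and the three core labels) can be reproduced in a few lines. The open-source krippendorff package and the toy label matrix below are assumptions about tooling and data, used only to illustrate the computation.

```python
# Sketch of the inter-rater reliability check: Krippendorff's alpha over a
# (annotators x paragraphs) matrix of nominal labels. The `krippendorff` package and
# the toy ratings are assumptions; the paper only states the statistic and its value.
import numpy as np
import krippendorff  # pip install krippendorff

LABELS = {"formal": 0, "grand": 1, "none": 2}

# rows = annotators, columns = paragraphs; np.nan marks a paragraph an annotator skipped
ratings = np.array([
    [LABELS["none"], LABELS["formal"], LABELS["grand"], LABELS["none"]],
    [LABELS["none"], LABELS["formal"], LABELS["grand"], LABELS["none"]],
    [LABELS["none"], LABELS["formal"], LABELS["none"],  np.nan],
    [LABELS["none"], LABELS["grand"],  LABELS["grand"], LABELS["none"]],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")  # the paper reports ~0.63 on its real data
```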
The fine-tuned models include BERT-base As a slightly simplified initial task, we begin by considering whether a model can detect instances in which some form of legal reasoning occurs (regardless of formal or grand reasoning). This remains a challenging task but is comparatively less complex than identifying the mode of reasoning. We consider any paragraph annotated as either formal or grand as being a paragraph where legal reasoning is present; this is a binary classification problem. We compare two procedures for identifying legal reasoning in text: • In-context generative identification based on a description of legal reasoning (prompt included in Appendix C). • Fine-tuned binary classification based on hand-labeled annotations. All fine-tuned models perform relatively well on distinguishing paragraphs with legal reasoning from paragraphs without legal reasoning (Table The primary task requires additional specialized knowledge in the identification of specific classes of reasoning, formal and grand. This task also requires the identification of imbalanced classes, as formal reasoning was only identified in 11% of all annotated paragraphs. We test various assemblies of models and compare fine-tuning with prompting for identifying legal reasoning in text. Our approaches to prompting include the following: • Chain-of-Thought: A CoT prompt that provides steps of reasoning to follow prior to determining the class of legal reasoning (Appendix C). The steps used in this prompt derive from the decision chart provided to annotators. Each prompting strategy is derived from our codebook (see Appendix D), which guided human annotators through data annotation. We do not exhaustively explore prompts beyond our codebook. Instead, we consider whether a reasonable prompt that is successful for humans works well for a model. While it is possible that another, as-yet-unknown, prompt could have provided better results, we know that the language in our codebook is sufficient to describe the task and the desired results. We contrast the results of the prompted generative models with the results from fine-tuned models. These models were fine-tuned with a variety of approaches, including: • Multi-Class: A fine-tuned multi-class classification based on hand-labeled annotations. • Nested: An assembly of models that breaks the classification task into nested binary stages. One model is fine-tuned to identify interpretation and another model to distinguish between grand and formal classes. The results from the first model are used by the second. We test the performance of all models on the same five test splits of data and find that the fine-tuned models consistently outperform the in-context models (Table The conventional wisdom among legal observers is that we currently live in a period in which the formal style of reasoning predominates. Writing in the mid-twentieth century, Llewellyn identified three periods of legal reasoning. Prior to the Civil War, the grand style of reasoning predominated; from the Civil War to World War I, the formal style of reasoning prevailed; and from World War I onward, courts again operated under the grand style of reasoning.
More recently, scholars identify the 1980s as a critical point of transition towards formalism Our data starts at Reconstruction (the period following the US Civil War) and allows us to examine the convergence between the scholarly consensus historical periodization and the historical periodization implied by our LM-derived results. We can also use our predictions to offer more granular assessments of the periods and potentially to adjudicate differences among the views of prominent scholars. This latter analysis is preliminary, in part, because earlier scholars examined judicial reasoning broadly, whereas our current analysis considers only Supreme Court opinions involving statutory interpretation. But our predictions also allow for a more granular assessment of the historical periods. To illustrate this, we use dashed vertical lines in Figure These results represent some of the first long-run quantitative characterization of trends in jurisprudential philosophies. They both broadly support the qualitative characterizations of legal scholars and provide opportunities for refinement of legal theory and historical accounts. We found that for a task involving abstract reasoning in addition to specialized domain-specific knowledge, it remains essential to have an annotated dataset created by domain experts. Although other work has shown that generative models are able to replicate annotation for complex tasks using carefully crafted prompts, we demonstrate that models fine-tuned on a sizable dataset of expert annotations perform better than models instructed to perform the task through in-context and CoT prompts. We recommend that researchers use caution when employing non-fine-tuned generative models to replicate complex tasks otherwise completed by humans or with human supervision. Best practices would call for human validation of generative model results and an assessment of costperformance tradeoffs with respect to in-domain models. 13 This revolution is also known as the "switch in time that saved nine," referring to the changed voting behavior of Justice Owen Roberts in response to the running threat by President Roosevelt to pack the Court. 14 Justice Scalia, for instance, was appointed by President Reagan to the Supreme Court in 1986, and is often viewed as the single most influential person in the rise of new formalism. For an account of that rise, see A limitation of this study is the relatively low interrater reliability between annotators even after extensive training and conversation. This relatively low reliability results from the difficulty of the task and the inevitable ambiguity of some passages, especially when read out of case context. Another limitation relates to our prompting strategy: to make the in-context prompting more comparable to working with the team of annotators, we use the codebook descriptions and examples in the in-context prompts. Likely, these descriptions and examples could have been optimized for better model performance through additional prompt strategies, and our results for these models may depict lower performance than is possible. 
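As a concrete counterpart to the fine-tuning approach contrasted with prompting throughout this section, the following is a compressed sketch of fine-tuning an in-domain encoder such as LEGAL-BERT as a three-way classifier over the formal/grand/none labels. The hyperparameters, toy examples, and output path are illustrative placeholders rather than the configuration reported in the paper.

```python
# Sketch of fine-tuning an in-domain encoder (LEGAL-BERT) as a three-way classifier
# ("formal" / "grand" / "none"). Hyperparameters and the toy examples are placeholders,
# not the settings used for the results in the paper.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "nlpaueb/legal-bert-base-uncased"
LABELS = {"formal": 0, "grand": 1, "none": 2}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

train = Dataset.from_dict({
    "text": ["The plain meaning of the statute controls.",
             "The Act must be read in light of its broader social purpose.",
             "The motion to dismiss is denied."],
    "label": [LABELS["formal"], LABELS["grand"], LABELS["none"]],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-bert-reasoning", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=train,
)
trainer.train()
```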
• Grand seeds: conference report, committee report, senate report, house report, assembly report, senate hearing, house hearing, assembly hearing, committee hearing, conference hearing, floor debate, legislative history, history of the legislation, conference committee, joint committee, senate committee, house committee, assembly committee, legislative purpose, congressional purpose, purpose of congress, purpose of the legislature, social, society • Formal seeds: dictionary, dictionarium, liguae britannicae, world book, funk & wagnalls, expressio, expresio, inclusio, noscitur a sociis, noscitur a socis, ejusdem generis, last antecedent, plain language, whole act, wholeact, whole code, whole-code, in pari materia, meaningful variation, consistent usage, surplusage, superfluit, plain meaning, ordinary meaning, word The selection of paragraphs to annotate occurred through a series of steps: 1. We include only opinions that perform statutory interpretation. We identify these opinions by finding opinions that include any of the tokens 'statute', 'legislation', or 'act', within 200 characters of the tokens 'mean', 'constru' (i.e. construct), 'interpret', 'reading', or 'understand'. 2. Opinions that pass the statutory interpretation filter were split into paragraphs. In each paragraph, we looked for the occurrence of different seed terms corresponding to either formal or grand reasoning. 3. Of the total number of paragraphs used for labeling, 25% included one or more formal seeds, 25% included one or more grand seeds, and 50% included none of the seed terms. This proportion remained the same until the last two rounds of labeling when more examples of formal or grand seeds were included. During those two rounds, the proportion of formal and grand seeds was increased to 40% for both classes. methods of statutory interpretation related to the formal and grand styles of jurisprudence. A decision chart was created and provided to annotators between the fourth and fifth weeks of annotations (Figure We designed three prompting strategies to instruct LMs to identify legal interpretation and classes of legal interpretation in text. These prompts are included in Figures The codebook was iteratively created throughout the process of annotation to guide annotators. Table 5 includes the final definitions of each class alongside core examples of each class.
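The paragraph-selection procedure above can be sketched as a simple filter plus seed-based tagging step. The regular expression encodes the stated 200-character proximity rule, and the seed lists below are small subsets of the full lists; the function names and the bucketing into formal-seed, grand-seed, and no-seed paragraphs are illustrative.

```python
# Sketch of the selection steps: (1) keep opinions where a statute token occurs within
# 200 characters of an interpretation token, (2) split retained opinions into paragraphs,
# (3) tag paragraphs by formal/grand seed terms. Only the proximity rule and a subset of
# the seed terms come from the text; everything else is illustrative.
import re

STATUTE = r"(statute|legislation|act)"
INTERP = r"(mean|constru|interpret|reading|understand)"
# either ordering, with at most 200 characters in between
STAT_INTERP = re.compile(
    rf"{STATUTE}.{{0,200}}{INTERP}|{INTERP}.{{0,200}}{STATUTE}", re.IGNORECASE | re.DOTALL
)

GRAND_SEEDS = ["legislative history", "committee report", "legislative purpose", "social"]
FORMAL_SEEDS = ["plain meaning", "ordinary meaning", "dictionary", "ejusdem generis"]

def involves_statutory_interpretation(opinion: str) -> bool:
    return STAT_INTERP.search(opinion) is not None

def tag_paragraphs(opinion: str):
    """Yield (paragraph, bucket) where bucket is 'formal_seed', 'grand_seed', or 'no_seed'."""
    for para in opinion.split("\n\n"):
        text = para.lower()
        if any(seed in text for seed in FORMAL_SEEDS):
            yield para, "formal_seed"
        elif any(seed in text for seed in GRAND_SEEDS):
            yield para, "grand_seed"
        else:
            yield para, "no_seed"

if __name__ == "__main__":
    opinion = ("We must interpret the statute according to its plain meaning.\n\n"
               "The legislative history shows Congress intended a broad remedy.")
    if involves_statutory_interpretation(opinion):
        for para, bucket in tag_paragraphs(opinion):
            print(bucket, "->", para)
```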
1,364
3,021
1,364
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System
Pre-trained language models have been recently shown to benefit task-oriented dialogue (TOD) systems. Despite their success, existing methods often formulate this task as a cascaded generation problem which can lead to error accumulation across different sub-tasks and greater data annotation overhead. In this study, we present PPTOD, a unified plug-andplay model for task-oriented dialogue. In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora. We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification. Experimental results show that PPTOD achieves new state of the art on all evaluated tasks in both high-resource and lowresource scenarios. Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent as judged by human annotators. 1
Task-oriented dialogue is often decomposed into three sub-tasks: (1) dialogue state tracking (DST) for tracking user's belief state; (2) dialogue policy learning (POL) for deciding which system action to take; (3) natural language generation (NLG) for generating dialogue response Traditional approaches With the advances in pre-trained language models (PLMs) While impressive results are reported (3) Thirdly, the results of different sub-tasks must be generated in a cascaded order which inevitably increases the system inference latency. In this study, we propose a novel Plug-and-Play Task-Oriented Dialogue (PPTOD) system. Figure Inspired by recent success of dialogue language model pre-training We evaluate PPTOD on a wide range of benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification. Comparisons against previous state-of-theart approaches show that PPTOD achieves better performance in both full-training and low-resource settings as judged by automatic and human evaluations. In summary, our contributions are: • A novel model, PPTOD, that effectively leverages pre-trained language models for taskoriented dialogue tasks. • A new dialogue multi-task pre-training strategy that augments the model's ability with heterogeneous dialogue corpora. • Extensive evaluations on three benchmark TOD tasks reporting state-of-the-art results in both full-training and low-resource settings. • In-depth analysis that further reveals the merits of our model design and the proposed multi-task pre-training strategy.
Task-Oriented Dialogue. Task-oriented dialogue aims at accomplishing user's goal. Traditional systems Language Model Pre-training. The research community has witnessed remarkable progress of pre-training methods in a wide range of NLP tasks, including language understanding In the dialogue domain, many models are pretrained on open-domain conversational data like Reddit. Based on GPT-2, Transfertransfo Pre-training on Supplementary Data. Recent work In this section, we first discuss the datasets and learning objective used in the proposed dialogue multi-task pre-training. Then we introduce how to apply the pre-trained PPTOD for a new task. To construct the pre-training corpus, we collect eleven human-written multi-turn task-oriented dialogue corpora, including MetaLWOZ Motivated by previous work To specify the target task, we plug a task-specific prompt into the dialogue context as the model input. Figure In the multi-task pre-training stage, each training sample is represented as: where t denotes the TOD task that the sample d belongs to, and t ∈ {NLU, DST, POL, NLG}. z t is the task-specific prompt of the form "translate dialogue to A:", with A corresponding to "user intent", "belief state", "dialogue act", and "system response" for the tasks of NLU, DST, POL, and NLG, respectively. x denotes the input dialogue context which is a concatenation of all previous utterances in the dialogue -both system's and user's. And y denotes the target output text. As an example presented in Figure Learning. The model is trained with a maximum likelihood objective. Given the training sample d = (z t , x, y), the objective L Θ is defined as where Θ is the model parameters. In the multi-task pre-training stage, the model is trained to perform all TOD-related tasks with data annotated for different tasks. To optimize the model parameters Θ, we use mini-batch based optimization approach as shown in Algorithm 1. When applying the pre-trained PPTOD to a new downstream task with task-specific labelled data, we use the same learning objective Eq. ( In this work, we report results of PPTOD with three model sizes: PPTOD small , PPTOD base , and PPTOD large . These three models are initialized with T5-small, T5-base, and T5-large models We test PPTOD on three benchmark TOD tasks: (1) end-to-end dialogue modelling; (2) dialogue state tracking; and (3) user intent classification. End-to-end dialogue modelling aims at evaluating the model in the most realistic, fully end-to-end setting, where the generated dialogue states are used for the database search and response generation We conduct experiments on the benchmark Multi-WOZ 2.0 For evaluation, we follow the original Multi-WOZ guidance for all individual metrics: Inform, Success, and BLEU We compare PPTOD with several strong baselines, including Sequicity We compare PPTOD with a wide range of existing methods that can be categorized into two classes: (1) classification-based approaches and (2) generation-based approaches. 2019; In contrast, PPTOD directly generates the outputs, making it more adaptive and generalizable to new ontology labels in real world applications. To investigate how well PPTOD performs with limited training samples on the downstream task, we evaluate it in a simulated low-resource setting. Specifically, we train the model on MultiWOZ 2.0 by varying the percentage of training data (i.e., 1%, 5%, 10%, and 20%). 
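Before turning to the baselines, the construction of a prompted training sample (z_t, x, y) and its maximum-likelihood loss can be sketched as follows under a T5 backbone. The prompt strings mirror the "translate dialogue to A:" templates described above; the model size, the context format, and the helper names are assumptions for illustration.

```python
# Sketch of building a prompted training sample d = (z_t, x, y) and computing the
# maximum-likelihood loss with a T5 backbone, following the formulation above.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

PROMPTS = {
    "NLU": "translate dialogue to user intent:",
    "DST": "translate dialogue to belief state:",
    "POL": "translate dialogue to dialogue act:",
    "NLG": "translate dialogue to system response:",
}

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def loss_for_sample(task: str, context: str, target: str) -> torch.Tensor:
    """Negative log-likelihood of y given the task prompt z_t and dialogue context x."""
    source = f"{PROMPTS[task]} {context}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    # T5's seq2seq LM head returns the token-averaged cross-entropy as .loss
    return model(**inputs, labels=labels).loss

context = "[user] i need a cheap hotel in the north of town"   # illustrative format
print(float(loss_for_sample("DST", context, "[hotel] price cheap area north")))
```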
We compare PPTOD with three strong generation-based baselines, including SimpleTOD, MinTL, and SOLOIST, using the official code released by the authors. Table Next, we evaluate PPTOD for the dialogue state tracking task. The experiments are conducted on the benchmark MultiWOZ 2.0 The goal of intent classification, i.e. NLU, is to classify the user's intent based on the user's utterance. We conduct experiments on the benchmark Banking77 dataset We compare PPTOD with several strong baselines, including BERT-Fixed, BERT-Tuned, USE+ConveRT In the experiments, we train PPTOD for five runs with different selection of training data and random seeds. The average scores and standard deviations are reported in Table In this section, we present further discussions and empirical analyses of the proposed model. First, we compare our plug-and-play generation with the cascaded generation that is adopted by most existing studies. To this end, we fine-tune a T5-small model (without dialogue multi-task pretraining) on MultiWOZ 2.0 by either using the plugand-play or the cascaded formulation. Moreover, we also examine the effect of DB state on the model performance. Specifically, for the plug-and-play model, when utilizing DB state, it first predicts the dialogue state (DST) to retrieve the DB state from the pre-defined database. Then, based on the DB state and dialogue context, the output of POL and NLG are generated in parallel. When ignoring the DB state, the plug-and-play model generates DST, POL, and NLG results in a fully paralleled fashion. For evaluation, we report the results on end-toend dialogue modelling task. In addition, we report the average inference latency and relative speedup of each model. Next, we provide further analyses on the dialogue multi-task pre-training strategy. To quantify the importance of different pre-training data, we pre-train the T5-small model using data that is annotated for individual TOD-related task (i.e., NLU, DST, POL, and NLG). After pre-training, we then evaluate the models on three downstream TOD tasks using Mul-tiWOZ 2.0 and Banking77 datasets. For end-to-end dialogue modelling and dialogue state tracking, we test the model in both 1% and full training settings. For intent classification, we measure the accuracy of models trained with either 10 training samples per intent or full training samples. Table Moreover, we see that pre-training with data annotated for individual TOD-related task helps the model to attain better result in the corresponding downstream task. For example, pre-training with DST data notably improves the model performance in the downstream DST task both in low-resource and full-training settings. Similarly, pre-training with NLG data helps the model to get better BLEU score in the end-to-end dialogue modelling task. Lastly, we see that the PPTOD small model attains the best results on most of the evaluation metrics. This suggests that the pre-training data with different annotations are compatible with each other and the joint utilization of all pre-training data helps the model to achieve the best overall performance. We also conduct a human evaluation with the help of graders proficient in English using an internal evaluation platform. For evaluation, we randomly selected 50 dialogue sessions from the test set of MultiWOZ 2.0 dataset. We compare the results generated by the PPTOD base model against the results from the SOLOIST model. 
All generated results, plus the reference, are evaluated by five graders on a 3-point Likert scale (0, 1, or 2) for each of the following features • Understanding: Whether the system correctly understands the user's goal. • Truthfulness: Whether the system's response is factually supported by the reference. • Coherency: Whether the system's response is semantically coherent with the context. • Fluency: Whether the system's response is grammatically fluent and easy to understand. Table In this paper, we propose PPTOD, a unified model that supports both task-oriented dialogue understanding and response generation in a plug-andplay manner. In addition, we introduce a new dialogue multi-task pre-training strategy to further augment our model's ability in completing TODrelated tasks. Extensive experiments and analysis are conducted on three benchmark TOD tasks in both high-resource and low-resource settings. The automatic and human evaluations demonstrate that PPTOD outperforms the current SOTA systems in terms of various evaluation metrics. We elaborate the details of the dialogue datasets contained in the pre-training dialogue corpora. • MetaLWOZ • SNIPS • CLINC • ATIS • KVRET It contains annotations for user belief state (DST) and system response (NLG). • WOZ • MSR-E2E • Frames (El • TaskMaster • Schema-Guided In Table Please evaluate the system's response with respect to the following features: (1) Understanding; (2) Truthfulness; (3) Coherency; and (4) Fluency. In the following, we provide some guidelines regarding how to judge the quality of the system's response in terms of different features. This metric measures whether the system's response shows that the system is able to understand the goal and intent of the user. The definition of different scores are: • 2: The system completely understands the user's goal and intent. • 1: The system partially understands the user's goal and intent. • 0: The system does not understand the user's goal and intent at all. This metric measures whether the system's response is factually supported by the reference response. The definition of different scores are: • 2: The facts in the system's response are all supported by or can be inferred from the reference response. • 1: The facts in the system's response are partially supported by the reference response. • 0: The system's response is contradicted to the facts contained in the reference response. This metric measures whether the system's response is logically coherent with the dialogue context. The definition of different scores are: • 2: The system's response is logically coherent with the dialogue context. • 1: The system's response contains minor information that is off the topic of the dialogue context. • 0: The system's response is completely irrelevant to the dialogue context. The metrics measures the fluency of the system's response. The definition of different scores are: • 2: The system's response is grammatically correct and easy to understand. • 1: The system's response contains minor errors but they do not affect your understanding. • 0: The system's response does not make sense and it is unreadable. Table 11 presents a generated dialogue example from the PPTOD base model. The user starts the conversation by asking for an expensive restaurant that serves Indian food for dinner. PPTOD finds 14 restaurants that satisfy the user's goal and asks the user for a preferred location. 
We can see that, when the user states no preference for the restaurant location, PPTOD correctly updates the dialogue state by adding the area information, which is missing from the oracle annotation. Then the user switches the dialogue topic to booking a hotel. Through the dialogue trajectory, we see that PPTOD completes the dialogue by successfully providing the user with the necessary information, such as the number of hotel choices (at turn 3) and the booking reference number (at turn 6). When it finds that the user's booking request cannot be fulfilled (at turn 5), the model asks the user for an alternative option. Moreover, this example also demonstrates that PPTOD is able to deal with some NLU challenges displayed in the conversations. For example, at turn 4, the user already provides the information about the Gonville Hotel, but only after the user describes the intention of booking the hotel at turn 5 does the model update the hotel name in the dialogue state based on the co-referenced information from the previous turn. Interestingly, the hotel name is ignored by the oracle dialogue state but our model correctly detects it. The dialogue understanding ability of PPTOD can also be observed in turn 6, in which it updates the hotel stay in the belief state from 2 days to 1 day after the user provides the corresponding information. (Excerpt from the Table 11 example, turn 6: lexicalized response "i was able to successfully book your room . your reference number is 7rzme01z . is there anything else i can help you with ?"; oracle reference "you are all set for 1 night ( sunday ) , reference number is 6wvv053q . would you like to try another location for monday night ?"; turn 7 user utterance "no . that is all i need , thank you .")
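To make the plug-and-play decoding flow discussed in the analysis above concrete, the sketch below shows one way to run a turn: the belief state is generated first and used to query the database, after which the dialogue act and the system response are decoded in parallel from the same augmented context. The generate and db_lookup functions are placeholders, not PPTOD's actual implementation.

```python
# Sketch of plug-and-play inference for one dialogue turn. When the DB state is used,
# DST runs first and its output keys a database lookup; POL and NLG then condition on
# the same augmented context and can be decoded in parallel. All components here are
# illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str, context: str) -> str:
    """Placeholder for a single decoding call, e.g. model.generate() on 'prompt context'."""
    return f"<output for '{prompt}'>"

def db_lookup(belief_state: str) -> str:
    """Placeholder database query keyed on the predicted belief state."""
    return "[db] 3 matches"

def plug_and_play_turn(context: str, use_db: bool = True):
    if use_db:
        belief = generate("translate dialogue to belief state:", context)
        context = f"{context} {db_lookup(belief)}"
    else:
        belief = None
    # POL and NLG take the same input, so they can be decoded in parallel
    with ThreadPoolExecutor(max_workers=2) as pool:
        act_future = pool.submit(generate, "translate dialogue to dialogue act:", context)
        resp_future = pool.submit(generate, "translate dialogue to system response:", context)
    return belief, act_future.result(), resp_future.result()

print(plug_and_play_turn("[user] i need a cheap hotel in the north"))
```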
1,047
1,574
1,047
Don't Let Discourse Confine Your Model: Sequence Perturbations for Improved Event Language Models
Event language models represent plausible sequences of events. Most existing approaches train autoregressive models on text, which successfully capture event co-occurrence but unfortunately constrain the model to follow the discourse order in which events are presented. Other domains may employ different discourse orders, and for many applications, we may care about different notions of ordering (e.g., temporal) or not care about ordering at all (e.g., when predicting related events in a schema). We propose a simple yet surprisingly effective strategy for improving event language models by perturbing event sequences so we can relax model dependence on text order. Despite generating completely synthetic event orderings, we show that this technique improves the performance of event language models on both applications and out-of-domain event data.
Event-level language models (LMs) provide a way to reason about events, and to approximate schematic and script-like knowledge In this paper, we aim to improve event-level LMs in order to make them more suitable for general knowledge learning. While a range of possible modifications to the model can be imagined, such as set transformers Surprisingly, despite our disruption of discourse order, experiments show how perturbations can improve event language modeling of text, particularly when evaluating the model on other domains which present events in different orders (e.g., novels or blogs present data in more of a "narrative" fashion than news datasets common in NLP
Event language modeling tasks are typically defined over sequences of events as they appear in text. The events can be represented either as a sequence of words annotated with predicateargument structure (e.g., semantic roles However, relying on discourse order may not be necessary and can potentially limit generalization of event LMs. For some event related tasks such as schema learning One way to reduce reliance on discourse order is to expose the model to random permutations of the input sequences, as shown in Figure • Reversed order: given a set of events as ABCD, the reverse of the sequence is created as DCBA. • Concatenation of events in the odd positions followed by the even positions of the sequence: the permuted sequence is BDAC. • Concatenation of event tuples in the odd positions followed by those in the even positions of the reverse order of the original sequence. The new sequence is: CADB These shuffle patterns were selected to minimize the chance of repetition across permutations. We also consider event dropout as another perturbation to the original discourse sequence. For each sequence, we remove a small random subset of events (Event Dropout in Figure When dropping events, we can provide additional information to the model about where events were dropped. This forces the model to capture longerterm dependencies among events in the sequence. We randomly select a number of event tuples and replace their tokens with a <mask> token (Masking in Figure Data We train event language models on the Annotated NYT corpus using Open IE event tuples extracted by Ollie The components of the events (the verb, subject, etc.) are all individual tokens, and are treated like normal text. For example, the events (truck packed with explosives), (police arrested suspect), would be given to the model as: packed truck explosives with [TUP] arrested police suspect NULL , where NULL is the null preposition token and [TUP] is a special separator token between events. Each document is first partitioned into segments of four sentences each. All events extracted from each segment are concatenated (in discourse order) to form an event sequence. This is a simple heuristic to avoid considering event sequences that can drift or connect otherwise unrelated events. Tuples with common verbs (is, are, be, ...) and repeating predicates are also ignored. The training, development, and test splits have 7.1M, 19K, and 29K event sequences respectively. During training, depending on the perturbation strategy used, a number of sequences are added to the initial sets. The numbers are hyperparameters, selected differently for each model. Details are given in the following sections. Autoregressive Models Our baseline autoregressive event LM is a pretrained GPT-2 model Once the perturbations are applied to the original sequence, the modified sequence is used as both the input and the output of the model. We trained variants of GPT-2 with different sequence perturbations as shown in Figure Autoencoding Models We use HierarchicAl Quantized Autoencoder (HAQAE) For training the HAQAE model, instead of reconstructing a perturbed sequence, we explore a denoising style training objective, where we only perturb the input part of the sequence keeping the output the same as the original. Our hypothesis is that these models learn a perturbation-invariant latent space representation in both cases, which will help break the dependence on discourse order. 
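The sequence perturbations listed above can be implemented as small functions over a list of event tuples, as sketched below. The [TUP] separator, the <mask> token, and the three shuffle patterns follow the text; the dropout/masking rates and the toy events are illustrative.

```python
# Sketch of the sequence perturbations, operating on a list of event tuples
# (each tuple is itself a string such as "packed truck explosives with").
import random

TUP, MASK = "[TUP]", "<mask>"

def reverse_order(events):                  # ABCD -> DCBA
    return events[::-1]

def odd_then_even(events):                  # ABCD -> BDAC (matches the paper's example)
    return events[1::2] + events[0::2]

def odd_then_even_of_reverse(events):       # ABCD -> CADB
    return odd_then_even(reverse_order(events))

def event_dropout(events, rate=1/3, rng=random):
    keep = [e for e in events if rng.random() > rate]
    return keep if keep else events         # never drop every event

def event_masking(events, rate=1/3, rng=random):
    return [MASK if rng.random() < rate else e for e in events]

def to_sequence(events):
    return f" {TUP} ".join(events)

if __name__ == "__main__":
    events = ["packed truck explosives with", "arrested police suspect NULL",
              "charged court suspect with", "sentenced judge suspect NULL"]
    for fn in (reverse_order, odd_then_even, odd_then_even_of_reverse,
               event_dropout, event_masking):
        print(fn.__name__, "->", to_sequence(fn(events)))
```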
We use the denoising variant in our experiments as it worked better than the standard reconstruction objective in our initial experiments. For each sequence in the permutation model, we generated permuted sequences for 10% of the original sequences. As for the dropout and masked models, we created n/4 new sequences with n being the number of events in the sequence. Each sequence has n/3 of its events either dropped or masked. Preliminary experiments showed little difference between using all the data vs a subset. The GPT-2 model uses the implementation from Huggingface library The HAQAE model uses 5 discrete latent variables. Each variable can initially take on K = 512 values, with an embeddings dimension of 256. The encoder is a bidirectional, single layer RNN with GRU cell We ran different experiments to answer the following questions: How do sequence perturbation techniques improve event language modeling? We evaluate perplexity as is standard in perplexity, we want to see how well event LMs capture schematic knowledge. We thus evaluate on the inverse narrative cloze (INC) task The INC evaluation starts with a gold sequence of events from a real document, and then includes 5 other event sequences pulled from confounding documents. You insert the first gold event artificially at the start of each of these. The gold event sequence should have high probability compared to the confounding event sequences. Figure The perplexity Using sequence perturbations improves the INC accuracy on both test and validation sets for both categories of models. Further, the sequence perturbations gain in terms of INC accuracy is much higher with HAQAE. How do models trained with perturbation techniques perform on out-of-domain data? The NYT corpus used for training the models in this study is newswire. The journalistic writing style does not always follow the temporal ordering of events, but represents the events in various orders going backwards or forward in time. One might argue that the reason the sequence perturbations work better in terms of INC accuracy is that the events extracted from news do not necessarily follow the temporal order and therefore the perturbations will not create an issue. To show the effectiveness of our approach, we evaluated the performance of our models on the event sequences extracted from narratives coming from different domains: novels, blogs and news We used the OpenIE extraction system in a similar fashion to extract the event tuples from the narrative sequences. We used our best-performing model from the previous section and with no finetuning applied the models to see how our sequence perturbations performed in terms of INC accuracy on these narrative texts. The results of this analysis are presented in Table How effective are the sequence perturbation techniques with respect to the number of training instances? Our sequence perturbations can be seen as data augmentation strategies which will help models learn new aspects of data that can not be learned from the original sequences. As the number of training samples increases, the model has more opportunities to learn these aspects. Therefore, the sequence perturbations will be more useful for domains with fewer training samples. Table We plotted the perplexity with respect to the number of training sequences for the GPT-2 baseline system as well as permuted and dropout models. As can be seen in Figure How do schemas generated by different models differ from each other? 
We generated schemas for 46 two-event seeds using the HAQAE baseline and permuted models. We wanted to see how the generated schemas differ in two different aspects: First, for each seed, we permuted the events and generated schemas for both models. We expect the permuted model to have less variation in generating events for original and permuted seeds. We calculated the perplexity of the generated events for both the original order of events as well as the permuted order. Table Second, we want to see how dependent the generation is upon the most recent event in the sequence. We generated schemas for two-event seeds in which the last event is the same while the first event indicates a different path. Table We proposed a set of simple sequence perturbations to relax the model's reliance on the discourse order of event mentions for event language modeling. By predicting the next event based on perturbed sequences, the model is encouraged to treat the input as a set of events. Our experiments show that these perturbations can improve identifying event schemas measured by INC accuracy both on in-domain and out-of-domain data.
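For reference, the inverse narrative cloze (INC) accuracy used throughout these analyses can be sketched as follows: each candidate sequence (the gold continuation and the confounders, each prefixed with the gold first event) is scored by the event LM, and the prediction counts as correct when the gold sequence receives the highest log-probability. The vanilla GPT-2 checkpoint and the toy sequences here are placeholders for the fine-tuned event LMs.

```python
# Sketch of the inverse narrative cloze (INC) evaluation: score the gold sequence and
# five confounding sequences (each prefixed with the gold first event) with the event LM
# and check whether the gold sequence gets the highest total log-probability.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def sequence_logprob(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    # .loss is the mean token cross-entropy; scale by the number of predicted tokens
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

def inc_correct(gold_events, confounders):
    first = gold_events[0]
    candidates = [" [TUP] ".join(gold_events)] + \
                 [" [TUP] ".join([first] + c) for c in confounders]
    scores = [sequence_logprob(c) for c in candidates]
    return scores.index(max(scores)) == 0   # index 0 is the gold sequence

gold = ["arrested police suspect NULL", "charged court suspect with"]
confounders = [["won team game NULL", "celebrated fans victory with"]]
print(inc_correct(gold, confounders))
```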
861
674
861
Training with Adversaries to Improve Faithfulness of Attention in Neural Machine Translation
Can we trust that the attention heatmaps produced by a neural machine translation (NMT) model reflect its true internal reasoning? We isolate and examine in detail the notion of faithfulness in NMT models. We provide a measure of faithfulness for NMT based on a variety of stress tests in which model parameters are perturbed, measuring faithfulness by how often the model output changes. We show that our proposed faithfulness measure for NMT models can be improved using a novel differentiable objective that rewards faithful behaviour by the model through probability divergence. Our experimental results on multiple language pairs show that our objective function is effective in increasing faithfulness and can lead to a useful analysis of NMT model behaviour and more trustworthy attention heatmaps. Our proposed objective improves faithfulness without reducing translation quality; it also seems to have a useful regularization effect on the NMT model and can even improve translation quality in some cases.
Can we trust our neural models? This question has led to a wide variety of contemporary NLP research focusing on (a) different axes of interpretability including plausibility (or interchangeably human-interpretability) Aligned with these criteria, we study faithfulness of attention in NMT, the extent to which it can reflect the true internal reasoning behind a prediction (Figure • We propose a measure for quantifying faithfulness in NMT. • We introduce a novel learning objective based on probability divergence that rewards faithful behavior and which can be included in the training objective for NMT. • We provide empirical evidence that we can improve faithfulness in an NMT model. Our approach results in a more faithful NMT model while producing better BLEU scores. We chose to study the impact of faithfulness in NMT because it is under-studied in terms of interpretability. Most previous work has focused on document or sentence-based classification tasks where attention models are not as directly useful as in NMT models. Attention is also more challenging in terms of faithfulness in the context of NMT models due to the substantial impact of the decoder component.
Intuitively, a faithful explanation should reflect the true internal reasoning of the model. Although there is no formal definition for faithfulness, a common approach in the community is to design stress tests to perturb the model parameters chosen in such a way that the model's decision should change if the model is faithful , where m is the length of the source sentence. This is to confuse the model about which part of the input is the most important one. • RandomPermute (Jain and Wallace, 2019): The attention weights are randomly permuted until a change in the model output is observed. We ensure that m t , the most important token according to attention, is always changed. We set Many prior studies of attention F (M ) is a number between 0 to 1 measuring the percentage of output tokens during inference which passed the stress tests. This metric can also be regarded as a measure of trust we can assign to the attention heatmap to fully reflect the internal reasoning of the NMT model. The conventional objective function in a sequenceto-sequence task is a cross-entropy loss F acc : where S is the training data and X and Y are source sentence and the correct translation respectively. F f aith is an additional component that rewards the model for having more faithful attention. The parameter λ f aith regulates the trade-off between between faithfulness and accuracy objectives. Consider a predictive model g θ in which an intermediate calculation is later employed to justify predictions: ) where IC(x) is the intermediate calculation on the input. A concrete example for IC(x) would be the context vector calculated by the attention mechanism. Hypothesis If there exists an intermediate calculation IC (x) that conveys a contradictory post-hoc attention compared to IC(x), then IC(x) cannot be regarded as faithful for predicting ŷ. If IC(x) is faithful, we expect the model to diverge from predicting ŷ when IC (x) is employed instead. Based on our hypothesis, we propose a divergence-based objective which mimics behavior of a faithful explanation under stress test: Here IC (x) is a stress test. This objective promotes reduction in output probability under an adversarial intermediate calculation (Figure where IC zom , IC uni and IC perm are ZeroOut-Max, Uniform and RandomPermute methods (see Sec. 2) to manipulate attention weights, respectively. λ {method} parameters regulate the contribution of each objective. We use the term F all when all λ {method} s in Eq. ( We use the Czech-English (Cs-En) dataset from IWSLT2016and the German-English (De-En) dataset from IWSLT2014.We used Moses To measure the effectiveness of the proposed objectives, we choose the best model in terms of provided faithfulness but within the 0.5 BLEU score of the maximum achieved BLEU score in the validation set. The reason is that we prefer a model that is both accurate and with faithful attention-based explanations. 
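The three attention manipulations used as stress tests and as adversarial intermediate calculations can be sketched as operations on a single decoder-step attention vector, as below. Whether the zeroed-out distribution is renormalized and how the permutation constraint is enforced are assumptions; only the names and intent of the manipulations come from the text.

```python
# Sketch of the three attention manipulations: zero out the max weight, replace the
# distribution with a uniform one, and randomly permute while ensuring the most
# important position changes. Shapes and renormalization are assumptions.
import torch

def zero_out_max(attn: torch.Tensor) -> torch.Tensor:
    out = attn.clone()
    out[out.argmax()] = 0.0
    return out / out.sum()                      # renormalize over the remaining tokens

def uniform(attn: torch.Tensor) -> torch.Tensor:
    return torch.full_like(attn, 1.0 / attn.numel())

def random_permute(attn: torch.Tensor) -> torch.Tensor:
    most_important = attn.argmax()
    while True:
        out = attn[torch.randperm(attn.numel())]
        if out.argmax() != most_important:      # make sure the top token moved
            return out

if __name__ == "__main__":
    attn = torch.softmax(torch.tensor([2.0, 0.5, 0.1, 1.0]), dim=0)
    for fn in (zero_out_max, uniform, random_permute):
        print(fn.__name__, fn(attn))
```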
Table When using F all , faithfulness of attention-based explanations for content words is increased 78% to 89%, while that of the function words is from 33% to 82%(see All column in Table An interesting observation in Table The model checkpoints used in Tables Objective BLEU% Table Improved BLEU scores for the faithful model can be due to two reasons: 1) the faithfulness objective can be seen as a regularization term which prevents the model from relying too much on the target-side context and the implicit language model in the decoder, which results in increased contribution of attention on the decoder and reducing some bias in the model. 2) penalizing the model for the lack of connection between justification and prediction forces the model to learn better translations by forcing it to justify each output in a right answer for the right reason paradigm. Figure While several studies have focused on understanding the semantic notions captured by attention While most of these works provide evidence that attention weights are not always faithful, Moradi et al. ( While prior works have mostly failed to explicitly distinguish faithfulness from plausibility in their arguments, In this paper, we proposed a method for quantifying faithfulness of NMT models. To optimize faithfulness we have defined a novel objective function that rewards faithful behavior through probability divergence. Unlike previous work, our method does not use prior knowledge or extraneous data. We also show that the additional constraint in the training objective for NMT does not harm translation quality and in some cases we see some better translations presumably due to the regularization effect of our faithfulness objective.
abstract_len: 1,027
intro_len: 1,185
abs_len: 1,027
Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks
Syntactic information, especially dependency trees, has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities. However, most existing studies suffer from the noise in the dependency trees, especially when they are automatically generated, so that intensively leveraging dependency information may introduce confusion into relation classification, and necessary pruning is of great importance in this task. In this paper, we propose a dependency-driven approach for relation extraction with attentive graph convolutional networks (A-GCN). In this approach, an attention mechanism upon graph convolutional networks is applied to different contextual words in the dependency tree obtained from an off-the-shelf dependency parser, to distinguish the importance of different word dependencies. Considering that dependency types among words also contain important contextual guidance, which is potentially helpful for relation extraction, we also include the type information in A-GCN modeling. Experimental results on two English benchmark datasets demonstrate the effectiveness of our A-GCN, which outperforms previous studies and achieves state-of-the-art performance on both datasets.
Relation extraction (RE), which aims to detect the relationship between entity mentions from raw text, is one of the most important tasks in information extraction and retrieval, and plays a crucial role in supporting many downstream natural language processing (NLP) applications such as text mining et al., 2019), question answering Recently, neural RE methods In this paper, we propose a dependency-driven neural approach for RE, where attentive graph neural network (A-GCN) is proposed to distinguish the important contextual information for this task. Furthermore, given that the dependency types (e.g., nominal subject) that associate with dependency connections are also potentially useful for RE since they contain the syntactic instruction among connected words, we further improve A-GCN by introducing type information into it. Specifically, we first obtain the dependency tree of an input sentence from an off-the-shelf toolkit, then build the graph over the dependency tree, and assign different weights to different labeled dependency connections between any two words, with the weights computed based on the connections and their dependency types, lastly predict relations by the A-GCN according to the learned weights. In doing so, not only is A-GCN able to distinguish important contextual information from dependency trees and leverage them accordingly, such that reliance on pruning strategies is unnecessary, but A-GCN can also leverage the dependency type information that is omitted by most previous studies (in particular, the studies that also use attention mechanism
RE is conventionally performed as a typical classification task. Our approach follows this paradigm by using A-GCN and incorporates dependency information to improve model performance, where the overall architecture of our model is illustrated in Figure where T X is the dependency tree of X obtained from an off-the-shelf toolkit, R is the relation type set; p computes the probability of a particular relation r 2 R given the two entities and b r the output of A-GCN, which takes X and T X as the input. Following texts start with a brief introduction of the standard GCN model, then elaborate our proposed A-GCN equipped with dependency type information, and lastly illustrate the process of applying A-GCN to the classification paradigm for RE. Generally, a good text representation is a prerequisite to achieve outstanding model performance where h (l 1) j denotes the output representation of x j from the (l-1)-th GCN layer 3 , W (l) and b (l) are trainable matrices and the bias for the l-th GCN layer, respectively, and is the ReLU activation. It is noted that in standard GCN (e.g., Eq. ( where a i,j 2 A, "•" denotes inner production, and s (l) i and s (l) i are two intermediate vectors for x i and 4 It means ti,j and tj,i are represented in different dependency types to model directions of connections between xi and xj. For example, if ti,j is nsubj, then tj,i is #nsubj. x j , respectively, which are computed by (5) with " " denoting the vector concatenation operation. Afterwards, we apply the weight p (l) i,j to the associated dependency connection between x i and x j and obtain the output representation of x i by with , W (l) , and b (l) following the same notations in Eq. ( (a typeenhanced representation for x j ) computed by where T maps the dependency type embedding e t i,j to the same dimension as h (l 1) j . Compared with standard GCN (i.e., Eq. ( Before applying A-GCN for RE, we firstly encode the input X into hidden vectors by BERT Afterwards, we concatenate the representations of the sentence (i.e., h X ) and two entities (i.e., h E 1 and h E 2 ) and apply a trainable matrix W R to the computed vector to map it to the output space by 3 Experimental Settings In the experiments, we use two English benchmark datasets for RE, namely, ACE2005EN (ACE05) To construct graphs for A-GCN, we use Standard CoreNLP Toolkits (SCT) Following Soares et al. ( For A-GCN, we randomly initialize all trainable parameters and the dependency type embeddings. For evaluation, we follow previous studies to use the standard micro-F1 scores 4 Results In the experiments, we run our A-GCN models using BERT-base and BERT-large encoder on graphs with and without applying dependency pruning strategies, which correspond to the graph built upon the combined local and global connections ("L + G"), as well as the one constructed by the full dependency graph ("Full"), respectively. We also run baselines with standard GCN and standard graph attentive networks (GAT) models and all the aforementioned baselines on the test set of ACE05 and SemEval. 16 There are several observations. First, A-GCN functions well when using BERT-base or BERTlarge as encoder, where the consistent improvement is observed over the BERT-only baselines (ID: 1) across two benchmark datasets, even though the BERT baselines have already achieve good performance. Second, for both datasets, A-GCN outperforms GAT (ID: 2, 3) and standard GCN baselines (ID: 4, 6, 8, 10, 12, 14) with the same graph (i.e., either "L + G" or "Full") and equal number of layers. 
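A simplified sketch of a single A-GCN layer consistent with the description above: attention weights over labeled dependency connections are computed from inner products of type-aware intermediate vectors and used to aggregate type-enhanced neighbor representations. Dimensions, the exact form of the intermediate vectors, and the assumption that the adjacency matrix includes self-loops are illustrative choices, not the released implementation.

```python
# Sketch of one attentive GCN (A-GCN) layer over a labeled dependency graph.
import torch
import torch.nn as nn

class AGCNLayer(nn.Module):
    def __init__(self, hidden_dim, type_dim, num_types):
        super().__init__()
        self.type_emb = nn.Embedding(num_types, type_dim)   # directional types (nsubj vs. #nsubj)
        self.score = nn.Linear(hidden_dim + type_dim, hidden_dim)  # builds the intermediate vectors
        self.type_proj = nn.Linear(type_dim, hidden_dim)            # projects type embeddings
        self.out = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h, adj, type_ids):
        # h: (n, d) token states; adj: (n, n) 0/1 dependency connections
        # (assumed to include self-loops); type_ids: (n, n) dependency-type ids.
        n = h.size(0)
        t = self.type_emb(type_ids)                                   # (n, n, type_dim)
        s = self.score(torch.cat([h.unsqueeze(1).expand(n, n, -1), t], dim=-1))
        scores = (s * h.unsqueeze(0)).sum(-1)                         # inner-product scores
        scores = scores.masked_fill(adj == 0, float("-inf"))
        p = torch.softmax(scores, dim=-1)                             # connection weights p_ij
        msg = self.out(h).unsqueeze(0) + self.type_proj(t)            # type-enhanced neighbor states
        return torch.relu((p.unsqueeze(-1) * msg).sum(dim=1))         # updated h_i, shape (n, d)
```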
Particularly, when full dependency graph is used, it is noted that, in some cases (e.g., ID: 8 for BERT-base on ACE05), standard GCN obtains very limited improvements (or even worse results) over the BERT-only baseline (ID: 1), whereas our A-GCN models (e.g., ID: 9 for BERT-base) is able to consistently outperform the BERT-only baseline and achieve higher performance. We attribute this observation to the attention mechanism used to weigh different dependency connections, which allows A-GCN to distinguish the noise in the graph and thus leverage useful dependency information accordingly. Third, among the models with different numbers of A-GCN layers, the ones (e.g., ID: 11 for BERT-base and ID: 11 for BERT-large) with two A-GCN layers achieves the highest scores, where similar tread is observed from the standard GCN baselines. Besides, we find that our A-GCN 16 For the same group of models, we report the F1 scores on the development sets in Appendix C and the mean and standard deviation of their test set results in Appendix D. models (as well as the standard GCN baselines) with the local and global connections (i.e., "L + G") consistently outperform the ones with full dependency graph (i.e., "Full"). These observations are relatively intuitive since the dependency information may introduce more noise to RE when it is leveraged in an intensive way (e.g., by using more layers or the full dependency tree without pruning). In addition, we compare our best models (with "L + G" or "Full" graphs) using BERT-large encoder and two A-GCN layers (ID: 9 and 11) with previous studies. The test results (F1 scores) are reported in Table is added to each A-GCN layer and the attention mechanism is directly applied to each dependency connection in the A-GCN layer. Therefore, compared with 5 Analyses Dependency information is supposed to be beneficial for RE because it contains long-distance wordword relations, which could be extremely useful when the given two entities are far away from each other in the input sentence. To explore the effect of A-GCN in capturing such long-distance wordword relations to help with RE, we split the test instances into different groups according to their entities' distances (i.e., the number of words between the two entities) and run models on these groups to test their performance. Figure In the main experiments, we try A-GCN with the graph built upon the combined local and global connections ("L + G"). To explore the effect of the local connections and the global connections for A-GCN, we run our approach using two A-GCN layers with the graph constructed by local connections ("L") or global connections ("G") alone. Table Compared with the standard GCN, A-GCN enhances it from two aspects: (1) using an attention 18 When there is only one word on the shortest dependency path between two entities, all global connections are included in local ones, e.g., "defamation" and "bishop" in Figure To explore in detail that how A-GCN leverages dependency connections and types to improve RE, we conduct a case study with our A-GCN models with different dependency graphs (i.e., two layers of A-GCN (Full) and A-GCN (L + G) with BERTlarge encoder) on an example sentence "A central vacuum is a vacuum motor and filtration system built inside a canister.". Figure (E 2 ) (highlighted in the red color) to be "Content-Container", whereas the baseline GCN (Full) and GCN (L + G) models fail to do so. 
We also visualize the attention weights assigned to different dependency connections extracted from the last A-GCN layer, with darker and thicker lines referring to higher weights. In this example, for A-GCN (Full), we observe that the connection between "built" and "canister" along SDP and the connection between "inside" and "canister" receive the highest weights, where this is valid because the dependency type, i.e., obl (oblique nominal), associated with the connection (between "built" and "canister") reveals that "canister" could be the position where the action (i.e., build) takes place, and is further confirmed by another dependency connection and type (i.e., case) between "inside" and "canister". Therefore, it is proved that our model learn from the contextual information carried by such important connections and results in correct RE prediction. Similarly, A-GCN (L + G) also correctly perform RE on this case by highlighting the same dependency connections as those from the A-GCN (Full) with much higher weights (because many dependency connections are filtered out). Recently, neural networks with integrating external knowledge or resources play important roles in RE because of their superiority in better capturing contextual information In this paper, we propose A-GCN to leverage dependency information for relation extraction, where an attention mechanism is applied to dependency connections to applying weighting on both connections and types so as to better distinguish the important dependency information and leverage them accordingly. In doing so, A-GCN is able to dynamically learn from different depen-dency connections so that less-informative dependencies are smartly pruned. Experimental results and analyses on two English benchmark datasets for relation extraction demonstrate the effectiveness of our approach, especially for entities with long word-sequence distances, where state-of-theart performance is obtained on both datasets. Table
abstract_len: 1,294
intro_len: 1,590
abs_len: 1,294
DualSum: a Topic-Model based approach for update summarization
Update summarization is a new challenge in multi-document summarization focusing on summarizing a set of recent documents relative to another set of earlier documents. We present an unsupervised probabilistic approach to model novelty in a document collection and apply it to the generation of update summaries. The new model, called DUALSUM, results in the second or third position in terms of the ROUGE metrics when tuned for previous TAC competitions and tested on TAC-2011, being statistically indistinguishable from the winning system. A manual evaluation of the generated summaries shows state-of-the-art results for DUALSUM with respect to focus, coherence and overall responsiveness.
Update summarization is the problem of extracting and synthesizing novel information in a collection of documents with respect to a set of documents assumed to be known by the reader. This problem has received much attention in recent years, as can be observed in the number of participants to the special track on update summarization organized by DUC and TAC since 2007. The problem is usually formalized as follows: Given two collections A and B, where the documents in A chronologically precede the documents in B, generate a summary of B under the assumption that the user of the summary has already read the documents in A. Extractive techniques are the most common approaches in multi-document summarization. Summaries generated by such techniques consist of sentences extracted from the document collection. Extracts can have coherence and cohesion problems, but they generally offer a good tradeoff between linguistic quality and informativeness. While numerous extractive summarization techniques have been proposed for multidocument summarization Recently, Bayesian models have successfully been applied to multi-document summarization showing state-of-the-art results in summarization competitions In this article, we propose a novel nonparametric Bayesian approach for update summarization. Our approach, which is a variation of Latent Dirichlet Allocation (LDA)
Most Bayesian approaches to summarization are based on topic models. These generative models represent documents as mixtures of latent topics, where a topic is a probability distribution over words. In TOPICSUM A commonality of all these models is the use of collection and document-specific distributions in order to distinguish between the general and specific topics in documents. In the context of summarization, this distinction helps to identify the important pieces of information in a collection. Models that use more structure in the representation of documents have also been proposed for generating more coherent and less redundant summaries, such as HIERSUM A number of techniques have been proposed to rank sentences of a collection given a word distribution where w is a word from the vocabulary V. This strategy is called KLSum. Usually, a smoothing factor τ is applied on the candidate distribution S in order to avoid the divergence to be undefined While hierarchical topic modeling approaches have shown remarkable effectiveness in learning the latent topics of document collections, they are not designed to capture the novel information in a collection with respect to another one, which is the primary focus of update summarization. The goal of update summarization is to generate an update summary of a collection B of recent documents assuming that the users already read earlier documents from a collection A. We refer to collection A as the base collection and to collection B as the update collection. Update summarization is related to novelty detection which can be defined as the problem of determining whether a document contains new information given an existing collection Update summarization is also related to contrastive summarization, i.e. the problem of jointly generating summaries for two entities in order to highlight their differences The most common approach for update summarization is to apply a normal multi-document summarizer, with some added functionality to remove sentences that are redundant with respect to collection A. This can be achieved using simple filtering rules Another approach is to introduce specific features intended to capture the novelty in collection B. For example, comparing collections A and B, FastSum derives features for the collection B such as number of named entities in the sentence that already occurred in the old cluster or the number of new content words in the sentence not already mentioned in the old cluster that are subsequently used to train a Support Vector Machine classifier The input for DUALSUM is a set of pairs of collections of documents C = {(A i , B i )} i=1...m , where A i is a base document collection and B i is an update document collection. We use c to refer to a collection pair (A c , B c ). In DUALSUM, documents are modeled as a bag of words that are assumed to be sampled from a mixture of latent topics. Each word is associated with a latent variable that specifies which topic distribution is used to generate it. Words in a document are assumed to be conditionally independent given the hidden topic. As in previous Bayesian works for summarization To capture the differences between the base and the update collection for each pair c, DUALSUM learns two topics for every collection pair. The joint topic, φ Ac captures the common information between the two collections in the pair, i.e. the main event that both collections are discussing. 
The update topic, φ Bc focuses on the specific aspects that are specific of the documents inside the update collection. In the generative model, • For a document d in a collection A c , words can be originated from one of three different topics: φ G , φ cd and φ Ac , the last one of which captures the main topic described in the collection pair. • For a document d in a collection B c , words can be originated from one of four different topics: φ G , φ cd , φ Ac and φ Bc . The last one will capture the most important updates to the main topic. To make this representation easier, we can also state that both collections are generated from the four topics, but we constrain the topic probability 2. For each collection pair c = (A c , B c ): for φ Bc to be always zero when generating a base document. We denote u cd ∈ {A, B} the type of a document d in pair c. This is an observed, Boolean variable stating whether the document d belongs to the base or the update collection inside the pair c. The generation process of documents in DU-ALSUM is described in Figure Unlike for the word distributions, mixing probabilities are drawn from a Dirichlet distribution with asymmetric priors. The prior knowledge about the origin of words in the base and update collections is again encoded at the level the hyper-parameters. For example, if we set γ A = (5, 3, 2, 0), this would reflect the intuition that, on average, in the base collections, 50% of the words originate from the background distribution, 30% from the document-specific distribution, and 20% from the joint topic. Similarly, if we set γ B = (5, 2, 2, 1), the prior reflects the assumption that, on average, in the update collections, 50% of the words originate from the background distribution, 20% from the document-specific distribution, 20% from the joint topic, and 10% from the novel, update topic In order to find the optimal model parameters, the following equation needs to be computed: Omitting hyper-parameters for notational simplicity, the joint distribution over the observed variables is: where ∆ denotes the 4-dimensional simplex Variational approaches Collapsed Gibbs sampling is a particular case of Markov Chain Monte Carlo (MCMC) that involves repeatedly sampling a topic assignment for each word in the corpus. A single iteration of the Gibbs sampler is completed after sampling a new topic for each word based on the previous assignment. In a collapsed Gibbs sampler, the model parameters are integrated out (or collapsed), allowing to only sample z. Let us call w cdn the n-th word in document d in collection c, and z cdn its topic assignment. For Gibbs sampling, we need to calculate p(z cdn |w, u, z -cdn ) where z -cdn denotes the random vector of topic assignments except the assignment z cdn . where -cdn,j denotes the number of times word v is assigned to topic j excluding current assignment of word w cdn and n (cd) -cdn,k denotes the number of words in document d of collection c that are assigned to topic j excluding current assignment of word w cdn . After each sampling iteration, the model parameters can be estimated using the following formulas 5 . The interested reader is invited to consult where k ∈ K, n k denotes the number of times word v is assigned to topic k, and n (cd) k denotes the number of words in document d of collection c that are assigned to topic k. By the strong law of large numbers, the average of sample parameters should converge towards the true expected value of the model parameter. 
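As an illustration, the collapsed Gibbs update for a single word can be sketched as follows, assuming the usual count bookkeeping (counts exclude the word's current assignment) and the constraint that base documents never draw from the update topic; the symmetric word-level smoothing `beta` and the role indices are simplifying assumptions.

```python
# Simplified collapsed Gibbs step for a DualSum-style model with four topic roles.
import numpy as np

K_BACKGROUND, K_DOC, K_JOINT, K_UPDATE = 0, 1, 2, 3

def sample_topic(v, doc_topic_counts, topic_word_counts, topic_totals,
                 gamma, beta, vocab_size, is_base_document):
    """Draw a new topic role for word v from its collapsed conditional;
    all count arrays are assumed to exclude the current assignment."""
    K = len(gamma)
    probs = np.zeros(K)
    for k in range(K):
        if is_base_document and k == K_UPDATE:
            continue                       # base documents have no update topic
        word_term = (topic_word_counts[k, v] + beta) / (topic_totals[k] + beta * vocab_size)
        doc_term = doc_topic_counts[k] + gamma[k]
        probs[k] = word_term * doc_term
    probs /= probs.sum()
    return np.random.choice(K, p=probs)
```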
Therefore, good estimates of the model parameters can be obtained averaging over the sampled values. As suggested by The Bayesian graphical model described in the previous section can be run over a set of news collections to learn the background distribution, a joint distribution for each collection, an update distribution for each collection and the documentspecific distributions. Once this is done, one of the learned collections can be used to generate the summary that best approximates this collection, using the greedy algorithm described by • DUALSUM's choice of hyper-parameters affects how the topics are learned. • The documents can be represented with ngrams of different lengths. • It is possible to generate a summary that approximates the joint distribution, the updateonly distribution, or a combination of both. This section describes how these parameters have been tuned. We use the TAC 2008 and 2009 update task datasets as training set for tuning the hyperparameters for the model, namely the pseudocounts for the two Dirichlet priors that affects the topic mix assignment for each document. By performing a grid search over a large set of possible hyper-parameters, these have been fixed to γ A = (90, 190, 50, 0) and γ B = Regarding the base collection, this can be interpreted as setting as prior knowledge that roughly 27% of the words in the original dataset originate from the background distribution, 58% from the document-specific distributions, and 15% from the topic of the original collection. We remind the reader that the last value in γ A is set to zero because, due to the problem definition, the original collection must have no words generated from the update topic, which reflects the most recent developments that are still not present in the base collections A. Regarding the update set, 27% of the words are assumed to originate again from the background distribution, 51% from the document-specific distributions, 14% from an topic in common with the original collection, and 8% from the updatespecific topic. One interesting fact to note from these settings is that most of the words belong to topics that are specific to single documents (58% and 51% respectively for both sets A and B) and to the background distribution, whereas the joint and update topics generate a much smaller, limited set of words. This helps these two distributions to be more focused. The other settings mentioned at the beginning of this section have been tuned using the TAC-2010 dataset, which we reserved as our development set. Once the different document-specific and collection-specific distributions have been obtained, we have to choose the target distribution T to with which the possible summaries will be compared using the KL metric. Usually, the human-generated update summaries not only include the terms that are very specific about the last developments, but they also include a little background regarding the developing event. Therefore, we try, for KLSum, a simple mixture between the joint topic (φ A ) and the update topic (φ B ). Figure A second parameter is the size of the n-grams for representing the documents. The original implementations of SUMBASIC DUALSUM is a modification of TOPICSUM designed specifically for the case of update summarization, by modifying TOPICSUM's graphical model in a way that captures the dependency between the joint and the update collections. 
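The target distribution and the greedy KLSum selection discussed above can be sketched as follows; the smoothing factor, length budget, and mixture weight are illustrative values rather than the tuned settings reported in the paper.

```python
# Sketch: mixture target over the joint and update topics, plus greedy KLSum selection.
import math
from collections import Counter

def mixture_target(phi_joint, phi_update, weight):
    """Target word distribution: a mixture of the joint and update topic distributions."""
    vocab = set(phi_joint) | set(phi_update)
    return {w: (1 - weight) * phi_joint.get(w, 0.0) + weight * phi_update.get(w, 0.0)
            for w in vocab}

def kl_to_target(target, counts, vocab_size, tau=1e-3):
    """KL(target || smoothed candidate-summary unigram distribution)."""
    total = sum(counts.values()) + tau * vocab_size
    return sum(p * math.log(p / ((counts.get(w, 0) + tau) / total))
               for w, p in target.items() if p > 0)

def klsum(target, sentences, vocab_size, max_words=100):
    """Greedily add the tokenized sentence that most reduces the KL divergence."""
    summary, counts = [], Counter()
    candidates = [list(s) for s in sentences]
    while candidates and sum(counts.values()) < max_words:
        best = min(candidates,
                   key=lambda s: kl_to_target(target, counts + Counter(s), vocab_size))
        summary.append(best)
        counts += Counter(best)
        candidates.remove(best)
    return summary
```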
Still, it is important to discover whether the new graphical model actually improves over simpler applications of TOPICSUM to this task. The three baselines that we have considered are: • Running TOPICSUM on the set of collections containing only the update documents. We call this run TOPICSUM B . • Running TOPICSUM on the set of collections containing both the base and the update documents. Contrary to the previous run, the topic model for each collection in this run will contain information relevant to the base events. We call this run TOPICSUM A∪B . • Running TOPICSUM twice, once on the set of collections containing the update documents, and the second time on the set of collections containing the base documents. Then, for each collection, the obtained base and update models are combined in a mixture model using a mixture weight between zero and one. The weight has been tuned using TAC-2010 as development set. We call this run TOPICSUM A +TOPICSUM B . DUALSUM and the three baselines The top three systems in TAC-2011 have been included for comparison. The results between these three systems, and between them and DU-ALSUM, are all indistinguishable at 95% confidence. Note that the best baseline, TOPICSUM B , is quite competitive, with results that are indistinguishable to the top participants in this year's evaluation. Note as well that, because we have five different runs for our algorithms, whereas we just have one output for the TAC participants, the confidence intervals in the second case were slightly bigger when checking for statistical significance, so it is slightly harder for these systems to assert that they outperform the baselines with 95% confidence. These results would have made DUALSUM the second best system for ROUGE-1 and ROUGE-SU4, and the third best system in terms of ROUGE-2. The supplementary materials contain a detailed example of the the topic model obtained for the background in the TAC-2011 dataset, and the base and update models for collection D1110. As expected, the top unigrams and bigrams are all closed-class words and auxiliary verbs. Because trigrams are longer, background trigrams actually include some content words (e.g. university or director). Regarding the models for φ A and φ B , the base distribution contains words related to the original event of an earthquake in Sichuan province (China), and the update distribution focuses more on the official (updated) death toll numbers. It can be noted here that the tokenizer we used is very simple (splitting tokens separated with white-spaces or punctuation) so that numbers such as 7.9 (the magnitude of the earthquake) and 12,000 or 14,000 are divided into two tokens. We thought this might be a for the bigram-based system to produce better results, but we ran the summarizers with a numbers-aware tokenizer and the statistical differences between versions still hold. While the ROUGE metrics provides an arguable estimate of the informativeness of a generated summary, it does not account for other important aspects such as the readability or the overall responsiveness. To evaluate such aspects, a manual evaluation is required. A fairly standard approach for manual evaluation is through pairwise comparison For each of the 44 collections in TAC-2011, 3 ratings were collected from raters 7 . Results are reported in Table The running time for summarizing the TAC collections with DualSum, averaged over a hundred runs, is 4.97 minutes, using one core (2.3 GHz). Memory consumption was 143 MB. 
It is important to note as well that, while TOP-ICSUM incorporates an additional layer to model topic distributions at the sentence level, we noted early in our experiments that this did not improve the performance (as evaluated with ROUGE) and consequently relaxed that assumption in Dual-Sum. This resulted in a simplification of the model and a reduction of the sampling time. While five minutes is fast enough to be able to experiment and tune parameters with the TAC collections, it would be quite slow for a realtime summarization system able to generate summaries on request. As can be seen from the plate diagram in Figure The good news is that this background distribution will contain closed-class words in the language, which are domain-independent (see supplementary material for examples). Therefore, we can generate this distribution from one of the TAC datasets only once, and then it can be reused. Fixing the background distribution to a pre-computed value requires a very simple modification of the Gibbs sampling implementation, which just needs to adjust at each iteration the collection and document-specific models, and the topic assignment for the words. Using this modified implementation, it is now possible to summarize a single collection independently. The summarization of a single collection of the size of the TAC collections is reduced on average to only three seconds on the same hardware settings, allowing the use of this summarizer in an on-line application. The main contribution of this paper is DUALSUM, a new topic model that is specifically designed to identify and extract novelty from pairs of collections. It is inspired by TOPICSUM The generated summaries, tested on the TAC-2011 collection, would have resulted on the second and third position in the last summarization competition according to the different ROUGE scores. This would make DUALSUM statistically indistinguishable from the top system with 0.95 confidence. We also propose and evaluate the applicability of an alternative implementation of Gibbs sam-pling to on-line settings. By fixing the background distribution we are able to summarize a distribution in only three seconds, which seems reasonable for some on-line applications. As future work, we plan to explore the use of DUALSUM to generate more general contrastive summaries, by identifying differences between collections whose differences are not of temporal nature.
abstract_len: 693
intro_len: 1,375
abs_len: 693
How to Ask Good Questions? Try to Leverage Paraphrases
Given a sentence and its relevant answer, how to ask good questions is a challenging task, which has many real applications. Inspired by humans' paraphrasing capability to ask questions of the same meaning but with diverse expressions, we propose to incorporate paraphrase knowledge into question generation (QG) to generate human-like questions. Specifically, we present a two-hand hybrid model leveraging a self-built paraphrase resource, which is automatically constructed by a simple back-translation method. On the one hand, we conduct multi-task learning with sentence-level paraphrase generation (PG) as an auxiliary task to supplement paraphrase knowledge to the task-share encoder. On the other hand, we adopt a new loss function for diversity training to introduce more question patterns to QG. Extensive experimental results show that our proposed model obtains clear performance gains over several strong baselines, and further human evaluation validates that our model can ask questions of high quality by leveraging paraphrase knowledge.
Question generation (QG) is an essential task for NLP, which focuses on generating grammatical questions for given paragraphs or sentences. It plays a vital role in various realistic scenarios. For educational purposes, QG can create reading comprehension materials for language learners tems Recent neural network-based methods have achieved promising results on QG, most of which are based on the seq2seq attention framework Although much progress has been made for QG, existing approaches do not explicitly model the "notorious" lexical and syntactic gaps in the generation process. That is, some parts of two texts (e.g. the input sentence and reference question, the reference question and generated question) may convey the same meaning but use different words, phrases or syntactic patterns. In real communica- tion, humans often paraphrase a source sentence to ask questions which are grammatical and coherent. Take SQuAD To address this issue, we introduce paraphrase knowledge in the QG process to generate humanlike questions. The sketch of our design is illustrated in Figure We conduct extensive experiments on SQuAD and MARCO
For current mainstream neural network-based methods on QG, most approaches utilize the Seq2Seq model with attention mechanism In order to make use of the context information of paragraphs, Paraphrase knowledge has been used to improve many NLP tasks, such as machine translation, ques-tion answering, and text simplification. In this section, we first describe two baseline models we used: feature-enriched pointer-generator and language modeling enhanced QG. Then we explain how to obtain paraphrase resources and show the quality statistics. Furthermore, we describe in detail two modules of utilizing paraphrase knowledge: the PG auxiliary task and the min loss function, as well as their combination. The overall structure of our hybrid model is shown in Figure where w i , a i , n i , p i , u i respectively represents embeddings of word, answer position, name entity, POS and word case. Same as the decoder used by The training objective is to minimize the negative log likelihood of the target sequence q: In general, the input sequence will firstly be fed into the language modeling module to get the semantic hidden states, then these states will be concatenated with the input sequence to obtain the input of the feature-rich encoder: where h lm i is the semantic hidden state of LM module. The loss function of language modeling is defined as: where P lm (w t+1 |w <t+1 ) and P lm (w t-1 |w >t-1 ) represent the generation probabilities of the next word and the previous word, respectively. As a result, the total loss of language modeling enhanced QG is formulated as: where β is a hyper-parameter to control the relative importance between language modeling and QG. Follow the work of The paraphrasing strategy is independent of the neural-based QG model, and we can use any advanced methods to generate paraphrases. In our work, we employ a simple back-translation method to automatically create paraphrases of both sentences and questions. Specially, we use a mature translation tool Google Translate, which is a free and accessible online service. We translate an original text into German and then back to English to get its paraphrase. As a result, we obtain s which is the paraphrase of the input sentence s, and q which is the paraphrase of the golden reference question q. In the following section, we will illustrate the way to use (s, s ) as a training pair of the auxiliary PG task, and adopt (q, q ) as multireferences to conduct the diversity training module. The way we expand paraphrases does not need extra PG datasets. Besides, it guarantees the PG and QG tasks share the same input s, so we can optimize their sharing encoder simultaneously and train the model end-to-end. Synonym Syntactic Fluency sentence-paraphrase 74% 7% 67% question-paraphrase 58% 44% 67% Table To assess the quality of expanded paraphrases, we randomly select 100 paraphrases respectively from sentences and questions, and ask two annotators to judge the Synonym conversions and Syntactic transitions, as well as the paraphrase F luency. As shown in Table The multi-task learning mechanism with PG aims at introducing paraphrase knowledge into QG. In general, we employ a parallel architecture to combine PG and QG, where QG is the main task and PG serves as an auxiliary task. To make our model Table easy to implement and can be trained end-to-end, we conduct the multi-task learning in a simultaneous mode. In detail, feature-riched embeddings will first be encoded by the task-share encoder and then be fed into PG and QG decoders respectively. 
The PG and QG decoders both have two layers and they are identical in the structure but different in parameters. In the auxiliary PG task, the input is the original sentence s, and the training objective is to minimize the cross-entropy loss: where y pg t is the generated word of PG at time step t and s t is the t th word in the expanded sentence paraphrase s . To enhance the impact of auxiliary PG task so that the paraphrase knowledge can be absorbed by the question generation process more deeply, we employ a soft sharing strategy between the first layer of PG and QG decoders. The soft sharing strategy loosely couples parameters and encourages them close to each other in representation space. Following the work of where D is the set of shared decoder parameters, θ and φ respectively represent the parameters of the main QG task and the auxiliary PG task. For the QG task, a general training goal is to fit the decoded results with the reference questions. To provide more generation patterns, we adjust the training target from one golden reference question to several reference questions by using expanded paraphrase resources. We adopt a min-loss function among several references, and the loss function defined by Equation 3 can be rewritten as: where Q is the set of gold reference question and expanded question paraphrase {q, q }. Each generated question will separately calculate the negative log-likelihood of its multiple references, and the final loss is the minimum of them. Under this training process, our model can learn multiple question expressions which are not in the original training dataset, so that the generation can be more diverse. Besides, inspired by the work of Combining the above modules, we get our hybrid model. During training, the feature-enriched inputs are first encoded by the task-share encoder. Then the semantic hidden states are fed into PG decoder and QG decoder, respectively. For PG decoder, it has one fitting target (expanded sentence paraphrase). For QG decoder, it calculates the cross-entropy loss with both the gold reference question and the question paraphrase and regards the minimum loss of them as the QG loss. The auxiliary PG task and diversity training strategy simultaneously optimize the question generation process. The combined training loss function can be defined as: where α and λ are both hyper-parameters. We will describe the chosen of these hyper-parameters later. 4 Experimental Settings Our experiments are based on two reading comprehension datasets: SQuAD (2016) and MARCO (2016). On SQuAD, since there are two different splits that are most often used, we conduct experiments on both two splits on sentence-level. For The results of previous works are copied from their original papers. Baseline-1 and Baseline-2 refer to Featureenriched Pointer-generator and LM enhanced QG respectively. Bn: BLEU-n, MET: METOER. Du Split We expand the datasets using the paraphrase expansion approach described in Section 3.2. After that, one sample of the expanded dataset is in the form of ((sentence, sentence paraphrase), (question, question paraphrase), answer). For fair comparison, we report the following recent works on sentence-level Du and Zhou Splits: s2s NQG++ M2S+cp A-P-Hybrid s2s-a-ct-mp-gsa ASs2s LM enhanced QG Q-type Sent-Relation We evaluate the performance of our models using BLEU We set the vocabulary as the most frequent 20,000 words. We use 300-dimensional GloVe word vectors as initialization of the word embeddings. 
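For concreteness, the back-translation expansion and the two training devices described above (the min-loss over multiple references and the soft sharing between the first PG and QG decoder layers) might be sketched as follows; the `translate` interface, the per-reference logits, and the coefficient value are assumptions for illustration, not the authors' code.

```python
# Sketch: paraphrase expansion, min-reference loss, and soft parameter sharing.
import torch
import torch.nn.functional as F

def back_translate(text, translate, pivot="de"):
    """Paraphrase by round-trip translation (the paper uses Google Translate);
    translate(text, src, tgt) is a hypothetical MT interface."""
    return translate(translate(text, src="en", tgt=pivot), src=pivot, tgt="en")

def min_reference_loss(per_ref_logits, references, pad_id=0):
    """Cross-entropy against the gold question and its paraphrase; only the
    smallest loss is kept and back-propagated."""
    losses = [F.cross_entropy(logits, ref, ignore_index=pad_id)
              for logits, ref in zip(per_ref_logits, references)]
    return torch.stack(losses).min()

def soft_sharing_penalty(qg_layer_params, pg_layer_params, coeff=1e-6):
    """L2 distance keeping the loosely coupled QG and PG decoder layers close."""
    return coeff * sum(((p - q) ** 2).sum()
                       for p, q in zip(qg_layer_params, pg_layer_params))
```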
Answer position and token lexical features are randomly initialized to 32-dimensional vectors through truncated normal distribution. The maximum lengths of input sequence and output sequence are 100 and 40, respectively. The hidden size of the encoder, decoder, and language modeling LSTMs are all 512. We use Adagrad optimization with learning rate 0.15 for training. The batch size is 32 and the beam search decoding size is 12. To alleviate the volatility of the training procedure, we get the average model of the 5 checkpoints closest to the best-trained model on development set. The experimental results on two splits of SQuAD are shown in Table Especially for baseline-1, the performance gains of our model are more obvious. Our hybrid model-1 outperforms baseline-1 by 1.52 points on Zhou Split and 1.34 points on Du Split, which are large margins for this challenging task. Even based on this weak baseline, our method also achieves the state-of-the-art, 16.55 BLEU-4 score on Du Split for sentence-level QG. The previous work of CGC-QG We also conduct experiments on MARCO, and the results are shown in Table Specifically, SQuAD and MARCO are built in different ways. The questions in SQuAD are generated by crowd-workers, while questions in MARCO are sampled from real user queries. The experimental results on two datasets validate the generalization and robustness of our models. As shown in Table Effect of Diversity Training with Min-loss Function From the results in Table Effect of Data Augmentation A straightforward way to leverage paraphrase knowledge is data augmentation. To test whether it works by simply adding paraphrase data as external training data, we also conduct an experiment based on the question paraphrase resource. We add the (s, q ) pairs into the training dataset, where s represents the input sentence and q denotes the paraphrase of the golden reference. Under this setting, we double the training samples. Unfortunately, as shown in Table To investigate whether the paraphrase knowledge introduces more diverse expressions, we conduct evaluations on the distinct metric We also verify the effectiveness of the soft sharing mechanism by removing it from the full hybrid models. The results are displayed in The soft sharing coefficient hyper-parameter λ is 1 × 10 -6 , intuitively chosen by balancing the crossentropy and regularization losses according to Figure To further assess the quality of generated questions, we perform human evaluation to compare our hybrid model-2 with the strong baseline of language modeling enhanced QG. We randomly select 100 samples from SQuAD (Zhou Split) and ask three annotators to score these generated questions according to three aspects: Fluency: which measures whether a question is grammatical and fluent; Relevancy: which measures whether the question is relevant to the input context; Answerability: which indicates whether the question can be answered by the given answer. The rating score is set to [0, 2]. The evaluation results are shown in Table We list two examples of generated questions in Table To further test the generalization of our proposed methods, we use other paraphrasing methods to construct the paraphrase dataset. PPDB: for each non-stop word and phrase, looking it up in PPDB (2013) and replacing it with its synonyms. NMT: another back-translation method using a pre-trained Transformer (2017) model. Mixed: expanding input sentences with Google Trans and expanding reference questions with PPDB. 
The results show that the Mixed paraphrase method even obtains better results than the mature Google Translate, which proves that our proposed architecture is effective across different paraphrasing methods and has potential for further improvement. In this paper, we propose a two-hand hybrid model leveraging paraphrase knowledge for QG. The experimental results of independent modules and hybrid models prove that our models are effective and transferable. Besides, human evaluation results demonstrate that the paraphrase knowledge helps our model ask more human-like questions of high quality. In the future, we will explore more diverse and advanced paraphrase expansion methods for both sentence- and paragraph-level QG. Moreover, we will apply our methods to other similar tasks, such as sentence simplification.
abstract_len: 1,049
intro_len: 1,139
abs_len: 1,049
Generating Summaries with Topic Templates and Structured Convolutional Decoders
Existing neural generation approaches create multi-sentence text as a single sequence. In this paper we propose a structured convolutional decoder that is guided by the content structure of target summaries. We compare our model with existing sequential decoders on three data sets representing different domains. Automatic and human evaluation demonstrate that our summaries have better content coverage.
Abstractive multi-document summarization aims at generating a coherent summary from a cluster of thematically related documents. Recently, Like most previous work on neural text generation In this work we propose a neural model which is guided by the topic structure of target summaries, i.e., the way content is organized into sentences and the type of content these sentences discuss. Our model consists of a structured decoder which is trained to predict a sequence of sentence topics that should be discussed in the summary and to generate sentences based on these. We extend the convolutional decoder of Although content structure has been largely unexplored within neural text generation, it has been been recognized as useful for summarization. To evaluate our model, we introduce WIKICAT-SUM, a dataset agriocnemis zerafica EOT global distribution: the species is known from north-west uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african agriocnemis. record from angola unlikely. northeastern africa distribution: the species was listed by tsuda for sudan. [• • • ]. EOP very small, about 20mm. orange tail. advised agriocnemis sp. id by kd dijkstra: [• • • ] EOP same creature as previously posted as unknown, very small, about 20mm, over water, top view. advised probably agriocnemis, "whisp" damselfly. EOP [• • • ] EOP justification: this is a widespread species with no known major widespread threats that is unlikely to be declining fast enough to qualify for listing in a threatened category. it is therefore assessed as least concern. EOP the species has been recorded from northwest uganda and sudan, through niger to mauritania and [• • • ] EOP the main threats to the species are habitat loss due to agriculture, urban development and drainage, as well as water pollution. which consists of Wikipedia abstracts and source documents and is representative of three domains, namely Companies, Films, and Animals. In addition to differences in vocabulary and range of topics, these domains differ in terms of the linguistic characteristics of the target summaries. We compare single sequence decoders and structured decoders using ROUGE and a suite of new metrics we propose in order to quantify the content adequacy of the generated summaries. We also show that structured decoding improves content coverage based on human judgments.
The Wikipedia lead section introduces the entity (e.g., Country or Brazil) the article is about, highlighting important facts associated with it. We explicitly model the topic structure of summaries, under the assumption that documents cover different topics about a given entity, while the summary covers the most salient ones and organizes them into a coherent multi-sentence text. We further assume that different lead summaries are appropriate for different entities (e.g. Animals github.com/lauhaide/WikiCatSum. vs. Films) and thus concentrate on specific domains. We associate Wikipedia articles with "domains" by querying the DBPedia knowledge-base. A training instance in our setting is a (domainspecific) paragraph cluster (multi-document input) and the Wikipedia lead section (target summary). We derive sentence topic templates from summaries for Animals, Films, and Companies and exploit these to guide the summariser. However, there is nothing inherent in our model that restricts its application to different domains. Our model takes as input a set of ranked paragraphs where s t denotes the t-th sentence. We adopt an encoder-decoder architecture which makes use of convolutional neural networks (CNNs; A hierarchical convolutional decoder generates the target sentences (based on the encoder outputs). Specifically, a document-level decoder first generates sentence vectors (LSTM Document Decoder in Figure The document-level decoder builds a sequence of sentence representations (s 1 , • • • , s |S| ). For exam- ple, s 1 in Figure where h t is the LSTM hidden state of step t and c s t is the context vector computed by attending to the input. The initial hidden state h 0 is initialized with the averaged sum of the encoder output states. We use a soft attention mechanism where α s jt is the attention weight for the document-level decoder attending to input token x j at time step t. Each sentence s t = (y t1 , . . . , y t|st| ) in target summary S is generated by a sentence-level decoder. The convolutional architecture proposed in In contrast to recurrent networks where initial conditioning information is used to initialize the hidden state, in the convolutional decoder this information is introduced via an attention mechanism. In this paper we extend the multi-step attention The output vectors for each layer l in the convolutional decoder, when generating tokens for the t-th sentence are where o l ti is obtained by adding the corresponding sentence state s t produced by the document-level decoder (Equation ( The prediction of word y ti is conditioned on the output vectors of the top convolutional layer, as ). The model is trained to optimize negative log likelihood L N LL . To further render the document-level decoder topic-aware, we annotate the sentences of groundtruth summaries with topic templates and force the model to predict these. To discover topic templates from summaries, we train a Latent Dirichlet Allocation model (LDA; We train the document-level decoder to predict the topic k t of sentence s t as an auxiliary task, P (k t |s 1:t-1 ) = softmax(W k (s t )), and optimize the summation of the L N LL loss and the negative log likelihood of P (k t |s 1:t-1 ). Data Our WIKICATSUM data set includes the first 800 tokens from the input sequence of paragraphs We compute recall ROUGE scores of the input documents against the summaries to asses the amount of overlap and as a reference for the interpretation of the scores achieved by the models. Across domains content overlap (R1) is ˜50 points. 
However, R2 is much lower indicating that there is abstraction, paraphrasing, and content selection in the summaries with respect to the input. We rank input paragraphs with a weighted TF-IDF similarity metric which takes paragraph length into account The column TopicNb in Table We compared against two baselines: the Transformer sequence-to-sequence model (TF-S2S) of At test time, we use beam size of 5 for all models. The structured decoder explores at each sentence step 5 different hypotheses. Generation stops when the sentence decoder emits the End-Of-Document (EOD) token. The model trained to predict topic labels, will predict the End-Of-Topic label. This prediction is used as a hard constraint by the document-level decoder, setting the probability of the EOD token to 1. We also use trigram blocking Company Film Animal R1 R2 RL R1 R2 RL R1 R2 RL TF-S2S . Automatic Evaluation Our first evaluation is based on the standard ROUGE metric We also make use of two additional automatic metrics. They are based on unigram counts of content words and aim at quantifying how much the generated text and the reference overlap with respect to the input We complemented the automatic evaluation with two human-based studies carried out on Amazon Mechanical Turk (AMT) over 45 randomly selected examples from the test set (15 from each domain). We compared the TS-S2S, CV-S2S and CV-S2D+T models. The first study focused on assessing the extent to which generated summaries retain salient information from the input set of paragraphs. We fol- lowed a question-answering (QA) scheme as proposed in We collected 3 judgements per system-question pair. Table The second study assessed the overall content and linguistic quality of the summaries. We asked judges to rank (lower rank is better) system outputs according to Content (does the summary appropriately captures the content of the reference?), Fluency (is the summary fluent and grammatical?), Succinctness (does the summary avoid repetition?). We collected 3 judgments for each of the 45 examples. Participants were presented with the gold summary and the output of the three systems in random order. Over all domains, the ranking of the CV-S2D+T model is better than the two single-sequence models TS-S2S and CONVS2S. We introduced a novel structured decoder module for multi-document summarization. Our decoder is aware of which topics to mention in a sentence as well as of its position in the summary. Comparison of our model against competitive singlesequence decoders shows that structured decoding yields summaries with better content coverage. A.1 Data WikiSum consist of Wikipedia articles each of which are associated with a set of reference documents. To discover sentence topic templates in summaries, we used the Gensim framework We used the following hyperparameters to train topic models with Gensim In all convolutional models we used dropout For the transformer-based baseline we applied dropout (with probability of 0.1) before all linear layers and label smoothing We select the best models based on ROUGE scores on the development set. As for the data, we discarded examples where the lead contained sentences longer than 200 tokens (often been long enumerations of items). For the training of all models we only retained those data examples fitting the maximum target length of the structured decoder, 15 sentences with maximum length of 40 tokens (sentences longer than this where split). We used a source and target vocabulary of 50K words for all datasets. 
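A schematic sketch of the document-level decoder described earlier: an LSTM that, at each sentence step, attends over the encoder token states to produce the sentence vector s_t that conditions the convolutional sentence decoder. The combination of hidden state and context vector, and all dimensions, are simplifications rather than the exact model.

```python
# Sketch of the document-level (sentence-state) decoder with soft attention.
import torch
import torch.nn as nn

class DocumentDecoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)
        self.attn = nn.Linear(dim, dim)

    def step(self, prev_input, state, enc_outputs):
        # prev_input: (B, d); enc_outputs: (B, T, d) encoder token states.
        h, c = self.cell(prev_input, state)
        scores = torch.bmm(enc_outputs, self.attn(h).unsqueeze(-1)).squeeze(-1)  # (B, T)
        alpha = torch.softmax(scores, dim=-1)                                    # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_outputs).squeeze(1)          # (B, d)
        s_t = torch.tanh(h + context)   # sentence vector fed to the CNN sentence decoder
        return s_t, (h, c)
```

Deriving sentence-level topic labels with LDA (the paper uses Gensim) can likewise be sketched as follows; the preprocessing and the per-domain number of topics are assumptions here.

```python
# Sketch: label each summary sentence with its most probable LDA topic (its "topic template").
from gensim import corpora, models

def label_summary_sentences(summaries, num_topics=30):
    # summaries: list of summaries, each a list of tokenized sentences.
    sentences = [sent for summ in summaries for sent in summ]
    dictionary = corpora.Dictionary(sentences)
    bows = [dictionary.doc2bow(sent) for sent in sentences]
    lda = models.LdaModel(bows, num_topics=num_topics, id2word=dictionary)
    return [max(lda.get_document_topics(bow), key=lambda t: t[1])[0] for bow in bows]
```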
On decoding we normalise log-likelihood of the candidate hypotheses y by their length, |y| α with α = 1 In the automatic evaluation we used pyrouge 6 and ROUGE-1.5.5.pl with stemming (parameters= "-c 95 -r 1000 -n 2 -m"). 6 pypi.python.org/pypi/pyrouge Table agriocnemis zerafica is a species of damselfly in the family coenagrionidae. it is native to africa, where it is widespread across the central and western nations of the continent. it is known by the common name sahel wisp. this species occurs in swamps and pools in dry regions. there are no major threats but it may be affected by pollution and habitat loss to agriculture and development. agriocnemis zerafica EOT specimen count 1 record last modified 21 apr 2016 nmnh -entomology dept. taxonomy animalia arthropoda insecta odonata coenagrionidae collector eldon h. newcomb preparation envelope prep count 1 sex male stage adult see more items in specimen inventory entomology place area 5.12km. ne. dakar, near kamberene; 1:30-4:30 p.m., senegal collection date 21 may 1944 barcode 00342577 usnm number usnment342577 published name agriocnemis zerafica le roi EOP global distribution: the species is known from north-west uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african agriocnemis. record from angola unlikely. northeastern africa distribution: the species was listed by tsuda for sudan. this record needs confirmation. may also occur in kenya as well. EOP very small, about 20mm. orange tail. advised agriocnemis sp. id by kd dijkstra: hard to see details, but i believe this is not a. exilis EOP same creature as previously posted as unknown, very small, about 20mm, over water, top view. advised probably agriocnemis, "whisp" damselfly. EOP thank you for taking the time to provide feedback on the iucn red list of threatened species website, we are grateful for your input. EOP justification: this is a widespread species with no known major widespread threats that is unlikely to be declining fast enough to qualify for listing in a threatened category. it is therefore assessed as least concern. EOP the species has been recorded from northwest uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african EOP the main threats to the species are habitat loss due to agriculture, urban development and drainage, as well as water pollution. EOP no conservation measures known but information on taxonomy, population ecology, habitat status and population trends would be valuable.
abstract_len: 405
intro_len: 2,424
abs_len: 405
How effective is BERT without word ordering? Implications for language understanding and data privacy
Ordered word sequences contain the rich structures that define language. However, it is often not clear if or how modern pretrained language models utilize these structures. We show that the token representations and self-attention activations within BERT are surprisingly resilient to shuffling the order of input tokens, and that for several GLUE language understanding tasks, shuffling only minimally degrades performance, e.g., by 4% for QNLI. While bleak from the perspective of language understanding, our results have positive implications for cases where copyright or ethics necessitates the consideration of bag-of-words data (vs. full documents). We simulate such a scenario for three sensitive classification tasks, demonstrating minimal performance degradation vs. releasing full language sequences.
Masked language models (MLMs) like BERT To assess this question, we first compare the internal representations of BERT and RoBERTa Following cues from prior work The bad news: Despite BERT being trained on intact word sequences, BoW-BERT demonstrates that MLMs can readily ignore syntax (while maintaining strong performance) when fine-tuned for even carefully designed downstream language understanding tasks. The good news: BoW-BERT offers a practical modeling choice for researchers who must operate with only bag-of-words representations for legal or ethical reasons.
Shuffling inputs to non-pretrained models. Word order shuffling has been tested as part of the full training process for non-pretrained models. Shuffling inputs to pretrained MLMs. While at the time of submission of this work, shuffling results had not been fully reported on the popular GLUE taskset, prior results have used wordshuffling as a baseline with varying results. Several works have examined shuffling inputs in multi-language scenarios (e.g., translation) when languages have variable syntax In some cases, shuffled inputs provide a stronger baseline than might be assumed, while in others, shuffling significantly degrades performance. At present, determining whether or not order is "needed" for a particular task is largely an experimental, empirical endeavor. Prior works have investigated BERT's capacity to represent syntax: some researchers have designed prediction tasks that require syntactic knowledge We might expect that shuffling the order of tokens in an input sentence would significantly corrupt the internal representations of BERT, but is that actually the case? We investigate with two new metrics. Consider applying a pre-trained, fixed BERT model to x ="the movie was great" and the shuffled x ="movie the great was". Token identifiability measures the similarity of BERT's vector representations of a word token (e.g., "movie") in x and x . Identifiability is high if the model has similar representations for tokens after their order is shuffled. Self-attention distance measures if BERT attends to similar tokens for each token in x and x regardless of their order (e.g., is "the movie was great" ≈ "movie the great was" to BERT?). Self-attention distance is low if the model attends to the same tokens after input shuffling. Token Identifiability. Let MLM l (x) be a R t×d matrix, where t is the number of tokens in sentence x, d is the MLM's dimension, and l is the layer index. In this setting, row i of MLM l (x) is the MLM's representation of the ith token in sentence x. We compare MLM l (x) to E[MLM l (X )], where X is drawn uniformly from the permutations of x: perm(x). For a specific sample x ∼ perm(x), we first take the row-wise cosine similarity of MLM l (x) and MLM l (x ), and treat the resulting t × t matrix as an instance of a bipartite linear assignment problem. The assignment accuracy (AA) score for (x, x ) is the proportion of assigned token pairs that have the same underlying word type. To avoid biasing towards shorter sentences, we take the ratio of the accuracy relative to chance, i.e., where RAND is a random matrix of reals R t×d . Self-Attention Distance. Let AMLM l,h (x) be the row-l 1 -normalized R t×t matrix representing the self-attention matrix at layer l for attention head h. We can compute the same matrix for a shuffled input AMLM l,h (x ), and then perform a transformation to re-order the rows and columns of this matrix to match the original order of tokens in x, yielding AMLM x l,h (x ). We then define the row-wise Jensen-Shannon divergence DS-JSD(AMLM l,h (x), AMLM x l,h (x )) as the mean row-wise JSD between AMLM l,h (x) and the DeShuffled reordered attention matrix AMLM x l,h (x ). As before, to reduce the effect of sentence length, we normalize using RND-JSD(AMLM l,h (x), AMLM x l,h (x )), which chooses a random row/column permutation. (2) Results. We randomly sample 100 sentences from each training set of 8 GLUE tasks, for a total of 800 sentences. To approximate expectations from Equations 1 and 2, we sample 32 random permutations per sentence. 
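As a concrete illustration of the assignment-accuracy step in the token identifiability metric above, the following sketch solves the bipartite linear assignment on row-wise cosine similarities with an off-the-shelf solver. It omits the chance normalisation against the RAND matrix, uses toy random vectors in place of BERT layer activations, and all function and variable names are our own.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_accuracy(reps_x, reps_shuf, types_x, types_shuf):
    # Row-wise cosine similarity between the t x d token representations of a
    # sentence and of its shuffled copy, solved as a linear assignment problem;
    # returns the fraction of matched positions whose underlying word types agree.
    a = reps_x / np.linalg.norm(reps_x, axis=1, keepdims=True)
    b = reps_shuf / np.linalg.norm(reps_shuf, axis=1, keepdims=True)
    sim = a @ b.T
    rows, cols = linear_sum_assignment(-sim)  # negate to maximise total similarity
    return sum(types_x[i] == types_shuf[j] for i, j in zip(rows, cols)) / len(rows)

rng = np.random.default_rng(0)
t, d = 6, 16
reps = rng.normal(size=(t, d))          # stand-in for one layer's token representations
perm = rng.permutation(t)
types = ["the", "movie", "was", "great", "really", "."]
print(assignment_accuracy(reps, reps[perm], types, [types[i] for i in perm]))  # 1.0 here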
Figure We compare BERT and RoBERTa to their BoW counterparts on nine tasks from GLUE 8 These tasks span NLI (MNLI . Table According to the GLUE diagnostic set (which tests 33 categories of linguistic phenomena) BoW-BERT has the most trouble with dealing with double negations (e.g., "I have never seen a hummingbird not flying.": MCC degrades 31.7 → -4.3 when switching BERT → BoW-BERT), quantifiers ("our sympathy to all [vs. some] of the victims": 61.8 → 46.1); and temporal logic ("Mary left before John entered": 8.0 → -8.6). Results for GLUE diagnostic meta-categories are: Knowledge (24.4 → 24.3); Pred-Arg Structure (39.2 → 39.1); Logic (24.7 → 22.1); Lexical Semantics (39.7 → 31.5). Privacy and legal concerns frequently necessitate BoW-only data releases. We ask: for potentially sensitive text classification tasks, how does performance degrade if only bag of words counts are available (instead of full sequences)? We consider three such tasks: Reddit controversy prediction on AskWomen/AskMen (CONT) Given that de-shuffling BoW representations is at least partially possible We report results in an easier setting = 256, ε = 100 and a harder setting = 128, ε = 50 in the bottom half of Table Taken together, these results suggest 1) that releasing word counts instead of full document sequences is a viable data release strategy for some sensitive classification tasks; 2) BoW-BERT offers a means of accessing the representational power of modern MLMs in cases where only BoW information is available; and 3) for at least some local DP settings, linear models remain competitive particularly for long documents, while BoW-RoBERTa is viable when the underlying documents are shorter. We advocate for BoW-(Ro)BERT(a)as a surprisingly strong baseline for language understanding tasks, as well as a performant practical option for 11 Our original submission used DP PCA instead. But it was brought to our attention that the paper proposing that algorithm was retracted for being non-private (+ discontinued in the library we used after we submitted). We have adjusted our code and recompiled our experiments using a comparable mechanism. Our intent isn't to advocate for this particular DP method, but rather, to fairly compare NLP algorithms on the same DP corpora.
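The bag-of-words (BoW) variants compared above are obtained by destroying word order in the inputs used for fine-tuning and evaluation. A minimal sketch of that preprocessing is shown below; it is our own illustration of the general recipe (whitespace tokenisation is a simplification, and names are illustrative), not the authors' code.

import random

def to_bag_of_words(sentence, seed=None):
    # Destroy word order by uniformly shuffling the tokens of a sentence.
    tokens = sentence.split()          # simplification; subword tokenisers differ
    random.Random(seed).shuffle(tokens)
    return " ".join(tokens)

example = {"text": "the movie was not great", "label": 0}
bow_example = {"text": to_bag_of_words(example["text"], seed=13), "label": example["label"]}
print(bow_example)  # same label, shuffled input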
809
571
809
Exploring Cross-lingual Textual Style Transfer with Large Multilingual Language Models
Detoxification is the task of generating text in a polite style while preserving the meaning and fluency of the original toxic text. Existing detoxification methods are designed to work in one specific language. This work investigates multilingual and cross-lingual detoxification and the behavior of large multilingual models, such as mT5 and mBART, in this setting. Unlike previous works, we aim to make large language models able to perform detoxification without direct fine-tuning in a given language. Experiments show that multilingual models are capable of performing multilingual style transfer. However, the models are not able to perform cross-lingual detoxification, and direct fine-tuning on the target language remains necessary.
The task of Textual Style Transfer (Textual Style Transfer) can be viewed as a task where certain properties of text are being modified while rest retain the same Some examples of detoxification presented in Table Textual style transfer gained a lot of attention with a rise of deep learning-based NLP methods. Given that, Textual Style Transfer has now a lot of specific subtasks ranging from formality style transfer There exist a variety of Textual Style Transfer methods: from totally supervised methods The task of detoxification, in which we focus in this work, is relatively new. First work on detoxification was a sequence-to-sequence collaborative classifier, attention and the cycle consistency loss Both these methods are unsupervised which is an advantage but it comes from the major current problem of the textual style transfer. There is a lack of parallel data for Textual Style Transfer since there exist only few parallel datasets for English
Target text What the f*ck is your problem? What is your problem? This whole article is bullshit. This article is not good. Yeah, this clowns gonna make alberta great again! Yeah, this gonna make Alberta great again Multilingual language models such as mBART Our contributions can be summarized as follows 1. We introduce a novel study of multilingual textual style transfer and conduct experiments with several multilingual language models and evaluate their performance. 2. We conduct cross-lingual Textual Style Transfer experiments to investigate whether multilingual language models are able to perform Textual Style Transfer without fine-tuning on a specific language. We formulate the task of supervised Textual Style Transfer as a sequence-to-sequence NMT task and fine-tune multilingual language models to translate from "toxic" to "polite" language. In this work we use two datasets for Russian and English languages. Aggregated information about datasets could be found in We perform a series of experiments on detoxification using parallel data for English and Russian. We train models in two different setups: multilingual and cross-lingual. Multilingual setup In this setup we train models on data containing both English and Russian texts and then compare their performance with baselines trained on these languages solely. In cross-lingual setup we test the hypothesis that models are able to perform detoxification without explicit fine-tuning on exact language. We fine-tune models on English and Russian separately and then test their performance. Scaling language models to many languages has become an emerging topic of interest recently Baselines We use two detoxification methods as baselines in this work -Delete method which simply deletes toxic words in the sentence according to the vocabulary of toxic words and CondBERT. The latter approach works in usual masked-LM setup by masking toxic words and replacing them with non-toxic ones. This approach was first proposed by mT5 mT5 mBART mBART Unlike other NLP tasks, one metric is not enough to benchmark the quality of style transfer. The ideal Textual Style Transfer model output should preserve the original content of the text, change the style of the original text to target and the generated text also should be grammatically correct. We follow Russian Content preservation score (SIM) is evaluated as a cosine similarity of LaBSE English Similarity (SIM) between the embedding of the original sentence and the generated one is calculated using the model presented by 2.4.2 Grammatic and language quality (fluency) Russian We measure fluency (FL) with a BERTbased classifier English We measure fluency (FL) as a percentage of fluent sentences evaluated by the RoBERTabased Russian Style transfer accuracy (STA) is evaluated with a BERT-based English Style transfer accuracy (STA) is calculated with a style classifier -RoBERTa-based Aforementioned metrics must be properly combined to get one Joint metric to evaluate Textual Style Transfer. We follow There is a variety of versions of large multilingual models available. In this work we use small and base versions of mT5 In multilingual training setup we fine-tune models using both English and Russian data. We use Adam (Kingma and Ba, 2015) optimizer for fine-tuning with different learning rates ranging from 1 • 10 -3 to 5 • 10 -5 with linear learning rate scheduling. We also test different number of warmup steps from 0 to 1000. 
We equalize Russian and English data for training and use 10000 toxic sentences and their polite paraphrases for multilingual training in total. We train mT5 models for 40 thousand iterations 9 with a batch size of 8. We fine-tune mBART In cross-lingual training setup we fine-tune models using only one dataset, e.g.: we fine-tune model on English data and check performance on both English and Russian data. Fine-tuning procedure was left the same: 40000 iterations for mT5 models and 1000, 3000, 5000 and 10000 iterations for the mBART. Back-translation approach to cross-lingual style transfer proved to work substantially better than the zero-shot setup discussed above. Nevertheless, both Google and FSMT did not yield scores 9 According to comparable to monolingual setup. Besides, surprisingly Google yielded worse results than FSMT. Table As for cross-lingual style transfer, results are negative. None of the models have coped with the task of cross-lingual Textual Style Transfer. That means that models produce the same or almost the same sentences for the language on which they were not fine-tuned so that toxicity is not eliminated. We provide only some scores here in the Table Despite the fact that our hypothesis about the possibility of cross-language detoxification was not confirmed, the presence of multilingual models pretrained in many languages gives every reason to believe that even with a small amount of parallel data, training models for detoxification is possible. A recent work by In this work we have tested the hypothesis that multilingual language models are capable of performing cross-lingual and multilingual detoxification. In the multilingual setup we experimentally show that reformulating detoxification (Textual Style Transfer) as a NMT task boosts performance of the models given enough parallel data for training. We beat simple (Delete method) and more strong (condBERT) baselines in a number of metrics. Based on our experiments, we can assume that it is possible to fine-tune multilingual models in any of the 100 languages in which they were originally trained. This opens up great opportunities for detoxification in unpopular languages. However, our hypothesis that multilingual language models are capable of cross-lingual detoxification was proven to be false. We suggest that the reason for this is not a lack of data, but the model's inability to capture the pattern between toxic and non-toxic text and transfer it to another language by itself. This means that the problem of cross-lingual textual style transfer is still open and needs more investigation.
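As an illustration of the content preservation (SIM) metric used above, the sketch below computes the cosine similarity of LaBSE sentence embeddings between source sentences and model outputs with the sentence-transformers library. The checkpoint name, the aggregation by averaging, and the function name are our assumptions; the paper's exact implementation may differ.

import numpy as np
from sentence_transformers import SentenceTransformer

def content_preservation(sources, outputs, model_name="sentence-transformers/LaBSE"):
    # SIM: mean cosine similarity between LaBSE embeddings of the original
    # toxic sentences and the corresponding detoxified outputs.
    model = SentenceTransformer(model_name)
    src = model.encode(sources)
    out = model.encode(outputs)
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    out = out / np.linalg.norm(out, axis=1, keepdims=True)
    return float(np.mean(np.sum(src * out, axis=1)))

# Example (downloads the LaBSE checkpoint on first use):
# print(content_preservation(["What is your problem?"], ["What is the issue?"]))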
695
959
695
Self-Improvement of Non-autoregressive Model via Sequence-Level Distillation
Although Non-autoregressive Transformer (NAT) models have achieved great success in terms of fast inference speed, this speedup comes with a performance drop due to the inherent multi-modality problem of the NAT model. Previous works commonly alleviate this problem by replacing the target side of the raw data with distilled data generated by Autoregressive Transformer (AT) models. However, the multimodality problem in the distilled data is still significant and thus limits further improvement of the NAT models. In this paper, we propose a method called Sequence-Level Self-Distillation (SLSD), which aims to generate distilled data by the NAT model itself, eliminating the need for additional teacher networks. Furthermore, SLSD can adapt to different NAT models without precise adjustments since the self-distilled data is generated from the same types of NAT models. We conduct extensive experiments on WMT14 EN↔DE and WMT16 EN↔RO and choose five classic NAT models as the backbones to validate the generality and effectiveness of SLSD. The results show that our approach can consistently improve all models on both raw data and distilled data without sacrificing the inference speed.
Non-autoregressive Transformer (NAT) models One way to improve the ability of NAT models to handle complex data is by enhancing their capacity Another common approach to alleviate the multi-modality problem is modifying the target sequence Therefore, most existing works only consider knowledge distillation as a necessary data processing technique, rather than producing adaptive data for NAT models to learn better. In this paper, we propose a simple yet effective approach to generate distilled data that is more adaptive for NAT models to learn, named Sequence-Level Self-Distillation (SLSD). Inspired by To validate the effectiveness and generality of the proposed SLSD approach, we conduct extensive experiments on four machine translation benchmarks: WMT14 EN↔DE and WMT16 EN↔RO. We chose five classic NAT methods as the baseline methods, including VNAT The major contributions of our paper are summarized as follows: • We propose a simple yet effective method, SLSD, to generate the distilled data by NAT models itself, which can significantly alleviate the multi-modality problem in the data and be more adaptive for NAT models to learn. • We further explore the application of SLSD on various NAT models and find that the proposed framework can be directly applied to raw data without sacrificing inference speed or relying on additional teacher networks.
In this section, we first briefly describe the task formulation and then introduce three types of NAT models. The machine translation task can be formally defined as a sequence-to-sequence generation problem. Given the target language sequence y={y 1 , y 2 , ..., y T } and source language sequence x={x 1 , x 2 , ..., x S }, the non-autoregressive models assume conditional independence between the output tokens and factorize the output probabilities as p θ (y|x)= T t=1 p(y t |x), where θ represents the parameters of the NAT models. Vanilla NAT models are typically trained with cross-entropy loss to maximize the likelihood of the training data: However, the conditional independence assumption makes it difficult for vanilla NAT models to learn directly from raw data. So some works attempt to improve the modeling ability of NAT models by adding contextual information to the inputs: where Ω(y) is a function to generate the input context. For example, Ω(y) randomly samples words from y and masks these sampled words in the inputs in CMLM and GLAT further adaptively controls the sampling number according to the distance between the output and target sequences. Instead of the strict position-to-position calculation in the cross-entropy loss, CTC models a flexible monotonic alignment between the output sequence and the target sequence. There are two differences between CTC models and NAT models: 1) The input length of the CTC model is typically λ times the length of the source sentence. 2) CTC models are allowed to output a "blank" token. With these unique features, the output of the CTC models can align with the target sequence by removing all consecutive repeated tokens and the "blank" tokens. Assume that β(y) is the set of all possible alignments between the output sequence and target sequence, the training object of the CTC models is calculated by marginalizing the likelihoods of all possible alignments: Previous NAT models hardly handle the multimodality problem in the raw data. Directed Acyclic Transformers (DAT) attempt to address this issue by stacking a directed acyclic graph (DAG) on the top of the NAT decoder, where the vertices and edges in DAG correspond to hidden states of the decoder and the transitions between the hidden states respectively. The transitions between the connecting vertices constitute multiple possible decoding paths, allowing DAT to capture multiple translation modalities simultaneously. The path probability p θ (a|x) is factorized based on the Markov hypothesis: where |a| is the DAT output length and typically λ times the length of the source sequence. Once path a is determined, token y i can be generated conditioned on the decoder hidden state with index a i . And the DAT can be trained by minimizing the negative log-likelihood loss as below: where Γ is all possible output paths with the same length of target sequence y. In this section, we first explain the motivation of the proposed SLSD framework. Then we describe the process of generating self-distilled targets in two steps: the sampling of the self-distilled targets and the selection of the self-distilled targets. Finally, we will discuss the training details of the SLSD framework. Previous studies have shown that the data distilled by AT models may not be simple enough for NAT models to learn from, and that high-quality distilled data does not necessarily lead to improved performance of the NAT models in general. 
This can be attributed to two main reasons: 1) There is a mismatch between the modeling types of NAT models and AT models. The distilled data generated by AT models in an autoregressive manner may not be the most suitable for the learning process of NAT models 2) It is challenging to balance the quality and suitability of the distilled data. While higher data quality can prevent information loss, it also indicates that the distilled data is more complex and has a more serious multi-modality problem Vanilla NAT Sampling all possible combinations of the whole vocabulary is computationally forbidden as there are a total of |V| T samples, where |V| is the vocabulary size. Instead, we sample N candidates from the output distributions of NAT models to form the candidate set for the selection of the self-distilled targets. A candidate h in the candidates set H(x) is sampled from the distribution as below: CTC Unlike vanilla NAT models, the tokens in the outputs of CTC models are not totally conditionally independent, which makes it difficult to calculate the probability of the output sequences. Specifically, the probability of the candidate is the marginalization of all possible corresponding alignment sequences: However, the corresponding alignment set β(h n ) is exponentially large, making Equation 7 intractable. Alternatively, we sample the alignment sequence b from the output distribution of CTC models to approximate the candidates sampling process: Then we can get the candidates with the collapsing function h ctc = β -1 (b) Note that multiple different alignments may correspond to the same candidate during sampling, so the candidate set may contain duplicate samples. DAT Similar to CTC models, it is intractable to marginalize all possible output paths for the candidates. So we factorize the probabilities into the production of the output paths and the output tokens and thus sample the output sequence from the DAT models following a two-step sampling process. The decoding paths are sampled from the transition distributions: Then the output tokens are sampled based on the decoding path to get the output sequence f : Finally, we can get the candidates with the collapsing function h dag = β -1 (f ). It is intuitive that samples in the candidate set sampled from the output distribution of the NAT models are the ones that NAT models prefer and are easy to learn. In this section, we mainly focus on selecting the high-quality sample in the candidate set. We use a score function score(y, h) to measure how similar the candidate h is to the reference y. If a sample in the candidate set is close to the reference, we can assume it is high-quality. Our implementation uses n-gram overlap as the function to measure the distance between two sequences. We define the set of non-repeating n-grams in the target sequence as G n (y), and the number of times each n-gram g ∈ G n (y) appears in y as C g (y). Therefore, the number of n-gram matches between the candidate and reference can be defined as: and the total number of n-gram in the reference and the self-distilled data can be defined as: Based on the denotation above, the similarity function is defined as the minimum value of ngram precision and recall of the candidate against the reference where N is the maximum size of n-gram. 
Considering candidates sampled from the output distribution of CTC and DAT models have different lengths, we further add a length penalty to constrain the lengths of the candidates: where |h| and |y| are the length of the candidate and reference, respectively. Note that when adopted on the vanilla NAT models, the length penalty is equal to 1. Finally, the score function can be formulated as below: With the score function to measure the quality of the candidates, we choose the sample with the highest score as the self-distilled targets r: To make sure the candidates sampled from the output distribution of NAT models are meaningful and close enough to the reference, NAT models that have been pretrained on the data are chosen as the initialization of the model in SLSD framework. Then we adopt the self-distilled targets sampling and selection pipeline described above to generate the self-distilled data. For each step in the self-distillation process, we sample from the current output distribution of the model to generate self-distilled targets, which can ensure that the quality of the targets and the performance of the model are synchronously updated. For the decoder input of the NAT models, previous works found that giving some context in the input helps the learning process of NAT models. In contrast, we adopt full masked sequences as the input of the NAT decoders for the targets in the self-distillation process is easy to learn and can reduce the context mismatch between the training and inference process. Datasets We conduct experiments on both directions of two standard machine transla-tion benchmarks: WMT14 English↔German (EN↔DE, 4.5M sentence pairs) and WMT16 English↔Romanian (EN↔RO, 0.6M sentence pairs). For WMT14 EN↔DE, we preprocessed the datasets with a joint BPE with 40K merge operations following the pipelines provided in the fairseq toolkit Hyperparameters of the initialization Following previous works, we adopt the basic implementation of Transformer-base for machine translation tasks. Each model consists of a 6-layer encoder and Transformer Vanilla NAT For decoding steps in DAT, we use lookahead decoding algorithm. We select the 5 best checkpoints on the validation sets and average them as the evaluation checkpoint. Table Besides, the improvement of our methods can be seen not only in the raw data but also in the distilled data, which is already simple for NAT models to learn. The performance of our methods and base- line models on WMT14 EN↔DE distilled data are shown in Table In order to eliminate the gain effect brought by more training steps and fairly demonstrate the effectiveness of the proposed SLSD method, we continually train each baseline for another 100k steps. Since the models trained on raw data and distilled data without the help of the GLAT training strategy will result in performance degradation 5 We conduct the ablation study about the size of the candidates set and show its impact on the selfdistillation process. The self-distilled targets set and the self-distilled corpus are denoted as R and (X , R), respectively. We measure the complexity of the self-distilled targets set by the Normalized Corpus-level Multi-modality (NCM) for NAT models (18) For data with more complex and serious multimodal problems, NAT models tend to capture more possible translations simultaneously and consequently give rise to reduce probabilities in the models and an increase in NCM. 
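Returning to the candidate-scoring function defined above: the exact aggregation over n-gram orders and the exact form of the length penalty are not fully recoverable from the text here, so the sketch below sums matched n-gram counts over orders 1..N and uses a simple length-ratio penalty as stated assumptions, rather than the paper's precise formula.

from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_score(reference, candidate, max_n=2):
    # min(precision, recall) of n-gram matches (orders 1..max_n) between a sampled
    # candidate and the reference, scaled by a length-ratio penalty (assumed form)
    # to discourage candidates much shorter or longer than the reference.
    matched = total_ref = total_cand = 0
    for n in range(1, max_n + 1):
        ref_c, cand_c = ngram_counts(reference, n), ngram_counts(candidate, n)
        matched += sum(min(ref_c[g], cand_c[g]) for g in ref_c)
        total_ref += sum(ref_c.values())
        total_cand += sum(cand_c.values())
    if matched == 0 or total_cand == 0:
        return 0.0
    precision, recall = matched / total_cand, matched / total_ref
    length_penalty = min(len(candidate), len(reference)) / max(len(candidate), len(reference))
    return min(precision, recall) * length_penalty

ref = "we propose a simple method".split()
candidates = ["we propose simple method".split(), "a method method method".split()]
print(max(candidates, key=lambda h: overlap_score(ref, h)))  # the first candidate is selected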
The results are shown in Table We also investigate the effectiveness brought by self-distillation on distinct target sequence lengths. To this end, we split the test set into six buckets based on the reference sentence lengths in a range of 10. Figure We calculate the score distribution of the candidates set in the first 5k steps to better understand the self-distillation process. We use two metrics, i.e., Absolute Score and Relative Score, to observe the distribution of scores of the candidates set from two orthogonal aspects. The Absolute Score is the maximum score of the candidate sets and reflects the distribution of the whole self-distilled data. The Relative Score is calculated as the ratio of each score to the maximum score in the candidate set, which can measure the distribution of the scores in each candidate set. As shown in Figure To verify that the SLSD suits the NAT models, we calculate the output distributions of three GLATbased NAT models and present the result in Figure 3. After training on the self-distilled data, the max token probabilities of GLAT and CTC+GLAT are improved. Besides, for DAT, the passing probabilities are close to 0 or 1, and token probabilities are close to 1. This indicated that SLSD can reduce the multi-modality problem in the data, and thus improve the probabilities of the output path in the DAT. In this paper, we introduced a simple yet effective method, SLSD, to generate distilled data using NAT models themselves. This approach can greatly reduce the multi-modality problem in the data, and consistently improves the performance of four types of NAT models across all datasets. Ablation experiments verify that self-distilled data is better suited for NAT models to learn from, compared to the distilled data generated by AT models. The proposed SLSD method can produce selfdistilled data that is better suited for learning by NAT models than the distilled data generated by AT models, which does not need additional teacher models. However, to obtain relevant and highquality candidates from the output distribution of NAT models, a well-initialized model is necessary. Moreover, selecting self-distilled targets from the candidate set involves computing the score of each candidate, which requires additional training time and computational cost.
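To make the candidate-generation step referred to in these limitations concrete, the sketch below samples candidates position-wise from a toy vanilla-NAT output distribution; for CTC and DAT the sampled sequences would additionally be passed through the corresponding collapsing function. All names and the toy distribution are illustrative, not the paper's implementation.

import numpy as np

def sample_candidates(probs, num_candidates, rng=None):
    # probs: T x V array of per-position output distributions of a vanilla NAT
    # model; returns num_candidates token-id sequences, each sampled
    # independently at every target position.
    rng = rng or np.random.default_rng()
    T, V = probs.shape
    return [[int(rng.choice(V, p=probs[t])) for t in range(T)]
            for _ in range(num_candidates)]

rng = np.random.default_rng(0)
toy = rng.random((3, 5))                 # 3 target positions, 5-token vocabulary
toy /= toy.sum(axis=1, keepdims=True)
print(sample_candidates(toy, num_candidates=4, rng=rng))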
1,192
1,365
1,192
Relational Word Embeddings
While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embedding. Such strategies may not be optimal, however, as they are limited by the coverage of available resources and conflate similarity with other forms of relatedness. As an alternative, in this paper we propose to encode relational knowledge in a separate word embedding, which is aimed to be complementary to a given standard word embedding. This relational word embedding is still learned from co-occurrence statistics, and can thus be used even when no external knowledge base is available. Our analysis shows that relational word vectors do indeed capture information that is complementary to what is encoded in standard word embeddings.
Word embeddings are paramount to the success of current natural language processing (NLP) methods. Apart from the fact that they provide a convenient mechanism for encoding textual information in neural network models, their importance mainly stems from the remarkable amount of linguistic and semantic information that they capture. For instance, the vector representation of the word Paris implicitly encodes that this word is a noun, and more specifically a capital city, and that it describes a location in France. This information arises because word embeddings are learned from co-occurrence counts, and properties such as being a capital city are reflected in such statistics. However, the extent to which relational knowledge (e.g. Trump was the successor of Obama) can be learned in this way is limited. Previous work has addressed this by incorporating external knowledge graphs In fact, regardless of how a word embedding is learned, if its primary aim is to capture similarity, there are inherent limitations on the kinds of relations they can capture. For instance, such word embeddings can only encode similarity preserving relations (i.e. similar entities have to be related to similar entities) and it is often difficult to encode that w is in a particular relationship while preventing the inference that words with similar vectors to w are also in this relationship; e.g. This suggests that relational information has to be encoded separately from standard similaritycentric word embeddings. One appealing strategy is to represent relational information by learning, for each pair of related words, a vector that encodes how the words are related. This strategy was first adopted by The research question we consider in this paper is whether it is possible to learn word vectors that capture relational information. Our aim is for such relational word vectors to be complementary to standard word vectors. To make relational information available to NLP models, it then suffices to use a standard architecture and replace normal word vectors by concatenations of standard and relational word vectors. In particular, we show that such relational word vectors can be learned directly from a given set of relation vectors.
Relation Vectors. A number of approaches have been proposed that are aimed at learning relation vectors for a given set of word pairs (a,b), based on sentences in which these word pairs co-occur. For instance, Turney (2005) introduced a method called Latent Relational Analysis (LRA), which relies on first identifying a set of sufficiently frequent lexical patterns and then constructs a matrix which encodes for each considered word pair (a,b) how frequently each pattern P appears in between a and b in sentences that contain both words. Relation vectors are then obtained using singular value decomposition. More recently, Taking a slightly different approach, Despite the fact that such methods learn word vectors from which relation vectors can be predicted, it is unclear to what extent these word vectors themselves capture relational knowledge. In particular, the aforementioned methods have thus far only been evaluated in settings that rely on the predicted relation vectors. Since these predictions are made by relatively sophisticated neural network architectures, it is possible that most of the relational knowledge is still captured in the weights of these networks, rather than in the word vectors. Another problem with these existing approaches is that they are computationally very expensive to train; e.g. the Pair2Vec model is reported to need 7-10 days of training on unspecified hardware 2 . In contrast, the approach we propose in this paper is computationally much simpler, while resulting in relational word vectors that encode relational information more accurately than those of the Pair2Vec model in lexical semantics tasks, as we will see in Section 5. Knowledge-Enhanced Word Embeddings. Sev-2 github.com/mandarjoshi90/pair2vec eral authors have tried to improve word embeddings by incorporating external knowledge bases. For example, some authors have proposed models which combine the loss function of a word embedding model, to ensure that word vectors are predictive of their context words, with the loss function of a knowledge graph embedding model, to encourage the word vectors to additionally be predictive of a given set of relational facts Our method differs in two important ways from these existing approaches. First, rather than relying on an external knowledge base, or other forms of supervision, as in e.g. We aim to learn representations that are complementary to standard word vectors and are specialized towards relational knowledge. To differentiate them from standard word vectors, they will be referred to as relational word vectors. We write e w for the relational word vector representation of w. The main idea of our method is to first learn, for each pair of closely related words w and v, a relation vector r wv that captures how these words are related, which we discuss in Section 3.1. In Section 3.2 we then explain how we learn relational word vectors from these relation vectors. Our goal here is to learn relation vectors for closely related words. For both the selection of the vocabulary and the method to learn relation vec-tors we mainly follow the initialization method of Camacho-Collados et al. (2019, RELATIVE init ) except for an important difference explained below regarding the symmetry of the relations. Other relation embedding methods could be used as well, e.g., Selecting Related Word Pairs. Starting from a vocabulary V containing the words of interest (e.g. 
all sufficiently frequent words), as a first step we need to choose a set R ⊆ V × V of potentially related words. For each of the word pairs in R we will then learn a relation vector, as explained below. To select this set R, we only consider word pairs that co-occur in the same sentence in a given reference corpus. For all such word pairs, we then compute their strength of relatedness following where n wv is the harmonically weighted 3 number of times the words w and v occur in the same sentence within a distance of at most 10 words, and: This smoothed variant of PMI has the advantage of being less biased towards infrequent (and thus typically less informative) words. Learning Relation Vectors. In this paper, we will rely on word vector averaging for learning relation vectors, which has the advantage of being much faster than other existing approaches, and thus allows us to consider a higher number of word pairs (or a larger corpus) within a fixed 3 A co-occurrence in which there are k words in between w and v then receives a weight of 1 k+1 . time budget. Word vector averaging has moreover proven surprisingly effective for learning relation vectors where we write w i for the vector representation of w i in some given pre-trained word embedding, and norm(v) = v v . In contrast to other approaches, we do not differentiate between sentences where w occurs before v and sentences where v occurs before w. This means that our relation vectors are symmetric in the sense that r wv = r vw . This has the advantage of alleviating sparsity issues. While the directionality of many relations is important, the direction can often be recovered from other information we have about the words w and v. For instance, knowing that w and v are in a capital-of relationship, it is trivial to derive that "v is the capital of w", rather than the other way around, if we also know that w is a country. The relation vectors r wv capture relational information about the word pairs in R. The relational word vectors will be induced from these relation vectors by encoding the requirement that e w and e v should be predictive of r wv , for each (w, v) ∈ R. To this end, we use a simple neural network with one hidden layer, The relational word vectors e w can be initialized using standard word embeddings trained on the same corpus. In what follows, we detail the resources and training details that we used to obtain the relational word vectors. Corpus and Word Embeddings. We followed the setting of Word pair vocabulary. As our core vocabulary V, we selected the 100, 000 most frequent words from Wikipedia. To construct the set of word pairs R, for each word from V, we selected the 100 most closely related words (cf. Section 3.1), considering only consider word pairs that co-occur at least 25 times in the same sentence throughout the Wikipedia corpus. This process yielded relation vectors for 974,250 word pairs. Training. To learn our relational word embeddings we use the model described in Section 3.2. The embedding layer is initialized with the standard FastText 300-dimensional vectors trained on Wikipedia. The method was implemented in Py-Torch, employing standard hyperparameters, using ReLU as the non-linear activation function f (Equation A natural way to assess the quality of word vectors is to test them in lexical semantics tasks. 
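As a concrete illustration of the averaging-based relation vectors just described: for a related pair, the pre-trained embeddings of the words occurring between the two target words are averaged over their co-occurring sentences and the result is L2-normalised. The garbled equation is reconstructed under that reading, and the toy embedding table, sentence list, and function name below are ours.

import numpy as np

def relation_vector(pair, sentences, emb):
    # Average the embeddings of the words occurring between the two target
    # words (in either order, since r_wv = r_vw) over all co-occurring
    # sentences, then L2-normalise the result.
    w, v = pair
    middles = []
    for sent in sentences:
        toks = sent.split()
        if w in toks and v in toks:
            i, j = sorted((toks.index(w), toks.index(v)))
            middles.extend(t for t in toks[i + 1:j] if t in emb)
    if not middles:
        return None
    avg = np.mean([emb[t] for t in middles], axis=0)
    return avg / np.linalg.norm(avg)

rng = np.random.default_rng(1)
emb = {t: rng.normal(size=4) for t in "is the capital of and its".split()}
sents = ["paris is the capital of france", "france and its capital paris"]
print(relation_vector(("paris", "france"), sents, emb))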
However, it should be noted that relational word vectors behave differently from standard word vectors, and we should not expect the relational word vectors to be meaningful in unsupervised tasks such as semantic relatedness (Turney and Pantel, 2010). In particular, note that a high similarity between e w and e v should mean that relationships which hold for w have a high probability of holding for v as well. Words which are related, but not syn-onymous, may thus have very dissimilar relational word vectors. Therefore, we test our proposed models on a number of different supervised tasks for which accurately capturing relational information is crucial to improve performance. Comparison systems. Standard FastText vectors, which were used to construct the relation vectors, are used as our main baseline. In addition, we also compare with the word embeddings that were learned by the Pair2Vec system Given a pre-defined set of relation types and a pair of words, the relation classification task consists in selecting the relation type that best describes the relationship between the two words. As test sets we used DiffVec For these experiments we train a linear SVM classifier directly on the word pair encoding, performing a 10-fold cross-validation in the case of DiffVec, and using the train-test splits of BLESS. Results Table Standard word embedding models tend to capture semantic similarity rather well Given that the McRae Feature Norms benchmark is focused on nouns, we complement this experiment with a specific evaluation on verbs. To this end, we use the verb set of QVEC Table As far as the QVEC results are concerned, our method is only outperformed by Retrofitting and Attract-Repel. Nevertheless, the difference is minimal, which is surprising given that these methods leverage the same WordNet resource which is used for the evaluation. To complement the evaluation of our relational word vectors on lexical semantics tasks, in this section we provide a qualitative analysis of their intrinsic properties. First, we provide an analysis based on the nearest neighbours of selected words in the vector space. Table In the bottom row, we show cases where relational information is somewhat confused with col- Unsupervised learning of analogies has proven to be one of the strongest selling points of word embedding research. Simple vector arithmetic, or pairwise similarities Interestingly, even though not explicitly encoded in our model, the table shows some examples that highlight one property that arises often, which is the ability of our model to capture cohyponyms as relations, e.g., wrist-knee and angerdespair as nearest neighbours of "shoulder-ankle" and "shock-grief", respectively. Finally, one last advantage that we highlight is the fact that our model seems to perform implicit disambiguation by balancing a word's meaning with its paired word. For example, the "oct-feb" relation vector correctly brings together other month abbreviations in our space, whereas in the FastText model, its closest neighbour is 'doppler-wheels', a relation which is clearly related to another sense of oct, namely its use as an acronym to refer to 'optical coherence tomography' (a type of x-ray procedure that uses the doppler effect principle). One of the main problems of word embedding models performing lexical inference (e.g. hypernymy) is lexical memorization. We have introduced the notion of relational word vectors, and presented an unsupervised method for learning such representations. 
Parting ways from previous approaches where relational information was either encoded in terms of relation vectors (which are highly expressive but can be more difficult to use in applications), represented by transforming standard word vectors (which capture relational information only in a limited way), or by taking advantage of external knowledge repositories, we proposed to learn an unsupervised word embedding model that is tailored specifically towards modelling relations. Our model is intended to capture knowledge which is complementary to that of standard similarity-centric embeddings, and can thus be used in combination. We tested the complementarity of our relational word vectors with standard FastText word embeddings on several lexical semantic tasks, capturing different levels of relational knowledge. The evaluation indicates that our proposed method indeed results in representations that capture relational knowledge in a more nuanced way. For future work, we would be interested in further exploring the behavior of neural architectures for NLP tasks which intuitively would benefit from having access to relational information, e.g., text classification
968
2,237
968
Learning-Based Named Entity Recognition for Morphologically-Rich, Resource-Scarce Languages
Named entity recognition for morphologically rich, case-insensitive languages, including the majority of Semitic languages, Iranian languages, and Indian languages, is inherently more difficult than its English counterpart. Worse still, progress on machine learning approaches to named entity recognition for many of these languages is currently hampered by the scarcity of annotated data and the lack of an accurate part-of-speech tagger. While it is possible to rely on manually-constructed gazetteers to combat data scarcity, this gazetteer-centric approach has the potential weakness of creating irreproducible results, since these name lists are not publicly available in general. Motivated in part by this concern, we present a learning-based named entity recognizer that does not rely on manually-constructed gazetteers, using Bengali as our representative resource-scarce, morphologically-rich language. Our recognizer achieves a relative improvement of 7.5% in F-measure over a baseline recognizer. Improvements arise from (1) using induced affixes, (2) extracting information from online lexical databases, and (3) jointly modeling part-of-speech tagging and named entity recognition.
While research in natural language processing has gained a lot of momentum in the past several decades, much of this research effort has been focusing on only a handful of politically-important languages such as English, Chinese, and Arabic. On the other hand, being the fifth most spoken language One potential solution to the problem of data scarcity is to hand-annotate a small amount of data with the desired linguistic information and then develop bootstrapping algorithms for combining this small amount of labeled data with a large amount of unlabeled data. In fact, cotraining In other words, Bengali NER is complicated not only by the scarcity of annotated data, but also by the lack of an accurate POS tagger. One could imagine building a Bengali POS tagger using un-supervised induction techniques that have been successfully developed for English (e.g., In view of the above problems, many learningbased Bengali NE recognizers have relied heavily on manually-constructed name lists for identifying persons, organizations, and locations. There are at least two weaknesses associated with this gazetteer-centric approach. First, these name lists are typically not publicly available, making it difficult to reproduce the results of these NE recognizers. Second, it is not clear how comprehensive these lists are. Relying on comprehensive lists that comprise a large portion of the names in the test set essentially reduces the NER problem to a dictionary-lookup problem, which is arguably not very interesting from a research perspective. In addition, many existing learning-based Bengali NE recognizers have several common weaknesses. First, they use as features pseudo-affixes, which are created by extracting the first n and the last n characters of a word (where 1 ≤ n ≤ 4) (e.g., Motivated in part by these weaknesses, we in-vestigate how to improve a learning-based NE recognizer that does not rely on manually-constructed gazetteers. Specifically, we investigate two learning architectures for our NER system. The first one is the aforementioned pipelined architecture in which the NE recognizer uses as features the output of a POS tagger that is trained independently of the recognizer. Unlike existing Bengali POS and NE taggers, however, we examine two new knowledge sources for training these taggers: (1) affixes induced from an unannotated corpus and (2) semantic class information extracted from Wikipedia. In the second architecture, we jointly learn the POS tagging and the NER tasks, allowing features for one task to be accessible to the other task during learning. The goal is to examine whether any benefits can be obtained via joint modeling, which could address the error propagation problem with the pipelined architecture. While we focus on Bengali NER in this paper, none of the proposed techniques are languagespecific. In fact, we believe that these techniques are of relevance and interest to the EACL community because they can be equally applicable to the numerous resource-scarce European and Middle Eastern languages that share similar linguistic and extra-linguistic properties as Bengali. For instance, the majority of semitic languages and Iranian languages are, like Bengali, morphologically productive; and many East European languages such as Czech and Polish resemble Bengali in terms of not only their morphological richness, but also their relatively free word order. The rest of the paper is organized as follows. In Section 2, we briefly describe the related work. 
Sections 3 and 4 show how we induce affixes from an unannotated corpus and extract semantic class information from Wikipedia. In Sections 5 and 6, we train and evaluate a POS tagger and an NE recognizer independently, augmenting the feature set typically used for these two tasks with our new knowledge sources. Finally, we describe and evaluate our joint model in Section 7.
Cucerzan and Yarowsky (1999) exploit morphological and contextual patterns to propose a language-independent solution to NER. They use affixes based on the paradigm that named entities corresponding to a particular class have similar morphological structure. Their bootstrapping approach is tested on Romanian, English, Greek, Turkish, and Hindi. The recall for Hindi is the lowest (27.84%) among the five languages, suggesting that the lack of case information can significantly complicate the NER task. To investigate the role of gazetteers in NER, Since Bengali is morphologically productive, a lot of grammatical information about Bengali words is expressed via affixes. Hence, these affixes could serve as useful features for training POS and NE taggers. In this section, we show how to induce affixes from an unannotated corpus. We rely on a simple idea proposed by In principle, we can use all of the induced affixes as features for training a POS tagger and an NE recognizer. However, we choose to use only those features that survive our feature selection process (to be described below), for the follow-ing reasons. First, the number of induced affixes is large, and using only a subset of them as features could make the training process more efficient. Second, the above affix induction method is arguably overly simplistic and hence many of the induced affixes could be spurious. Our feature selection process is fairly simple: we (1) score each affix by multiplying its frequency (i.e., the number of distinct words in V to which each affix attaches) and its length Wikipedia has recently been used as a knowledge source for various language processing tasks, including taxonomy construction (Ponzetto and Strube, 2007a), coreference resolution (Ponzetto and Strube, 2007b), and English NER (e.g., We employ the steps below to generate our annotated list. Generating and annotating the titles Recall that each Wikipedia article has been optionally assigned to one or more categories by its creator and/or editors. We use these categories to help annotate the title of an article. Specifically, if an article has a category whose name starts with "Born on" or "Death on," we label the corresponding title with PER. Similarly, if it has a category whose name starts with "Cities of" or "Countries of," we NE Class Keywords PER "born," "died," "one," "famous" LOC "city," "area," "population," "located," "part of" ORG "establish," "situate," "publish" Table We then label the title with the class that has the largest weighted sum. Note, however, that we ignore any article that contains fewer than two keywords, since we do not have reliable evidence for labeling its title as one of the NE classes. We put all these annotated titles into a title list. Getting more location names To get more location names, we search for the character sequences "birth place:" and "death place:" in each article, extracting the phrase following any of these sequences and label it as LOC. We put all such labeled locations into the title list. Generating and annotating the tokens in the titles Next, we extract the word tokens from each title in the title list and label each token with an NE class. The reason for doing this is to improve generalization: if "Dhaka University" is labeled as ORG in the title list, then it is desirable to also label the token "University" as ORG, because this could help identify an unseen phrase that contains the term "University" as an organization. Our token labeling method is fairly simple. 
First, we generate the tokens from each title in the title list, assigning to each token the same NE label as that of the title from which it is generated. For instance, from the title "Anna Frank," "Anna" will be labeled as PER; and from "Anna University," " Anna" will be labeled as LOC. To resolve such ambiguities (i.e., assigning different labels to the same token), we keep a count of how many times "Anna" is labeled with each NE class, and set its final label to be the most frequent NE class. We put all these annotated tokens into a token list. If the title list and the token list have an element in common, we remove the element from the token list, since we have a higher confidence in the labels of the titles. Merging the lists Finally, we append the token list to the title list. The resulting title list contains 4885 PERs, 15176 LOCs, and 188 ORGs. We can now use the title list to annotate a text. Specifically, we process each word w in the text in a left-to-right manner, using the following steps: 1. Check whether w has been labeled. If so, we skip this word and process the next one. 2. Check whether w appears in the Samsad Bengali-English Dictionary These automatic annotations will then be used to derive a set of WIKI features for training our POS tagger and NE recognizer. Hence, unlike existing Bengali NE recognizers, our "gazetteers" are induced rather than manually created. Table 2: Feature templates for the POS tagging experiments In this section, we will show how we train and evaluate our POS tagger. As mentioned before, we hypothesize that introducing our two knowledge sources into the feature set for the tagger could improve its performance: using the induced affixes could improve the extraction of grammatical information from the words, and using the Wikipediainduced list, which in principle should comprise mostly of names, could help improve the identification of proper nouns. Corpus Our corpus is composed of 77942 words and is annotated with one of 26 POS tags in the tagset defined by IIIT Hyderabad . Using this corpus, we perform 5-fold cross-validation (CV) experiments in our evaluation. It is worth noting that this dataset has a high unknown word rate of 15% (averaged over the five folds), which is due to the small size of the dataset. While this rate is comparable to another Bengali POS dataset described in Creating training instances Following previous work on POS tagging, we create one training instance for each word in the training set. The class value of an instance is the POS tag of the corresponding word. Each instance is represented by a set of linguistic features, as described next. 7 A detailed description of these POS tags can be found in Features Our feature set consists of (1) baseline features motivated by those used in (3) a binary feature that determines whether the current word is a number. As far as our new features are concerned, we create one induced prefix feature and one induced suffix feature from both the current word and the previous word, as well as two bigrams involving induced prefixes and induced suffixes. We also create three WIKI features, including the Wikipedia-induced NE tag of the current word and that of the previous word, as well as the combination of these two tags. Note that the Wikipedia-induced tag of a word can be obtained by annotating the test sentence under consideration using the list generated from the Bengali Wikipedia (see Section 4). 
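The Wikipedia-induced list used for these WIKI features is built with the title- and token-labelling steps described earlier in this section; the majority-vote resolution for ambiguous tokens can be sketched as below (data and names are illustrative, not the paper's code).

from collections import Counter, defaultdict

def label_title_tokens(annotated_titles):
    # Each token inherits the NE label of every annotated title containing it;
    # ambiguities are resolved in favour of the most frequent class per token.
    votes = defaultdict(Counter)
    for title, ne_class in annotated_titles:
        for token in title.split():
            votes[token][ne_class] += 1
    return {token: counts.most_common(1)[0][0] for token, counts in votes.items()}

titles = [("Anna Frank", "PER"), ("Anna University", "LOC"), ("Anna Karenina", "PER")]
print(label_title_tokens(titles))
# {'Anna': 'PER', 'Frank': 'PER', 'University': 'LOC', 'Karenina': 'PER'}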
To make the description of these features more concrete, we show the feature templates in Table In this section, we show how to train and evaluate our NE recognizer. The recognizer adopts a traditional architecture, assuming that POS tagging is performed prior to NER. In other words, the NE recognizer will use the POS acquired in Section 5 as one of its features. As in Section 5, we will focus on examining how our knowledge sources (the induced affixes and the WIKI features) impact the performance of our recognizer. Corpus The corpus we used for NER evaluation is the same as the one described in the previous We view NE recognition as a sequence labeling problem. In other words, we combine NE identification and classification into one step, labeling each word in a test text with its NE tag. Any word that does not belong to one of our three NE tags will be labeled as OTHERS. We adopt the IOB convention, preceding an NE tag with a B if the word is the first word of an NE and an I otherwise. Now, to train the NE recognizer, we create one training instance from each word in a training text. The class value of an instance is the NE tag of the corresponding word, or OTHERS if the word is not part of an NE. Each instance is represented by a set of linguistic features, as described next. Features Our feature set consists of (1) baseline features motivated by those used in (3) a binary feature that determines whether the current word is the first word of a sentence; and (4) a set of POS-related features, including the POS of the current word and its surrounding words, as well as POS bigrams formed from the current and surrounding words. Our induced affixes and WIKI features are incorporated into the baseline NE feature set in the same manner as in POS tagging. In essence, the feature tem- As we can see, improvements stem primarily from dramatic gains in recall for locations. Discussions Several points deserve mentioning. First, the model performs poorly on the ORGs, owing to the small number of organization names in the corpus. Worse still, the recall drops after adding the WIKI features. We examined the list of induced ORG names and found that it is fairly noisy. This can be attributed in part to the difficulty in forming a set of seed words that can extract ORGs with high precision (e.g., the ORG seed "situate" extracted many LOCs). Second, using the 9 The NE recognizer described thus far has adopted a pipelined architecture, and hence its performance could be limited by the errors of the POS tagger. In fact, as discussed before, the major source of errors made by our POS tagger concerns the confusion between proper nouns and common nouns, and this type of error, when propagated to the NE recognizer, could severely limit its recall. Also, there is strong empirical support for this argument: the NE recognizers, when given access to the correct POS tags, have F-scores ranging from 76-79%, which are 10% higher on average than those with POS tags that were automatically computed. Consequently, we hypothesize that modeling POS tagging and NER jointly would yield better performance than learning the two tasks separately. In fact, many approaches have been developed to jointly model POS tagging and noun phrase chunking, including transformationbased learning In contrast, we propose a relatively simple model for jointly learning Bengali POS tagging and NER, by exploiting the limited dependencies between the two tasks. 
Specifically, we make the observation that most of the Bengali words that are part of an NE are also proper nouns. In fact, based on statistics collected from our evaluation corpus (see Sections 5 and 6), this observation is correct This limited dependency between the POS tags and the NE tags allows us to develop a simple model for jointly learning the two tasks. More specifically, we will use CRF++ to learn the joint model. Training and test instances are generated as described in the previous two subsections (i.e., one instance per word). The feature set will consist of the union of the features that were used to train the POS tagger and the NE tagger independently, minus the POS-related features that were used in the NE tagger. The class value of an instance is computed as follows. If a word is not a proper noun, its class is simply its POS tag. Otherwise, its class is its NE tag, which can be PER, ORG, LOC, or OTHERS. In other words, our joint model exploits the observation that we made earlier in the section by assuming that only proper nouns can be part of a named entity. This allows us to train a joint model without substantially increasing the number of classes. We again evaluate our joint model using 5-fold CV experiments. The NE results of the model are shown in Table Finally, to better understand the value of the induced affix features in the joint model as well as the pipelined model described in Section 6, we conducted an ablation experiment, in which we incorporated only the WIKI features into the baseline feature set. With pipelined modeling, the Fmeasure for NER is 68.87%, which is similar to the case where both induced affixes and the WIKI features are used. With joint modeling, however, the F-measure for NER is 70.87%, which is 1% lower than the best joint modeling score. These results provide suggestive evidence that the induced affix features play a significant role in the improved performance of the joint model. We have explored two types of linguistic features, namely the induced affix features and the Wikipedia-related features, to improve a Bengali POS tagger and NE recognizer. Our experimental results have demonstrated that (1) both types of features significantly improve a baseline POS tagger and (2) the Wikipedia-related features significantly improve a baseline NE recognizer. Moreover, by exploiting the limited dependencies between Bengali POS tags and NE tags, we proposed a new model for jointly learning the two tasks, which not only avoids the error-propagation problem present in the pipelined system architecture, but also yields statistically significant improvements over the NE recognizer that is trained independently of the POS tagger. When applied in combination, our three extensions contributed to a relative improvement of 7.5% in F-measure over the baseline NE recognizer. Most importantly, we believe that these extensions are of relevance and interest to the EACL community because many European and Middle Eastern languages resemble Bengali in terms of not only their morphological richness but also their scarcity of annotated corpora. We plan to empirically verify our belief in future work.
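A minimal sketch of how the joint model's class values could be derived from a word's POS tag and NE tag, following the rule described above. The proper-noun tag name ("NNP") is a stand-in, since the excerpt does not give the IIIT Hyderabad tagset's label for proper nouns, and the toy sentence uses English words for readability.

```python
def joint_label(pos_tag, ne_tag, proper_noun_tags=("NNP",)):
    """Class value for the joint POS+NER model: the POS tag for words that are
    not proper nouns, and the NE tag (PER, ORG, LOC or OTHERS) for proper nouns."""
    if pos_tag in proper_noun_tags:
        return ne_tag
    return pos_tag

# One training instance per word: (word, POS tag, NE tag) -> joint class value
sentence = [("Anna", "NNP", "PER"), ("visited", "VB", "OTHERS"), ("Dhaka", "NNP", "LOC")]
print([joint_label(pos, ne) for _, pos, ne in sentence])   # ['PER', 'VB', 'LOC']
```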
Local Structure Matters Most in Most Languages
Many recent perturbation studies have found unintuitive results on what does and does not matter when performing Natural Language Understanding (NLU) tasks in English. Coding properties, such as the order of words, can often be removed through shuffling without impacting downstream performance. Such insights may be used to direct future research into English NLP models. As many improvements in multilingual settings consist of wholesale adaptation of English approaches, it is important to verify whether those findings replicate in multilingual settings. In this work, we replicate a study on the importance of local structure, and the relative unimportance of global structure, in a multilingual setting. We find that the phenomenon observed for English broadly carries over to more than 120 languages, with a few caveats.
A recent research trend has explored the sensitivity, or insensitivity, of neural language models to different perturbations of texts One such coding property, the local structure of text, has recently been shown to be ubiquitously relied upon by both neural language models In this short paper, our contributions are as follows: • We adapt and replicate the findings of • We provide analysis for why text using Chinese Characters as its script may be more resilient to local perturbations and highlight the importance of testing improvements in English neural modeling in other languages.
We extend the perturbation studies of The CHRF-2 (chrF) The Index Displacement Count (IDC) The compression rate (Comp) The scholar is typesetting. We perform perturbations by altering the order of subwords and characters present in the text. Three types of perturbations are applied. Full shuffling completely randomizes the order of the subword or characters. Neighbor flipping flips a subword or character with its neighbor with a controllable probability ρ, providing local perturbations while maintaining much of the absolute position of the tokens. Phrase shuffling randomly builds phrases of subwords or characters of controllable average length with a parameter ρ and shuffles those phrases, providing a minimal amount of local perturbations for a large amount of change in absolute position. Simple examples of those perturbations are shown in Figure All experiments are conducted on three pretrained cross-lingual models. The XLM-RoBERTa-Base Canine-S The zero-shot cross-lingual setting The English version on which the model is finetuned is kept unperturbed, while the target language text on which the model is evaluated goes through several perturbations. We perform a total of 43 different perturbations on every task and language and obtain their performance. All models are finetuned on five different random seeds, and all perturbations are performed on five different random seeds, for a total of 25 evaluations for every model on every task, every language present in the tasks, and every perturbation setting. 1 A total of 8 cross-lingual tasks selected from the most popular cross-lingual benchmarks In Figure 2 Extractive tasks such as extractive QA are not compatible with our perturbations, as the answer would also be perturbed and were not considered. 3 As we use all 122 languages in the Tatoeba dataset, which vary from 100 to 1000 possible sentences to retrieve, the F1 score is more appropriate as an evaluation of performance than the accuracy used in the XTREME benchmark. Figure lingual setting. Specifically, the more local perturbations are applied to a text, the more degradation in the understanding of that text can be expected, which shows that model does rely on the local structure to build understanding. The perturbations to the global structure are shown to be a much poorer explanation for the degradation in performance than the perturbation to the local structure. The compression rate is highly correlated with a model's performance and the local structure, making it a potential confounder for the degradation in performance. However, the trend in local structure holds with subword-level perturbations, unlike with the compression rate, which is not affected by perturbations to the order of subwords, as well as holding for the vocabulary-free Canine model, as shown in Figure Figure Figure We first explored and confirmed the importance of local structure, the limited importance of global structure, and controlled for the potential of vocabulary destruction being the main explanatory factor in 8 NLU tasks covering over 120 languages. In aggregate, the findings of Languages using Chinese characters as their script also deviate from the norm. This is likely caused by how semantically rich their characters are. It will be important that any NLP improvements derived from English experiments are verified to also generalize to other languages. 
As we have observed that languages written in Chinese Character Script are impacted differently by perturbations to different coding properties, it is possible that improvements to the way our models understand those properties in English will not generalize. Model Hyperparameters and Training We finetune each pretrained model on the English version of each dataset for a total of 10 epochs, checkpointing the model after each epoch. The English version is never perturbed; the finetuning is done on unperturbed data. This finetuning is done 5 times with different random seeds for each model and each dataset. For 8 datasets and 3 models we have a total of 3 * 8 * 5 = 120 finetuning runs and 1200 checkpoints, one for each epoch. A learning rate of 2e-5, a batch size of 32 and a weight decay of 0.1 are used in all finetuning runs. All experiments used a warmup ratio of 0.06. For the evaluation, we perform the same perturbations on the validation and testing data of the different target languages. We evaluate the perturbed validation data on each of the 10 checkpoints, choose the best checkpoint on the perturbed validation data, and evaluate that checkpoint on the perturbed test data. This process is repeated for each perturbation, for each of the 5 finetuning random seeds, and 5 times with different perturbation random seeds for each finetuned model. In total, for each language in each task on each model for each perturbation setup, we average results over 25 random seeds. For the sentence retrieval tasks, such as Tatoeba, we do not perform any finetuning; we simply retrieve the nearest neighbour using cosine similarity on the final hidden representation. Perturbations A total of 43 perturbations are used for all experiments. The first one is the Benchmark, which is simply the unperturbed text. We perform full shuffling on both the subwords and characters. On the subword level we perform phrase shuffling with ρ values of [0.9, 0.8, 0.65, 0.5, 0.35, 0.2, 0.1] and neighbour-flip shuffling with ρ values of [0.9, 0.8, 0.6, 0.5, 0.4, 0.2, 0.1]. On the character level we perform phrase shuffling with ρ values of [0.975, 0.95, 0.9, 0.8, 0.65, 0.5, 0.4, 0.3, 0.2, 0.15, 0.1, 0.075, 0.05] and neighbour-flip shuffling with ρ values of [0.8, 0.65, 0.5, 0.4, 0.3, 0.2, 0.1, 0.075, 0.05, 0.035, 0.025, 0.01]. A total of 15 subword-level experiments and 27 character-level experiments, together with the unperturbed benchmark, are evaluated for a grand total of 43 different perturbation settings.
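As an illustration, here is a minimal sketch of the three perturbation types (full shuffling, neighbour flipping with probability ρ, and phrase shuffling with parameter ρ), operating on an already-tokenized list of subwords or characters. The exact way the original study samples phrase boundaries is not given in the excerpt, so treating ρ as the probability of extending the current phrase (larger ρ giving longer phrases and hence fewer local changes) is an assumption, and the function names are ours.

```python
import random

def full_shuffle(tokens, rng=random):
    out = list(tokens)
    rng.shuffle(out)
    return out

def neighbour_flip(tokens, rho, rng=random):
    """Swap each token with its right neighbour with probability rho,
    perturbing local order while keeping absolute positions mostly intact."""
    out = list(tokens)
    i = 0
    while i < len(out) - 1:
        if rng.random() < rho:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2          # avoid moving the same token twice in a row
        else:
            i += 1
    return out

def phrase_shuffle(tokens, rho, rng=random):
    """Group tokens into phrases and shuffle the phrases, perturbing global order
    with few local changes; rho controls the expected phrase length (assumed)."""
    phrases, current = [], []
    for tok in tokens:
        current.append(tok)
        if rng.random() > rho:      # close the current phrase
            phrases.append(current)
            current = []
    if current:
        phrases.append(current)
    rng.shuffle(phrases)
    return [tok for phrase in phrases for tok in phrase]

rng = random.Random(0)
chars = list("The scholar is typesetting.")
print("".join(neighbour_flip(chars, 0.5, rng)))
print("".join(phrase_shuffle(chars, 0.9, rng)))
```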
Arabic Morphology Generation Using a Concatenative Strategy
Arabic inflectional morphology requires infixation, prefixation and suffixation, giving rise to a large space of morphological variation. In this paper we describe an approach to reducing the complexity of Arabic morphology generation using discrimination trees and transformational rules. By decoupling the problem of stem changes from that of prefixes and suffixes, we gain a significant reduction in the number of rules required, as much as a factor of three for certain verb types. We focus on hollow verbs but discuss the wider applicability of the approach.
Morphologically, Arabic is a non-concatenative language. The basic problem with generating Arabic verbal morphology is the large number of variants that must be generated. Verbal stems are based on triliteral or quadriliteral roots (3-or 4-radicals). Stems are formed by a derivational combination of a root morpheme and a vowel melody; the two are arranged according to canonical patterns. Roots are said to interdigitate with patterns to form stems. For example, the Arabic stem katab (he wrote) is composed of the morpheme ktb (notion of writing) and the vowel melody morpheme 'a-a'. The two are coordinated according to the pattern CVCVC (C=consonant, V=vowel). There are 15 triliteral patterns, of which at least 9 are in common use, and 4 much rarer quadriliteral patterns. All these patterns undergo some stem changes with respect to voweling in the 2 tenses (perfect and imperfect), the 2 voices (active and passive), and the 5 moods (indicative, subjunctive, jussive, imperative and energetic). ~ The stem used in the conjugation of the verb may differ depending on the person, number, gender, tense, mood, and the presence of certain root consonants. Stem changes combine with suffixes in the perfect indicative (e.g., katab-naa 'we wrote', kutib-a 'it was written') and the imperative (e.g. uktub-uu 'write', plural), and with both prefixes and suffixes for the imperfect tense in the indicative, subjunctive, and jussive moods (e.g. ya-ktub-na 'they write, feminine plural') and in the energetic mood (e.g. ya-ktub-unna or ya-ktub-un 'he certainly writes'). There are a total of 13 person-number-gender combinations. Distinct prefixes are used in the active and passive voices in the imperfect, although in most cases this results in a change in the written form only if diacritic marks are used. 2 Most previous computational treatments of Arabic morphology are based on linguistic models that describe Arabic in a nonconcatenative way and focus primarily on analysis. (1983) two-level morphology. In In this paper, we propose a computational approach that applies a concatenative treatment to Arabic morphology generation by separating the issue of infixation from other inflectional variations. We are developing an Arabic morphological generator using MORPHE We generate Arabic words including short vowels and diacritic marks, since they are pedagogically useful and can always be stripped before display. Our approach seeks to reduce the number of rules for generating morphological variants of Arabic verbs by breaking the problem into two parts. We observe that, with the exception of a few verb types, there is very little interaction between stem changes and the processes of prefixation and suffixation. It is therefore possible to decouple, in large part, the problem of stem changes from that of prefixes and suffixes. The gain is a significant reduction in the size number of transformational rules, as much as a factor of three for certain verb classes. This improves the space efficiency of the system and its maintainability by reducing duplication of rules, and simplifies the rules by isolating different types of changes. To illustrate our approach, we focus on a particular type of verbs, termed hollow verbs, and show how we integrate their treatment with that of more regular verbs. We also discuss how the approach can be extended to other classes of verbs and other parts of speech.
Verb roots in Arabic can be classified as shown in Figure Strong verbs undergo systematic changes in stem voweling from the perfect to the imperfect. The first radical vowel disappears in the imperfect. Verbs whose middle radical vowel in the perfect is 'a' can change it to 'a' (e.g., qaTa'a 'he cut' -> yaqTa'u 'he cuts'), 4 'i' (e.g., Daraba 'he hit' -> yaDribu 'he hits'), or 'u' (e.g., kataba 'he wrote' -> yaktubu 'he writes') in the imperfect. Verbs whose middle radical vowel in the perfect is 'i' can only change it to 'a' (e.g., shariba 'he drank' -> yashrabu 'he drinks') or 'i' (e.g., Hasiba 'he supposed' -> yaHsibu 'he supposes'). Verbs with middle radical vowel 'u' in the perfect do not change it in the imperfect (e.g., Hasuna 'he was beautiful' -> yaHsunu 'he is beautiful'). For strong verbs, neither perfect nor imperfect stems change with person, gender, or number. Hollow verbs are those with a weak middle radical. In both perfect and imperfect tenses, the underlying stem is realized by two characteristic allomorphs, one short and one long, whose use depends on the person, number and gender. 3 Grammars of Arabic are not uniform in their classification of "hamzated" verbs, verbs containing the glottal stop as one of the radicals (e.g. [sa?a[] 'to ask'). We describe our approach to modeling strong and hollow verbs below, following a description of the implementation framework. MORPHE The choice of feature names and values, other than ROOT, which identifies the lexical item to be transformed, is entirely up to the user. The FVPs in a FS come from one of two sources. Static features, such as CAT (part of speech) and ROOT, come from the syntactic lexicon, which, in addition to the base form of words, can contain morphological and syntactic features. Dynamic features, such as TENSE and NUMBER, are set by MORPHE's caller. The Morphological Form Hierarchy. MORPHE is based on the notion of a morphological form hierarchy (MFH) or tree. Each internal node of the tree specifies a piece of the FS that is common to that entire subtree. The root of the tree is a special node that simply binds all subtrees together. The leaf nodes of the tree correspond to distinct morphological forms in the language. Each node in the tree below the root is built by specifying the parent of the node and the conjunction or disjunction of FVPs that define the node. Portions of the Arabic MFH are shown in Figures Transformational Rules. A rule attached to each leaf node of the MFH effects the desired morphological transformations for that node. A rule consists of one or more mutually exclusive clauses. The 'if' part of a clause is a regular expression pattern, which is matched against the value of the feature ROOT (a string). The 'then' part includes one or more operators, applied in the given order. Operators include addition, deletion, and replacement of prefixes, infixes, and suffixes. The output of the transformation is the transformed ROOT string. An example of a rule attached to a node in the MFH is given in Section 3.1 below. Process Logic. In generation, the MFH acts as a discrimination network. The specified FS is matched against the features defining each subtree until a leaf is reached. At that point, MORPHE first checks in the irregular forms lexicon for an entry indexed by the name of the leaf node (i.e., the MF) and the value of the ROOT feature in the FS. If an irregular form is not found, the transformation rule attached to the leaf node is tried. 
If no rule is found or none of the clauses of the applicable rule match, MORPHE returns the value of ROOT unchanged. Knowing the underlying root and its voweling is crucial for the determination of hollow verb stems, as described in Section 1. Knowing the pattern is also important in cases where it is unclear. For example, verbs of pattern CtVCVC insert a 't' after the first radical (e.g. ntaqat 'to move, change location', intransitive). With some consonants as first radicals, in order to facilitate pronunciation, the 't' undergoes a process of assimilation whose effects differ depending on the preceding consonant. For example, the pattern CtVCVC verb from zaHam 'to shove' instead of *ztaHarn is zdaHam 'to team'. It is also difficult to determine from just the string ntaqat whether this is pattern nCVCVC of the verb *taqat (if it existed) or pattern CtVCVC of naqat 'to transport, move', transitive). As a demonstration of our approach, we discuss the case of hollow verbs, whose characteristics were described in Section 1. Figure 8 Hamzated verbs changes are due to interactions with specific suffixes and are best dealt with in the prefixation and suffixation subtree. An example of such a rule, which changes the perfect stem to a short one for persons 1 and 2 both singular and plural, follows. (morph-rule v-stem-fl-act-perf-12 ("^%{cons}(awa)%{cons}$" (ri *i* "u")) ("^%{cons}(a[wy]i)%{cons}$" (ri *i* "i")) ("^%{cons}(aya)%{cons)$" (ri *i* "i"))) The syntax %{var} is used to indicate variables with a given set of values. Enclosing a string in parenthesis associates it with a numbered register, so the replace infix (ri) operator can access it for substitution. Figure The hollow verb subtree is not as small for the imperfect as it is for the perfect, since the stem depends not only on the mood but also on the person, gender, and number. It is still advantageous to decouple stem changes from prefixation and suffixation. Suffixes differ in the indicative and subjunctive moods; if the two types of changes were merged, the stem transformations would have to be repeated in each of the two moods and for each personnumber-gender combination. The same observation applies to stem changes in the passive voice as well. Significant replication of transformational rules that include stem changes makes the system bigger and harder to maintain in case of changes, particularly because each transformational rule needs to take into consideration the four different classes of hollow verbs. Consider again the example verb form zurtu 'I visited' and the feature structure (FS) given in Section 2. During generation, the featurevalue pair (CHG STEM) is added to the FS before the first call to MORPHE. Traversing the MFH shown in Figure This rule, currently simply appends "otu" to the string, and MORPHE returns the string "zurotu", where the 'o' denotes the diacritic "sukuun" or absence of vowel. This is the desired form for zurtu 'I visited'. In this paper so far we have focused on regular and hollow verbs of the pattern CVCVC. Here we examine how our approach applies to other verb types and other parts of speech. The two-step treatment of verbal inflection described in this paper is easily extended to the passive, to doubled radical and hamzated verbs, and to different patterns of strong and hollow verbs. In fact, since not all higher patterns are affected by the presence of a middle or weak radical (e.g. 
patterns CVCCV, CaaCVC, taCVCCVC and others), the subtrees for these patterns will be significantly less bushy than for pattern CVCVC. The two-step treatment also covers verbs with a weak first radical, especially the radical 'w', which is normally dropped in the active imperfect (e.g. perfect stem warad 'to come', imperfect stemrid-). ~° Alternatively, it can be placed in the 10 Exceptions to this rule exist (e.g. the verb waji[ 'to be afraid'), with imperfect stem -wjat-) but are rare and can be handled in MORPHE by placing the irregular stem in the syntactic lexicon and checking for it prior to calling MORPHE for stem changes. irregular lexicon, which MORPHE consults when it reaches a leaf node, prior to applying any of the transformational rules. Verbs with a weak third radical, including doubly or trebly weak verbs, are the most problematic since the stem changes interact heavily with the inflectional suffixes, and less is gained by trying to modify the stem separately. We are currently investigating this issue and the best way to treat it in MORPHE. The two-step approach to generating verbal morphology also presents advantages for the inflectional morphology of nouns and adjectives. In Arabic, the plural of many nouns, especially masculine nouns, is not formed regularly by suffixation. Instead, the stem itself undergoes changes according to a complex set of patterns (e.g. rajut 'man' pluralizes as rijaat 'men'), giving rise to socalled "broken plurals". The inflection of broken plurals according to case (nominative, genitive, accusative) and definiteness, however, is basically the same as the inflection The radical 'y' is largely not dropped or changed. of most masculine or feminine singular nouns. The same holds true for adjectives. Finally we note that our two-step approach can also be used to combine derivational and inflectional morphology for nouns and adjectives. Deverbal nouns and present and past participles can be derived regularly from each verb pattern (with the exception of deverbal nouns from pattern CVCVC). Relational or "nisba" adjectives are derived, with small variations, from nouns. Since these parts of speech are inflected as normal nouns and adjectives, we can perform derivational and inflectional morphology in two calls to MORPHE, much as we do stem change and prefix/suffix addition. We have presented a computational model that handles Arabic morphology generation concatenatively by separating the infixation changes undergone by an Arabic stem from the processes of prefixation and suffixation. Our approach was motivated by practical concerns. We sought to make efficient use of a morphological generation tool that is part of our standard environment for developing machine translation systems. The two-step approach significantly reduces the number of morphological transformation rules that must be written, allowing the Arabic generator to be smaller, simpler, and easier to maintain. The current implementation has been tested on a subset of verbal morphology including hollow verbs and various types of strong verbs. We are currently working on the other kinds of weak verbs: defective and assimilated verbs. Other categories of words can be handled in a similar manner, and we will turn our attention to them next.
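To make the two-step treatment concrete, the following sketch reproduces in Python the hollow-verb example discussed earlier (surface form zurtu 'I visited', assuming the underlying perfect stem is represented as zawar): step one applies a stem-change rule analogous to v-stem-fl-act-perf-12, and step two appends the person-number suffix, with 'o' denoting sukuun as in the paper. The consonant class and the suffix table are simplified, hypothetical stand-ins; MORPHE's own rule syntax and coverage are not reproduced here.

```python
import re

CONS = "[btvjHxd*rzs$SDTZEgfqklmnhwy']"   # stand-in consonant class (Buckwalter-like)

# Step 1: stem change, decoupled from affixation (cf. rule v-stem-fl-act-perf-12).
STEM_RULES = [
    (re.compile(f"^({CONS})awa({CONS})$"), r"\1u\2"),    # zawar -> zur
    (re.compile(f"^({CONS})a[wy]i({CONS})$"), r"\1i\2"),
    (re.compile(f"^({CONS})aya({CONS})$"), r"\1i\2"),
]

def shorten_perfect_stem(stem):
    """Return the short perfect stem used with 1st/2nd person suffixes."""
    for pattern, repl in STEM_RULES:
        if pattern.match(stem):
            return pattern.sub(repl, stem)
    return stem            # strong verbs keep their stem unchanged

# Step 2: prefixation/suffixation, applied to the (possibly changed) stem.
# Tiny illustrative subset of the 13 person-number-gender combinations.
PERFECT_SUFFIXES = {("1", "sg"): "otu", ("1", "pl"): "onaa"}

def perfect_form(stem, person, number):
    return shorten_perfect_stem(stem) + PERFECT_SUFFIXES[(person, number)]

print(perfect_form("zawar", "1", "sg"))   # zurotu   ('I visited')
print(perfect_form("katab", "1", "sg"))   # katabotu ('I wrote')
```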
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding
Recently, various neural models for multiparty conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction. However, these existing methods on MPC usually represent interlocutors and utterances individually and ignore the inherent complicated structure in MPC which may provide crucial interlocutor and utterance semantics and would enhance the conversation understanding process. To this end, we present MPC-BERT, a pre-trained model for MPC understanding that considers learning who says what to whom in a unified model with several elaborated self-supervised tasks. Particularly, these tasks can be generally categorized into (1) interlocutor structure modeling including reply-to utterance recognition, identical speaker searching and pointer consistency distinction, and (2) utterance semantics modeling including masked shared utterance restoration and shared node detection. We evaluate MPC-BERT on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on all three downstream tasks at two benchmarks.
Building a conversational agent with intelligence has drawn significant attention from both academia and industry. Most of existing methods have studied understanding conversations between two participants, aiming to return an appropriate response either in a generation-based 2015; An instance of MPC always contains complicated interactions between interlocutors, between utterances and between an interlocutor and an utterance. Therefore, it is challenging to model the conversation flow and fully understand the dialogue content. Existing studies on MPC learn the representations of interlocutors and utterances with neural networks, and their representation spaces are either separate On account of above issues, we propose MPC-BERT which jointly learns who says what to whom in MPC by designing self-supervised tasks for PLMs, so as to improve the ability of PLMs on MPC understanding. Specifically, the five designed tasks includes reply-to utterance recognition, identical speaker searching, pointer consistency distinction, masked shared utterance restoration and shared node detection. The first three tasks are designed to model the interlocutor structure in MPC in a semantics-to-structure manner. In the output of MPC-BERT, an interlocutor is described through the encoded representations of the utterances it says. Thus, the representations of utterance semantics are utilized to construct the conversation structure in these three tasks. On the other hand, the last two tasks are designed to model the utterance semantics in a structure-to-semantics manner. Intuitively, the conversation structure influences the information flow in MPC. Thus, the structure information can also be used to strengthen the representations of utterance semantics in return. In general, these five self-supervised tasks are employed to jointly train the MPC-BERT in a multi-task learning framework, which helps the model to learn the complementary information among interlocutors and utterances, and that between structure and semantics. By this means, MPC-BERT can produce better interlocutor and utterance representations which can be effectively generalized to multiple downstream tasks of MPC. To measure the effectiveness of these selfsupervised tasks and to test the generalization ability of MPC-BERT, we evaluate it on three downstream tasks including addressee recognition, speaker identification and response selection, which are three core research issues of MPC. Two benchmarks based on Ubuntu IRC channel are employed for evaluation. One was released by In summary, our contributions in this paper are three-fold: (1) MPC-BERT, a PLM for MPC understanding, is proposed by designing five selfsupervised tasks based on the interactions among utterances and interlocutors. (2) Three downstream tasks are employed to comprehensively evaluate the effectiveness of our designed self-supervised tasks and the generalization ability of MPC-BERT. (3) Our proposed MPC-BERT achieves new state-ofthe-art performance on all three downstream tasks at two benchmarks.
Existing methods on building dialogue systems can be generally categorized into studying twoparty conversations and multi-party conversations (MPC). In this paper, we study MPC. In addition to predicting utterances, identifying the speaker and recognizing the addressee of an utterance are also important tasks for MPC. Generally speaking, previous studies on MPC cannot unify the representations of interlocutors and utterances effectively. Also, they are limited to each individual task, ignoring the complementary information among different tasks. To the best of our knowledge, this paper makes the first attempt to design various self-supervised tasks for building PLMs aiming at MPC understanding, and to evaluate the performance of PLMs on three downstream tasks as comprehensively as possible. An MPC instance is composed of a sequence of (speaker, utterance, addressee) triples, denoted as {(s n , u n , a n )} N n=1 , where N is the number of turns in the conversation. Our goal is to build a pre-trained language model for universal MPC understanding. Given a conversation, this model is expected to produce embedding vectors for all utterances which contain not only the semantic information of each utterance, but also the speaker and addressee structure of the whole conversation. Thus, it can be effectively adapted to various downstream tasks by fine-tuning model parameters. In this paper, BERT We first give an overview of the input representations and the overall architectures of MPC-BERT. When constructing the input representations, in order to consider the speaker information of each utterance, speaker embeddings The first three tasks follow the semantics-tostructure manner. In MPC-BERT, each interlocutor is described through the encoded representations of the utterances it says. Thus, the representations of utterance semantics are utilized to construct the conversation structure. Figure To enable the model to recognize the addressee of each utterance, a self-supervised task named replyto utterance recognition (RUR) is proposed to learn which preceding utterance the current utterance replies to. After encoded by BERT, we extract the contextualized representations for each [CLS] token representing individual utterances. Next, a non-linear transformation followed by a layer normalization are performed to derive the utterance representations for this specific task {u rur i } N i=1 , where u rur i ∈ R d and d = 768. Then, for a specific utterance U i , its matching scores with all its preceding utterances are calculated as where A rur ∈ R d×d is a linear transformation, m ij denotes the matching degree of U j being the replyto utterance of U i , and 1 ≤ j < i. We construct a set S by sampling a certain number of utterances in a conversation and this recognition operation is performed for each utterance in S. Meanwhile, a dynamic sampling strategy is adopted so that models can see more samples. Finally, the pretraining objective of this self-supervised task is to minimize the cross-entropy loss as where y ij = 1 if U j is the reply-to utterance of U i and y ij = 0 otherwise. Having knowledge of who is the speaker of an utterance is also important for MPC. The task of identical speaker searching (ISS) is designed by masking the speaker embedding of a specific utterance in the input representation, and aims to predict its speaker given the conversation. 
Since the set of interlocutors vary across conversations, the task of predicting the speaker of an utterance is reformulated as searching for the utterances sharing the identical speaker. First, for a specific utterance, its speaker embedding is masked with a special [Mask] interlocutor embedding to avoid information leakage. Given the utterance representations for this specific task {u iss i } N i=1 where u iss i ∈ R d , the matching scores of U i with all its preceding utterances are calculated similarly with Eq. ( We design a task named pointer consistency distinction (PCD) to jointly model speakers and addressees in MPC. In this task, a pair of utterances representing the "reply-to" relationship is defined as a speaker-to-addressee pointer. Here, we assume that the representations of two pointers directing from the same speaker to the same addressee should be consistent. As illustrated in Figure Given the utterance representations for this specific task {u pcd i } N i=1 where u pcd i ∈ R d , we first capture the pointer information contained in each utterance tuple. The element-wise difference and multiplication between an utterance tuple (U i , U i ) are computed and are concatenated as where p ii ∈ R 2d . Then, we compress p ii and obtain the pointer representation pii as where W pcd ∈ R 2d×d and b pcd ∈ R d are parameters. Identically, a consistent pointer representations pjj and an inconsistent one pkk sampled from this conversation are obtained. The similarities between every two pointers are calculated as where m ij denotes the matching degree of pointer pii being consistent with pointer pjj . m ik can be derived accordingly. Finally, the pre-training objective of this task is to minimize the hinge loss which enforces m ij to be larger than m ik by at least a margin ∆ as Intuitively, the conversation structure might influence the information flow, so that it can be used to strengthen the representations of utterance semantics. Thus, two self-supervised tasks following the structure-to-semantics manner are designed. There are usually several utterances replying-to a shared utterance in MPC. Intuitively, a shared utterance is semantically relevant to more utterances in the context than non-shared ones. Based on this characteristic, we design a task named masked shared utterance restoration (MSUR). We first randomly sample an utterance from all shared utterances in a conversation and all tokens in this sampled utterance are masked with a [MASK] token. Then the model is enforced to restore the masked utterance given the rest conversation. Formally, assuming U i as the masked shared utterance and l i as the number of tokens in U i . Given the token representations for this task {u msur i,t } l i t=1 where u msur i,t ∈ R d , the probability distribution of each masked token can be calculated as where W msur ∈ R d×V is the token embedding table, V denotes the vocabulary size, and b msur ∈ R V is a bias vector. Finally, the pre-training objective of this self-supervised task is to minimize the negative log-likelihood loss as where p u i,t is the element in p u i,t corresponding to the original token. A full MPC instance can be divided into several sub-conversations and we assume that the representations of sub-conversations under the same parent node tend to be similar. 
As illustrated in Figure Under this assumption, we design a self-supervised task named shared node detection (SND), which utilizes the conversation structure to strengthen the capability of models on measuring the semantic relevance of two sub-conversations. We first construct the pre-training samples for this task. Empirically, only the sub-conversations under the top shared node in a conversation are collected in order to filter out the sub-conversations with few utterances. Given a full MPC, the two sub-conversations with the most utterances form a positive pair. For each positive pair, we replace one of its elements with another sub-conversation randomly sampled from the training corpus to form a negative pair. Formally, given two sub-conversations c i and c j , utterances in each sub-conversation are first concatenated respectively to form two segments. Then, the two segments are concatenated with a [SEP] token and a [CLS] token is inserted at the beginning of the whole sequence. This sequence are encoded by BERT to derive the contextualized representation for the [CLS] token. A non-linear transformation with sigmoid activation is further applied to this representation for calculating the matching score m ij , i.e., the probability of c i and c j sharing the same parent node. Finally, the pretraining objective of this task is to minimize the cross-entropy loss as (9) where y ij = 1 if c i and c j share the same parent node and y ij = 0 otherwise. In addition, we also adopt the tasks of masked language model (MLM) and next sentence prediction (NSP) in original BERT pre-training 4 Downstream Tasks Given a multi-party conversation where part of the addressees are unknown, , where ân is selected from the interlocutor set in this conversation and \ denotes exclusion. When applying MPC-BERT, this task is reformulated as finding a preceding utterance from the same addressee. Its RUR matching scores with all preceding utterances are calculated following Eq. (1). Then, the utterance with the highest score is selected and the speaker of the selected utterance is considered as the recognized addressee. Finally, the fine-tuning objective of this task is to minimize the crossentropy loss as where m ij is defined in Eq. ( This task aims to identify the speaker of the last utterance in a conversation. Formally, models are asked to predict ŝN given {(s n , u n , a n )} N n=1 \s N , where ŝN is selected from the interlocutor set in this conversation. When applying MPC-BERT, this task is reformulated as identifying the utterances sharing the same speaker. For the last utterance U N , its speaker embedding is masked and its ISS matching scores m N j with all preceding utterances are calculated following Section 3.2.2. The finetuning objective of this task is to minimize the cross-entropy loss as where y N j = 1 if U j shares the same speaker with U N and y N j = 0 otherwise. This task asks models to select ûN from a set of response candidates given the conversation context The key is to measure the similarity between two segments of context and response. We concatenate each response candidate with the context and extract the contextualized representation e [CLS] for the first [CLS] token using MPC-BERT. Then, e [CLS] is fed into a nonlinear transformation with sigmoid activation to obtain the matching score between the context and the response. 
Finally, the fine-tuning objective of this task is to minimize the cross-entropy loss according to the true/false labels of responses in the training set as where y = 1 if the response r is a proper one for the context c; otherwise y = 0. We evaluated our proposed methods on two Ubuntu IRC benchmarks. One was released by Ouchi and Tsuboi (2016) (2016). Here, we adopted the version shared in Non-pre-training-based models Ouchi and Tsuboi ( Pre-training-based models BERT The version of BERT-base-uncased was adopted for all our experiments. For pre-training, GELU The Adam method (Kingma and Ba, 2015) was employed for optimization. The learning rate was initialized as 0.00005 and the warmup proportion was set to 0.1. We pre-trained BERT for 10 epochs. The training set of the dateset used in All codes were implemented in the TensorFlow framework Addressee recognition We followed the metrics of previous work Table Len-10 Len-15 P@1 Acc. P@1 Acc. P@1 Acc. P@1 Acc. Preceding Speaker identification Similarly, P@1 was employed as the evaluation metric of speaker identification for the last utterance of a conversation and the results are shown in Table Response selection The R n @k metrics adopted by previous studies Len-10 Len-15 R 2 @1 R 10 @1 R 2 @1 R 10 @1 R 2 @1 R 10 @1 R 2 @1 R 10 @1 DRNN Figure In this paper, we present MPC-BERT, a pre-trained language model with five self-supervised tasks for MPC understanding. These tasks jointly learn who says what to whom in MPCs. Experimental results on three downstream tasks show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on two benchmarks.
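To make the reply-to utterance recognition (RUR) objective described above more concrete, here is a minimal PyTorch sketch. The excerpt omits the scoring equation itself, so the bilinear form u_i A^rur u_j followed by a softmax over preceding utterances, and the Tanh non-linearity in the task-specific transformation, are assumptions consistent with the surrounding description rather than MPC-BERT's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplyToScorer(nn.Module):
    """Sketch of the RUR head: a bilinear score between task-specific utterance
    representations, normalized over the preceding utterances."""
    def __init__(self, d=768):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.LayerNorm(d))
        self.A = nn.Linear(d, d, bias=False)          # plays the role of A^rur

    def forward(self, cls_states, i):
        u = self.transform(cls_states)                # (N, d) utterance vectors from [CLS] tokens
        scores = u[i] @ self.A(u[:i]).T               # one score per preceding utterance j < i
        return F.softmax(scores, dim=-1)              # m_ij over j = 1 .. i-1

def rur_loss(probs, gold_j):
    """Cross-entropy against the index of the true reply-to utterance."""
    return -torch.log(probs[gold_j] + 1e-9)

# toy usage: 5 utterances, utterance 4 replies to utterance 2
cls_states = torch.randn(5, 768)
scorer = ReplyToScorer()
probs = scorer(cls_states, i=4)
print(probs.shape, rur_loss(probs, gold_j=2).item())
```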
Multiple Tasks Integration: Tagging, Syntactic and Semantic Parsing as a Single Task
Departing from both sequential pipelines and monotask systems, we propose Multiple Tasks Integration (MTI), a multitask paradigm orthogonal to weight sharing. The essence of MTI is to process the input iteratively but concurrently at multiple levels of analysis, where each decision is based on all of the structures that are already inferred and free from usual ordering constraints. We illustrate MTI with a system that performs part-of-speech tagging, syntactic dependency parsing and semantic dependency parsing. We observe that both the use of reinforcement learning and the release from sequential constraints are beneficial to the quality of the syntactic and semantic parses. We also observe that our model adopts an easy-first strategy that consists, on average, of predicting shorter dependencies before longer ones, but that syntax is not always tackled before semantics.
Historically, Natural Language Processing (NLP) systems have generally been built as sequential pipelines, where each module adds another layer of annotation, in order of (supposed) increasing complexity. Progress in neural networks has, however, led to the development of state-of-the-art systems that completely bypass intermediate levels of analysis that were previously considered essential. For example, the system of Zhou and Xu (2015) performs Semantic Role Labeling (SRL) without such intermediate levels of analysis. A well-known problem of sequential systems is error propagation, which happens when an incorrect prediction at some point in the process leads to more incorrect predictions at a later stage. In traditional pipelines, one of the roots of error propagation lies in the fact that they feature a unidirectional flow of annotation between their different stages. End-to-end systems with no intermediate level of analysis, in contrast, are protected against one form of error propagation. Since such systems are not contingent on possibly faulty levels of analysis, they are free from the interference they could cause. However, the absence of any intermediate decisions - symbolic traces of the system's computation - raises questions about generalisation ability and interpretability. Departing from these two kinds of architectures, we propose Multiple Tasks Integration (MTI). The principles of MTI are (i) to let the system take actions pertaining to different levels of analysis without constraining their order, and (ii), at any given point of the process, to feed all layers of annotation as input for the next predictions. The different tasks can therefore fully interact with each other as if they were a single task. Our main contribution lies in an illustration of these principles with a system that performs part-of-speech (POS) tagging, syntactic dependency parsing and semantic dependency parsing (SDP), on English data. We have chosen these specific tasks not only for their strong interdependence but also for their generality: many other tasks in NLP (e.g. SRL, coreference resolution, relation extraction) can be reduced to labelling or bi-lexical dependency creation problems. We show that in this specific case, letting the system freely order its actions across all three tasks leads to better performance, and that this improvement concerns the syntactic layer as well as the semantic one, although to a lesser extent.
Fighting error propagation Since its apparition for speech recognition Other techniques are designed to help to fight error propagation of the second kind, which focus on the training of the system to make it robust to its own mistakes. One possibility consists in training the system to predict the next best action in any state it find itself in, instead of simply staying on the (errorless) gold path. Doing so requires the possibility for a next best action to be determined, which, while trivial in some cases such as POS tagging, is not in others, in particular for structured prediction. Hence the introduction of the notion of dynamic oracle by MultiTask Learning (MTL) The main idea behind MTL is that one can often increase the performance of a given neural-based system by sharing some of its weights with other systems trained to perform other tasks While MTL deals with tasks usually done rather independently, an alternative approach -which does not require weight sharing -is explored by Joint syntactic-semantic parsing In 2008 and 2009, the CoNLL shared tasks were focused on the joint parsing of syntactic and SRL-based semantic dependencies More recently, The SDP task, introduced for the SemEval workshop A wide range of techniques has been applied in the literature in order to tackle SDP. We define six types of actions. Relative to a token i, TAG-t corresponds to tagging i with POS tag t, SYN-j-l to creating a syntactic dependency labelled l from token j to i, ROOT to setting i as the (syntactic) root, SEM-j-l to creating a semantic dependency labelled l from token j to i, TOP PRED to setting i as a (semantic) top predicate and, finally, HALT to doing nothing. The inference process is summarised in Algorithm 1. At each time step s, the system first encodes the current state of the analysis into a sequence of one vector per token. These encodings contain information from the three different layers of annotation being built. Then, for each token i independently, its policy -a distribution of probability over all possible actions (TAG-t, SYN-j-l, etc.) -, π i,s , is computed. From each policy π i,s , an action a i,s is then selected and performed, thus enriching the annotation structure. Algorithm 1: Inference algorithm input: A sentence x. Initialise empty POS tag, syntactic and semantic annotation structures: (s tag , s syn , s sem ); s ←-0; continue ←-True; while continue = True do l ←encode(x, s tag , s syn , s sem ); apply a i on (s tag , s syn , s sem ); if ∀i, a i = HALT then continue ←-False; s ←-s + 1; return (s tag , s syn , s sem ) Note that this does not entail that analyses are three steps long, as (i) tokens can have more than a single semantic head and (ii) any token can decide to wait with a HALT action. We do not ensure that the syntactic (resp. semantics) structure computed is a tree (resp. DAG); we leave the implementation of relevant heuristics as a subject of future work. We do impose several constraints, however: (i) a token can be annotated with only one POS tag, (ii) there can be at most one root, (iii) there can be no incoming syntactic dependency on a root, (iv) there can be at most one syntactic (resp. semantic) dependency from token j to i. Our strategy is to always perform the selected action, overwriting possible incompatible previous annotations. We typically train our model with a supervised pretraining phase followed by a reinforcement learning (RL) phase. 
During the pre-training phase, the loglikelihood of the model on goldish sequences of actions is maximised: for each token of a sentence, we generate a sequence of actions of minimum length leading to the gold annotation, randomly permute it (each time with a different permutation) and pad it with HALT so as to match the length of the longest sequence (over the sentence), before a final HALT is added. For example, for a token tagged with the POS tag of id 7, syntactically dependent (label of id 12) on the token 3, and that is neither a top predicate nor semantically dependent on any token, one of the two goldish sequences is In relation to the reinforcement process, each action a i,s is associated with a reward r(a i,s , i, s). The role of RL is to train the model to maximise the expected sum of rewards J = E(R), where R = s R s and R s = i r(a i,s , i, s). We compute the rewards in the following way. Let #pos be the number of POS annotations (which is also the number of tokens) in the training set, while N is the number of sentences. We then define r pos = N #pos . The creation of a correct POS annotation and the suppression of an incorrect one (which can happen by overwriting) both correspond to a reward of r pos , while the creation of an incorrect POS annotation and the suppression of a correct one correspond to a reward of -r pos . Syntactic and semantic dependencies are treated similarly, with syntactic root (resp. semantic top predicate) annotations counted as a virtual syntactic (resp. semantic) dependencies. As a consequence, the construction of the full gold structure corresponds on average to a reward of 3 per sentence, equally balanced across the three layers. The reward associated to a given action is then computed as the sum of the reward of its effects, minus a small constant negative penalty in the case of non HALT actions (set at a tenth of the average reward per token in the training set) aimed at discouraging the model from loitering. The RL algorithm we use to optimise our model is a modification of REINFORCE where n is the length of the sentence. 4 The direction of the parameters update for a given episode is then the one obtained from REINFORCE summed over all tokens: where b i,s is the baseline term. 5 We use a state value baseline, which is trained by minimising its squared error with the observed return: The policy and baseline parameters updates are weighted with coefficients 0.67 and 0.33 respectively. Note that we update the baseline term also during the pre-training phase, using the rewards obtained following the goldish sequences. Finally, we additionally maximise the entropy of each policy with a coefficient 0.002. Optimisation is done using Adam (Kingma and Ba, 2015). The search for learning rates has been done manually, optimising the semantic F1 on the development set using the DM formalism. 6 We first found that a learning rate of 5.10 -4 during pretraining gave satisfying results (the other optimiser parameters are left as set by default in TensorFlow, i.e. β 1 = 0.9, β 2 = 0.999 and = 10 - 5 The baseline term is not strictly necessary: its goal is to reduce the variance of the estimate of the gradient in order to speed the learning process We first define a base encoding for each token, composed of a 100-dimensional pre-trained GloVe word embedding For the output of the syntax encoder, we use the bothward encodings of the syntactic graph while for the output of the semantic encoder we use only the upward encodings of the semantic graph. 
The output of the second layer of the token representation module is sent (i) to a multilayer perceptron (MLP) that computes the state value b i,t (the baseline term) and (ii) to different sub-networks that compute the logits of the actions (i.e. the values from which the probabilities are obtained by applying softmax), which are described in the next section. 8 See the text for the definition of the syntax and semantics encoder. The logit of each action is computed by one of three sub-networks. The first one is an MLP that returns the logits for TAG-t actions, ROOT, TOP PRED and HALT. The second returns the logits for SYN-j-l actions: the logit, for token i, to select token j as governor, for all possible dependency labels is given by MLP([v i , v j , v i,j ]) ∈ R |L| , where |L| is the number of dependency labels and v i,j ∈ B |L|+1 is a one-hot vector indicating whether j is currently governor of i and if so, what is the label of the corresponding dependency. The last sub-network returns the logits for SEM-j-l actions in exactly the same way. The policy is then obtained by applying the softmax function to the concatenation of the output of these different sub-networks. To test whether MTI is a viable paradigm and determine the impact of RL, in this section we test the model described above along with three variants. These four models can be seen as combinations of two binary traits. The first trait pertains to the ordering of the actions: sequential models simulate sequential pipelines, while for free models, as above, no particular constraint is imposed. A sequential model can only select TAG-t actions during the first step of an episode, only SYN-j-l and ROOT during the second and only SEM-j-l, TOP PRED and HALT afterwards. This is ensured by using as policy for each token the vector obtained by normalising a masked version of its usual policy. We use data from the SemEval 2015 Task 18 Table Let us turn to the performance of our model on the syntactic parsing and POS tagging tasks in the four settings. Table As shown in Table Note that while we do not use the same split of the WSJ Corpus as them, a comparison with the 91.87 LAS on syntax and 96.92 accuracy on POS tagging obtained by the model of In order to better understand why our MTI paradigm leads to better performance not only on the syntactic task but also on the semantic one, we now perform a brief study of the ordering strategy inferred by our free+RL model. We focus here on the SYN-j-l and SEM-j-l actions and look at the average time step at which they are created as a function of their length. Finally, table our model is the only one that does not rely on the provided POS tags at inference time. We have defined Multiple Tasks Integration as a set of principles for joint processing, orthogonal to weight sharing. The essence of MTI is to process the input iteratively but concurrently on multiple levels of analysis, basing each decision on all of the structures already inferred and free from usual ordering constraints. This way, the different tasks can interact in the full sense of the term. To train such a system, we propose using reinforcement learning algorithms, thus allowing it to infer its own ordering strategy. In practice, we have trained a system to perform part-of-speech tagging, syntactic dependency parsing and semantic dependency parsing. We have observed that both the use of reinforcement learning and the release from sequential constraints are beneficial, not only to the (seemingly) highest level task (i.e. 
semantic parsing), but also to some intermediate ones (i.e. syntactic parsing). If the inferred strategies are interpreted as being easy-first -which is supported by the fact that shorter dependencies have a strong tendency to be generated before longer ones -, then we have observed that syntactic parsing is not necessarily simpler than semantic parsing and that both benefit from being executed concurrently. While our model is not yet as effective as today's most complex systems, it is still competitive with most of the parsers presented in the recent literature, even though it uses a poorer input signal for inference (consisting of the raw tokens only). Furthermore, several aspects of the current system are open to developments that seem likely to improve performance. For instance, we do not use here the full potential of reinforcement learning, as what we optimise (the expected sum of rewards) is not the metric we are interested in (which would be either an average of the three F1 or the semantic one). For each of the three tasks, the sum of the rewards we have defined approaches, up to a multiplicative constant, the corresponding F1 of the system when the latter approaches 100%, but a better approximation might prove more successful. In a similar vein, Kurita and Søgaard (2019), who work with a similar architecture as far as SDP is concerned, penalise HALT actions when relevant dependencies are still missing, which intuitively boosts recall. Note also that our network is still rather simple, in that it does not use any form of regularisation nor any advanced technique to handle out-of-vocabulary words. We would also be interested in applying MTI to other tasks and in studying how well it can learn from incomplete annotations. Our architecture can be straightforwardly adapted to any labelling or graph building task, as long as all nodes are tokens of the input sentence. In contrast, work remains to be done in order to handle formalisms such as Abstract Meaning Representation (AMR, Banarescu et al., or on how to integrate a generation component with, for example, the goal of translating the sentence being analysed. Yu Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127-1137, Beijing, China. Association for Computational Linguistics. A Appendix: Base encodings The base encoding of a given token (introduced in Section 3.3) is composed of the concatenation of four vectors and five binary values. • The GloVe embeddings that we use are the 100-dimensional vectors of the 6B (uncased) release. We do not fine-tune them. All words present in the training set use their corresponding GloVe entry. All other words are considered unknown and are assigned the average of these embeddings. • For POS tag embeddings, we use randomly initialised 50-dimensional vectors. • We use a sum of prefix embeddings. We first consider all cased prefixes of length 1, 2 or 3 and then filter out all those that appear in less that a thousandth of the tokens and less that a thousandth of the word forms in the training set. The remaining prefixes are assigned a randomly initialised 32-dimensional vector. (Unknown prefixes correspond to a zero vector.) • We use a sum of similar suffix embeddings. • One binary value indicates whether the token starts with an upper case letter. 
• One binary value indicates whether there is any upper case letter in the token. • One binary value indicates whether the token is a number (matching the \d+(\.\d+)? regular expression). • One binary value indicates whether the token is currently annotated as the (syntactic) root. • One binary value indicates whether the token is currently annotated as a (semantic) top predicate. In future work, we plan to substitute a more general character-level word embedding model for the tagging features (i.e., the prefix and suffix embeddings and the first three binary features).
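As a rough illustration of how such a base encoding might be assembled, the sketch below concatenates the four vectors and five binary values described above; the lookup tables (glove, pos_emb, prefix_emb, suffix_emb) and the function signature are hypothetical stand-ins, not the authors' implementation.

```python
import re
import numpy as np

def base_encoding(token, pos_tag, glove, pos_emb, prefix_emb, suffix_emb,
                  is_root=False, is_top=False):
    """Concatenate the four vectors and five binary values described above.
    All lookup tables are hypothetical dicts; dimensions follow the text."""
    # 100-d GloVe vector; words unseen in training fall back to the average
    # of all embeddings, stored here under a hypothetical "<unk>" key.
    w = glove.get(token.lower(), glove["<unk>"])
    # 50-d randomly initialised POS-tag embedding.
    p = pos_emb[pos_tag]
    # Sums of 32-d embeddings for cased prefixes/suffixes of length 1, 2, 3;
    # affixes filtered out during training map to zero vectors.
    pre = sum(prefix_emb.get(token[:k], np.zeros(32)) for k in (1, 2, 3))
    suf = sum(suffix_emb.get(token[-k:], np.zeros(32)) for k in (1, 2, 3))
    flags = np.array([
        float(token[:1].isupper()),                        # starts upper-cased
        float(any(c.isupper() for c in token)),            # any upper-case letter
        float(bool(re.fullmatch(r"\d+(\.\d+)?", token))),  # looks like a number
        float(is_root),                                    # current syntactic root
        float(is_top),                                     # current semantic top predicate
    ])
    return np.concatenate([w, p, pre, suf, flags])
```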
882
2,452
882
Using Second-order Vectors in a Knowledge-based Method for Acronym Disambiguation
In this paper, we introduce a knowledge-based method to disambiguate biomedical acronyms using second-order co-occurrence vectors. We create these vectors using information about a long-form obtained from the Unified Medical Language System and Medline. We evaluate this method on a dataset of 18 acronyms found in biomedical text. Our method achieves an overall accuracy of 89%. The results show that using second-order features provides a distinct representation of the long-form and potentially enhances automated disambiguation.
Word Sense Disambiguation (WSD) is the task of automatically identifying the appropriate sense of a word with multiple senses. For example, the word culture could refer to anthropological culture (e.g., the culture of the Mayan civilization), or a laboratory culture (e.g., cell culture). Acronym disambiguation is the task of automatically identifying the contextually appropriate long-form of an ambiguous acronym. For example, the acronym MS could refer to the disease Multiple Sclerosis, the drug Morphine Sulfate, or the state Mississippi, among others. Acronym disambiguation can be viewed as a special case of WSD, although, unlike terms, acronyms tend to be complete phrases or expressions, therefore collocation features are not as easily identified. For example, the feature rate when disambiguating the term interest, as in interest rate, may not be available. Acronyms also tend to be noun phrases, therefore syntactic features do not provide relevant information for the purposes of disambiguation. Identifying the correct long-form of an acronym is important not only for the retrieval of information but also for the understanding of the information by the recipient. In general English, In the biomedical sublanguage domain, acronym disambiguation is an extensively studied problem. Supervised and semi-supervised methods have been used successfully for acronym disambiguation but are limited in scope due to the need for sufficient training data. In this paper, we introduce a novel knowledge-based method to disambiguate acronyms using second-order co-occurrence vectors. This method does not rely on training data, and therefore, is not limited to disambiguating only commonly occurring possible long-forms. These vectors are created using the first-order features obtained from the UMLS about the acronym's long-forms and second-order features obtained from Medline. We show that using second-order features provides a distinct representation of the long-form for the purposes of disambiguation and obtains a significantly higher disambiguation accuracy than using first-order features.
The Unified Medical Language System (UMLS) is a data warehouse that stores a number of distinct biomedical and clinical resources. One such resource, used in this work, is the Metathesaurus. The Metathesaurus contains biomedical and clinical concepts from over 100 disparate terminology sources that have been semi-automatically integrated into a single resource containing a wide range of biomedical and clinical information. For example, it contains the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT), which is a comprehensive clinical terminology created for the electronic exchange of clinical health information, the Foundational Model of Anatomy (FMA), which is an ontology of anatomical concepts created specifically for biomedical and clinical research, and MEDLINEPLUS, which is a terminology source containing health related concepts created specifically for consumers of health services. The concepts in these sources can overlap. For example, the concept Autonomic nerve exists in both SNOMED CT and FMA. The Metathesaurus assigns the synonymous concepts from the various sources a Concept Unique Identifier (CUI). Thus both the Autonomic nerve concepts in SNOMED CT and FMA are assigned the same CUI (C0206250). This allows multiple sources in the Metathesaurus to be treated as a single resource. Some sources in the Metathesaurus contain additional information about the concept such as a concept's synonyms, its definition and its related concepts. There are two main types of relations in the Metathesaurus that we use: the parent/child and broader/narrower relations. A parent/child relation is a hierarchical relation between two concepts that has been explicitly defined in one of the sources. For example, the concept Splanchnic nerve has an is-a relation with the concept Autonomic nerve in FMA. This relation is carried forward to the CUI level, creating a parent/child relation between the CUIs C0037991 (Splanchnic nerve) and C0206250 (Autonomic nerve) in the Metathesaurus. A broader/narrower relation is a hierarchical relation that does not explicitly come from a source but is created by the UMLS editors. We use the entire UMLS including the RB/RN and PAR/CHD relations in this work. Medline (Medical Literature Analysis and Retrieval System Online) is a bibliographic database containing over 18.5 million citations to journal articles in the biomedical domain which is maintained by the National Library of Medicine (NLM). The 2010 Medline Baseline, used in this study, encompasses approximately 5,200 journals starting from 1948 and is 73 gigabytes, containing 2,612,767 unique unigrams and 55,286,187 unique bigrams. The majority of the publications are scholarly journals, but a small number of newspapers and magazines are included. Existing acronym disambiguation methods can be classified into two categories: form-based and context-based methods. Form-based methods, such as the methods proposed by In contrast, context-based methods disambiguate between acronyms based on the context in which the acronym is used with the assumption that the context surrounding the acronym would be different for each of the possible long-forms. In the remainder of this section, we discuss these types of methods in more detail. Many knowledge-based WSD methods have been developed to disambiguate terms, and these are closely related to the work presented in this paper.
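To make the use of these relations concrete, the following sketch gathers a concept's related CUIs and their definitions from hypothetical PAR/CHD and RB/RN relation tables; the data layout and function names are illustrative and not the actual UMLS API.

```python
def related_cuis(cui, par_chd, rb_rn):
    """Collect the CUIs linked to `cui` by parent/child (PAR/CHD) and
    broader/narrower (RB/RN) relations.  Both arguments are hypothetical
    dicts mapping a CUI to a list of related CUIs."""
    return set(par_chd.get(cui, [])) | set(rb_rn.get(cui, []))

def gather_definitions(cui, definitions, par_chd, rb_rn):
    """Return the concept's own definition plus those of its related
    concepts (used later to build an 'extended definition')."""
    texts = [definitions.get(cui, "")]
    texts += [definitions.get(r, "") for r in related_cuis(cui, par_chd, rb_rn)]
    return [t for t in texts if t]

# Toy example mirroring the Splanchnic/Autonomic nerve relation above:
# par_chd = {"C0037991": ["C0206250"]}
# gather_definitions("C0037991", definitions, par_chd, rb_rn={})
```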
Second-order co-occurrence vectors were first introduced by In our method, a second-order co-occurrence vector is created for each possible long-form of the acronym, and the acronym itself. The appropriate long-form of the acronym is then determined by computing a cosine between the vector representing the ambiguous acronym and each of the vectors representing the long-forms. The long-form whose vector has the smallest angle between it and the acronym vector is chosen as the most likely long-form of the acronym. To create a second-order vector for a long-form, we first obtain a textual description of the long-form in the UMLS, which we refer to as the extended definition. Each long-form, from our evaluation set, was mapped to a concept in the UMLS; therefore, we use the long-form's definition plus the definitions of its parent/child and broader/narrower relations and the terms in the long-form. We include the definitions of the related concepts because not all concepts in the UMLS have a definition. In our evaluation dataset, not a single acronym has a definition for each possible long-form. On average, each extended definition contains approximately 453 words. A short example of the extended definition for the acronym FDP when referring to fructose diphosphate is: "Diphosphoric acid esters of fructose. The fructose diphosphate isomer is most prevalent. fructose diphosphate." After the extended definition is obtained, we create the second-order vector by first creating a word-by-word co-occurrence matrix in which the rows represent the content words in the long-form's extended definition, and the columns represent words that co-occur in Medline abstracts with the words in the definition. Each cell in this matrix contains the Log Likelihood Ratio For example, given the example corpus containing two instances: 1) The metabolites, glucose fructose and their phosphoric acid esters are changed due to the effect of glycolytic enzymes, and 2) The phosphoric acid combined with metabolites decreases the intensity. Figure The second-order co-occurrence vector for the ambiguous acronym is created in a similar fashion, only rather than using words in the extended definition, we use the words surrounding the acronym in the instance. Vector methods are subject to noise introduced by features that do not distinguish between the different long-forms of the acronym. To reduce this type of noise, we select the features to use in the second-order co-occurrence vectors based on the following criteria: 1) a second-order feature cannot be a stopword, and 2) a second-order feature must occur at least twice in the feature extraction dataset and not occur more than 150 times. We also experiment with the location of the second-order feature with respect to the first-order feature by varying the window size to zero, four, six and ten words to the right and the left of the first-order feature. The experiments in this paper were conducted using CuiTools v0.15. We evaluated our method on the "Abbrev" dataset A sufficient number of instances were not found for each of the 21 ambiguous acronyms by We evaluate our method on the same subsets that We use abstracts from Medline, containing the ambiguous acronym or long-form, to create the second-order co-occurrence vectors for our method as described in Section 6. Table Table We compare the results using second-order vectors to first-order vectors.
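A simplified sketch of the second-order construction and the cosine-based selection is given below, assuming a hypothetical lookup cooc_rows that maps a word to its LLR-weighted Medline co-occurrence row.

```python
import numpy as np

def second_order_vector(words, cooc_rows, dim):
    """Average the first-order co-occurrence rows of the given content words
    (each row is an LLR-weighted Medline co-occurrence vector) to obtain a
    second-order vector; words without a row are simply skipped."""
    rows = [cooc_rows[w] for w in words if w in cooc_rows]
    return np.mean(rows, axis=0) if rows else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def disambiguate(context_words, long_form_def_words, cooc_rows, dim):
    """Pick the long-form whose extended-definition vector has the largest
    cosine (smallest angle) with the vector built from the acronym's context."""
    acro_vec = second_order_vector(context_words, cooc_rows, dim)
    scores = {lf: cosine(acro_vec, second_order_vector(words, cooc_rows, dim))
              for lf, words in long_form_def_words.items()}
    return max(scores, key=scores.get), scores
```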
Table The results in Table 2 also show that, as the window size grows from zero to six, the accuracy of the system increases and plateaus at a window size of ten. There is no statistically significant difference between using a window size of six and ten but there is a significant difference between a window size of zero and six, as well as four and six (p ≤ 0.01). Table Of the 18 acronyms, three obtain an accuracy below 80 percent: FDP, MCP and PCA. FDP has four possible long-forms: Fructose Diphosphate (E1), Formycin Diphosphate (E2), Fibrinogen Degradation Product (E3) and Flexor Digitorum Profundus (E4). The confusion matrix in Table Of the previously developed methods, (2004) because we do not have an exact duplication of the dataset that they use. Their results are comparable to In this paper, we presented a novel method to disambiguate acronyms in biomedical text using second-order features extracted from the UMLS and Medline. The results show that using second-order features provides a distinct representation of the long-form that is useful for disambiguation. We believe that this is because biomedical text contains technical terminology that has a rich source of co-occurrence information associated with it due to its compositionality. Using second-order information works reasonably well because when the terms in the extended definition are broken up into their individual words, information is not being lost. For example, the term Patient Controlled Analgesia can be understood by taking the union of the meanings of the three terms and coming up with an appropriate definition of the term (patient has control over their analgesia). We evaluated various window sizes to extract the second-order co-occurrence information from, and found that using locally occurring words obtains a higher accuracy. This is consistent with the finding reported by The amount of data used to extract the second-order features for each ambiguous acronym varied depending on its occurrence in Medline. Table We compared using second-order features and first-order features, showing that the second-order results obtained a significantly higher accuracy. We believe that this is because the definitions of the possible concepts are too sparse to provide enough information to distinguish between them. This finding coincides with that of The results of the error analysis indicate that for some acronyms using the extended definition does not provide sufficient information to make finer-grained distinctions between the long-forms. This result also indicates that, although many long-forms of acronyms can be considered coarse-grained senses, this is not always the case. For example, the analysis of MCP showed that two of its possible long-forms are proteins, which are difficult to differentiate given the context. The results of the error analysis also show that indicative collocation features for acronyms are not easily identified because acronyms tend to be complete phrases. For example, two of the possible long-forms of FDP are Fructose Diphosphate and Formycin Diphosphate. Two main limitations of this work must be mentioned to facilitate the interpretation of the results. The first is the small number of acronyms and the small number of long-forms per acronym in the dataset; however, the acronyms in this dataset are representative of the kinds of acronyms one would expect to see in biomedical text. The second limitation is that the dataset contains only those acronyms whose long-forms were found in Medline abstracts.
The main goal of this paper was to determine if the context found in the long-forms' extended definitions was distinct enough to distinguish between them using second-order vectors. For this purpose, we feel that the dataset was sufficient, although a more extensive dataset may be needed in the future for improved coverage. In future work, we plan to explore three different avenues. The first avenue is to look at obtaining contextual descriptions of the possible long-forms from resources other than the UMLS, such as the MetaMapped Medline baseline and WordNet. The second avenue is limiting the features that are used in the instance vectors. The first-order features in the instance vector contain the words from the entire abstract. As previously mentioned, vector methods are subject to noise; therefore, in the future we plan to explore using only those words that are co-located next to the ambiguous acronym. The third avenue is expanding the vector to allow for terms. Currently, we use word vectors; in the future, we plan to extend the method to use terms, as identified by the UMLS, as features rather than single words. We also plan to test our approach in the clinical domain. We believe that acronym disambiguation may be more difficult in this domain due to the increased number of long-forms, as seen in the datasets used by Our study constitutes a significant step forward in the area of automatic acronym ambiguity resolution, as it will enable the incorporation of scalable acronym disambiguation into NLP systems used for indexing and retrieval of documents in specialized domains such as medicine. The advantage of our method over previous methods is that it does not require manually annotated training data for each acronym to be disambiguated while still obtaining an overall accuracy of 89%.
531
2,095
531
OssCSE: Overcoming Surface Structure Bias in Contrastive Learning for Unsupervised Sentence Embedding
Contrastive learning has been demonstrated effective in unsupervised sentence representation learning. Given one sentence, positive pairs are obtained by passing the sentence to the encoder twice using different dropout masks, and negative pairs are obtained by taking another sentence in the same mini-batch. However, the method suffers from the surface structure bias, i.e., sentences with similar surface structures will be regarded as close in semantics while sentences with dissimilar surface structures will be viewed as distinct in semantics. As a result, a paraphrase with a dissimilar surface structure receives a lower semantic similarity score than the same sentence with a negative word inserted. In this paper, we first verify the bias by collecting a sentence transformation test set. Then we systematically probe the existing models by proposing novel splits based on benchmark datasets in accordance with semantic and surface structure similarity. We tackle the bias in two aspects: balancing the learning target by augmenting with data that counters the bias, and meanwhile preserving word semantics by leveraging a recall loss to prevent catastrophic forgetting. We evaluate our model on standard semantic textual similarity (STS) tasks using different pre-trained backbones and achieve state-of-the-art averaged performance across the STS benchmarks. Particularly, our models that are fine-tuned with RoBERTa-base and RoBERTa-large achieve significantly better performance on most benchmark datasets.
Deep and surface structures Recent studies To answer the first question, we first propose to validate whether current models can correctly rank a few sentence transformations. Furthermore, to systematically evaluate how the bias effect existing models, we split the existing datasets following the Consistency (Cont.) and Opposition (Oppn.) settings by surface structure and deep structure similarity in Table To answer the second question, since the major bottleneck of current models is their poor performance on the Oppn. datasets, we apply two strategies: (1) automatic sentence-level data augmentations in accordance with the Oppn. setting with max margin loss applied on the augmented data; (2) a regularization loss to prevent catastrophic forgetting in token-level semantics which is critical to constitute sentence meanings. In this work, we use match error rate (MER) In short, we make the following contributions: • We investigate the surface structure bias in contrastive learning for the unsupervised sentence representation and systematically evaluate the bias by constructing datasets following the two designated settings on surface structure and deep structure similarity. • We overcome the bias by leveraging data augmentation according to the Oppn. setting, and then use the max margin loss to incorporate these data in the contrastive learning framework. We also use an additional regularization loss to reduce the catastrophic forgetting in learning to preserve the word semantics from the pre-trained models. • Our methods significantly outperform the baselines, achieving the state-of-the-art averaged performance across the benchmark datasets under the standard metric. We provide detailed analyses on how the bias is mediated.
Unsupervised sentence representation learning has been widely studied. Early studies attempt to leverage sentence internal structure Recently, unsupervised sentence embeddings have utilized contrastive learning schemes to further boost the performance by different data augmentation methods, such as dropout To specifically exhibit the bias, we write down seven differently transformed sentences as shown in Example 1, considering six types of sentence transformations: (1) Paraphrasing a sentence with a dissimilar surface form; (2) Contradicting a sentence by making minor changes; (3) Inserting a few tokens while keeping the meaning unchanged; (4) Deleting a few tokens while keeping the meaning unchanged; (5) Substituting a few tokens while keeping the meaning unchanged; (6) Changing to another random sentence. We probe the officially released unsupervised and supervised models with BERT-base-uncased as the backbone from SimCSE The MER value (ranging from 0 to 1) indicates the proportion of words that were incorrectly deleted, substituted and inserted: the lower the value, the higher the surface structure similarity. To validate our findings on a larger scale, we gather a total of 135 examples, and each example shares the same transformations as Example 1. These sentences are from multiple sources, such as Wikipedia, news headlines, and image descriptions. We first use automatic methods and then manually filter these sentences to obtain the final six transformations for each sentence. For paraphrases, we use ChatGPT's Figure As another step to evaluate the bias, we design new dataset splits based on two settings as illustrated by Table More specifically, since the human annotation scores are only consistent on each dataset, we split the existing datasets following the Cont. and Oppn. settings by surface structure similarity (MER: 0 to 1) and semantic similarity (human annotations: 0 to 5) on the basis of a single dataset, as shown in Table We perform probing on each of the above datasets using the standard Spearman correlation between the model-computed scores and the human-annotated scores. As shown in Table Unsupervised Contrastive Learning Given a dataset of n paired sentences $\{(s_i, s_i^+)\}_{i=1}^{n}$, where $s_i$ and $s_i^+$ are semantically similar and regarded as a positive pair, the main idea behind unsupervised contrastive learning is to utilize identical sentences to construct positive pairs, i.e., $s_i^+ = s_i$, and two random sentences in a mini-batch as negative pairs. We feed $s_i$ to the encoder twice, applying different dropout masks in each forward pass, and obtain two sentence embeddings $h_i, h_i^+$ as shown in Figure where $\tau$ is a temperature hyper-parameter and sim is the cosine similarity metric. Instead of using the original sentence as input and the last hidden layer of the <CLS> token as the sentence representation. Following Step 1: Generating a semantically contradicted example sentence $s_i^n$ by only inserting negative words into the original sentence $s_i$, e.g., My dog likes eating sausage → My dog does not like eating sausage. Given the sentence, we use dependency parsing to find the position of the word that is tagged as root, as shown in Figure Step 2: Given a sentence $s_i$, we aim to generate a sentence $s_i^p$ that is similar in semantics but distinct in surface structure.
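The in-batch contrastive objective sketched here can be written roughly as follows; the encoder signature is a hypothetical placeholder for any module that returns one embedding per sentence, and the temperature value is only illustrative.

```python
import torch
import torch.nn.functional as F

def unsup_contrastive_loss(encoder, input_ids, attention_mask, tau=0.05):
    """In-batch contrastive loss with dropout-based positives: the batch is
    encoded twice (two different dropout masks), the i-th embeddings of the
    two passes form a positive pair, and all other sentences in the batch
    act as negatives."""
    h1 = encoder(input_ids, attention_mask)   # (B, d), first dropout mask
    h2 = encoder(input_ids, attention_mask)   # (B, d), second dropout mask
    sim = F.cosine_similarity(h1.unsqueeze(1), h2.unsqueeze(0), dim=-1) / tau
    labels = torch.arange(sim.size(0), device=sim.device)  # positive = same index
    return F.cross_entropy(sim, labels)
```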
We use back-translation models on English-Russian-English and English-German-English. Although $h_i^n$ is semantically contradictory to the original sentence, it is close to the original one in terms of semantic relatedness. Hence, we design the above two-way max margin loss, which can guide the model to learn the following objective: $\alpha < \mathrm{sim}(h_i, h_i^p) - \mathrm{sim}(h_i, h_i^n) < \beta$, where $\alpha$ and $\beta$ are margins. Catastrophic forgetting As shown in Figure where $\gamma$ is a hyper-parameter for this recall loss and $\theta^*$ denotes the initial parameters of the pre-trained models. (Refer to Appendix D for background on the recall loss.) Overall, the loss function for our method is as below: Experiments We perform experiments with backbones of RoBERTa-base and RoBERTa-large. Compared Baselines We mainly choose SimCSE for comparison, since we build our method on top of it and it shares the same setting as our approach. We also compare our results with several of the most recent strong works as below: PromptBert Sentence Similarity Tasks We show the results of STS tasks in Transfer Tasks We show the results of transfer tasks in Table Cont. and Oppn. We show the results of Cont. and Oppn. dataset splits in Figure Ablation Analysis Figure We also find that the recall loss can help stabilize the learning process because of its effectiveness in preventing catastrophic forgetting. Figure Quantitative Analysis We further take the original sentence in Example 1, and write down eight paraphrases and negations (refer to Appendix B for the specific sentence transformations). When comparing OssCSE-BERT-base with SimCSE-BERT-base, we find that the averaged score of the eight paraphrases increases from 0.79 to 0.88 and the averaged score of the eight negations decreases from 0.93 to 0.87. We investigate the surface structure bias in contrastive learning for unsupervised sentence embedding and systematically probe the bias by constructing datasets following the two designated settings on surface and deep structure similarity. We overcome the bias by data augmentation methods and then use the max margin loss to incorporate these data in the contrastive learning framework. We also use a recall loss to reduce catastrophic forgetting in unsupervised learning to preserve the word semantics in the pre-trained models. The results significantly outperform the baselines and achieve state-of-the-art results on averaged performance with different pre-trained backbones. First, we cannot guarantee the quality of the back-translation results used as the paraphrase augmentation targets. There might be error propagation in the forward and backward translation process. We are considering using strong paraphrase models in the future. Second, we have not considered other minor modification methods that would be considered significant to semantic meanings, beyond negations. Missing these parts may weaken our model's generalization ability such that it may not be applicable in specific domains.
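Under our reading of the objective above (the similarity gap between the paraphrase and the negation is kept between the two margins), and treating the recall loss as an L2 pull toward the pre-trained parameters, the two extra losses might be sketched as follows; margin and weight values are illustrative.

```python
import torch
import torch.nn.functional as F

def two_way_margin_loss(h, h_para, h_neg, alpha=0.1, beta=0.3):
    """Keep alpha < sim(h, h_para) - sim(h, h_neg) < beta: the paraphrase
    must beat the negation by at least alpha, but since the negation is
    still topically related it is not pushed arbitrarily far away."""
    gap = F.cosine_similarity(h, h_para) - F.cosine_similarity(h, h_neg)
    return (F.relu(alpha - gap) + F.relu(gap - beta)).mean()

def recall_loss(model, init_params, gamma=1e-3):
    """Simple L2 pull of the fine-tuned weights toward the pre-trained
    initialisation (init_params: detached copies of the original weights)."""
    penalty = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(model.parameters(), init_params))
    return gamma * penalty
```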
1,552
1,751
1,552
STRUDEL : Structured Dialogue Summarization for Dialogue Comprehension
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored the possibility of whether abstractive dialogue summarization can also be used as a means to boost an NLP system's performance on other important dialogue comprehension tasks. In this paper, we propose a novel type of dialogue summarization task -STRUctured DiaLoguE Summarization (STRUDEL ) -that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. In contrast to the holistic approach taken by the traditional free-form abstractive summarization task for dialogues, STRUDEL aims to decompose and imitate the hierarchical, systematic and structured mental process that we human beings usually go through when understanding and analyzing dialogues, and thus has the advantage of being more focused, specific and instructive for dialogue comprehension models to learn from. We further introduce a new STRUDEL dialogue comprehension modeling framework that integrates STRUDEL into a dialogue reasoning module over transformer encoder language models to improve their dialogue comprehension ability. In our empirical experiments on two important downstream dialogue comprehension tasks -dialogue question answering and dialogue response prediction -we demonstrate that our STRUDEL dialogue comprehension models can significantly improve the dialogue comprehension performance of transformer encoder language models.
In natural language processing, abstractive dialogue summarization In this paper, we propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization (STRUDEL) - that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. In contrast to the holistic approach taken by the traditional free-form abstractive summarization task for dialogues, STRUDEL aims to decompose and imitate the hierarchical, systematic and structured mental process that we human beings usually go through when understanding and analyzing dialogues. Then we further introduce a new dialogue comprehension model that integrates STRUDEL into a dialogue reasoning module over transformer encoder language models. Our empirical experiment results show that STRUDEL is indeed very effective in providing transformer language models with better support for reasoning and inference over challenging downstream dialogue comprehension tasks such as dialogue question answering and response prediction, and in improving their performance. Background and Related Work
Abstractive summarization aims to generate a concise summary of a text by producing a paraphrase of the main contents using different vocabulary, rather than simply extracting the important sentences, which is referred to as extractive summarization. A popular approach to produce abstractive summaries of long documents is via neural abstractive summarization by using a singular extractive step to condition the transformer language model before generating a summary Abstractive summarization has also been applied to solve NLP-related tasks such as text classification, news summarization, and headline generation. Furthermore, the generation of summaries can be integrated into these systems as an intermediate stage to reduce the length of documents. Abstractive dialogue summarization, the task of summarizing multi-turn conversations between different speakers There have been a number of advances in multi-turn dialogue comprehension and reasoning in recent years. We define Structured Dialogue Summarization (STRUDEL) as the task of generating a systematic and abstractive multi-entry dialogue summarization organized in a structured form that represents a comprehensive multi-aspect understanding and interpretation of a dialogue's content. A complete STRUDEL summarization of a dialogue consists of the following entries: (a) Name S1 - the name of the first speaker of the dialogue. (b) Name S2 - the name of the second speaker of the dialogue. (c) Role/Identity S1 - the role or identity of the first speaker of the dialogue. (d) Role/Identity S2 - the role or identity of the second speaker of the dialogue. (e) Relationship - the relationship between the two speakers of the dialogue. (f) Time - the time that the dialogue takes place. (g) Location S1 - the physical location of the first speaker when the dialogue takes place. (h) Location S2 - the physical location of the second speaker when the dialogue takes place. (i) Purpose/Theme - the main purpose or theme for which the dialogue is made between the two speakers. (j) Task/Intention S1 - the main task or intention that the first speaker would like to achieve in the dialogue. (k) Task/Intention S2 - the main task or intention that the second speaker would like to achieve in the dialogue. (l) Problem/Disagreement 1 - the most important problem or disagreement that the two speakers need to solve in the dialogue. (m) Solution 1 - the solution that the two speakers reach for the most important problem or disagreement in the dialogue. (n) Problem/Disagreement 2 - the second most important problem or disagreement that the two speakers need to solve in the dialogue. (o) Solution 2 - the solution that the two speakers reach for the second most important problem or disagreement in the dialogue. (p) Conclusion/Agreement - the final conclusion or agreement that the two speakers reach in the dialogue. In an actual STRUDEL summarization of a dialogue, the content of each of the above 16 STRUDEL entries will either be a short text abstractively summarizing a specific aspect of the dialogue as indicated by that STRUDEL entry's definition, or be 'N/A', indicating that the entry can't be inferred from or is not mentioned in the current dialogue. Here we use a concrete example to demonstrate structured dialogue summarization of a dialogue.
Figure This same example also appears in the DIALOG-SUM dataset From this comparison between the traditional free-form abstractive dialogue summarization and our proposed structured dialogue summarization, we can clearly see that the STRUDEL summarization includes more important aspects about the dialogue and tells a more comprehensive and informative story compared to the traditional free-form abstractive dialogue summarization. Our proposed new task of Structured Dialogue Summarization (STRUDEL) opens up a gateway for language models to observe, imitate and learn from the structured human mental process of systematic dialogue understanding. But in order to actually infuse these valuable human-guided structural priors regarding dialogue understanding into language models through the task of STRUDEL, we first need to collect high-quality supervision information from empirical human demonstration of performing the STRUDEL task. Therefore, for this purpose, we collect a set of human annotations of STRUDEL over 400 dialogues sampled from two widely used dialogue comprehension datasets - the MuTual The two dialogue comprehension datasets that we used for the human annotations of STRUDEL are: MuTual DREAM We use the JSON format for the manual annotation of STRUDEL. The two major annotation protocols we prescribed to the annotators during STRUDEL human annotation are: 1. When writing each STRUDEL summarization entry for a dialogue, please be informative, succinct, faithful and to the point. 2. When you think a certain STRUDEL entry can't be inferred from the dialogue or is not mentioned in the dialogue at all or doesn't apply to the current dialogue, please write 'N/A' for that STRUDEL entry in your annotation. See Figure The statistics of our collected human annotations of STRUDEL are reported in Table In this section, we describe our main modeling approach that uses Structured Dialogue Summarization (STRUDEL) to improve pre-trained language models' dialogue comprehension ability. As we can see from the definition in Section 3.1, Structured Dialogue Summarization (STRUDEL) is a generic task that can be generally applied to any dialogue. Therefore, STRUDEL can be viewed as an important upstream auxiliary NLU task and can be used to train language models to better understand dialogues in a structured and systematic way before they are further fine-tuned on specific downstream dialogue comprehension tasks. As a result, based on our definition of STRUDEL, we further propose a new modeling framework of STRUDEL dialogue comprehension, in which STRUDEL can be viewed as a meta-model that can be smoothly integrated into and used on top of a wide range of different large-scale pre-trained transformer encoder models for dialogue understanding. Figure We first design a prompt question for each STRUDEL summarization entry, which will be used to query a pre-trained language model to generate a vector embedding of that STRUDEL entry for a dialogue. For each STRUDEL summarization entry defined in Section 3.1, we add the common prefix 'Summarize: what is ' to its definition sentence and replace the '.' at the end with '?' to form its corresponding STRUDEL prompt question. For example, for STRUDEL entry (e), the relationship entry, its definition sentence is 'the relationship between the two speakers of the dialogue.', and its corresponding STRUDEL prompt question is 'Summarize: what is the relationship between the two speakers of the dialogue?'. (The accompanying figure illustrates an example query that concatenates the [CLS] token, a dialogue and such a prompt question, e.g. 'Summarize: What is the main purpose or theme for which this dialogue is made between the two speakers?'.)
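A minimal sketch of this prompt construction (the function name is illustrative):

```python
def strudel_prompt(entry_definition: str) -> str:
    """Build a STRUDEL prompt question from an entry definition by
    prepending 'Summarize: what is ' and turning the final '.' into '?'."""
    question = entry_definition.strip()
    if question.endswith("."):
        question = question[:-1]
    return f"Summarize: what is {question}?"

# Entry (e), the relationship entry:
# strudel_prompt("the relationship between the two speakers of the dialogue.")
# -> "Summarize: what is the relationship between the two speakers of the dialogue?"
```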
In our STRUDEL dialogue comprehension modeling framework, we choose to train transformer encoder language models to learn to generate semantic vector embeddings of the contents of STRUDEL entries instead of the actual text outputs of the STRUDEL entries in the form of token sequences. We make this design choice mainly for two reasons: (1) the form of vector embeddings makes it easier to quantitatively compare model-generated structured dialogue summarizations with their corresponding human annotations (e.g. by calculating cosine similarities in the vector space); (2) vector embeddings of STRUDEL can also be smoothly integrated back into transformer encoders for running inference over dialogue comprehension tasks. Now we describe the procedure to train a pre-trained transformer encoder language model to learn to generate STRUDEL embeddings under the supervision from STRUDEL human annotations. Given a dialogue input sequence D and a pre-trained transformer encoder language model T for computing deep contextualized representations of textual sequences, such as BERT and then feed this query sequence into the transformer encoder T to compute its contextualized representation. Let $H^E$ be the last layer of hidden state vectors computed from this transformer encoder T; then we have: Let $h^E_{[\mathrm{CLS}]}$ denote the last-layer hidden state vector of the [CLS] token in $H^E$; then we apply a dedicated multi-layer perceptron $\mathrm{MLP}_E$ on top of $h^E_{[\mathrm{CLS}]}$ to project it onto a same-dimensional vector space to obtain our final vector embedding of the STRUDEL entry E. Now let $A_E$ denote the human-annotated ground-truth summarization for STRUDEL entry E. Then we use a frozen version of the same transformer encoder, denoted as $\bar{T}$, to encode this human annotation as: Let $\bar{h}^E_{[\mathrm{CLS}]}$ denote the last-layer hidden state vector of the [CLS] token in $\bar{H}^E$; then we can compute the semantic matching score between the transformer model's generated vector embedding for STRUDEL entry E and its corresponding human annotation as $\cos\big(\mathrm{MLP}_E(h^E_{[\mathrm{CLS}]}), \bar{h}^E_{[\mathrm{CLS}]}\big)$. Therefore, the objective function for optimizing the transformer encoder model T to generate STRUDEL summarizations that match human annotations can be formulated as in Equation (3), where S denotes the set of all 16 different STRUDEL entries. See Figure After a transformer encoder language model learns to generate embeddings of structured dialogue summarization, we need to design a modeling framework to employ these generated STRUDEL embeddings to improve the model's dialogue comprehension ability.
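The supervision described here might be sketched as follows; the encoder call signatures, tensor shapes and tokenisation details are hypothetical simplifications of the framework described above.

```python
import torch
import torch.nn.functional as F

def strudel_supervision_loss(encoder, frozen_encoder, entry_mlps,
                             dialogue_ids, prompt_ids, annotation_ids):
    """For each STRUDEL entry E: encode dialogue + prompt(E), project the
    [CLS] state with a per-entry MLP, and pull it toward the frozen
    encoder's embedding of the human annotation (maximise cosine)."""
    loss = 0.0
    for entry, mlp in entry_mlps.items():                     # 16 entries
        query = torch.cat([dialogue_ids, prompt_ids[entry]], dim=-1)
        h_cls = encoder(query)[:, 0]                          # last-layer [CLS]
        pred = mlp(h_cls)                                     # STRUDEL embedding
        with torch.no_grad():
            gold = frozen_encoder(annotation_ids[entry])[:, 0]
        loss = loss - F.cosine_similarity(pred, gold).mean()
    return loss
```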
Here we focus on two important types of dialogue comprehension tasks - dialogue question answering and dialogue response prediction. We feed this sequence back to T to compute its last layer of contextualized representation as: Let $h^{SDS}_{[\mathrm{CLS}]}$ denote the last-layer hidden state vector of the [CLS] token in H; then we apply a fully connected layer followed by a softmax function on $h^{SDS}_{[\mathrm{CLS}]}$ to compute the probability of the answer (or response) being the candidate A given the dialogue D and the question Q, as in Equation (5). Let $a^*$ denote the correct answer (or response) in the training labels; then the objective function that we use to train the transformer encoder language model T to use STRUDEL summarization embeddings to perform dialogue question answering (or response prediction) can be formulated as the cross-entropy loss (see Figure ), where N is the total number of dialogue examples. After our transformer-based STRUDEL dialogue comprehension model has been post-trained using the objective function defined in Equation In our experiment, we use two widely used transformer encoder language models - RoBERTa In our experiment, we test our STRUDEL dialogue comprehension model on two important and representative dialogue comprehension tasks - dialogue question answering and dialogue response prediction. We use the DREAM dataset and the MuTual dataset introduced in Section 4.1 to train and test our model over the two tasks respectively. The results of our experiments are shown in Table In this paper, we presented STRUDEL (STRUctured DiaLoguE Summarization) - a novel type of dialogue summarization task that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. In contrast to the traditional free-form abstractive summarization task for dialogues, STRUDEL provides a more comprehensive digest over multiple important aspects of a dialogue and has the advantage of being more focused, specific and instructive for dialogue comprehension models to learn from. In addition, we also introduced a new STRUDEL dialogue comprehension modeling framework that integrates STRUDEL into a dialogue reasoning module over transformer encoder language models to improve their dialogue comprehension ability. Our empirical experiments on the tasks of dialogue question answering and dialogue response prediction confirmed that our STRUDEL dialogue comprehension modeling framework can significantly improve the dialogue comprehension performance of transformer encoder language models. There are two major limitations of our work discussed in this paper: 1. Our paper mainly focuses on designing the structured dialogue summarization task for two-speaker dialogues, which make up the majority of multi-turn dialogues most commonly seen in dialogue datasets and real applications. In the future, we plan to further extend our STRUDEL framework to also accommodate multi-speaker dialogues between more than two speakers. 2. Our approach hasn't included any explicit knowledge reasoning components yet, which are also important for language models to accurately generate structured dialogue summarizations and perform dialogue comprehension tasks. In future work, we plan to integrate a knowledge reasoning module into our STRUDEL dialogue summarization modeling framework in order to further improve its performance.
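A rough sketch of the downstream scoring step; the extra_embeddings keyword is a hypothetical stand-in for however the STRUDEL embeddings are injected back into the encoder, and the classifier is a single linear layer producing one logit per candidate.

```python
import torch
import torch.nn.functional as F

def dialogue_qa_loss(encoder, classifier, candidate_inputs, strudel_embs, gold):
    """Score each candidate answer/response from the [CLS] state of the
    encoder run over (dialogue, question, candidate) plus the STRUDEL
    embeddings, normalise the scores with a softmax over candidates, and
    train with cross-entropy against the correct candidate index."""
    logits = []
    for inputs in candidate_inputs:                       # one entry per candidate A
        h_cls = encoder(inputs, extra_embeddings=strudel_embs)[:, 0]
        logits.append(classifier(h_cls))                  # (B, 1) logit
    logits = torch.cat(logits, dim=-1)                    # (B, num_candidates)
    return F.cross_entropy(logits, gold)                  # gold: index of correct candidate
```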
1,555
1,135
1,555
Sampling-Based Approximations to Minimum Bayes Risk Decoding for Neural Machine Translation
In NMT we search for the mode of the model distribution to form predictions. The mode and other high-probability translations found by beam search have been shown to often be inadequate in a number of ways. This prevents improving translation quality through better search, as these idiosyncratic translations end up selected by the decoding algorithm, a problem known as the beam search curse. Recently, an approximation to minimum Bayes risk (MBR) decoding has been proposed as an alternative decision rule that would likely not suffer from the same problems. We analyse this approximation and establish that it has no equivalent to the beam search curse. We then design approximations that decouple the cost of exploration from the cost of robust estimation of expected utility. This allows for much larger hypothesis spaces, which we show to be beneficial. We also show that mode-seeking strategies can aid in constructing compact sets of promising hypotheses and that MBR is effective in identifying good translations in them. We conduct experiments on three language pairs varying in amounts of resources available: English into and from German, Romanian, and Nepali. 1
NMT systems Eikema and In this work, we first analyse the procedure by Eikema and
NMT employs neural networks (NNs) to predict a conditional probability distribution $Y|\theta, x$ over translation candidates of any given source sentence x. The sample space Y is the set of all sequences of known target-language symbols (e.g., sub-word units). NMT factorises the distribution as a chain of random draws from Categorical distributions parameterised in context. The prefix translation $y_{<j}$ starts empty and grows one symbol at a time until a special end-of-sequence symbol is drawn. At each step j, f maps from varying inputs $(x, y_{<j})$ to a probability distribution over the vocabulary. Common choices for f include recurrent networks After training, and for a given input, choosing a translation requires a decision rule to map from a distribution over translation candidates to a single 'preferred' translation. The most common decision rule in NMT is MAP decoding, which outputs the mode of the conditional distribution. Despite the widespread intuition that MAP decoding is an obvious choice, maximum likelihood estimation (MLE) is oblivious to our desire to form predictions. Maximum-a-posteriori (MAP) decoding outputs the most probable translation under the model: As this is intractable, beam search Minimum Bayes risk (MBR) decoding stems from the principle of maximisation of expected utility MBR has a long history in parsing In MT, u can be a sentence-level evaluation metric (e.g., METEOR It is a well-known result that for the 'exact match' utility, $u(y, h) := 1_{\{y\}}(h)$, the expected utility of h is $p_{Y|X}(h|x, \theta)$, hence MBR and MAP decoding have the same optimum under this choice Like in MAP decoding, exhaustive enumeration of all hypotheses is impossible, so we must resort to a finite subset H(x) of candidates. Unlike MAP decoding, the objective function $\mu_u(h; x, \theta)$ cannot be evaluated exactly. The MC estimate of expected utility in Equation (4) is unbiased for any sample size N. Eikema and Aziz (2020) use the same N samples as candidates and approximate Equation (3) by $y_{\text{N-by-N}} := \arg\max_{h \in \{y^{(1)}, \ldots, y^{(N)}\}} \hat{\mu}_u(h; x, N)$ (5). We note that the candidates do not need to be obtained using ancestral sampling, and investigate alternative strategies in Section 5.4. It is important, however, to use ancestral samples to obtain an unbiased estimate of expected utility, as we show in Section 5.1. We call this class of MBR algorithms using unbiased MC estimation instances of sampling-based MBR decoding. A big disadvantage of MBR N-by-N is that it requires $N^2$ assessments of the utility function. If U is an upper bound on the time necessary to assess the utility function once, then MBR N-by-N runs in time $O(N^2 \times U)$. For a complex utility function, this can grow expensive even for a modest hypothesis space. As NMT distributions have been shown to be high entropy An important property of sampling-based MBR decoding is that MC estimation of expected utility, Equation (4), and approximation of the hypothesis space in Equation (5) really are two independent approximations. Tying the two is no more than a design choice that must be reconsidered. We start by obtaining N translation candidates from the model, which will form the hypothesis space H(x). Then, we use any number S < N of ancestral samples for approximating expected utility in Equation (4).
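A compact sketch of sampling-based MBR as described here; utility stands for any sentence-level metric (BEER in the paper's experiments, here only a hypothetical callable), and the same function covers both the N-by-N and the N-by-S variants.

```python
def mbr_decode(samples, utility, candidates=None):
    """Sampling-based MBR: return the candidate with the highest Monte Carlo
    estimate of expected utility.  With candidates=None this is the N-by-N
    variant (the N samples double as candidates); passing a separate
    candidate list gives the N-by-S variant, where `samples` are the S
    ancestral samples used only to estimate expected utility."""
    if candidates is None:
        candidates = samples

    def expected_utility(h):
        return sum(utility(y, h) for y in samples) / len(samples)

    return max(candidates, key=expected_utility)

# e.g. mbr_decode(ancestral_samples, sentence_level_beer)           # N-by-N
#      mbr_decode(estimation_samples, sentence_level_beer, pool)    # N-by-S
```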
An idea that we explore in this work is to make use of a proxy utility that correlates with the target utility. (Figure: an example source-reference pair used for illustration; ref: "Convercent raised $10 million in funding in February from firms such as Sapphire Ventures and Tola Capital, bringing its total capital raised to $47 million.") In the coarse step, candidates are ranked by the proxy estimate $\hat{\mu}_{u_{\text{proxy}}}(h; x, S)$. Upper-bounding the complexity of the proxy utility by U proxy, the target utility by U target, using S samples for MC estimation in the coarse step (6b) and L in the fine step (6a), the complexity of this algorithm is $O(N \times S \times U_{\text{proxy}} + T \times L \times U_{\text{target}})$, where T is the number of candidates kept after the coarse step. MBR C2F decouples robust MC estimation (large L) from exploration (large N) and the cost of exploration from the cost of the target utility. As illustrated in Figure We perform experiments on three language pairs with varying amounts of resources for training: English into and from German, Romanian and Nepali. For German-English (de-en) we use all available WMT'18 For computational efficiency, we opt for non-neural evaluation metrics for use as utility function in MBR. BEER performed well at pushing translation performance higher across a range of automatic evaluation metrics. We therefore use BEER as the utility of choice in our experiments and as a consequence will consistently report corpus-level BEER scores of MBR translations as well. We also report SacreBLEU We start by motivating the importance of unbiased estimates of expected utility using ancestral samples (i.e. sampling-based MBR). In Figure Now, we look into scaling MBR N-by-N. Eikema and Aziz (2020) only explored 30 by 30 approximations to the MBR objective. Our aim is to investigate whether MBR decoding is indeed able to scale to better translation performance with more computation. In Figure We find that MBR steadily improves across language pairs as N grows larger. BLEU scores improve at a similar rate to that of BEER, showing no signs of overfitting to the utility. This is strong empirical evidence that sampling-based MBR has no equivalent to the beam search curse. We see this as an important property of a decoding objective. MBR N-by-N couples two approximations, namely, tractable exploration and unbiased estimation of expected utility are based on the same N ancestral samples. Our aim is to learn more about the impact of these two approximations, for which we look into MBR N-by-S. Moreover, with fewer than $N^2$ assessments of utilities per decoding, we can also investigate larger H(x). We explore N ranging from 210 to 1005, while keeping the number of samples used for approximating expected utility of each hypothesis smaller, with S ranging from 10 to 200. We argue that S does not need to grow at the same pace as N, as MC estimates should stabilize after a certain point. We find that growing N beyond 405 improves translation quality further, even when the estimates of expected utility are less accurate. Increasing S also steadily improves translation quality, with diminishing returns in the magnitude of improvement. On the other hand, smaller values of S lead to notable deterioration of translation quality and we note higher variance in results. For all language pairs it is possible to improve upon the best MBR N-by-N results by considering a larger hypothesis space and smaller S. This experiment shows that the two approximations can be controlled independently and better results are within reach if we explore more.
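The coarse-to-fine procedure might be sketched as follows; the unigram-F1 function is a simplified, illustrative proxy utility in the spirit of the UF-50 proxy discussed later, not an exact reimplementation.

```python
def mbr_coarse_to_fine(candidates, coarse_samples, fine_samples,
                       proxy_utility, target_utility, top_t):
    """Coarse step: rank all candidates with a cheap proxy utility and the
    (small) coarse sample set, keep the top-T.  Fine step: re-rank the
    survivors with the target utility and the (larger) fine sample set."""
    def avg(u, h, samples):
        return sum(u(y, h) for y in samples) / len(samples)

    survivors = sorted(candidates,
                       key=lambda h: avg(proxy_utility, h, coarse_samples),
                       reverse=True)[:top_t]
    return max(survivors, key=lambda h: avg(target_utility, h, fine_samples))

def unigram_f1(reference, hypothesis):
    """A cheap illustrative proxy utility: unigram F1 over token types."""
    ref, hyp = set(reference.split()), set(hypothesis.split())
    overlap = len(ref & hyp)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r)
```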
On top of that, the best setting of MBR N-by-N takes 164,025 utility assessments per decoding, MBR N-by-S with S = 100 brings this number down to 100,500 for the largest N considered, while improving BEER scores on all language pairs. We note that again increasing either N or S generally improves translation quality in our experiments. This further strengthens our previous finding that sampling-based MBR does not seem to have an equivalent of the beam search curse. While our focus thus far has been on reducing the number of target utility calls, allowing the exploration of larger H(x), one should also take sampling time in consideration. For example, we found that in MBR N-by-N with N = 100, sampling time made up about 60% of the total translation time for our setup. Therefore, it is computationally attractive to construct compact H(x) with promising translation candidates. Ideally, for better search in MBR, we enumerate a set of high expected utility hypotheses. Up until now we have constructed H(x) using ancestral samples, following Eikema and We find ancestral sampling to produce hypotheses across the entire range of expected BEER scores. Nucleus sampling and beam search generally produce translations at the higher end of expected BEER. Therefore, these seem more suitable for generating effective H(x) at smaller N . Nucleus sampling seems to lead to the largest proportion of high expected utility translations across language pairs. Beam search has a noticeably high proportion of poor translations for English-Nepali, a low-resource language pair where mode-seeking search has been observed to be less reliable. Results in the opposite direction were similar. We explore both nucleus sampling and beam search for constructing H(x) in the next experiment, as well as combining all three strategies together. We now turn to the coarse-to-fine procedure (MBR C2F ) described in Section 3. We compare various proxy utilities by their effectiveness as filtering strategies in obtaining high expected utility sets, where we again use accurate estimates of expected utility using 1,000 samples for MC estimation. We filter the top-20 hypotheses from an initial 100 hypotheses obtained using ancestral sampling. This ensures a high variety of expected utilities in the initial set. We also compare each proxy utility on their runtime performance. We compare both cheap estimates of expected BEER using either 1 or 5 samples for MC estimation (BEER-1 and BEER-5 respectively) as well as cheap-to-compute proxy metrics: unigram F1 using 50 samples for MC estimation (UF-50) and skip-bigram F1 using 50 samples for MC estimation (SBF-50). Figure We surprisingly find nearly all strategies to lead to equally good filtered sets as BEER-100 in terms of expected BEER of the filtered set. The only strategy that performs slightly worse than the others is BEER-1, which is likely too noisy to be a reliable filtering strategy. We observed very similar results for the other five language pairs. In terms of runtime performance we find BEER-1 to be fastest followed by UF-50 at a 22.2x performance increase over BEER-100. 7 In follow-up experiments, we will use UF-50 as a proxy utility, providing high quality filtered sets at good runtime performance. In Table We also explore the effects on translation quality of changing and combining strategies for constructing H(x). 
We find that using a beam of N = 405 (using the same length penalty as in Section 5.4) to construct H(x) produces better results than nucleus sampling for most language pairs. Notably, reordering a large beam considerably improves over standard beam search decoding (using the usual beam size of 5 (ro, ne) or 4 (de)) for all language pairs in terms of BEER and for most language pairs in terms of BLEU scores. Combining all strategies for creating hypothesis spaces: ancestral sampling, nucleus sampling and beam search leads to the best results overall. For all language pairs both BEER and BLEU scores either improve or remain similar. This is more empirical evidence that expected utility is a robust and reliable criterion for picking translations: enlarging the hypothesis space or improving MC estimation under reasonable choices of hyperparameters seemingly never unreasonably hurts translation quality, but generally improves it. A Multi-Reference Test Set We also test three systems from We use BEER as utility, UF-50 as proxy utility, set top-T = 50 and use L = 100 samples for MC estimation. We use various strategies for constructing H(x): 405 nucleus samples (N), the 405-best list from beam search (B) and combining both of these along with 1,005 ancestral samples (all). We use S = 13 in MBR N-by-S to mimic the computational cost of MBR C2F at N = 405. The last row shows standard beam search performance using a typical beam size of 4 or 5 depending on the language. MBR results are averaged over 3 runs. Standard deviations for BEER/BLEU scores are below 0.1/0.2 (NxS), 0.1/0.1 (C2F) and 0 (BS). use translators A, C and D). We show results in Table We measure runtime performance on hypothesis generation, sampling for MC estimation of expected utilities and decoding time separately for various algorithms explored in this work on the English to German language pair. We run all experiments on an Intel Xeon Bronze 3104 Processor and a single NVIDIA GeForce 1080Ti GPU. For generating samples and beam search outputs we set the batch size to as large as possible, constrained by the available GPU memory. MBR using BEER as utility runs on CPU, while sampling and beam search run on GPU. We mimic the MBR N-by-N and MBR C2F setups from Table 1 using a hypothesis space of 405 nucleus samples. We additionally include runtime results for MBR N-by-N with N = 405 and a more expensive MBR N-by-S variant with S = 100 (NxS large). For beam search we report results for a beam size of 4, as has been used throughout the paper for this language pair. Results are shown in Table In recent NMT literature, MBR has started being explored either in combination with MAP decoding or replacing it altogether. All of the above works make use of beam search to provide the hypothesis space as well as to make a biased estimate of expected utility. Eikema and We provide a more extensive overview of historical approximations to the MBR objective as well as an overview of alternatives for tackling the inadequacy of the mode in Appendix A. We have shown MBR to be a robust decision rule for NMT that can find high-quality translations. In particular, we have found that MBR, under reasonable hyperparameter choices, generally leads to improved translation quality with more computation (i.e., searching a larger search space and/or using more samples for more accurate MC estimation). Big challenges in decoding with MBR are constructing the hypothesis space and keeping computational cost of estimating expected utility tractable.
We have proposed effective strategies for both, by exploring more efficient ways of forming the hypothesis space and proposing an approximation to MBR that is linear in the size of this hypothesis space. Our coarse-to-fine MBR procedure is able to considerably reduce the number of calls to the utility function without compromising translation quality. We have shown that sampling-based MBR in general can outperform beam search on all the language pairs we explored and can continue to improve with better and more accurate search. We believe sampling-based MBR to be a promising, albeit still more expensive, alternative to beam search decoding. Unlike beam search, where it is not obvious how to further improve translation quality, sampling-based MBR is likely to benefit from improvements of different aspects of the algorithm. We believe fruitful avenues of research to be among i) clever algorithms for constructing hypothesis spaces, ii) more robust estimates of expected utility using fewer samples, iii) use of modern neural utilities and iv) improving the modelling capacity of NMT systems. We hope that this work motivates researchers and practitioners to make more conscious considerations of the choice of decision rule and that it paves the way for use of tractable sampling-based MBR decoding in NMT. This work has proposed a number of algorithms for more efficient decoding under the minimum Bayes risk decision rule. However, in terms of runtime performance MBR decoding is still outperformed by beam search. MBR will likely always be more expensive than current applications of beam search, in which very small beam sizes are employed, since on top of generating translation candidates, MBR decoding will potentially need a separate set of samples for estimating expected utility, and perform additional computations in the form of utility assessments. While this currently makes MBR less attractive in real-time translation scenarios, we believe that the demonstrated scalability and robustness of the decoding objective makes MBR interesting in scenarios in which translation speed is not the highest priority. Furthermore, continued research into algorithmic improvements to MBR approximations and optimized implementations of existing algorithms may make MBR attractive in real-time translation in the future. MBR also relies on a utility function, a hyperparameter to the decision rule (decoding algorithm). On the one hand, this allows us to inject some domain expertise into the decoding algorithm. On the other hand, in machine translation, we do not have a gold-standard metric that we trust to judge translation quality perfectly. This means we will have to choose a utility that we know is suboptimal, and may have peculiarities such as bad hypotheses that exploit certain aspects of the utility to be ranked unreasonably high. Nonetheless, it is unlikely that the NMT model puts a lot of mass on such translations, reducing the likelihood of encountering such situations. We believe there are also positives to incorporating a utility function into the decoding algorithm: MBR can benefit from advances in the field of machine translation evaluation, as some recent works have already exploited Finally, current MBR algorithms do not permit incremental generation of translations. A translation hypothesis can only be assessed once it's fully generated by the NMT model. This is a bottleneck to its speed and doesn't make optimal use of the factorisation of modern-day NMT systems. 
We do think this is a promising direction for future work. in this subset. This has the undesirable effect of exaggerating differences in probability due to underestimation of the normalisation constant, and, like MAP decoding, it over-represents pathologies around the mode. Similarly, most prior work uses mode-seeking search to explore a tractable subset of the hypothesis space. Mode-seeking approximations bias the decoder towards the mode making MBR decoding less robust to idiosyncratic outcomes in the hypothesis space (Eikema and There are cases in statistical machine translation (SMT) where the computation of expected utility can be factorised along a tractable directed acyclic graph (DAG) via dynamic programming In some (rarer) cases, unbiased (or asymptotically unbiased) samples have been used to approximate the MBR objective and/or to reduce the search space. For example, Eikema and Aziz (2020) link the inadequacy of the mode in NMT to the entropy of the conditional distribution, or, more precisely, to the fact that NMT models tend to spread probability mass over large subsets of the sample space We compare a number of utility functions for use in MBR decoding. In principle any function that measures some notion of similarity across sequences and can be reliably assessed on the sentence-level is suitable as a utility function for MBR. As BLEU is the predominant automatic evaluation metric on which translation quality is assessed, we experiment with a smoothed version of BLEU We perform MBR N-by-S with N = 405 and S = 100 in order to perform the comparisons. We measure the performance of each utility on BEER, BLEU, METEOR and ChrF++. Our results are shown in Table
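To make the sampling-based MBR procedure discussed above concrete, the following is a minimal sketch (not the authors' released implementation) of MBR decoding with Monte Carlo estimation of expected utility, together with a coarse-to-fine variant that prunes the hypothesis space with a cheaper proxy utility before calling the full utility. The `utility`, `proxy_utility` and `toy_utility` callables are stand-ins for metrics such as BEER or a smoothed sentence-level BLEU.

```python
from typing import Callable, List, Sequence


def mbr_decode(
    hypotheses: Sequence[str],            # hypothesis space H(x), e.g. nucleus samples or an n-best list
    pseudo_references: Sequence[str],     # S unbiased samples from the model, used for MC estimation
    utility: Callable[[str, str], float]  # sentence-level utility u(hyp, ref), e.g. BEER or smoothed BLEU
) -> str:
    """Return the hypothesis with the highest Monte Carlo estimate of expected utility."""
    best_hyp, best_score = None, float("-inf")
    for hyp in hypotheses:
        # MC estimate of E_ref[u(hyp, ref)] under the model, using the sampled pseudo-references
        score = sum(utility(hyp, ref) for ref in pseudo_references) / len(pseudo_references)
        if score > best_score:
            best_hyp, best_score = hyp, score
    return best_hyp


def coarse_to_fine_mbr(hypotheses, pseudo_references, proxy_utility, utility, top_t=50):
    """Prune the hypothesis space with a cheap proxy utility, then re-rank the survivors
    with the full utility, so the number of expensive utility calls stays small."""
    ranked = sorted(
        hypotheses,
        key=lambda h: sum(proxy_utility(h, r) for r in pseudo_references),
        reverse=True,
    )
    return mbr_decode(ranked[:top_t], pseudo_references, utility)


if __name__ == "__main__":
    # toy usage with a trivial token-overlap utility standing in for BEER/BLEU
    def toy_utility(hyp: str, ref: str) -> float:
        h, r = set(hyp.split()), set(ref.split())
        return len(h & r) / max(len(h | r), 1)

    hyps = ["the cat sat on the mat", "a cat sat on a mat", "the dog sat"]
    refs = ["the cat sat on the mat", "the cat is on the mat"]
    print(mbr_decode(hyps, refs, toy_utility))
```

With `hypotheses` taken from nucleus samples, an n-best list from beam search, or their union, and `pseudo_references` drawn by ancestral sampling, this reproduces the N-by-S and coarse-to-fine setups described above at a schematic level.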
1,175
81
1,175
Candidate Soups: Fusing Candidate Results Improves Translation Quality for Non-Autoregressive Translation
The non-autoregressive translation (NAT) model achieves a much faster inference speed than the autoregressive translation (AT) model because it can simultaneously predict all tokens during inference. However, its translation quality suffers from degradation compared to AT. Moreover, existing NAT methods focus only on improving the NAT model's performance but do not fully utilize it. In this paper, we propose a simple but effective method called "Candidate Soups," which can obtain high-quality translations while maintaining the inference speed of NAT models. Unlike previous approaches that pick an individual result and discard the remainder, Candidate Soups (CDS) can fully use the valuable information in the different candidate translations through model uncertainty. Extensive experiments on two benchmarks demonstrate the effectiveness and generality of our proposed method, which can significantly improve the translation quality of various base models. More notably, our best variant outperforms the AT model on three translation tasks with a 7.6× speedup.
Autoregressive translation (AT) models based on Transformer Therefore, the non-autoregressive translation (NAT) Several methods have been proposed to alleviate the multimodality problem and improve the performance of the NAT model, such as the iteration-based NAT model Most of the previous methods are modified from the model's perspective, either modifying the structure of the model
(Figure example: "It often costs over a hundred dollars to obtain the required identity card.") In this paper, we propose Candidate Soups, which can significantly improve the translation quality without any modification to the model. Moreover, Candidate Soups is a general approach that can be used by any NAT model that can generate multiple candidate results, such as Vanilla NAT. However, Candidate Soups will effectively use the valuable information of all the candidate translations to fuse the different candidate results and obtain a higher-quality translation (Figure ). We conduct extensive experiments on two datasets commonly used in machine translation. The results demonstrate that our proposed method can significantly improve the base models' translation quality on different tasks while maintaining the fast inference speed of the NAT model. Remarkably, our best variant achieves better performance than the AT teacher model on three translation tasks with a 7.6× speedup.

Since the NAT model was proposed, it has attracted the attention of many researchers due to its superior inference speed. However, its translation quality suffers from degradation compared to the AT model. Therefore, various methods have been proposed to bridge the performance gap between NAT and AT models. Some researchers constrain the distribution of NAT model outputs by introducing various latent variables. The above methods are all improvements from the model perspective, and their purpose is to allow the model to generate higher-quality translations. Unlike previous work, Candidate Soups explores how to make the most of an existing model, and it can be applied to all NAT models that can generate multiple candidate results.

This section describes the details of the proposed method in the paper. We first show the problem definition and the general idea of Candidate Soups in Section 3.1, then introduce the implementation details of Candidate Soups in Section 3.2, followed by the example in Section 3.3. By introducing uncertainty into the NAT model, we can get a list of candidate results R = [R0, . . . , R k ], and each candidate result may have correctly and incorrectly translated parts that do not completely overlap. (Algorithm 1: Candidate Soups. Input: a list of candidate results R = [R0, . . . , R k ] and the corresponding log-probability score sequence list.) Thus, our goal is to find the optimal combination in R, which has the highest average log-probability re-scored by an AT model. Because the word order in the original translations must be kept, we first use R to build a Lattice (Figure ). However, because the initial Lattice contains too many paths, we cannot calculate the values of all edges. Furthermore, due to the dislocation caused by the different lengths of the candidate results, most of the paths in the initial Lattice have word order errors, such as edges between the same tokens (Figure ). For the remaining simple Lattice, the cost of calculating each edge value is still unbearable. Therefore, we fuse the nodes belonging to the same candidate result in each Lattice into a single node (Figure ). Algorithm 1 lists the process of Candidate Soups. We generate the final translation while looking for the common subsequence. First, for the input candidate results set R and the corresponding log-probability score set S, we remove the adjacent repeated tokens and their corresponding scores for each sentence. Then we initialize a pointer set I in which each pointer points to position 0 of its candidate translation, and use these pointers to traverse all candidates simultaneously. 
If all the current pointers point to the same token, the token is added to the final translation, and all pointers are moved one step to the right. Otherwise, Candidate Soups will look for the next sequence of pointers I* that satisfies the above condition and move all pointers there. At the same time, the segment with the highest average log-probability score among all segments generated by the pointer traversal from I to I* is added to the final translation. Experimental results show that Candidate Soups can significantly improve the final translation quality, requiring only 3 to 7 candidate translations. Moreover, the time required by Candidate Soups is almost negligible compared to the inference time of the NAT model.

Figure illustrates the process with a concrete example. First, the NAT model predicts three candidate translations for the input sentence by introducing different lengths (t = 0). Afterward, through the traversal of the pointers, Candidate Soups finds that the first two tokens ("The Republican") in the candidate results are the same, so they are added to the final translation (t = 1). When there is a disagreement between candidate results, Candidate Soups will find the next token ("extend") that all candidate translations predict in common and get three different segments ("authorities were quick," "authorities were quick to," and "and the authority"). Then Candidate Soups will select the candidate segment with the highest average log-probability score ("authorities were quick to") and add it to the final translation (t = 2). Similarly, in the subsequent traversal process, if the tokens are predicted jointly by all candidate results, Candidate Soups will add them to the final translation (t = 3, t = 5). Otherwise, Candidate Soups will select the token segment with the highest log-probability score to add to the final translation (t = 4). Ultimately, we get higher-quality translations by combining all valuable information in the candidate results. From this example, we can find that only selecting an independent candidate result as the final translation is not effective enough for the NAT model. Because different lengths introduce uncertainty into the NAT model, there is diversity among candidate translations, but the previous methods do not take advantage of this. In this way, guided by the certainty and confidence of the NAT model's output, Candidate Soups makes full use of the candidate results, keeping the essence and discarding the dross, and further improves the final translation quality without affecting the inference speed.

In this section, we first introduce the settings of our experiments in Section 4.1, then report the main results in Section 4.2. Ablation experiments and analysis are presented in Section 4.3. Dataset and Evaluation. We evaluate our method on the two most recognized machine translation benchmarks: WMT'14 English-German (4.0M sentence pairs) Knowledge Distillation. Using the AT model's output to train the NAT model can significantly improve the performance of the NAT model. Following previous work Hyperparameters. Our model architecture is Transformer-base Base Models. Our Candidate Soups is a general algorithm that can be applied to various NAT models. Therefore, to evaluate whether our proposed method can perform well on different NAT models, we selected the following four base models: (1) Vanilla NAT, (2) CMLM, (3) GLAT, and (4) GLAT & DSLP. The prediction patterns and performance of these base models are quite different, so through them, we can verify whether Candidate Soups can be applied to various NAT models. 
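To make the traversal described above and in Algorithm 1 concrete before moving on to the experiments, here is a minimal Python sketch (an illustrative reimplementation, not the authors' code). Candidates are assumed to be token lists with per-token log-probabilities and with adjacent repeated tokens already removed; the AT re-scoring step is omitted, and disagreement segments are ranked by their average log-probability under the NAT model.

```python
from typing import List, Optional


def candidate_soups(
    candidates: List[List[str]],   # token lists with adjacent repeats removed
    log_probs: List[List[float]],  # per-token log-probabilities, same shapes as candidates
) -> List[str]:
    """Fuse several candidate translations: keep tokens on which all candidates agree and,
    between agreement points, the segment with the highest average log-probability."""
    k = len(candidates)
    pointers = [0] * k
    fused: List[str] = []

    def next_common(ptrs: List[int]) -> Optional[List[int]]:
        # find the next positions at which all candidates share the same token
        for i in range(ptrs[0], len(candidates[0])):
            token = candidates[0][i]
            positions = [i]
            for c in range(1, k):
                try:
                    positions.append(candidates[c].index(token, ptrs[c]))
                except ValueError:
                    break
            else:
                return positions
        return None

    while all(pointers[c] < len(candidates[c]) for c in range(k)):
        tokens_here = [candidates[c][pointers[c]] for c in range(k)]
        if len(set(tokens_here)) == 1:                       # full agreement: keep the token
            fused.append(tokens_here[0])
            pointers = [p + 1 for p in pointers]
            continue
        target = next_common(pointers)
        if target is None:                                   # no further agreement: flush the best tail
            target = [len(c) for c in candidates]
        # pick the disagreement segment with the highest average log-probability
        best_seg, best_avg = [], float("-inf")
        for c in range(k):
            seg = candidates[c][pointers[c]:target[c]]
            scores = log_probs[c][pointers[c]:target[c]]
            avg = sum(scores) / len(scores) if scores else float("-inf")
            if avg > best_avg:
                best_seg, best_avg = seg, avg
        fused.extend(best_seg)
        pointers = target
    return fused
```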
In the future, we will test Candidate Soups on more NAT models, such as CTC Generality of Candidate Soups Table In conclusion, the above experimental results show that Candidate Soups is a general approach that can significantly improve translation quality while maintaining fast inference speed. Comparing with the State of the Art To evaluate the best performance Candidate Soups can achieve, we compare our best variant (GLAT+DSLP+Candidate Soups) with previous state-of-the-art NAT models, including Iterative NAT and Fully NAT. As shown in Table In addition, we also try to use two smaller AT models for re-scoring to accelerate the inference speed further. These two models have the same hyperparameters as Transformer-base, except for the number of layers of decoder and encoder. AT 4E-2D contains 4 encoder layers and 2 decoder layers, and AT 3E-1D contains 3 encoder layers and 1 decoder layer. Moreover, they were trained using the same distillation data as the NAT model. Surprisingly, even when the small AT models were used for re-scoring, our method maintained a similar performance to the previous model (AT 6E-6D), and its inference speed was 10.1×-11.5× that of the AT model. This result further proves that Candidate Soups can well balance the trade-off between translation quality and inference speed. Influence of the Candidate Number In order to analyze the effect of the candidate translation number on the Candidate Soups, we conduct experiments with different candidate numbers. Figure To analyze the influence of source sentence length on Candidate Soups' performance, we divide the source sentence after BPE into different intervals by length and calculate the BLEU score of each interval. The histogram of results is presented in Figure Impressively, the BLEU score of the source sentence length ranging from 40 to 60 increases by 5.01, and the BLEU score of the source sentences longer than 60 increases by 7.51. We believe this is because the NAT model tends to generate more uncertain and diverse candidate results for longer source sentences. This feature enables Candidate Soups to obtain more useful information in the candidate results to generate higher-quality translations. These experimental results further verify the potential of the Candidate Soups in translating complex long sentences. More experimental results and analyses are presented in the Appendix B. In this paper, we propose "Candidate Soups," which can discover and fuse valuable information from multiple candidate translations based on model uncertainty. This approach is general and can be applied to various NAT models. Extensive experimental results prove that the translation quality of the NAT model can be significantly improved by using Candidate Soups, especially for long sentences that are difficult to translate. And the tradeoff between translation quality and inference speed is well controlled and balanced by Candidate Soups. Furthermore, our best variant can achieve better results on three translation tasks than the AT teacher while maintaining NAT's high-speed inference. Although our proposed method can significantly improve the performance of non-autoregressive translation (NAT) models, it relies on trained autoregressive translation (AT) models to a certain extent. Not using the AT model for re-scoring can lead to poorer quality of translations generated by Candidate Soups, especially when using it for the poorer performing NAT model. 
Although using a small AT model is sufficient for Candidate Soups to achieve decent performance, it still results in a drop in inference speed and more GPU resources being used for translation. In addition, the performance of the AT model may limit the upper bound of Candidate Soups' capability. Therefore, we will explore new methods that can be effective without AT re-scoring in the future. Our work has potentially positive implications for various non-autoregressive machine translation applications. It is a general method that can be applied to virtually all existing non-autoregressive translation models to improve their performance while maintaining their high inference speed. Our work can facilitate the implementation of non-autoregressive translation models in commercial companies and humanitarian translation services in the future and promote cultural exchanges between different languages and different races.

A.1 Autoregressive Translation. The autoregressive translation (AT) model achieves state-of-the-art performance on multiple machine translation tasks where y<t denotes the previously generated tokens before the t-th position. During the training process, the AT model is trained via the teacher-forcing strategy that uses ground-truth target tokens as previously decoded tokens so that the output of the decoder can be computed in parallel. However, during inference, the AT model still needs to generate translations one by one from left to right until the token that represents the end, [EOS], is generated. Although the AT model has good performance, its autoregressive decoding method dramatically reduces the decoding speed and becomes the main bottleneck of its efficiency. To improve the inference speed, the non-autoregressive translation (NAT) model is proposed, where m denotes the length of the target sentence. Generally, NAT models need to have the ability to predict the length because the entire sequence needs to be generated in parallel. A common practice is to treat it as a classification task, using the information from the encoder's output to make predictions. However, this superior decoding speed is achieved at the cost of significantly sacrificing translation quality. Because NAT is only conditioned on source-side information, whereas AT can obtain the strong target-side context information provided by the previously generated target tokens, there is always a gap in the performance of NAT compared with AT. Noisy parallel decoding (NPD) where Y m is the translation predicted by the NAT model based on the length m. NPD also can use the AT model to identify the best translation. Notably, when an AT model is used for re-scoring, it can be decoded in parallel as it does at training time. Moreover, since all search samples can be computed independently, even with an AT model for re-scoring, the latency of the NPD process is only doubled compared to computing a single translation. Compared with the original data, the distillation data generated by the AT model has less noise and is more deterministic, which can effectively alleviate the multimodality problem of the NAT model. Therefore, almost all existing NAT models adopt Knowledge Distillation (KD) for training. However, generating distillation data tends to consume significant computing resources and time, and using distillation data to train NAT models may limit the translation capabilities of NAT models. 
In order to analyze whether our proposed method can still be effective in the scenarios where knowledge distillation is not used, we conducted experiments on the WMT'14 EN-DE dataset. Figure In addition to introducing uncertainty through length, we propose two other methods for generating different candidate translations: • Use the prediction results of different decoder layers. DSLP • Generate different translations by maintaining dropout during inference Table However, for layer uncertainty, the quality of the translations produced by the first layer will be significantly lower than that of the last layer. These low-quality candidate translations are of little help to Candidate Soups and even affect the performance of Candidate Soups. For dropout uncertainty, the candidate translations generated will be affected by the dropout probability. On the one hand, if the dropout probability is set too high, it may reduce the overall quality of the candidate translations. On the other hand, the generated candidate translations will be less diverse if the dropout probability is low. So we further need to spend time searching for the optimal dropout probability setting for different NAT models and tasks. However, these two methods can still achieve about 1 BLEU improvement on the strong baseline (GLAT+DSLP), and their generalization ability is stronger than the length-based method. In addition, Candidate Soups can also be used as a new model ensemble method to enhance the final translation quality by using the output from multiple NAT models. We will discuss this in future work.
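As a concrete illustration of the dropout-based uncertainty mentioned above, the sketch below keeps dropout layers stochastic at inference time so that repeated decoding passes of the same NAT model yield diverse candidates for Candidate Soups. The `model.generate` call is a hypothetical stand-in for whatever parallel decoding routine a given NAT model exposes; it is not an actual API of any specific toolkit.

```python
import torch
import torch.nn as nn


def enable_inference_dropout(model: nn.Module) -> None:
    """Put the model in eval mode but keep dropout layers stochastic,
    so repeated forward passes produce diverse outputs."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


@torch.no_grad()
def sample_dropout_candidates(model: nn.Module, src_tokens: torch.Tensor, num_candidates: int = 5):
    """Generate several candidate translations from one NAT model under dropout noise.
    `model.generate` is a hypothetical stand-in returning (tokens, per-token log-probs)."""
    enable_inference_dropout(model)
    candidates = []
    for _ in range(num_candidates):
        tokens, token_log_probs = model.generate(src_tokens)  # hypothetical API
        candidates.append((tokens, token_log_probs))
    return candidates
```

The returned (tokens, log-probability) pairs can then be fed directly to the fusion routine sketched earlier.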
1,062
384
1,062
Lifelong Sequence Generation with Dynamic Module Expansion and Adaptation
Lifelong sequence generation (LSG), a problem in continual learning, aims to continually train a model on a sequence of generation tasks to learn constantly emerging new generation patterns while avoiding the forgetting of previous knowledge. Existing LSG methods mainly focus on maintaining old knowledge while paying little attention to knowledge transfer across tasks. In contrast, humans can better learn new tasks by leveraging previously acquired knowledge from similar tasks. Inspired by the learning paradigm of humans, we propose Dynamic Module Expansion and Adaptation (DMEA), which enables the model to dynamically determine the architecture for acquiring new knowledge based on task correlation and select the most similar previous tasks to facilitate adaptation to new tasks. In addition, as the learning process can easily be biased towards the current task which might cause more severe forgetting of previously learned knowledge, we propose dynamic gradient scaling to balance the learning of the current task and replayed tasks. With extensive experiments, we demonstrate that DMEA can consistently outperform existing methods in different LSG settings.
With the recent advancements in pre-trained language models (LMs), current sequence generation methods have achieved impressive performance on a variety of generation tasks A potential solution is to formalize sequence generation as lifelong sequence generation or LSG Despite its effectiveness, ACM has several key limitations. First, it mainly focuses on mitigating forgetting of previously acquired knowledge while paying little attention to transferring learned knowledge to new tasks, which is as important for continual learning as preventing forgetting Inspired by the learning paradigm of humans and to address the above limitations of ACM, in this work we propose Dynamic Module Expansion and Adaptation (DMEA). In addition, when the model learns a new task, DMEA also incorporates pseudo-sample replay In summary, our main contributions are: • To the best of our knowledge, we are the first to explore solving LSG from the perspective of human learning. We propose DMEA, a novel method based on dynamic module expansion and adaptation, to alleviate catastrophic forgetting and facilitate knowledge transfer in LSG. • With extensive experiments and analysis, we demonstrate the effectiveness of our method compared to existing ones in different LSG settings.
Lifelong Learning (LL) aims to continually learn knowledge from a sequence of tasks with different distributions. The goal is twofold: alleviate catastrophic forgetting Catastrophic forgetting typically means that the model forgets previously acquired knowledge after learning new tasks. Prior LL methods mainly focus on mitigating this problem and can be divided into three categories. First, regularizationbased methods constrain the update of parameters that are important to learned tasks to retain previous knowledge More recently, researchers have considered exploring knowledge transfer in LL, i.e., learning on a task can benefit from learning on another task by transferring related knowledge. This includes CTR LSG involves learning from a stream of sequence generation tasks T = (T 1 , ..., T n ), where every task T i has its own training set D i train , validation set D i valid , and test set , where X j and Y j denote the input and output texts, respectively. At time step k, the model is trained on the training set D k train of task T k and has no access to real samples of previously learned tasks. After the training on D k train , the model is expected to perform well on all the tasks learned so far, i.e., T 1 , ..., T k , and will be evaluated on the test set D i test of each task T i (1 ≤ i ≤ k) with corresponding evaluation metrics separately. Therefore, to achieve the goal of LSG, the model is required to alleviate the forgetting of acquired knowledge and better learn new patterns through possible forward knowledge transfer. Given an input-output text pair (X, Y ) for a task, the model learns to decode the output text Y after reading the input X. Following Zhang et al. ( task, the model is optimized to decode Y given X and Q. Denoting the concatenation of X, Q and Y as A, the autoregressive training objective is: where n is the total number of tokens in A and (A 1 , ..., A m ) is the concatenation of X and Q, and θ denotes the model parameters. Inspired by how humans learn a new task (Fig. The expansion stage ( §4.1) first determines the model architecture dynamically. The selection stage ( §4.2) then selects the top-K most similar previous tasks which are utilized in the final adaptation stage ( §4.3) to facilitate adaptation to the new task. We also employ pseudo-sample replay along with a dynamic gradient scaling method to balance the learning of the new and replayed tasks. Humans are able to determine whether previously acquired skills are sufficient to solve a new task. Our method DMEA aims to mimic this learning process in the expansion stage. It can dynamically decide whether to reuse modules of previous tasks or insert a new module in every transformer layer to learn novel knowledge. Inspired by Zhang et al. (2022), we utilize differentiable architecture search The weighted average ĥl is then passed to the next part of the model for learning. After training the model on D j train for several epochs using L train (defined in §4.3), we select the module with the largest coefficient in every layer for the new task T j . Different from Zhang et al. ( where cos is the cosine similarity function and f i is calculated based on the training set D i train . In this way, a previous module shared by tasks with higher word frequency distribution similarity to the new task has a larger initial coefficient, increasing the tendency to reuse it. 
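The paper's exact initialization formula is not reproduced above, but the idea can be sketched as follows: compute a word-frequency distribution for each task's training set over a shared vocabulary and set the initial coefficient of each previously added module according to the cosine similarity between the new task and the tasks sharing that module. The aggregation over sharing tasks (here, the maximum) and all names are our assumptions for illustration.

```python
from collections import Counter
from math import sqrt
from typing import Dict, Iterable, List


def word_freq_vector(texts: Iterable[str], vocab: List[str]) -> List[float]:
    """Normalized word-frequency distribution of a task's training set over a shared vocabulary."""
    counts = Counter(tok for text in texts for tok in text.split())
    total = sum(counts[w] for w in vocab) or 1
    return [counts[w] / total for w in vocab]


def cosine(u: List[float], v: List[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm > 0 else 0.0


def init_module_coefficients(
    new_task_freq: List[float],
    module_to_task_freqs: Dict[str, List[List[float]]],  # module name -> freq vectors of tasks sharing it
) -> Dict[str, float]:
    """Give previously added modules an initial coefficient reflecting how similar (in word-frequency
    distribution) their tasks are to the new task; a newly added module starts at the minimum of
    these values to encourage reuse. Taking the max over sharing tasks is an assumption."""
    coeffs = {
        m: max(cosine(new_task_freq, f) for f in freqs)
        for m, freqs in module_to_task_freqs.items()
    }
    coeffs["new_module"] = min(coeffs.values()) if coeffs else 1.0
    return coeffs
```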
In addition, the coefficient λ l k+1 of the newly added module m l k+1 is initialized to the minimum value of the initial coefficients {λ l 1 , ..., λ l k } of previously added modules {m l 1 , ..., m l k } to encourage module reuse. Figure 2: In the selection stage, DMEA selects the top-K most similar previous tasks through input subspace to facilitate adaptation to a new task. During adaptation, the output of the selected similar tasks is fused with that of the new task in every layer to enable forward knowledge transfer. Note that only modules selected for the new task (green polygons) are learnable modules in the adaptation stage. In addition, DMEA introduces dynamic gradient scaling to balance the learning of the new task and replayed tasks. The selected module in layer l can be either from previous modules {m l 1 , ..., m l k } or the newly added one m l k+1 and will be tuned in the adaptation stage to accommodate new knowledge. We then discard newly added modules that are not selected. Note that only newly added modules and coefficients are learnable in the expansion stage; the pre-trained LM and previous modules are kept frozen. As humans, we can better acquire new knowledge by recognizing and utilizing knowledge from previously learned tasks that are similar Similar to After obtaining the representation matrix R j = [X 1 , ..., X n ] ∈ R m×n for task T j , we apply SVD to R j , i.e., R j = U j Σ j (V j ) ′ , where U j = [u j 1 , ..., u j m ] ∈ R m×m is composed of left-singular vectors u j i , Σ j ∈ R m×n is a rectangular diagonal matrix with singular values on the diagonal, and To obtain the input subspace S j of T j , we select the first k left-singular vectors in U j to form the bases B j = [u j 1 , ..., u j k ] for S j , where k is determined by the requirement: F with R j k being the k-rank approximation of R j , F being the Frobenius norm, and ϵ j being a predefined threshold. For the new task T j , the norm of its subspace projection onto the subspace of a previously learned task T i could characterize the similarity Q j,i between these two tasks. More formally, where Proj S i (S j ) = B j B i (B i ) ′ denotes the subspace projection. After getting the similarity scores Q j,i , 1 ≤ i < j of all previous tasks, we pick K tasks T sim = (T 1 , ..., T K ) with the top-K highest scores to facilitate adaptation to the new task T j . For adaptation to T j , assume that T all = (T 1 , ..., T K , T j ) contains a total of r modules {m l 1 , ..., m l r } in layer l. During the training on D j train using L train (see Eq. ( The learnable coefficients {α l 1 , ..., α l r } are equally initialized to 1.0. Similar to the expansion stage, the fused output hl is passed to the next part of the model for learning. After training, the learnable coefficients will be saved for inference. Note that we only tune modules selected in the expansion stage (can be modules of previous tasks or newly added modules) and learnable coefficients while keeping the pre-trained language model and other modules frozen. As there is no saved real sample of previously learned tasks when the model adapts to a new task, we also incorporate pseudo-sample replay where m is the total number of tokens in A ′ . The overall loss that DMEA optimizes for adapting to a new task is: where µ is the weight of data generation loss. 
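The subspace-based selection of similar tasks described above can be sketched with NumPy as follows. Each task's representation matrix is reduced to an orthonormal basis of left-singular vectors chosen to satisfy the energy threshold ϵ, and the similarity between two tasks is taken as the (normalized) norm of projecting one basis onto the other subspace; the exact normalization used in the paper may differ, so this is an approximation of the idea rather than the authors' definition.

```python
import numpy as np


def subspace_basis(R: np.ndarray, eps: float = 0.95) -> np.ndarray:
    """Return the first k left-singular vectors of R such that the k-rank approximation
    captures at least an eps fraction of the (squared) Frobenius norm."""
    U, s, _ = np.linalg.svd(R, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, eps)) + 1
    return U[:, :k]                        # columns form an orthonormal basis of the input subspace


def subspace_similarity(B_new: np.ndarray, B_old: np.ndarray) -> float:
    """Similarity as the normalized squared Frobenius norm of the projection of the new task's
    basis onto the old task's subspace (one reasonable choice of normalization)."""
    proj = B_old @ (B_old.T @ B_new)       # project the new basis onto span(B_old)
    return float(np.linalg.norm(proj) ** 2 / np.linalg.norm(B_new) ** 2)


def select_top_k(R_new: np.ndarray, prev_reprs: list, k: int = 1, eps: float = 0.95) -> list:
    """Pick the indices of the top-K most similar previous tasks for forward transfer."""
    B_new = subspace_basis(R_new, eps)
    scores = [subspace_similarity(B_new, subspace_basis(R, eps)) for R in prev_reprs]
    return sorted(range(len(prev_reprs)), key=lambda i: scores[i], reverse=True)[:k]
```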
After the expansion stage, if the new task reuses some modules of previously learned tasks, the model will generate some pseudo samples of these tasks and train the model using L train on the combination of new data and pseudo data. As the model has not seen new data before, the gradient norm of the new task on reused modules is much larger than that of replayed tasks. The learning process can easily be biased towards the new task which may affect previously acquired knowledge. Therefore, to balance the learning of the new task and replayed tasks, we introduce dynamic gradient scaling. Specifically, assuming that the new task T j reuses s modules {m 1 , ..., m s } of a previous task T i in all layers, we randomly select q examples from D j train and pseudo samples of T i separately and forwards them through the model to obtain the gradient of T j and T i using L train with regard to reused modules {m 1 , ..., m s }, denoted as g j and g i , respectively. The dynamic scale factor η i t is then calculated as: where t is the number of completed training epochs. After dynamic gradient scaling, the total loss for jointly learning T j and T i is: Note that in the early stage of training, the value of t is small. η t is greater than 1 to balance the gradient of the new task T j and the replayed task T i . When the model has seen enough new data in the late stage of training (no need to balance), η t is approximately equal to 1 as the value of t is large. In this section, we first describe investigated tasks and then introduce methods compared in our work. Four representative sequence generation tasks are investigated in our work: natural language generation, summarization, task-oriented dialogue and SQL query generation. Following Zhang et al. (2022), we consider two different scenarios: (i) LSG on similar tasks where the model learns a sequence of tasks of the same type but different domains, and (ii) LSG on random tasks where the model learns knowledge from different types of tasks. For LSG on similar tasks, we use five different domains from two natural language generation datasets (RNNLG Following Zhang et al. ( • Finetune tunes the whole GPT-2 model only on the training data of the new task during the LSG process. • EWC • LAMOL • Metac-Adapt (Metac) • Adapter+LAMOL only inserts adapter modules for the first task and tunes these modules with pseudo-sample replay while keeping the backbone model frozen. • AdapterCL Table Simply fine-tuning the model with new samples leads to poor performance due to catastrophic forgetting. Although EWC adopts Fisher information matrix to alleviate forgetting, its performance is still much worse than other memory-based baselines, indicating the importance of pseudo-sample replay. When learning from a sequence of similar tasks, Adapter+LAMOL performs better than AdapterCL as AdapterCL applies parameter isolation to different tasks which might prevent positive knowledge transfer across tasks. However, this is not the case when learning from random tasks: AdapterCL achieves much better results than Adapter+LAMOL as AdapterCL can avoid catastrophic forgetting by assigning different learnable parameters to each task. The performance of ACM is superior to Adapter+LAMOL and AdapterCL in both scenarios, showing the effectiveness of its adaptive compositional architecture. However, ACM has no explicit mechanism to encourage forward knowledge transfer in LSG, which is actually the human learning paradigm. 
Our proposed DMEA consistently outperforms ACM by dynamically leveraging previously acquired knowledge to facilitate adaptation to new tasks. We conduct several ablations to analyze the contribution of different components of DMEA. In particular, we investigate three variants of DMEA (a) without selecting similar previous tasks for forward knowledge transfer (w.o. transfer), (b) removing dynamic gradient scaling (w.o. scaling), and (c) without dynamically initializing learnable coefficients (w.o. initialization). For each scenario, i.e., similar tasks or random tasks, we randomly pick one sequence for experiments. Table From the results, we can observe that all components contribute to the average performance. Removing forward knowledge transfer leads to a significant performance drop in both scenarios, indicating that selecting top-K most similar previous tasks can indeed discover and transfer useful learned knowledge to facilitate adaptation to the new task. The adoption of dynamic gradient scaling yields a moderate performance boost as it can balance the learning of the new task and replayed tasks to mitigate catastrophic forgetting. Dynamic initialization of learnable coefficients also facilitates performance improvement, demonstrating the effectiveness of leveraging the similarity of word frequency distributions between tasks. Quantify Forward Knowledge Transfer. Following where R i,j is the performance score on T j after learning T i and di refers to the performance of training T i individually, which is actually the result of AdapterCL. For each scenario, we randomly select one sequence for analysis and report the average performance score along with FKT at each step in Table Input Subspace vs. Other Similarity Metrics. The ablation (w.o. transfer) in §6.2 demonstrates the importance of selecting similar learned tasks. To further investigate whether different similarity metrics influence the performance of DMEA, we conduct controlled experiments with two new metrics: (a) cosine similarity of word frequency distributions between different tasks (frequency), and (b) cosine similarity of the representations of selected samples from different tasks Robustness to Module Type To verify whether the performance gain of DMEA is consistent across different types of modules, we extend the experiments to prefix-tuning Longer Sequence. As mentioned in §5.1, we mainly conduct experiments on sequences consisting of 5 tasks following Quality of Pseudo Data Fig. Other Types of Tasks To explore whether the performance gain of DMEA is consistent on other types of tasks, we further include three new tasks: sentiment analysis (SST Different Pseudo-data Sampling Ratios Following In addition, we show case studies of learned model architecture, model output, dynamic gradient scaling and task selection, generalization of dynamic initialization, and potential real-world applications in Appendix A.9 ∼ A.14, respectively. In this work, we have introduced DMEA for lifelong sequence generation (LSG). DMEA leverages task correlations to dynamically determine the suitable architecture required to acquire novel knowledge of a new task and selects the most similar previous tasks through input subspace to facilitate knowledge transfer. It uses pseudo-sample replay along with dynamic gradient scaling to balance the learning of the new task and replayed tasks to further alleviate forgetting. 
With extensive experiments and analysis, we have shown that DMEA consistently outperforms previous methods in different LSG settings. In the future, we would like to investigate ways to improve the quality of pseudo data and explore more metrics for task similarity. input subspace as 100. The threshold ϵ is set to 0.95 for selecting left-singular vectors. We set the number of similar tasks K to 1. For dynamic gradient scaling, we set the number of examples (q) selected to calculate the gradients to 100. Table We present the average number of learnable parameters and average running time for ACM and DMEA in Table To further demonstrate that dynamically initializing learnable coefficients can facilitate finding the optimal model architecture, we analyze the model expansion stage of ACM and DMEA using sequence #4 in the random scenario. For the final task tv, ACM decides to reuse modules from the first (e2e) and the third task (laptop), while DMEA reuses all modules from laptop, which is consistent with the observation that the similarity between tv and laptop is much higher than that between tv and e2e. We select RNNLG.hotel (sequence #1 in the similar scenario) and WikiSQL (sequence #4 in the random scenario) as two representative tasks and show several examples of output in Table
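Since the exact scaling equation is not reproduced above, the following PyTorch fragment is only a schematic illustration of dynamic gradient scaling: estimate the gradient norms of the new task and a replayed task on the reused modules from a few examples (q = 100 in the paper's setting), and up-weight the replayed loss early in training by a factor that decays towards 1 as epochs accumulate. The decay schedule and the way the two losses are combined are assumptions, not the authors' formula.

```python
import torch


def grad_norm_on_modules(loss: torch.Tensor, modules) -> float:
    """L2 norm of the gradient of `loss` with respect to the parameters of the reused modules."""
    params = [p for m in modules for p in m.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    flat = [g.reshape(-1) for g in grads if g is not None]
    return torch.cat(flat).norm().item() if flat else 0.0


def dynamic_scale(new_loss, replay_loss, reused_modules, epoch: int) -> float:
    """Schematic scale factor: ratio of new-task to replayed-task gradient norms,
    annealed towards 1 as training progresses (the 1 / (epoch + 1) decay is an assumption)."""
    g_new = grad_norm_on_modules(new_loss, reused_modules)
    g_replay = grad_norm_on_modules(replay_loss, reused_modules)
    ratio = g_new / max(g_replay, 1e-8)
    return 1.0 + (ratio - 1.0) / (epoch + 1) if ratio > 1.0 else 1.0


def total_loss(new_loss, replay_loss, reused_modules, epoch: int):
    """Total loss for jointly learning the new task and one replayed task."""
    eta = dynamic_scale(new_loss, replay_loss, reused_modules, epoch)
    return new_loss + eta * replay_loss
```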
1,170
1,233
1,170
Learning the Beauty in Songs: Neural Singing Voice Beautifier
We are interested in a novel task, singing voice beautification (SVB). Given the singing voice of an amateur singer, SVB aims to improve the intonation and vocal tone of the voice, while keeping the content and vocal timbre. Current automatic pitch correction techniques are immature, and most of them are restricted to intonation but ignore the overall aesthetic quality. Hence, we introduce Neural Singing Voice Beautifier (NSVB), the first generative model to solve the SVB task, which adopts a conditional variational autoencoder as the backbone and learns the latent representations of vocal tone.
The major successes of artificial intelligence singing voice research are primarily in Singing Voice Synthesis (SVS) Nowadays in real-life scenarios, SVB is usually performed by professional sound engineers with adequate domain knowledge, who manipulate commercial vocal correction tools such as Melodyne. To tackle these challenges, we introduce Neural Singing Voice Beautifier (NSVB), the first generative model to solve the SVB task, which adopts a Conditional Variational AutoEncoder (CVAE) 2) To improve the vocal tone, we propose a latent-mapping algorithm in the latent space, which converts the latent variables of the amateur vocal tone to those of the professional ones. This process is optimized by maximizing the log-likelihood of the converted latent variables. To retain the vocal timbre during the vocal tone mapping, we also propose a new dataset named PopBuTFy containing parallel singing recordings of both amateur and professional versions. Besides, thanks to the autoencoder structure, NSVB inherently supports semi-supervised learning, where the additional unpaired, unlabeled singing data can also be leveraged. • We propose the first generative model NSVB to solve the SVB task. NSVB not only corrects the pitch of amateur recordings, but also generates the audio with high audio quality and improved vocal tone, to which previous works typically pay little attention. • We propose the Shape-Aware Dynamic Time Warping (SADTW) algorithm to synchronize the amateur recording with the template pitch curve, which ameliorates the robustness of the previous time-warping algorithm. • We propose a latent-mapping algorithm to convert the latent variable of the amateur vocal tone to the professional one's, and contribute a new dataset PopBuTFy to train the latent-mapping function. • We design NSVB as a CVAE model, which supports semi-supervised learning to leverage unpaired, unlabeled singing data for better performance. 2 Related Works
Singing Voice Conversion (SVC) is a sub-task of Voice Conversion (VC) Mainstream SVC models can be grouped into three categories Automatic Pitch Correction (APC) works attempt to minimize the manual effort in modifying the flawed singing voice In this section, we describe the overview of NSVB, which is shown in Figure As shown in Figure where x, c, z denote the input/output mel-spectrogram, the mix of content, vocal timbre and pitch conditions, and the latent variable representing the vocal tone respectively; ϕ and θ denote the model parameters of the CVAE encoder and CVAE decoder; q ϕ (z|x, c) is the posterior distribution approximated by the CVAE encoder; p θ (x|z, c) is the likelihood function that generates mel-spectrograms given latent variable z and condition c; p(z) is the prior distribution of the latent variables z, and we choose the standard normal distribution as p(z) for simplification. Furthermore, to address the over-smoothing problem where x is the ground-truth and x is the output of CVAE. The descriptions for the model structure of each component are in Section 3.5.

To implement the pitch correction, a straightforward method is aligning the amateur recording with the template pitch curve, and then concatenating them to resynthesize a new singing sample with improved intonation. Since the source pitch curve of amateur recordings and the template one show a high degree of natural correlation along the time axis, applying a proper time-warping algorithm on them is crucial. However, original DTW We elaborate a non-parametric and data-free algorithm, Shape-Aware DTW (SADTW), based on the prior knowledge that the source pitch curve and the template one have analogous local shape contours. Specifically, we replace the Euclidean distance in the original DTW distance matrix with the shape context descriptor distance. The shape context descriptor of a time point f i in one pitch curve is illustrated in Figure where | • | means the cardinality of a set. This histogram represents the distribution over relative positions, which is a robust, compact and discriminative descriptor. Then, it is natural to use the χ2-test statistic on this distribution descriptor as the "distance" of two points f a and f p : where h a and h p are the normalized histograms corresponding to the point f a from the amateur pitch curve and the point f p from the template pitch curve. C(a, p) ranges from 0 to 1. Finally, we run DTW on the distance matrix C to obtain the alignment with the least distance cost between the two curves. (Figure 3 caption: The shape descriptor in SADTW. The blue curve represents pitch; the horizontal axis is time; the vertical axis is F0 frequency. There are m = 4 windows and n = 6 angles to divide the neighbor points of f i .)

Define a pair of mel-spectrograms (x a , x p ): the contents of x a and x p are the same sentence of a song from the same singer, who sings these two recordings using the amateur tone and the professional tone respectively. Given the CVAE model, we can infer the posterior distributions q ϕ (z a |x a , c a ) and q ϕ (z p |x p , c p ) corresponding to x a and x p through the encoder of CVAE. To achieve the conversion of vocal tone, we introduce a mapping function M to convert the latent variables from q ϕ (z a |x a , c a ) to q ϕ (z p |x p , c p ). Concretely, we sample a latent variable of amateur vocal tone z a from q ϕ (z a |x a , c a ), and map z a to M(z a ). 
Then, M can be optimized by minimizing the negative log-likelihood of M(z a ): Define ĉp as the mix of 1) the content vectors from the amateur recording aligned by SADTW, 2) vocal timbre embedding encoded by timbre encoder, and 3) template pitch where D has been optimized by Eq. ( There are two training stages for NSVB: in the first training stage, we optimize CVAE by minimizing the following loss function and optimize the discriminator (D) by minimizing Eq. ( ϕ, θ, and D are not optimized in this stage. In inference, the encoder of CVAE encodes x a with the condition c a to predict z a . Secondly, we map z a to M(z a ), and run SADTW to align the amateur recordings with the template pitch curve. The template pitch curve can be derived from a reference recording with good intonation or a pitch predictor with the input of music notes. Then, we obtain ĉp defined in Section 3.3 and send M(z a ) together with ĉp in the decoder of CVAE to generate x. Finally, by running a pre-trained vocoder conditioned on x, a new beautified recording is produced. The encoder of CVAE consists of a 1-D convolutional layer (stride=4), an 8-layer WaveNet structure In this section, we first introduce PopBuTFy, the dataset for SVB, and then describe the implementation details in our work. Finally, we explain the evaluation method we adopt in this paper. Dataset Since there is no publicly available highquality, unaccompanied and parallel singing dataset for the SVB task, we collect and annotate a dataset containing both Chinese Mandarin and English pop songs: PopBuTFy. To collect PopBuTFy for SVB, the qualified singers majoring in vocal music are asked to sing a song twice, using the amateur vocal tone for one time and the professional vocal tone for another. Note that some of the amateur recordings are sung off-key by one or more semi-tones for the pitch correction sub-task. The parallel setting could make sure that the personal vocal timbre will keep still during the beautification process. In all, PopBuTFy consists of 99 Chinese pop songs (∼10.4 hours in total) from 12 singers and 443 English pop songs (∼40.4 hours in total) from 22 singers. All the audio files are recorded in a professional recording studio by qualified singers, male and female. Every song is sampled at 22050 Hz with 16-bit quantization. We randomly choose 6 songs in Chinese and 18 songs in English (from unseen speakers) for validation and test. For subjective evaluations, we choose 60 samples in the test set from different singers, half in Chinese and English. All testing samples are included for objective evaluations. Implementation Details We train the Neural Singing Beautifier on a single 32G Nividia V100 GPU with the batch size of 64 sentences for both 100k steps in Stage 1 and Stage 2 respectively. Besides PopBuTFy, we pre-train the ASR model (used for PPG extraction) leveraging the extra speech datasets: AISHELL-3 Performance Evaluation We employ both subjective metrics: Mean Opinion Score (MOS), Comparison Mean Opinion Score (CMOS), and an objective metric: Mean Cepstral Distortion (MCD) to evaluate the audio quality on the test-set. Besides, we use F0 Root Mean Square Error (F0 RMSE) and Pitch Alignment Accuracy (PAA) to estimate the pitch correction performance. For audio, we analyze the MOS and CMOS in two aspects: audio quality (naturalness, pronunciation and sound quality) and vocal tone quality. MOS-Q/CMOS-Q and MOS-V/CMOS-V correspond to the MOS/CMOS of audio quality and vocal tone quality respectively. 
More details about subjective evaluations are placed in Appendix C. In this section, we conduct extensive experiments to evaluate our proposed model with regard to 1) the performance of pitch conversion; 2) the audio quality and vocal tone quality. Firstly, we provide the comparison among time-warping algorithms in terms of PAA in Table Secondly, to check whether the amateur recordings are corrected to good intonation after being beautified by NSVB, we calculate the F0 RMSE metric of the amateur recordings and the audio generated by NSVB, and list the results in Table The subjective and objective results on both Chinese and English datasets are shown in Table We conduct some ablation studies to demonstrate the effectiveness of our proposed methods and some designs in our model, including latent-mapping, the additional loss L map2 in the second training stage, and semi-supervised learning with extra unpaired, unlabeled data on Chinese songs. As shown in Table The details of the adversarial discriminator, the content encoder, and the WaveNet structure are shown in Figure As shown in Figure As shown in Figure During testing, each audio sample is listened to by at least 10 qualified testers, all majoring in vocal music. We tell all testers to focus on one aspect and ignore the other aspect when scoring the MOS/CMOS of each aspect. For MOS, each tester is asked to evaluate the subjective naturalness of a sentence on a 1-5 Likert scale. For CMOS, listeners are asked to compare pairs of audio generated by systems A and B, indicate which of the two audio samples they prefer, and choose one of the following scores: 0 indicating no difference, 1 indicating a small difference, 2 indicating a large difference. For audio quality evaluation (MOS-Q and CMOS-Q), we tell listeners to "focus on examining the naturalness, pronunciation and sound quality, and ignore the differences of singing vocal tone". For vocal tone evaluations (MOS-V and CMOS-V), we tell listeners to "focus on examining singing vocal tone of the song, and ignore the differences of audio quality (e.g., environmental noise, timbre)". We split the evaluations for the main experiments and ablation studies into several groups. The testers are asked to take a break for 15 minutes between each group of experiments to remain focused during subjective evaluations. All testers are reasonably paid. SADTW is a kind of advanced APC method, which is designed for fine-tuning the amateur recording, but not for the case when the amateur recordings are completely out of tune. In the latter case, we recommend using Singing Voice Synthesis (synthesizing the waveform from PPG and MIDI) + Singing Voice Conversion (converting the vocal timbre of the synthesized waveform into the user's), or some Speech-to-Singing (STS) methods. In addition, SADTW provides a score representing the similarity of two pitch curves, which could be used to determine what kind of SVB solution should be chosen. This work develops a possible automatic way for singing voice beautification, which may cause unemployment for people with related occupations. In addition, there is the potential for harm from piracy and abuse of our released recordings. Thus, we choose the dataset license: CC by-nc-sa 4.0. 
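Returning to the alignment step of Section 3.2, the sketch below is an illustrative reimplementation (not the released code) of SADTW: it builds a shape-context histogram for every point of the amateur and template pitch curves, forms the χ²-based distance matrix C, and runs standard DTW on C to obtain the warping path. The m x n binning follows the description around Figure 3, but the window radius and exact bin boundaries are assumptions.

```python
import numpy as np


def shape_context(curve: np.ndarray, i: int, m: int = 4, n: int = 6, radius: int = 20) -> np.ndarray:
    """Normalized 2-D histogram (m distance windows x n angle bins) of the relative positions
    of the neighbors of point i on a pitch curve. Binning details are illustrative."""
    idx = np.arange(max(0, i - radius), min(len(curve), i + radius + 1))
    idx = idx[idx != i]
    dt = (idx - i).astype(float)                       # relative time
    df = curve[idx] - curve[i]                         # relative F0
    dist = np.sqrt(dt ** 2 + df ** 2)
    angle = np.arctan2(df, dt)                         # in [-pi, pi]
    d_bins = np.linspace(0, dist.max() + 1e-8, m + 1)
    a_bins = np.linspace(-np.pi, np.pi, n + 1)
    hist, _, _ = np.histogram2d(dist, angle, bins=[d_bins, a_bins])
    hist = hist.flatten()
    return hist / max(hist.sum(), 1e-8)


def chi2_distance(h_a: np.ndarray, h_p: np.ndarray) -> float:
    """Chi-squared statistic between two normalized histograms, in [0, 1]."""
    denom = h_a + h_p
    mask = denom > 0
    return 0.5 * float(np.sum((h_a[mask] - h_p[mask]) ** 2 / denom[mask]))


def sadtw_alignment(amateur_f0: np.ndarray, template_f0: np.ndarray):
    """Run DTW on the shape-context chi-squared distance matrix and return the warping path."""
    A = [shape_context(amateur_f0, i) for i in range(len(amateur_f0))]
    P = [shape_context(template_f0, j) for j in range(len(template_f0))]
    C = np.array([[chi2_distance(a, p) for p in P] for a in A])
    # standard DTW with unit steps
    D = np.full((len(A) + 1, len(P) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(A) + 1):
        for j in range(1, len(P) + 1):
            D[i, j] = C[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the minimal-cost path (boundary handling kept simple for brevity)
    path, i, j = [], len(A), len(P)
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], C
```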
602
1,922
602
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP WARNING: This paper contains real-world cases which are offensive in nature
Textual adversarial samples play important roles in multiple subfields of NLP research, including security, evaluation, explainability, and data augmentation. However, most work mixes all these roles, obscuring the problem definitions and research goals of the security role that aims to reveal the practical concerns of NLP models. In this paper, we rethink the research paradigm of textual adversarial samples in security scenarios. We discuss the deficiencies in previous work and propose our suggestions that the research on the Security-oriented adversarial NLP (SoadNLP) should: (1) evaluate their methods on security tasks to demonstrate the real-world concerns; (2) consider realworld attackers' goals, instead of developing impractical methods. To this end, we first collect, process, and release a security datasets collection Advbench. Then, we reformalize the task and adjust the emphasis on different goals in SoadNLP. Next, we propose a simple method based on heuristic rules that can easily fulfill the actual adversarial goals to simulate real-world attack methods. We conduct experiments on both the attack and the defense sides on Advbench. Experimental results show that our method has higher practical value, indicating that the research paradigm in SoadNLP may start from our new benchmark. All the code and data of Advbench can be obtained at
Natural language processing (NLP) models based on deep learning have been employed in many real-world applications
Adversarial samples can reveal the practical concerns of NLP models deployed in security situations. Adversarial samples can be employed to benchmark models' robustness to out-of-distribution data (diverse user inputs). Adversarial samples can explain part of the models' decision processes. Adversarial training based on adversarial sample augmentation can improve performance and robustness. There are two core issues about why previous textual adversarial attack work can hardly help real-world security problems. First, most work does not consider security tasks and datasets (Table example, PWWS: "I was all over the fuc king place because the toaster had tits. !!!peace peace peace") To address the issue of security-irrelevant evaluation benchmarks, we first summarize five security tasks and search for corresponding open-source datasets. We collect, process, and release these datasets as a collection named Advbench to facilitate future research. To address the issue of ill-defined problem definition, we refer to the intention of real-world attackers to reformalize the task of textual adversarial attack and adjust the emphasis on different adversarial goals. Further, to simulate real-world attacks, we propose a simple attack method based on heuristic rules that are summarized from various sources, which can easily fulfill the actual attackers' goals. We conduct comprehensive experiments on Advbench to evaluate methods proposed in the NLP community and our simple method. Experimental results overall demonstrate the superiority of our method, considering the attack performance, the attack efficiency, and the preservation of adversarial meaning (validity). We also consider the defense side and show that the SOTA defense method cannot handle our simple heuristic attack algorithm. The overall experiments indicate that the research paradigm in SoadNLP may start from our new benchmark. To summarize, the main contributions of this paper are as follows: • We collect, process, and release a security datasets collection Advbench. • We reconsider the attackers' goals and reformalize the task of textual adversarial attack in security scenarios. • We propose a simple attack method that fulfills the actual attackers' goals to simulate real-world attacks, which can facilitate future research on both the attack and the defense sides.

2 Advbench Construction

In our survey, we find that the current problem definition and research goals considering the security role of adversarial samples to reveal practical concerns are ill-defined and ambiguous. We attribute this to the failure to distinguish the several roles of adversarial samples (See Table ). To make the research in this field more standardized and in-depth, a reformalization of this problem needs to be conducted. Note that we focus on the security role of textual adversarial samples in this paper. We summarize 5 security tasks, including misinformation, disinformation, toxic content, spam, and sensitive information detection. The task descriptions and our motivation for choosing these tasks are given in Appendix B. Due to the label-unbalanced issue of some datasets, we will release both our processed balanced and unbalanced datasets. The dataset statistics are listed in Table

LUN. Our LUN dataset is built on the Labeled Unreliable News Dataset

SATNews. The Satirical News Dataset

Amazon-LB. The Amazon Luxury Beauty Review dataset is a review collection of the Luxury Beauty category in Amazon with verification information in Amazon Review Data (2018)

CGFake. 
The Computer-generated Fake Review Dataset

SpamAssassin. The SpamAssassin 2.2.5

Sensitive Information

EDENCE. EDENCE (Neerbek, 2019a) contains samples with auto-generated parsing-tree structures in the Enron corpus. The annotated labels come from the TREC LEGAL

FAS. FAS (Neerbek, 2019b) also contains samples with parsing-tree structures built from the Enron corpus and is modified for sensitive information detection by using TREC LEGAL labels annotated by domain experts. The samples in FAS are compliant with Financial Accounting Standards 3 and are preprocessed in the same way as EDENCE in our work.

3 Task Formalization

Overview. Without loss of generality, we consider the text classification task. Given a classifier f : X → Y that can make a correct prediction on the original input text x: arg max_{y∈Y} P(y | x) = y_true, where y_true is the golden label of x. The attackers will make perturbations δ to craft an adversarial sample x* that can fool the classifier: arg max_{y∈Y} P(y | x*) ≠ y_true.

Refinement. The core part of adversarial NLP is to find the appropriate perturbations δ. We identify four deficiencies in the common research paradigm of SoadNLP. (1) Most attack methods iteratively search for better δ relying on access to the victim models' confidence scores or gradients (2) Previous work attempts to make δ imperceptible by imposing some restrictions on the searching process, like ensuring that the cosine similarity of the adversarial and original sentence embeddings is higher than a threshold (3) Adversarial attacks based on word substitution or sentence paraphrase are the most widely studied. However, current attack algorithms are very inefficient and need to query victim models hundreds of times to craft adversarial samples, which makes them unlikely to happen in reality (4) There is a considerable body of work assuming that the attackers are experienced NLP practitioners who incorporate external knowledge bases In general, we make two suggestions for future research, including considering the decision-based experimental setting and attack methods that are free of expertise. Besides, we adjust the emphasis on different adversarial goals, corresponding to real-world attack situations (See Table ). Note that we do not mean that the quality of adversarial samples is unimportant. For example, spam emails and fake news will obtain more attacker-expected feedback if they are more fluent and look more natural. Our intention in this paper is to decrease the priority of the secondary adversarial goals when there exists a trade-off among all adversarial goals, to better simulate real-world attack situations.

To simulate the adversarial strategies employed by real-world attackers, we also propose a simple method named ROCKET (Real-wOrld attaCK based on hEurisTic rules) that can fulfill the actual adversarial goals. Our algorithm can be divided into two parts: heuristic perturbation rules and the black-box searching algorithm.

Perturbation Rules. To make our heuristic perturbation rules better simulate real-world attackers, we survey and summarize common perturbation rules from several sources, including (1) real adversarial user data (some cases are shown in Appendix D), (2) senior practitioners' experience, and (3) papers in the NLP community. We now specify how we find distracting words (rule-6). For each task, we first gather some realistic data and obtain the words that occur relatively more frequently in samples with the attacker-specified label (e.g., non-spam in the spam detection task) by calculating word frequencies. 
Then we heuristically select distracting words that will not interfere with the original task. Finally, we add an appropriate amount of selected words at the beginning or end of the original sentence, ensuring that the semantics of the sentence will not be affected. Searching Algorithm. We need to heuristically apply perturbations rules to search adversarial samples in the black-box setting because only victim models' decisions are available. We first apply rule-6 to the original sentence and filter stop words to get the semantic word list L of the modified sentence. Then we repeat the word perturbation process while not fooling the victim model. Specifically, one iteration of the word perturbation process starts by first sampling a batch of words w from L. Repeat the process of sampling actions r from rule-1 to rule-5 for each word in w and query the victim model until the threshold is reached or the attack succeeds. Then w is removed from L. (3) Attack efficiency (Query) is defined as the average query times to the victim models when crafting adversarial samples. (4) Perturbation degree is measured by Levenstein distance. (5) Quality is measured by the relative increase of perplexity and absolute increase of grammar errors when crafting adversarial samples. We implement existing attack methods proposed in the NLP community using the NLP attack package OpenAttack The experimental details can be found in Appendix F. First Priority Metrics. We list the results of attack success rate and average query times in Table 5. Our findings are as follows: • Considering all previous attack methods, we find that it's extremely hard to craft adversarial samples in some tasks (e.g., Misinformation, Spam). And the attack performances of all methods drop compared to the results in original papers • Most previous methods are inefficient when launching adversarial attacks. Usually, they need to query the victim model hundreds of times to craft a successful adversarial sample. • Our simple ROCKET shows superiority overall considering the attack performance and attack efficiency on Advbench. To further demonstrate the efficiency of ROCKET, we restrict the maximum query times to the victim model and test the attack success rate on Amazon-LB, HSOL, and EDENCE. The results are shown in Figure We also conduct a human evaluation on the validity of adversarial samples (See Table Note that ROCKET is designed to better simulate real-world adversarial attacks. The results of first priority metrics and the simple and easyto-implement features prove that this method has higher practical value. Thus, ROCKET can be treated as a simple baseline to facilitate future research in this direction. Secondary Priority Metrics. We evaluate secondary priority metrics on Disinformation, Toxic, and Sensitive tasks because successful adversarial samples on other tasks are limited, which will result in inaccurate measures. We list the results in Table • Considering all attack methods, previously overlooked character-level attacks (e.g., DeepWord-Bug) achieve great success considering perturbation degree (Levenstein distance) and grammaticality (∆I). • While achieving superiority in first priority metrics, ROCKET adds more violent perturbations and breaks the grammaticality more severely. However, as we argue, it's reasonable to tradeoff these secondary priority metrics for the first ones. • Surprisingly, we find that ROCKET crafts more fluent adversarial samples according to the perplexity scores calculated by the language model. 
We suspect that the pretraining data that large language models fit on contains so much informal text (e.g., Twitter), which may resemble adversarial samples crafted by ROCKET. We give the details and results of experiments on the defense side in Appendix E. Table 5 Related Work Textual adversarial attack methods can be roughly categorized into character-level, word-level, and sentence-level perturbation methods. Character-level attacks make small perturbations to the words, including swapping, deleting, and inserting characters There also exists some work that cannot be categorized in each of these categories, including multigranularity attacks Textual adversarial defense methods can be roughly categorized into five categories based on their strategies, including training data augmentation The research on security NLP is not only about adversarial attacks in the inference time, but also include several other topics that have broad and significant impact in this filed, including privacy attacks Research on Adversarial Attack. Note that we don't discredit previous work in this paper. Most previous methods are very useful considering different roles of adversarial samples except the security role. For example, although synonym substitutionbased methods may not be actually employed by real-world attackers But from the perspective of separating roles of adversarial samples, the research significance of adversarial attack methods that assume only the accessibility to the confidence scores of the victim models may be limited. When adversarial samples are employed to reveal the security issues, they can only access the models' decisions. When adversarial samples are used for other purposes, their roles are to help to improve the models at hand. In this case, these methods should be granted to have access to the victim model's parameters (i.e. white-box attack) 9 . 9 Some methods employ "behavioral testing" (black-box testing) even if permission is granted for model parameters Here we only give our considerations of this problem. Future research and discussion should go on to refine the problem definition in this field. Research on Adversarial Defense. Adversarial defense methods have two functions, namely making models more robust to out-of-distribution data and resisting malicious adversarial attacks. Also, we recommend researchers study these two different functions separately. For improving models' out-of-distribution robustness, existing work has made many good attempts Research on Security NLP. We also conduct a pilot survey on research on the security community. We find that there exists a research gap between the NLP and the security communities in security research topics. While the NLP community puts more emphasis on the methods' novelty, work in the security community usually revolves around actual security scenarios In this paper, we rethink the research paradigm in SoadNLP. We identify two major deficiencies in previous work and propose our refinements. Specifically, we propose an security datasets collection Advbench. We then reconsider the actual adversarial goals and reformalize the task. Next, we propose a simple method summarized from different sources that fulfills real-world attackers' goals. We conduct comprehensive experiments on Advbench on both the attack and the defense sides. Experimental results show the superiority of our In the future, we will reconsider and discuss other roles of textual adversarial samples to make this whole story complete. 
In this section, we discuss the potential wider implications and ethical considerations of this paper. Intended Use. In this paper, we construct a security benchmark, and propose a simple method that can effectively attack real-world SOTA models. Our motivation is to better simulate real-world adversarial attacks and reveal the practical concerns. This simple method can serve as a simple baseline to facilitate future research on both the attack and the defense sides. Future work can start from our benchmark and propose methods to address real-world security issues. Broad Impact. We rethink the research paradigm in adversarial NLP from the perspective of separating different roles of adversarial samples. Specifically, in this paper, we focus on the security role of adversarial samples and identify two major deficiencies in previous work. For each deficiency, we make some refinements to previous practices. In general, our work makes the problem definition in this direction more standardized and better simulate real-world attack situations. Energy Saving. We describe our experimental details in Appendix F to prevent people from making unnecessary hyper-parameter adjustments and to help researchers quickly reproduce our results. In experiments, we employ BERT-base as the testbed and evaluate existing textual adversarial attack methods and our proposed ROCKET in our constructed benchmark datasets. We only consider one victim model in our experiments because our benchmark includes up to ten datasets and our computing resources are limited. Thus, more comprehensive experiments spanning different model architectures and training paradigms are left for future work. We conduct a survey on previous adversarial attack methods about the specific tasks and datasets they employ in their evaluation. The results are listed in Table The task statistics are listed in Table Words in news media and political discourse have considerable power in shaping people's beliefs and opinions. As a result, their truthfulness is often compromised to maximize the impact on society In addition to misinformation caused by objective reasons, there is also a type of fake information caused by subjectively distorting facts. This type of information mainly concentrates on online comments and reviews in online shopping malls and online restaurant/hotel reservation websites to lure customers into consumption The rapid growth of information in social networks such as Facebook, Twitter, and blogs makes it challenging to monitor what is being published and spread on social media. Abusive comments are widespread on social networks, including cyberbullying, cyberterrorism, sexism, racism, and hate-speech. Thus, the primary objective of toxic detection is to identify toxic contents in the web, which is an essential ingredient for anti-bullying policies and protection of individual rights on social media SST-2; AG; Hate-Speech. In recent years, unwanted commercial bulk emails have become a huge problem on the internet. Spam emails prevent the user from making good use of time. More importantly, some spam emails contain fraud and phishing messages that can also cause financial damage to users Text documents shared across third parties or published publicly contain sensitive information by nature. Detecting sensitive information in unstructured data is crucial for preventing data leakage. 
This task is to detect sensitive information including intellectual property and product progress from companies, trading and strategic information of public institutions and organizations, and private information of individuals In general, the validity metric is to measure the preservation of adversarial meaning in the crafted adversarial samples. The adversarial meaning is task-specific and should be considered differently. So, the validity definition is relevant to the specific adversarial goal in the specific security task. In our Advbench, the adversarial meanings are exaggerated and satirical contents (Misinformation), inauthentic and untrue comments (Disinformation), abusive language (Toxic), illegal or time-wasting messages (Spam), and sensitive embedded in common comments (Sensitive Information). So, the ultimate goal of attackers is to spread the adversarial meaning, no matter how many perturbations attackers introduce to other unrelated content. We give some real-world adversarial cases collected from social media in Figure Figure We list the results of secondary priority metrics in Table The results are shown in Table For each attack method, we input N adversarial samples (successfully attack the model) to the trained detector to obtain the number of samples detected as adversarial samples (n det ) and the number of samples successfully restored (n res ). Then the detection rate (R det ) and restored rate (R res ) are calculated according to the formula: For the sake of calculation speed and fairness, we truncate all sentences to the first 480 words. Then, we empirically set the hyper-parameters including distracting words, the insertion number of distracting words at the beginning and end of sentences, perturbation batch size, and perturbation epochs according to the attack performance and preservation of adversarial meaning. We only attack the original content in sentences, leaving out adversarial content introduced by our perturbations. The comprehensive settings of hyper-parameters are shown in Table We set up a human evaluation to further evaluate the validity of adversarial samples. We choose the disinformation and toxic detection tasks because the validity definitions are clear and can be easily understood by annotators. For each task, we consider 2 corresponding datasets and sample 100 original and adversarial samples pairs for each attack method. For each pair, we ask 3 human annotators to evaluate whether the adversarial meaning is preserved in the adversarially crafted sample (validity). They need to give a validity score from 0-2 for each pair, where 2 means that the adversarial meaning has been perfectly preserved, 1 means that the sentence meaning is ambiguous but may still preserve some adversarial meaning, and 0 means that the crafted adversarial sample don't preserve any adversarial meaning in the original sample. We use the voting strategy to produce the annotation results of validity for each adversarial sample. Then we average the scores for all 100 samples in each task as the final validity score for each attack method. The results are shown in Table
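As a concrete point of reference for the ROCKET searching algorithm described above, the following is a minimal Python sketch. Everything here is an illustrative assumption rather than the released implementation: `victim_predict` stands for a decision-only black-box classifier, `rules` stands in for perturbation rules 1-5, `add_distractors` stands in for rule-6, and the per-iteration query threshold is simplified to a global query budget.

```python
import random

def rocket_attack(sentence, victim_predict, true_label, rules, add_distractors,
                  stop_words, batch_size=4, max_queries=200):
    """Sketch of a ROCKET-style decision-based search (assumed interfaces).

    victim_predict(text) -> predicted label (decisions only, no scores).
    rules: list of word-level perturbation functions (rule-1 ... rule-5 stand-ins).
    add_distractors: rule-6 stand-in that prepends/appends distracting words.
    """
    text = add_distractors(sentence)                  # rule-6: distracting words
    if victim_predict(text) != true_label:
        return text, 0
    # Semantic word list L: content words of the modified sentence
    L = [w for w in text.split() if w.lower() not in stop_words]
    queries = 0
    while L and queries < max_queries:
        batch = random.sample(L, min(batch_size, len(L)))
        for word in batch:
            rule = random.choice(rules)               # sample an action from rule-1..5
            text = text.replace(word, rule(word), 1)  # crude in-place perturbation
            queries += 1
            if victim_predict(text) != true_label:    # decision-based success check
                return text, queries
        L = [w for w in L if w not in batch]          # remove perturbed words from L
    return None, queries                              # attack failed within budget
```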
1,364
113
1,364
Improving Tokenisation by Alternative Treatment of Spaces
Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that these problems are a result of allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach where spaces are always treated as individual tokens. Specifically, we apply this modification to the BPE and Unigram algorithms. We find that our modified algorithms lead to improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Intrinsically, we find that our modified algorithms give more morphologically correct tokenisations, in particular when handling prefixes. Given the results of our experiments, we advocate for always treating spaces as individual tokens as an improved tokenisation method.
Tokenisation is a key initial step in processing natural language, as it identifies the linguistic units to be processed, converting them to numerical IDs which can then be vectorised and manipulated by mathematical operations. Earlier NLP approaches used simple stringsearching techniques with regular expressions to tokenise text; however, these pattern-matching tokenisation methods have drawbacks: they require large vocabulary sizes to cover the training data, they cannot handle out-of-vocabulary words, and they do not work for languages without spaces as word boundaries. To address these issues, subword tokenisation was introduced. The first explicit mention (and popularisation) of this approach was by State-of-the art transformer-based language models all use subword tokenisation algorithms based on either byte-pair encoding (BPE) The BPE and Unigram algorithms are implemented in the SentencePiece library Despite their ubiquity, existing tokenisation algorithms have problems, which we hypothesise hinders the ability of language models to handle complex words (Section 2). We suggest that these problems are pervasive across all existing subword tokenisation algorithms due to a shared fundamental design choice of allowing tokens to include spaces, and thus experiment with an alternative treatment of spaces where they are always taken as individual tokens. We implement this approach by making simple modifications to the existing Word-Piece, BPE, and Unigram algorithms (Section 3). We first evaluate our modified algorithms intrinsically (Section 4), quantitatively finding that they improve morphological correctness, in particular when handling prefixes. Qualitatively, we take examples from previous papers critiquing existing tokenisation algorithms, and show how our modified algorithms are able to alleviate the discussed issues. We then evaluate our modified algorithms extrinsically by pretraining and finetuning transformerbased models (Section 5), showing that they give improved performance on NLP tasks that require handling complex words with no detrimental effect on performance in the general domain.
Existing tokenisation algorithms often produce unintuitive tokenisations for complex words, incorrectly splitting prefixes, and producing unmeaningful subword tokens, which are problems that have been discussed in previous works. For these latter examples, there is a second problem: even if the base were tokenised as a single token, the addition of the space symbol means that there would be no explicit link between the prefixed word and the standalone base. As an example, we cherry-pick a rare example of a morphologically correct tokenisation by BERT of a word containing a prefix, showing both strings and token IDs: beatable → _beat, able (3786, 3085) unbeatable → _un, beat, able We hypothesise that both of these problems hinder the ability of existing language models (such as BERT) to deal with complex words. Regarding the first problem, we argue that the morphological correctness of a tokeniser is a metric which will correlate with the ability of language models to deal with complex words: correctly splitting affixes means morphologically related words (those sharing a common base) are given related tokenisations. The splitting of prefixes is particularly important, as prefixes always have a semantic function, unlike suffixes which can have both syntactic and semantic functions We suggest that the problems discussed in Section 2 arise as a result of how spaces are handled by existing algorithms: All subword tokenisation algorithms currently used by transformer-based models allow tokens to include space symbols as the first character Thus, to attempt to alleviate these issues, and hence improve the handling of complex words by language models, we propose an alternative treatment of spaces where they are always assigned individual tokens. This simple modification can be made to any existing subword tokenisation algorithm, though for brevity we focus our attention on BPE and Unigram; this modification can also be made to the WordPiece algorithm, and we see similar (intrinsic) performance improvements from doing so. In Section 4, we perform a qualitative analysis of our modified WordPiece algorithm and also include the default WordPiece algorithm in our quantitative evaluation for comparison. Our modified algorithms and the defaults are shown in Figure In the following sections, we compare our modified tokenisation algorithms to the defaults by evaluating them intrinsically (Section 4) and extrinsically (Section 5). Given our hypothesis that the morphological correctness of a tokeniser, especially when handling prefixes, correlates with the performance of language models in dealing with complex words (Section 2), we perform a controlled intrinsic evaluation of our tokenisers using this metric. We train our modified algorithms and the defaults on 1 million sentences from English Wikipedia for BPE and Unigram, with a fixed vocabulary size of 16,000, and then run evaluation on four morphological datasets: LADEC, MorphoLex, MorphyNet and DagoBERT. 
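As a concrete reference for the boundary-based scoring used in this intrinsic evaluation (described in more detail below), here is a minimal sketch of how precision, recall and F1 over morpheme boundaries might be computed. The actual scoring follows the cited evaluation method, which is elided here, so treat this as a common formulation rather than the exact script.

```python
def boundary_positions(segments):
    """Character offsets of internal boundaries, e.g. ['un','beat','able'] -> {2, 6}."""
    positions, offset = set(), 0
    for seg in segments[:-1]:
        offset += len(seg)
        positions.add(offset)
    return positions

def boundary_scores(predicted, gold):
    """Precision / recall / F1 over internal segmentation boundaries."""
    pred_b, gold_b = boundary_positions(predicted), boundary_positions(gold)
    tp = len(pred_b & gold_b)
    fp = len(pred_b - gold_b)
    fn = len(gold_b - pred_b)
    precision = tp / (tp + fp) if tp + fp else 1.0   # no predicted boundaries
    recall = tp / (tp + fn) if tp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# boundary_scores(['un', 'beatable'], ['un', 'beat', 'able'])
#   -> (1.0, 0.5, 0.666...)  : undersegmentation keeps precision high
```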
The LADEC dataset We evaluate a tokeniser on these datasets using the evaluation method introduced by Remove the substring with the smallest loss from V 7 end Tokenisation input :text T , vocabulary V , language model parameters Θ output :tokens τ 1 Replace whitespace in T with the space symbol 2 Prepend the space symbol to the first word of every sentence in T 3 Use the Viterbi algorithm with the learned language modelling parameters and the vocabulary to tokenise T Tokenisation input :text T , vocabulary V , language model parameters Θ output :tokens τ 1 Replace whitespace in T with the space symbol 2 Use the Viterbi algorithm with the learned language modelling parameters and the vocabulary to tokenise T with spaces being given an arbitrarily high score so they are always selected as individual tokens tokenisation, whilst false positives are boundaries appearing in the generated tokenisation but not in the reference. Because it makes sense to store common words as single tokens in the vocabulary, even if they can be decomposed into morphemes, we report precision along with F1 as a potentially more meaningful metric, since this allows undersegmentation whilst penalising oversegmentation. We also compute the mean sequence length (number of to-kens) for each tokeniser across each dataset. Results are shown in Table The general trend is that Unigram outperforms BPE (consistent with findings by Interestingly, BPE ′ gives the shortest sequence length on three of the four datasets, but not the most morphologically correct tokenisations. Since BPE was developed as a compression algorithm, the short sequence lengths are perhaps expected, but here we only see a weak correlation between sequence length and morphological correctness For a qualitative analysis, we take examples from papers that highlight problems with existing tokenisers (Section 2) and generate the output from the default and modified algorithms for BPE and Unigram, shown in Table In Table We investigate the vocabularies of the default and modified algorithms, shown in Table We note that an interesting result of our modifications is an improvement at word segmentation. As an example, the outputs of the default and modified Unigram algorithms when passed the concatenated sentence "thisisasentencethatneedstobesegmented" are: Unigram _this, isa, s, ent, ence, that, ne, ed, s, to, be, s, eg, ment, ed Unigram ′ this, is, a, sentence, that, needs, to, be, segment, e, d Given the improved intrinsic performance of our algorithms, we wish to evaluate how this impacts the extrinsic performance of NLP models, both in general, and in particular on tasks involving complex words. As in Section 4, we train the default and modified BPE and Unigram algorithms on 1 million sentences from English Wikipedia, with a fixed vocabulary size of 16,000, but we also implement a variant of our modified algorithm that removes spaces as a post-processing step. The reasoning behind this is that it reduces the sequence length significantly with minimal information loss, and more closely mirrors existing models which have no explicit space information. Example tokenisations for the Unigram algorithms given the input "This is an input sentence." are: Unigram _This, _is, _an, _input, _sentence, . Unigram ′ This, _, is, _, an, _, input, _, sentence, . Unigram ′ no spaces This, is, an, input, sentence, . 
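To make the contrast between the default treatment, the modified treatment, and the "no spaces" post-processing variant concrete, here is a minimal, self-contained sketch of the pre-tokenisation step. It is an illustration only: the actual models apply BPE/Unigram subword segmentation on top of these units (which would, for example, also split off the final full stop).

```python
SPACE = "▁"  # the space symbol used by SentencePiece-style tokenisers

def pretokenise_default(text):
    """Default treatment: the space symbol is attached to the start of each word."""
    return [SPACE + w for w in text.split(" ")]

def pretokenise_modified(text):
    """Modified treatment: every space is an individual token; words carry no space symbol."""
    tokens = []
    for i, word in enumerate(text.split(" ")):
        if i > 0:
            tokens.append(SPACE)      # the space itself becomes a token
        tokens.append(word)
    return tokens

def drop_spaces(tokens):
    """'No spaces' variant: remove space tokens as a lossy post-processing step."""
    return [t for t in tokens if t != SPACE]

# pretokenise_default("This is an input sentence.")
#   -> ['▁This', '▁is', '▁an', '▁input', '▁sentence.']
# pretokenise_modified("This is an input sentence.")
#   -> ['This', '▁', 'is', '▁', 'an', '▁', 'input', '▁', 'sentence.']
# drop_spaces(pretokenise_modified("This is an input sentence."))
#   -> ['This', 'is', 'an', 'input', 'sentence.']
```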
For each of the tokenisers, we pretrain RoBERTa (base) on the full text of English Wikipedia, and then finetune on downstream tasks, keeping all hyperparameters fixed, changing only the tokenisation algorithm used. For evaluation of the models in a general domain, we use the GLUE benchmark Over the whole of the English Wikipedia data, the sequence lengths for each of the tokenisation approaches are: Unigram ′ no spaces 3.67e+09 7 We do not consider the Superbizarre sentiment task due to a higher proportion of uninformative words. As in the evaluation in Table Due to computational constraints, we only ran pretraining once for each model. For finetuning, we ran each experiment with 10 different seeds, reporting the mean development result and standard deviation. Results are shown in Table On the Superbizarre datasets, we can see that Unigram outperforms BPE, with Unigram ′ no spaces performing significantly better than all other models using a Welch's t-test (p < 0.05), see Appendix C. Note that DelBERT On the mean GLUE benchmark, the modified models without spaces perform as well or better than their default counterparts, with Unigram ′ performing the best when both updates and epochs are fixed. However, this result is not statistically significant (see Appendix C), and over the individual GLUE tasks the best performing models vary, with high variances across seeds on some tasks due to the small dataset sizes (see Appendix B). Since the GLUE tasks do not rely on handling complex words, a significant performance difference is probably not expected, but we see no drop in performance with the modified algorithms. The modified models that include spaces perform poorly on the GLUE benchmark, even when the number of epochs is fixed rather than updates, meaning they are trained for ∼65% more updates than the modified models without spaces. This suggests that this method of including spaces as additional tokens is suboptimal for general language tasks, though interestingly Unigram ′ with spaces is the second best performing model across all Superbizarre datasets. The tokenisers themselves perform splitting on spaces as a first step, so additionally including spaces may be simply passing noise to the model for the masked language modelling task, especially due to the high frequency of spaces. This means the pretraining loss decreases rapidly due to space prediction, but plateaus earlier (see Appendix A). Due to the much greater sequence lengths, the models that include spaces also discard examples that are too long during finetuning, which could lead to worse results. There are previous works that have performed controlled extrinsic comparisons of existing subword tokenisation algorithms (BPE, Unigram, and Word-Piece), and have provided results which we relate here to our own findings. There have also been some recent attempts to develop improved subword tokenisation methods. For all of these approaches, spaces still occur as the first character of start-of-word tokens, and we believe this hinders performance: our alternative treatment of spaces could be combined with these algorithms, and the impact on performance investigated. Finally, we note that We hypothesise that problems with current tokenisation algorithms arise from allowing tokens to include spaces, and thus experiment with an alternative tokenisation approach where spaces are always treated as individual tokens. 
We find that this leads to improved performance on NLP tasks involving complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Whilst our work focuses on BPE and Unigram, our modifications can be applied to any existing subword tokenisation algorithm, including WordPiece, and hence to any transformer-based model. Also, although our experiments have only been in English, the algorithms used are unsupervised and languageindependent and our results should extend to other languages. Our best-performing models use lossy tokenisation (removing the space tokens as a postprocessing step), which may not be ideal for all tasks. We did not perform evaluation on sequenceto-sequence tasks, and indeed the subword tokenisation algorithms discussed here were introduced in the field of NMT, where space information needs to be generated in the output. Future work could thus look at alternative methods for including space information that maintain the performance gains seen here whilst keeping tokenisation lossless. The finetuning tasks investigated in this paper are all sequence classification, which is a significant limitation of the evaluation. In order to definitively compare our modified tokenisation algorithms with the defaults, a more thorough evaluation across many types of encoder-architecture NLP tasks would be required (e.g. token classification, question answering, multiple-choice). It is also worth noting that the Superbizarre dataset consists of entries constructed using elements from BERT's WordPiece vocabulary. For their purposes, this is a benefit as it does not unfairly disadvantage BERT, but for our purposes it limits the generality of the results obtained. In this paper, we have chosen a single vocabulary size for all of our evaluation, which limits the robustness of our results. For the intrinsic evaluation, a range of vocabulary sizes could be chosen and evaluated. For extrinsic evaluation, we are limited by the computational expense of pretraining language models, but it is important to note that we don't know how our results will change if the vocabulary size is altered. It would also be beneficial to look at how our modified tokenisers work on morphologically rich languages, and in a multilingual setting, which would further increase the robustness of the results.
1,241
2,138
1,241
Demonstration of a Neural Machine Translation System with Online Learning for Translators
We introduce a demonstration of our system, which implements online learning for neural machine translation in a production environment. These techniques allow the system to continuously learn from the corrections provided by the translators. We implemented an end-to-end platform integrating our machine translation servers with one of the most common user interfaces for professional translators: SDL Trados Studio. Our objective was to save post-editing effort, as the machine continuously learns from human choices and adapts the models to a specific domain or user style.
Productivity is crucial in the translation industry. Nowadays, translation companies must be more competitive than ever and meet fast commercial demands. Thus, they need to produce high quality translations in shorter periods of time. Machine translation (MT) can help them to achieve this goal: instead of a linguist thinking out or "creating" translations from scratch, "humanizing" automatic translations has become a common process in the industry. This is known as post-editing (PE) and it has been shown to be effective in many cases Inherently to the PE process, new bilingual data is continuously generated (the post-edited samples). This data is typically used for the creation of domain-specific corpora, useful for adapting systems from a broader domain into a specific do-main, client or style. The online learning (OL) paradigm aims to perform this adaptation during the PE process: each time the user validates a post-edited translation, the system is updated as this data is taken into account. Therefore, when the next translation is produced, the system will consider the previous post-editions. It is assumed that better translations (or translations more suited to the human post-editor preferences) will be produced. The OL paradigm has quickly attracted the attention of researchers and industry. The Cas-MaCat In this paper, we introduce a demo system of our in-house OL framework, in which we integrated our translation servers with the translators user-friendly interface SDL Trados Studio. The rest of this document is structured as follows: Section 2 introduces the online learning paradigm. Next, in Section 3, we describe in detail our in-house architecture in which this paradigm is implemented. Finally, Section 4 summarizes the demo system.
We are interested in benefiting from the post-edits generated by the user during the PE process. To that end, we update the system on-the-fly, i.e, as soon as a sentence has been validated by the posteditor. Right after the user confirms a post-edit, we update the models of our NMT system, using the source sentence and the post-edit as a training pair. This adaptation can be done following gradient descent, the regular training method for neural networks. Our in-house architecture of the OL framework is composed of three main modules: the MT engine, the user interface and the translation server which links both. Moreover, we added a logging option to keep user tracking information as keystrokes, time and mouse movements. Fig. The core of the MT engine is composed by the models generating translations, which can be retrained when required. Each translation project has its own model, whose architecture is set according to the project's need. All models are neural-based and are trained using OpenNMT-py Each MT model has its own configuration file, which contains personalized translation and OL options, such as tokenization, subword segmentation, learning rate, etc. A translation server communicates with the MT models in order to generate translations and adapt the systems based on the user's post editions. This server is based on OpenNMT-py's REST server and uses the HTTP protocol to define the messages in order to serve user's requests. The code of our translation server is open and available The communication between the user interface and the MT engine is performed by means of GET and POST requests. The server waits for translation requests. When received, these requests are sent to the machine translation engine in a JSON format. When a machine translation segment is corrected by the user, the correction is sent to the translation engine. In the translation industry, the most common user interface for translators is SDL Trados Studio SDL allows the development of plugins for Trados Studio to enhance and extend the tool. Moreover, it has a large developer community In order to set up this plugin, the user fills the credentials with a username, password and URL pointing to the server (see Fig. In order to measure the productivity and effectiveness of OL during the PE process, we integrated tools to log the time, keystrokes and mouse movements involved in post-editing a given file. To achieve this, we incorporated the Qualitivity With all this log information, we can measure the effort required to post-edit a file using MT with OL. Preliminary experiments in simulated and real environments with professional translators We have introduced a demonstration of Pangeanic's translation framework, which incorporates on-the-fly system adaptation via online learning. This paradigm allows human translators /post-editors to produce more human-quality text, that is, be more productive-a fundamental issue in the translation industry-since the system is continuously learning from the user post-edits, avoiding repetition of the same errors. We have integrated our MT servers into the SDL Trados Studio user interface which is one of the most used in the translation industry. This system aims to improve human translators' work by saving
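As an illustration of the adaptation step described in Section 2, here is a minimal PyTorch-style sketch of one online update on a confirmed (source, post-edit) pair. The model interface and hyperparameters are assumptions for illustration; the actual system performs this step through OpenNMT-py and its translation server.

```python
import torch

def online_update(model, optimizer, loss_fn, src_ids, postedit_ids, steps=1):
    """One on-the-fly adaptation step on a confirmed (source, post-edit) pair.

    Assumptions: src_ids / postedit_ids are (1, length) tensors of token ids, and
    model(src, tgt_in) returns logits over the target vocabulary for each target
    position (teacher forcing). loss_fn could be e.g. CrossEntropyLoss(ignore_index=pad_id).
    """
    model.train()
    tgt_in, tgt_out = postedit_ids[:, :-1], postedit_ids[:, 1:]
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(src_ids, tgt_in)                       # (1, T, vocab)
        loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                       tgt_out.reshape(-1))
        loss.backward()
        optimizer.step()                                      # gradient-descent update
    model.eval()
    return loss.item()
```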
580
1,771
580
Dual Slot Selector via Local Reliability Verification for Dialogue State Tracking
The goal of dialogue state tracking (DST) is to predict the current dialogue state given all previous dialogue contexts. Existing approaches generally predict the dialogue state at every turn from scratch. However, the overwhelming majority of the slots in each turn should simply inherit the slot values from the previous turn. Therefore, the mechanism of treating slots equally in each turn not only is inefficient but also may lead to additional errors because of the redundant slot value generation. To address this problem, we devise the two-stage DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history. The Dual Slot Selector determines, for each slot, whether to update its value or to inherit the value from the previous turn, based on two aspects: (1) whether there is a strong relationship between the slot and the current turn dialogue utterances; (2) whether a slot value with high reliability can be obtained for it through the current turn dialogue. The slots selected to be updated are permitted to enter the Slot Value Generator to update their values by a hybrid method, while the other slots directly inherit the values from the previous turn. Empirical results show that our method achieves 56.93%, 60.73%, and 58.04% joint accuracy on the MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.2 datasets respectively, achieving a new state-of-the-art performance with significant improvements.
Task-oriented dialogue has attracted increasing attention in both the research and industry communities. As a key component in task-oriented dialogue systems, Dialogue State Tracking (DST) aims to extract user goals or intents and represent them as a compact dialogue state in the form of slot-value pairs of each turn dialogue. DST is an essential part of dialogue management in task-oriented dialogue systems, where the next dialogue system action is selected based on the current dialogue state. Early dialogue state tracking approaches extract value for each slot predefined in a single domain To address this problem, we propose a DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history. At each turn, all slots are judged by the Dual Slot Selector first, and only the selected slots are permitted to enter the Slot Value Generator to update their slot value, while the other slots directly inherit the slot value from the previous turn. The Dual Slot Selector is a two-stage judging process. It consists of a Preliminary Selector and an Ultimate Selector, which jointly make a judgment for each slot according to the current turn dialogue. The intuition behind this design is that the Preliminary Selector makes a coarse judgment to exclude most of the irrelevant slots, and then the Ultimate Selector makes an intensive judgment for the slots selected by the Preliminary Selector and combines its confidence with the confidence of the Preliminary Selector to yield the final decision. Specifically, the Preliminary Selector briefly touches on the relationship of current turn dialogue utterances and each slot. Then the Ultimate Selector obtains a temporary slot value for each slot and calculates its reliability. The rationale for the Ultimate Selector is that if a slot value with high reliability can be obtained through the current turn dialogue, then the slot ought to be updated. Eventually, the selected slots enter the Slot Value Generator and a hybrid way of the extractive method and the classification-based method is utilized to generate a value according to the current dialogue utterances and dialogue history. Our proposed DSS-DST achieves state-of-theart joint accuracy on three of the most actively studied datasets: MultiWOZ 2.0 Our contributions in this paper are three folds: • We devise an effective DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue and the Slot Value Generator based on the dialogue history to alleviate the redundant slot value generation. • We propose two complementary conditions as the base of the judgment, which significantly improves the performance of the slot selection. • Empirical results show that our model achieves state-of-the-art performance with significant improvements.
Traditional statistical dialogue state tracking models combine semantics extracted by spoken language understanding modules to predict the current dialogue state On the other hand, dialogue state tracking and machine reading comprehension (MRC) have similarities in many aspects Figure Dual Slot Selector the dialogue state at turn t as B t = {(S j , V j t )|1 ≤ j ≤ J}, where S j are the slots, V j t are the corresponding slot values, and J is the total number of such slots. Following We employ the representation of the previous turn dialog state B t-1 concatenated to the representation of the current turn dialogue D t as input: where [CLS] is a special token added in front of every turn input. Following SOM-DST where R t is the system response and U t is the user utterance. ; is a special token used to mark the boundary between R t and U t , and [SEP] is a special token used to mark the end of a dialogue turn. The representation of the dialogue state at turn t is is a special token used to mark the boundary between a slot and a value. [SLOT] j is a special token that represents the aggregation information of the j-th slot-value pair. We feed a pre-trained ALBERT The output representation of the encoder is O t ∈ R |Xt|×d , and h [CLS] t , h [SLOT] j t ∈ R d are the outputs that correspond to [CLS] and [SLOT] j , respectively. To obtain the representation of each dialogue and state, we split the O t into H t and H B t-1 as the output representations of the dialogue at turn t and the dialogue state at turn t -1. The Dual Slot Selector consists of a Preliminary Selector and an Ultimate Selector, which jointly make a judgment for each slot according to the current turn dialogue. Slot-Aware Matching Here we first describe the Slot-Aware Matching (SAM) layer, which will be used as the subsequent components. The slot can be regarded as a special category of questions, so inspired by the previous success of explicit attention matching between passage and question in MRC [SLOT] j t at turn t to the Slot-Aware Matching layer by taking the slot presentation as the attention to the representation H: The output represents the correlation between each position of H and the j-th slot at turn t. The Preliminary Selector briefly touches on the relationship of current turn dialogue utterances and each slot to make an initial judgment. For the j-th slot (1 ≤ j ≤ J) at turn t, we feed its output representation h [SLOT] j t and the dialogue representation H t to the SAM as follows: where α j t ∈ R N ×1 denotes the correlation between each position of the dialogue and the j-th slot at turn t. Then we get the aggregated dialogue representation H j t ∈ R N ×d and passed it to a fully connected layer to get classification the j-th slot's logits ŷj t composed of selected (logit sel i t ) and fail (logit fai j t ) elements as follows: We calculate the difference as the Preliminary Selector score for the j-th slot at turn t: Pre score j t = logit sel j t -logit fai j t , and define the set of the slot indices as U 1,t = {j|Pre score j t > 0}, and its size as J 1,t = |U 1,t |. In the next paragraph, the slot in U 1,t will be processed as the target object of the Ultimate Selector. 
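Since the equations in this subsection are partially garbled by extraction, the following PyTorch-style sketch of the Slot-Aware Matching step and the Preliminary Selector should be read as a plausible reconstruction of the description above, not the authors' exact parameterisation: the slot vector attends over the dialogue token representations, the aggregated dialogue vector is fed to a linear layer producing (selected, fail) logits, and the score is their difference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreliminarySelector(nn.Module):
    """Sketch of Slot-Aware Matching + Preliminary Selector (details assumed)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, 2)    # (selected, fail) logits

    def forward(self, H_t, slot_vec):
        # H_t: (N, d) dialogue token representations at turn t; slot_vec: (d,)
        alpha = F.softmax(H_t @ slot_vec, dim=0)      # slot-aware matching weights
        H_agg = alpha.unsqueeze(0) @ H_t              # (1, d) aggregated dialogue vector
        logit_sel, logit_fail = self.classifier(H_agg).squeeze(0)
        pre_score = logit_sel - logit_fail            # slot is selected if score > 0
        return pre_score
```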
The training objectives of both extractive method and classification-based method are defined as cross-entropy loss: where logit p j t is the target indicating the proportion of all possible extracted temporary slot values which is calculated according to the form of Equation The training objective L gen,t of this module has the same form of training objective as in the Ultimate Selector. During training, we optimize both Dual Slot Selector and Slot Value Generator. Preliminary Selector We use cross-entropy as a training objective: (25) where ŷj t denotes the prediction and y j t is the target indicating whether the slot is selected. We choose MultiWOZ 2.0 Following TRADE These domains contain 30 slots (i.e., J = 30). We use joint accuracy and slot accuracy as evaluation metrics. Joint accuracy refers to the accuracy of the dialogue state in each turn. Slot accuracy only considers individual slot-level accuracy. We compare the performance of DSS-DST with the following competitive baselines: DSTreader formulates the problem of DST as an extractive QA task and extracts the value of the slots from the input as a span We employ a pre-trained ALBERT-large-uncased model We train the Preliminary Selector for 10 epochs and train the Ultimate Selector and the Slot Value Generator for 30 epochs. During training the Slot Value Generator, we use the ground truth selected slots instead of the predicted ones. We set k to 2, β to 0.55, and δ to 0. For all experiments, we report the mean joint accuracy over 10 different random seeds to reduce statistical errors. Table Pre-trained Language Model For a fair comparison, we employ different pre-trained language models with different scales as encoders for training and testing on MultiWOZ 2.1 dataset. As shown in Table Separate Slot Selector To explore the effectiveness of the Preliminary Selector and Ultimate Selector respectively, we conduct an ablation study of the two slot selectors on MultiWOZ 2.1. As shown in Table Dialogue History for the Dual Slot Selector As aforementioned, we consider that the slot selection only depends on the current turn dialogue. In order to verify it, we attach the dialogue of the previous turn to the current turn dialogue as the input of the Dual Slot Selector. We observe in Table We try the number from one to three for the k to observe the influence of the selected dialogue history on the Slot Value Generator. As shown in Table Table We introduce an effective two-stage DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history. The Dual Slot Selector determines each slot whether to update or to inherit based on the two conditions. The Slot Value Generator employs a hybrid method to generate new values for the slots selected to be updated according to the dialogue history. Our model achieves state-of-the-art performance of 56.93%, 60.73%, and 58.04% joint accuracy with significant improvements (+2.54%, +5.43%, and +6.34%) over previous best results on MultiWOZ 2.0, Multi-WOZ 2.1, and MultiWOZ 2.2 datasets, respectively. The mechanism of a hybrid method is a promising research direction and we will exploit a more comprehensive and efficient hybrid method for slot value generation in the future.
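The cross-entropy objectives referenced in this section (e.g., Equation 25 for the Preliminary Selector) do not survive extraction. Assuming a standard binary cross-entropy over the J slots at turn t, a plausible reconstruction is:

```latex
\mathcal{L}_{\mathrm{pre},t}
  = -\frac{1}{J}\sum_{j=1}^{J}
      \Big( y_{t}^{j}\,\log \hat{y}_{t}^{j}
            + \big(1 - y_{t}^{j}\big)\,\log\big(1 - \hat{y}_{t}^{j}\big) \Big),
```

where \(\hat{y}_{t}^{j}\) is the predicted probability that slot j is selected at turn t (e.g., a normalisation of the (selected, fail) logits) and \(y_{t}^{j} \in \{0,1\}\) is the target indicating whether the slot is selected.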
1,465
2,857
1,465
Opportunistic Decoding with Timely Correction for Simultaneous Translation
Simultaneous translation has many important application scenarios and has attracted much attention from both academia and industry recently. Most existing frameworks, however, have difficulties in balancing translation quality and latency, i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain amount of extra words at each step to keep the audience on track with the latest information. At the same time, it also corrects, in a timely fashion, the mistakes in the previously over-generated words when observing more source context, to ensure high translation quality. Experiments show our technique achieves substantial reduction in latency and up to +3.1 increase in BLEU, with a revision rate under 8% in Chinese-to-English and English-to-Chinese translation.
Simultaneous translation, which starts translation before the speaker finishes, is extremely useful in many scenarios, such as international conferences and travel. In order to achieve low latency, it is often inevitable to generate target words with insufficient source information, which makes this task extremely challenging. Recently, there have been many efforts towards balancing translation latency and quality, with mainly two types of approaches. On one hand, Figure 1: Besides y t , opportunistic decoding continues to generate additional w words, which are represented as ŷ w t . The timely correction only revises this part in future steps. Different shapes denote different words. In this example, from step t to t + 1, all previously opportunistically decoded words are revised, and an extra triangle word is generated in the opportunistic window. From step t + 1 to t + 2, two words from the previous opportunistic window are kept and only the triangle word is revised. Though the existing efforts improve the performance in both translation latency and quality with more powerful frameworks, it is still difficult to choose an appropriate policy to explore the optimal balance between latency and quality in practice, especially when the policy is trained and applied in different domains. Furthermore, all existing approaches are incapable of correcting the mistakes from previous steps. When the former steps commit errors, they are propagated to the later steps, inducing more mistakes in the future. Inspired by our previous work on speculative beam search We also define, for the first time, two metrics for revision-enabled simultaneous translation: a more general latency metric, Revision-aware Average Lagging (RAL), as well as the revision rate. We demonstrate the effectiveness of our proposed technique using fixed
Full-sentence NMT. The conventional fullsentence NMT processes the source sentence x = (x 1 , ..., x n ) with an encoder, where x i represents an input token. The decoder on the target side (greedily) selects the highest-scoring word y t given source representation h and previously generated target tokens, y <t = (y 1 , ..., y t-1 ), and the final hypothesis y = (y 1 , ..., y t ) with y t = <eos> has the highest probability: Simultaneous Translation. Without loss of generality, regardless the actual design of policy, simultaneous translation is represented as: , y <t ) (2) where g(t) can be used to represent any arbitrary fixed or adaptive policy. For simplicity, we assume the policy is given and does not distinguish the difference between two types of policies. Correction and Beam Search Opportunistic Decoding. For simplicity, we first apply this method to fixed policies. We de-fine the original decoded word sequence at time step t with y t , which represents the word that is decoded in time step t with original model. We denote the additional decoded words at time step t as ŷ w t = (y 1 t , ..., y w t ), where w denote the number of extra decoded words. In our setting, the decoding process is as follows: where • is the string concatenation operator. We treat the procedure for generating the extra decoded sequence as opportunistic decoding, which prefers to generate more tokens based on current context. When we have enough information, this opportunistic decoding eliminates unnecessary latency and keep the audience on track. With a certain chance, when the opportunistic decoding tends to aggressive and generates inappropriate tokens, we need to fix the inaccurate token immediately. Timely Correction. In order to deliver the correct information to the audience promptly and fix previous mistakes as soon as possible, we also need to review and modify the previous outputs. At step t + 1, when encoder obtains more information from x g(t) to x g(t+1) , the decoder is capable to generate more appropriate candidates and may revise and replace the previous outputs from opportunistic decoding. More precisely, ŷ w t and y t+1 • ŷ w-1 t+1 are two different hypothesis over the same time chunk. When there is a disagreement, our model always uses the hypothesis from later step to replace the previous commits. Note our model does not change any word in y t from previous step and it only revise the words in ŷ w t . Modification for Adaptive Policy. For adaptive policies, the only difference is, instead of committing a single word, the model is capable of generating multiple irreversible words. Thus our proposed methods can be easily applied to adaptive policies. Correction with Beam Search. When the model is committing more than one word at a time, we can use beam search to further improve the translation quality and reduce revision rate The decoder maintains a beam B k t of size b at step t, which is ordered list of pairs The decoder generates target word y 4 = "his" and two extra words "welcome to" at step t = 4 when input x 9 = "zàntóng" ("agreement") is not available yet. When the model receives x 9 at step t = 5, the decoder immediately corrects the previously made mistake "welcome" with "agreement" and emits two additional target words ("to President"). The decoder not only is capable to fix the previous mistake, but also has enough information to perform more correct generations. Our framework benefits from opportunistic decoding with reduced latency here. 
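To make the mechanism concrete, here is a minimal greedy sketch of opportunistic decoding with timely correction under a fixed policy. `decode_prefix` is a hypothetical helper (e.g., wrapping a wait-k decoder) that greedily extends the committed target given the available source prefix; it is an assumption for illustration, not part of the described system.

```python
def opportunistic_decode(decode_prefix, source_prefixes, window=2):
    """Greedy sketch: commit one word per step, over-generate `window` extra
    revisable words, and overwrite them when the next step disagrees."""
    committed, shown_extra, history = [], [], []
    for src_prefix in source_prefixes:                # source grows as the speaker talks
        new_words = decode_prefix(src_prefix, committed, 1 + window)
        # Timely correction: positions where the new hypothesis disagrees with the
        # previously displayed (revisable) extra words get overwritten.
        revised = [i for i, (new, old) in enumerate(zip(new_words, shown_extra))
                   if new != old]
        committed.append(new_words[0])                # one word becomes irreversible
        shown_extra = new_words[1:]                   # the rest stay revisable
        history.append({"display": committed + shown_extra, "revised": revised})
    return committed + shown_extra, history
```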
Note though the word "to" is generated in step t = 4, it only becomes irreversible at step t = 6. hypothesis, probability , where k denotes the k th step in beam search. At each step, there is an initial beam B 0 t = [ y t-1 , 1 ]. We denote one-step transition from the previous beam to the next as where top b (•) returns the top-scoring b pairs. Note we do not distinguish the revisable and nonrevisable output in y for simplicity. We also define the multi-step advance beam search function with recursive fashion as follows: When the opportunistic decoding window is w at decoding step t, we define the beam search over w + 1 (include the original output) as follows: where next b n+w (•) performs a beam search with n + w steps, and generate y t as the outputs which include both original and opportunistic decoded words. n represents the length of y t We define, for the first time, two metrics for revision-enabled simultaneous translation. AL is introduced in < l a t e x i t s h a 1 _ b a s e 6 4 = " 5 A 3 3 j C S J Y n P / z s 9 We hereby propose a new latency, Revisionaware AL (RAL), which can applied to any kind of translation scenarios, i.e., full-sentence translation, use re-translation as simultaneous translation, fixed and adaptive policy simultaneous translation. Note that for latency and revision rate calculation, we count the target side difference respect to the growth of source side. As it is shown in Fig. From the audience point of view, once the former words are changed, the audience also needs to take the efforts to read the following as well. Then we also penalize the later words even there are no changes, which is shown with blue arrow in Fig. (5) The above definition can be visualized as the thick black line in Fig. where τ (|x|) denotes the cut-off step, and r = |y|/|x| is the target-to-source length ratio. Since each modification on the target side would cost extra effort for the audience to read, we penalize all the revisions during the translation. We define the revision rate as follows: where dist can be arbitrary distance measurement between two sequences. For simplicity, we design where pad is a padding symbol in case b is shorter than a. Datasets and Implementation We evaluate our work on Chinese-to-English and English-to-Chinese simultaneous translation tasks. We use the NIST corpus (2M sentence pairs) as the training data. We first apply BPE For evaluation, we use NIST 2006 and NIST 2008 as our dev and test sets with 4 English references. We re-implement wait-k model Fig. Performance on Adaptive Policy Fig. We use threshold ρ ∈ {0.55, 0.53, 0.5, 0.47, 0.45}. We vary beam size b ∈ {1, 3, 5, 7, 10} and select the best one on devset. Comparing with conventional beam search on consecutive writes, our decoding algorithm achieves even much higher BLEU and less latency. We further investigate the revision rate with different beam sizes on wait-k policies. Fig. We have proposed an opportunistic decoding timely correction technique which improves the latency and quality for simultaneous translation. We also defined two metrics for revision-enabled simultaneous translation for the first time.
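The distance and revision-rate formulas above are partially garbled by extraction, so the following Python sketch is a plausible reading rather than the exact definition: dist counts position-wise disagreements, padding the shorter sequence with a special symbol, and the per-step distances between consecutive displayed outputs are accumulated and normalised by the final output length.

```python
PAD = "<pad>"

def dist(a, b, pad=PAD):
    """Position-wise mismatch count between token lists a and b,
    padding b with a special symbol when it is shorter than a."""
    b = b + [pad] * (len(a) - len(b))
    return sum(int(x != y) for x, y in zip(a, b))

def revision_rate(displayed, final_output):
    """Accumulate per-step disagreement between consecutive displayed outputs,
    normalised by the final output length (the exact normalisation in the
    paper's formula may differ)."""
    total = sum(dist(curr, prev) for prev, curr in zip(displayed, displayed[1:]))
    return total / max(len(final_output), 1)
```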
905
2,101
905
Using sparse semantic embeddings learned from multimodal text and image data to model human conceptual knowledge
Distributional models provide a convenient way to model semantics using dense embedding spaces derived from unsupervised learning algorithms. However, the dimensions of dense embedding spaces are not designed to resemble human semantic knowledge. Moreover, embeddings are often built from a single source of information (typically text data), even though neurocognitive research suggests that semantics is deeply linked to both language and perception. In this paper, we combine multimodal information from both text- and image-based representations derived from state-of-the-art distributional models to produce sparse, interpretable vectors using Joint Non-Negative Sparse Embedding. Through in-depth analyses comparing these sparse models to human-derived behavioural and neuroimaging data, we demonstrate their ability to predict interpretable linguistic descriptions of human ground-truth semantic knowledge.
Distributional Semantic Models (DSMs) are used to represent semantic information about concepts in a high-dimensional vector space, where each concept is represented as a point in the space such that concepts with more similar meanings are closer together. Unsupervised learning algorithms are regularly employed to produce these models, where learning depends on statistical regularities in the distribution of words, exploiting a theory in linguistics called the distributional hypothesis. Recent developments in deep learning have resulted in weakly-supervised prediction-based methods, where, for example, a neural network is trained to predict words from surrounding contexts, and the network parameters are interpreted as vectors of the distributional model In cognitive neuroscience, research demonstrates that representations of high-level concepts corresponding to the meanings of nouns and visual objects are widely distributed and overlapping across the cortex
Much of the research aimed at the sparse decomposition of dense vector spaces is closely associated with the work of . In total, we used sixteen distributional semantic models, eight of which are dense and eight of which are their sparse counterparts. These models are summarized in Table . We implemented two state-of-the-art text-based embedding models, Word2Vec and GloVe, to act as initialisers for our sparse models, following a similar approach to . GloVe: Global Vectors for Word Representation . Word2Vec: Word2Vec . We make use of the image embeddings constructed by . To retrieve activation vectors for object categories from the ESP dataset, we use two pooling schemes. CNN-Max: each word embedding was produced by taking the element-wise maximum value over all 1000 CNN activation vectors obtained for the sampled images with the same label word. CNN-Mean: each word embedding was produced by taking the element-wise average of all 1000 activation vectors associated with the same label word. All image embeddings are of size 6144, corresponding to the size of the penultimate layer of the CNN. The embeddings used in our paper correspond to the ESP game labels (which cover a larger number of images, more natural images, and more labels than ImageNet), and all embeddings are normalised to mean zero and L2 unit length before downstream analysis. Again following previous work, text and image embeddings are combined as in Equation (1). Here, α is a mixing parameter that determines the relative contribution of each modality to the combined semantic space. We set α = 0.5, so that text and image sources contribute equally to the combined embeddings. Following previous work, to produce the new sparse representations we use the NNSE matrix factorisation technique (Non-Negative Sparse Embedding code was kindly provided by Partha Talukdar), subject to the constraints which ensure sparse and non-trivial solutions for A. NNSE has been extended as a method to combine multiple dense word-feature matrices X ∈ R^{w_x×k} and Y ∈ R^{w_y×n} into a single non-negative sparse matrix, an extension called Joint Non-Negative Sparse Embedding (JNNSE; Fyshe et al.). For the NNSE factorization of each of the four initial dense unimodal text and image models (GloVe, Word2Vec, CNN-Mean and CNN-Max), the sparsity parameter λ was set to 0.05 and each model's dimensionality (p) was reduced from its original size by a factor of approximately 5; the text embedding size was reduced to 200 and both image model embedding sizes were reduced to 1000 (see Table ). To create sparse multimodal models corresponding to the concatenated multimodal dense models, four new models were produced using Equation 3. These models were constructed by combining all combinations of pruned image and text-based models through JNNSE to produce sparse embeddings of size 200 from their original dimensions of 6144 and 1000, respectively. The sparsity parameter λ was set to 0.025. Though all sparse embedding matrices are calculated over a smaller lexicon and have a much smaller embedding size compared to the original dense embeddings, in the next section we investigate how these models still produce competitive results on semantic evaluation benchmarks, including neurocognitive data. The aim of our experiments is to compare the quality of the dense and sparse unimodal and multimodal embedding models, with a focus on their ability to explain human-derived semantic data. We use several qualitatively different analyses of how well the models explain human-derived measures of semantic representation and processing.
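As a concrete illustration of the modality-mixing step, the sketch below combines L2-normalised text and image embeddings with a mixing parameter α. The weighted-concatenation form is an assumption made for illustration, since Equation (1) itself is not reproduced here.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a vector to unit L2 length."""
    return v / (np.linalg.norm(v) + eps)

def combine_modalities(text_vec, image_vec, alpha=0.5):
    """Mix text and image embeddings into one multimodal vector.

    alpha controls the relative contribution of each modality
    (alpha = 0.5 weights text and image equally). The weighted
    concatenation used here is an illustrative choice.
    """
    t = alpha * l2_normalize(np.asarray(text_vec, dtype=float))
    v = (1.0 - alpha) * l2_normalize(np.asarray(image_vec, dtype=float))
    return np.concatenate([t, v])

# Example: a 300-d text embedding and a 6144-d image embedding.
rng = np.random.default_rng(0)
combined = combine_modalities(rng.normal(size=300), rng.normal(size=6144))
print(combined.shape)  # (6444,)
```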
In the results that follow, we first demonstrate that sparse multimodal models are competitive with larger dense embedding models on standard semantic similarity evaluation benchmarks. We then investigate whether the underlying representations of the sparse, multimodal models are more easily interpreted in terms of human semantic property knowledge about familiar concepts, by evaluating the models' ability to predict predicates describing property knowledge found in human property norm data. Finally, we evaluate the models' ability to predict human brain activation data. A widely used evaluation technique for distributional models is the comparison with human semantic similarity rating benchmarks. We evaluate our models on three popular datasets, WordSim353, MEN and SimLex999, which each reflect slightly different intuitions about semantic similarity. In evaluating against the benchmarks, we use the intersection of the words occurring in the benchmarks and the words used in creating our embeddings. Not all words used in the similarity benchmarks appear in our word embedding models, although the overlap is quite high. Evaluations in the next section are based on the subsets of word pairs for which we have embedding vectors for each word. Figure reports the results. (To ensure that the dense models were not disadvantaged by having more dimensions, we also trained dense text models with 200 dimensions and found that these did not perform better than the 1000-dimensional models. Furthermore, we applied SVD to each of the 1000-dimensional dense models to reduce the number of dimensions to 200, but again found the results to be worse than the results for both the 1000-dimensional dense models and the sparse models.) The results on these conventional benchmarks suggest redundancy in the dense embedding representations, with the sparse embeddings providing a parsimonious representation of semantics that retains information about semantic similarity. Moreover, multimodal models combining both linguistic and perceptual experience better account for human similarity judgements. Following previous work, we next predict property knowledge from the embeddings. The human-elicited property×concept matrix is sparse; most properties are not true of most concepts. For the logistic regression model trained for each semantic property, we therefore balance positive and negative training items by weighting coefficients inversely proportional to the frequency of the two classes. Properties which are true of fewer than five concepts (across the set of concepts appearing in both the CSLB norms and our embedding models) were removed, to ensure sufficient positive and negative training cases across concepts. To evaluate the logistic regression models' ability to predict human property knowledge for held-out concepts, we used 5-fold cross-validation with stratified sampling to ensure that at least one positive case occurred in each test set. Using the embedding dimensions as training data, we train on 4 folds and test on the final fold, and evaluate the logistic regression classifier by taking the average F1 score over all the test folds. For subsequent analysis of the fitted regression models for each property, the semantic properties were partitioned into the five general classes given in Table . Information about a specific semantic property can be stored latently over the dimensions of a semantic embedding model, such that the semantic property can be reliably decoded given an embedding vector, as tested in the previous section.
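The property-prediction setup described above can be sketched as follows. The use of scikit-learn, the balanced class weighting, and the toy data are illustrative choices rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def property_prediction_f1(embeddings, property_labels, n_splits=5, seed=0):
    """Average F1 for predicting one binary semantic property from embeddings.

    embeddings: (n_concepts, n_dims) array of embedding vectors.
    property_labels: (n_concepts,) binary array (1 = property holds).
    Class imbalance is handled via inverse-frequency class weights, and
    stratified folds keep positive items in every test fold.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(embeddings, property_labels):
        clf = LogisticRegression(class_weight="balanced", max_iter=1000)
        clf.fit(embeddings[train_idx], property_labels[train_idx])
        preds = clf.predict(embeddings[test_idx])
        scores.append(f1_score(property_labels[test_idx], preds, zero_division=0))
    return float(np.mean(scores))

# Toy example: 60 concepts, 50-d embeddings, a property true of 12 concepts.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))
y = np.zeros(60, dtype=int)
y[:12] = 1
print(property_prediction_f1(X, y))
```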
However, a stronger test of how closely an embedding model relates to human-elicited conceptual knowledge is to investigate whether the embedding dimensions encode interpretable, human-like semantic properties directly. In other words, does an embedding model learn a set of basis vectors for the semantic space that corresponds to verbalisable, human semantic properties like is-round, is-a-fruit, and so on? To address this question, we evaluated how the dense and sparse embeddings differ in their degree of correspondence to the property norms by analysing the fitted parameters of our property-prediction logistic regression classifiers. For each embedding model and semantic property, we average the fitted parameters in the logistic regression models across cross-validation iterations and extract the 20 parameters with the highest average magnitude. For each property, we store these 20 parameters in a vector sorted by decreasing magnitude. If a particular semantic property is decodable directly from only one (or very few) embedding dimensions, then the magnitude of the first element (or first few elements) of the sorted parameter vector will be very high. Over all properties, we then apply element-wise averaging of the sorted parameter vectors; Figure shows the outcome. As a further test of how well dimensions of embedding models correspond to human semantic knowledge, we calculated pairwise correlations, across concepts, between embedding dimensions and properties. For a given semantic property, we can test which of two embedding models best encodes that semantic property in a single dimension: an embedding model that more directly matches the property norm data will tend to have a dimension that correlates more strongly with that property than any dimension of a model that encodes information about that property more latently. For this analysis, we first filtered the set of concepts in the dense models to include only the concepts in the CSLB norms, and recalculated the (J)NNSE sparse models over these concepts only. We tuned the sparsity parameter so that the sparsity of the sparse embedding models closely matched the sparsity of the CSLB concept-property matrix (97% sparse), and kept the dimensionality of the sparse embeddings the same as in our original sparse models. Let v_P be the values of a property P for each concept in the CSLB norms, and let M_d and M_s represent the set of embedding columns for a dense model and its sparse counterpart, respectively. Then, for each property P, we evaluate the inequality max_{m∈M_s} ρ(m, v_P) > max_{m∈M_d} ρ(m, v_P), where ρ is the Spearman correlation. We count the proportion of times the inequality is true across all properties in the norms, repeat this for each of the eight dense models and their sparse counterparts, and calculate the average. The results show that the sparse models have the most correlated dimension 63.2% of the time. In order to ensure that the dense models were not disadvantaged by having more dimensions (and to test that the sparsity constraint, rather than dimensionality reduction, was the reason for the superior performance of the sparse models), we used SVD on all dense models to reduce the dimensions down to the same size as their sparse counterparts and reran the test. Here the results show that the sparse models have the most correlated dimension 81.1% of the time, indicating that the sparse models do learn semantics-encoding dimensions from the dense models that more closely correspond to human-derived property knowledge.
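A small sketch of this dimension-level comparison is given below; the toy matrices and the use of nan-safe maxima are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def best_dimension_correlation(embedding_matrix, property_values):
    """Highest Spearman correlation between any embedding dimension and a property.

    embedding_matrix: (n_concepts, n_dims); property_values: (n_concepts,).
    """
    correlations = [
        spearmanr(embedding_matrix[:, d], property_values)[0]
        for d in range(embedding_matrix.shape[1])
    ]
    return np.nanmax(correlations)

def sparse_wins_proportion(dense, sparse, property_matrix):
    """Proportion of properties whose best-correlating dimension is in the sparse model."""
    wins = [
        best_dimension_correlation(sparse, property_matrix[:, p])
        > best_dimension_correlation(dense, property_matrix[:, p])
        for p in range(property_matrix.shape[1])
    ]
    return float(np.mean(wins))

# Toy example: 40 concepts, a 30-d dense vs 20-d sparse model, 10 binary properties.
rng = np.random.default_rng(1)
dense = rng.normal(size=(40, 30))
sparse = np.clip(rng.normal(size=(40, 20)), 0, None)  # non-negative, sparse-ish
props = (rng.random(size=(40, 10)) < 0.2).astype(int)
print(sparse_wins_proportion(dense, sparse, props))
```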
For our final set of analyses, we tested how closely each of the eight dense and eight sparse models relates to neurocognitive processing in the human brain. We used BrainBench . The results demonstrate that semantic distributional models that encode a range of different information are better at making statistically significant predictions on brain data. In general, the multimodal models do better than the unimodal text and image models at fitting the brain data. Finally, we computed the direct correlation between the representations M_D and M_B, using the technique of Representational Semantic Analysis (RSA) . For a given distributional model, we average all Spearman correlation values across the nine participants for each imaging modality; the results are presented in Table . In this paper, we have demonstrated the representational potential of sparse multimodal distributional models using several qualitatively different and complementary evaluation tasks that are derived from human data: semantic similarity ratings, conceptual property knowledge, and neuroimaging data. We show that both sparse and multimodal embeddings achieve a more faithful representation of human semantics than dense models constructed from a single information source.
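The sketch below illustrates one common way to carry out such a representational comparison: correlate the pairwise-dissimilarity structure of the model embeddings with that of the brain responses. The cosine distance and the Spearman comparison are assumptions, not necessarily the exact RSA variant used here.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rsa_correlation(model_vectors, brain_vectors):
    """Spearman correlation between two pairwise-dissimilarity structures.

    Both inputs are (n_concepts, n_features) matrices over the same concepts,
    in the same row order. Cosine distance is used for both spaces here.
    """
    model_rdm = pdist(model_vectors, metric="cosine")
    brain_rdm = pdist(brain_vectors, metric="cosine")
    return spearmanr(model_rdm, brain_rdm)[0]

# Toy example: 25 shared concepts, 200-d embeddings vs 500 voxels.
rng = np.random.default_rng(2)
print(rsa_correlation(rng.normal(size=(25, 200)), rng.normal(size=(25, 500))))
```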
911
971
911
Scene Graph as Pivoting: Inference-time Image-free Unsupervised Multimodal Machine Translation with Visual Scene Hallucination
In this work, we investigate a more realistic unsupervised multimodal machine translation (UMMT) setup, inference-time image-free UMMT, where the model is trained with source-text and image pairs, and tested with only source-text inputs. First, we represent the input images and texts with visual and language scene graphs (SG), where such fine-grained vision-language features ensure a holistic understanding of the semantics. To enable pure-text input during inference, we devise a visual scene hallucination mechanism that dynamically generates a pseudo visual SG from the given textual SG. Several SG-pivoting based learning objectives are introduced for unsupervised translation training. On the benchmark Multi30K data, our SG-based method outperforms the best-performing baseline by significant BLEU margins on the task and setup, helping yield translations with better completeness, relevance and fluency without relying on paired images. Further in-depth analyses reveal how our model advances in the task setting.
Current neural machine translation (NMT) has achieved great success . UMMT systems are trained with only the text-image pairs (<text-img>), which can be easier to collect than the parallel source-target sentence pairs (<src-tgt>) . In this work, we present a novel UMMT method that solves all the aforementioned challenges. First of all, to better represent the visual (and also the textual) inputs, we consider incorporating the visual scene graph (VSG) . Overall, we make the following contributions: ▶ 1) We are the first to study inference-time image-free unsupervised multimodal machine translation, solved with a novel visual scene hallucination mechanism. ▶ 2) We leverage the SGs to better represent the visual and language inputs. Moreover, we design SG-based graph pivoting learning strategies for UMMT training. ▶ 3) Our model achieves huge boosts over strong baselines on benchmark data. Code is available at
Neural machine translation has achieved notable development in the era of deep learning . Unsupervised machine translation aims to learn a cross-lingual mapping without the use of large-scale parallel corpora. The setting is practically meaningful for minor languages with hard data accessibility. The basic idea is to leverage alternative pivoting content to compensate for the missing parallel signals, based on the back-translation method . A scene graph describes the scene of an image or a text as a structured layout, connecting discrete objects with attributes and with other objects via pairwise relations . All previous UMMT studies assume that the <src-img> pairs are required during inference, yet we note that this can actually be unrealistic. We thus propose a visual hallucination mechanism, achieving the inference-time image-free goal. There are relevant studies on supervised MMT that manage to avoid image inputs (using text only) during inference, e.g., the visual retrieval-based methods . 3 Scene Graph-based Translation System. In UMMT, no parallel translation pairs are available. This work considers an inference-time image-free UMMT. During training, the data availability is <x, z> ∈ <X, Z>, together with the corresponding src-side LSG_x and VSG, where X are the src-side sentences and Z are the paired images. During inference, the model generates tgt-side sentences y ∈ Y based only on the inputs x ∈ X and the corresponding LSG_x, while the visual scene VSG′ is hallucinated from LSG_x. In both training and inference, y is generated from the intermediate tgt-side language scene graph LSG_y, which is produced from LSG_x and VSG (or VSG′). As shown in Fig. , we first employ two off-the-shelf SG parsers to obtain the LSG and VSG separately (detailed in the experiment part). For simplicity, here we unify the notations of LSG and VSG as SG. We denote an SG as G = (V, E), where V are the nodes (including object o, attribute a and relation r types), and E are the edges e_{i,j} between any pair of nodes v_i ∈ V. We then encode both the VSG and LSG with two spatial Graph Convolution Networks (GCN), where r_i is the representation of node v_i. We denote r_i^L as the LSG's node representation, and r_i^V as the VSG's node representation. Visual Scene Hallucinating. During inference, the visual scene hallucination (VSH) module is activated to perform a two-step inference that generates the hallucinated VSG′, as illustrated in Fig. . Step 2: completing vision aims to enrich and augment the skeleton VSG into a more realistic one. It is indispensable to add new nodes and edges to the skeleton VSG, since in real scenarios visual scenes are much more concrete and vivid than textual scenes. Specifically, we develop a node augmentor and a relation augmentor, where the former decides whether to attach a new node to an existing one, and the latter decides whether to create an edge between two disjoint nodes. To ensure the fidelity of the hallucinated VSG′, during training the node augmentor and relation augmentor are updated (i.e., with the learning target L_VSH) using the input LSG and VSG supervisions. Appendix §A.1 details the VSH module. SG Fusing & Mapping. Now we fuse the heterogeneous LSG_x and VSG into one unified scene graph with a mixed view. The key idea is to merge the information from the two SGs serving similar roles. In particular, we first measure the representation similarity of each pair of <text-img> nodes from the two GCNs.
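To make the scene-graph encoding concrete, below is a minimal sketch of a graph-convolution layer over SG node features. The propagation rule (mean aggregation with a ReLU) and the toy graph are illustrative assumptions, not the paper's specific spatial GCN.

```python
import numpy as np

def gcn_layer(node_feats, adjacency, weight):
    """One graph-convolution step: aggregate neighbours (plus self) and project.

    node_feats: (n_nodes, d_in); adjacency: (n_nodes, n_nodes) 0/1 matrix;
    weight: (d_in, d_out). Mean aggregation + ReLU is an illustrative choice.
    """
    adj_hat = adjacency + np.eye(adjacency.shape[0])    # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)            # node degrees
    aggregated = adj_hat @ node_feats / deg             # mean over neighbourhood
    return np.maximum(aggregated @ weight, 0.0)         # linear projection + ReLU

# Toy scene graph: object -- relation -- object, one attribute on the first object.
# Nodes: 0 "man" (object), 1 "kicks" (relation), 2 "ball" (object), 3 "young" (attribute)
adjacency = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(4, 8))     # initial node embeddings
weight = rng.normal(size=(8, 8))
print(gcn_layer(node_feats, adjacency, weight).shape)  # (4, 8)
```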
For those pairs with high alignment scores, we merge them into one node by averaging their representations, and for those without, we take the union of the structures from the two SGs. This results in a pseudo tgt-side LSG_y. We then use another GCN model for further representation propagation. Finally, we employ a graph-to-text generator to transform the LSG_y representations into the tgt sentence y. Appendix §A.2 presents all the technical details of this part. In this part, based on the SG pivot, we introduce several learning strategies to accomplish the unsupervised training of machine translation. We mainly consider 1) cross-SG visual-language learning, and 2) SG-pivoted back-translation training, as shown in Fig. . The visual-language SG cross-learning aims to enhance the structural correspondence between the LSG and VSG. Via cross-learning we also teach the SG encoders to automatically learn to highlight the shared visual-language information while deactivating the trivial substructures, i.e., denoising. The idea is to encourage the text and visual nodes that serve a similar role in the VSG and LSG to be closer. To align the fine-grained structures between SGs, we adopt the contrastive learning (CL) technique . A threshold value α is pre-defined to decide the alignment confidence, i.e., pairs with s_{i,j} > α are considered similar. Then we apply the CL loss, where τ > 0 is an annealing factor and j* denotes a positive pair for i, i.e., s_{i,j*} > α. Cross-modal Cross-reconstruction. We further strengthen the correspondence between the VSG and LSG via cross-modal cross-reconstruction. Specifically, we try to reconstruct the input sentence from the VSG, and the image representations from the LSG. In this way we force both SGs to focus on the VL-shared parts. To realize VSG→x we employ the aforementioned graph-to-text generator; for LSG→z, we use a graph-to-image generator . Back-translation is a key method for realizing unsupervised machine translation . In this work, we further aid the back-translation with structural SG pivoting. Visual-concomitant Back-translation. We perform the back-translation with SG pivoting. We denote the X→Y translation direction as y = F_{xz→y}(x, z), and the Y→X direction as x = F_{yz→x}(y, z). As we only have src-side sentences, the back-translation is uni-directional, i.e., x→ȳ→x. Captioning-pivoted Back-translation. Image captioning is partially similar to MMT, apart from the non-text part of the input. Inspired by this, we also introduce a captioning-pivoted back-translation objective. ⋆ Remarks. In the initial stage, each of the above learning objectives is executed separately, in a certain order, so as to maintain a stable and effective UMMT system. We first perform L_CMA and L_REC, because the cross-SG visual-language learning is responsible for aligning the VL SGs, based on which the high-level translation can happen. Then we perform the back-translation training L_VCB and L_CPB, together with the VSH updating L_VSH. Once the system tends to converge, we put them all together for further fine-tuning. 5 Experiments. The experiments are carried out on the Multi30K data; we mainly consider English-French (En↔Fr) and English-German (En↔De). For each translation direction, we only use the src sentence & img for training, and only the src sentence for testing. We also test on WMT16 En→Ro and WMT14 En→De, En→Fr (WMT). Following prior research, we employ Faster-RCNN . Our results are computed by averaging over the 5 latest model checkpoints, with significance tests. Our experiments are based on NVIDIA A100 Tensor Core GPUs.
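The cross-modal alignment step can be sketched as an InfoNCE-style contrastive loss over node pairs whose similarity exceeds the threshold α. The cosine similarity and the exact loss form below are illustrative assumptions rather than the precise objective used in the paper.

```python
import numpy as np

def contrastive_alignment_loss(lsg_nodes, vsg_nodes, alpha=0.5, tau=0.1):
    """Contrastive loss pulling aligned LSG/VSG node pairs together.

    lsg_nodes: (n, d) language-SG node representations.
    vsg_nodes: (m, d) visual-SG node representations.
    Pairs with cosine similarity > alpha are treated as positives; tau is the
    annealing (temperature) factor. Returns the average loss over positive
    pairs, or 0.0 if none exist.
    """
    a = lsg_nodes / np.linalg.norm(lsg_nodes, axis=1, keepdims=True)
    b = vsg_nodes / np.linalg.norm(vsg_nodes, axis=1, keepdims=True)
    sims = a @ b.T                              # (n, m) cosine similarities
    logits = np.exp(sims / tau)
    losses = []
    for i in range(sims.shape[0]):
        positives = np.where(sims[i] > alpha)[0]
        denom = logits[i].sum()
        for j in positives:                     # -log softmax prob of each positive
            losses.append(-np.log(logits[i, j] / denom))
    return float(np.mean(losses)) if losses else 0.0

# Toy example: 5 LSG nodes vs 6 VSG nodes in a shared 16-d space.
rng = np.random.default_rng(3)
print(contrastive_alignment_loss(rng.normal(size=(5, 16)), rng.normal(size=(6, 16))))
```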
(Example captions from the qualitative illustration: "man in t-shirt and shorts kicks football off the tee."; "two bicycles stand behind two people sitting on the grass near a body of water.") In Table , removing the back-translation objectives influences the results the most, with an average difference of 4.3 BLEU. Overall, the two SG-pivoted back-translation training targets show much higher influence than the two cross-SG visual-language learning objectives. When removing both back-translation targets, we witness the most dramatic decrease, i.e., an average of -5.7 BLEU. This validates the long-standing finding that the back-translation mechanism is key to unsupervised translation and unsupervised MMT. We also find that our unsupervised method loses less than 1 BLEU point to supervised models, e.g., UVR and PUVR. In this part we dive deeper into the model, presenting in-depth analyses to reveal what our proposed method improves and how it really works. • Integration of the vision and language SGs helps gain a holistic understanding of the input. Both the VSG and LSG comprehensively depict the intrinsic structure of the content semantics, which ensures a holistic understanding of the input texts and images. By encoding the vision and language SGs, the model is expected to capture the key components of the src inputs completely, and thus achieve better translations. Without such structural features, however, some information may be lost during translation, as shown in Table . • SG-based multimodal feature modeling helps achieve more accurate alignment between vision and language. Another merit of integrating the SGs is that the fine-grained graph modeling of visual and language scenes clearly aids more precise multimodal feature alignment. In this way, the translated texts have higher fidelity to the original texts. Inaccurate multimodal alignment that does not consider SG modeling will otherwise lead to worse ambiguity, as can be observed in the ambiguity cases in Table . • The longer and more complex the sentences, the more the translation quality benefits from the SG features. In this work, we investigate SG structures to model the input texts. Graph modeling of the texts has proven effective for resolving the long-range dependency issue . • Incorporating SGs into MMT leads to more fluent translations. Modeling the semantic scene graph of the input also contributes substantially to the language fluency of the translated texts, as reflected by the Fluency item in Table . • The SG-based visual scene hallucination mechanism helps gain rich and correct visual features. Different from the baseline retrieval-based methods that directly obtain whole images (or local regions), our proposed VSH mechanism instead generates the VSGs from the given LSGs as compensation. In this way, the hallucinated visual features enjoy two-fold advantages. On the one hand, the pseudo VSG has high correspondence with the textual one, which enhances the shared feature learning between the two modalities. On the other hand, the hallucinated VSG produces some vision-specific scene components and structures, providing additional clues that feed back to the textual features for an overall better semantic understanding (Fig. ). We investigate an inference-time image-free setup in unsupervised multimodal machine translation. Specifically, we integrate the visual and language scene graphs to learn fine-grained vision-language representations. In §3.2 we gave a brief introduction to the overall model framework; here we extend the details of each module of the scene graph-based multimodal translation backbone, as shown in Fig.
First of all, we note that VSH will only be activated to produce the VSG hallucination at inference time. During the training phase, we construct the VSG vocabularies for the different VSG node types. We denote the object vocabulary as D_o, which caches the object nodes from the parsed VSGs of the training images; the attribute vocabulary as D_a, which caches the attribute nodes; and the relation vocabulary as D_r, which caches the relation nodes. These vocabularies provide the basic ingredients for VSG hallucination. At inference time, VSH is activated to perform a two-step inference that generates the hallucinated VSG′. The process is illustrated in Fig. . Step 1: Sketching Skeleton. This step builds the skeleton VSG from the raw LSG. Specifically, we only need to transform the textual entity nodes into visual object nodes, while keeping the whole graph topology unchanged. As for the attribute nodes and the relation nodes, we directly copy them into the VSG, as they are all text-based labels that are applicable in the VSG. Then we transform the textual entity nodes into visual object nodes. For each textual entity node in the LSG, we employ the edges, as illustrated in Fig. . ▶ For the node augmentor, we first traverse all the object nodes in the skeleton VSG. For each object node v_i, we then perform k-order routing over its neighbor nodes, denoted {v_k}. We use attention to learn the neighbors' influence on v_i and obtain the k-order feature representation h_i of v_i, where r_i and r_k are the node representations of v_i and v_k obtained from the GCN encoder. Then we use a classifier to make a prediction over the combined vocabulary D_na = D_o ∪ D_a ∪ {ϵ}, which includes an additional dummy token ϵ indicating that no new node should be attached, to determine which node v′_i (either an object or an attribute node) should be attached to v_i, if any. If the predicted node is an object node, an additional relation classifier determines the relation label ê′ between v′_i and v_i. ▶ For the relation augmentor, we first traverse all the node pairs (object or attribute nodes, excluding the relation nodes) in the VSG, i.e., v_i & v_j. Then, for each pair we use a triaffine attention to make a prediction over D_pa = D_r ∪ {ϵ}, where the dummy token ϵ indicates that no new edge should be created between the two nodes. The new edge ê′_{i,j} carries a relation label. h^{pa}_{i-j} is the representation of the path from v_i to v_j, obtained by pooling over all the nodes on the path: h^{pa}_{i-j} = Pool(r_i, ..., r_j). Note that the triaffine scorer is effective in modeling high-order ternary relations, which provides a precise determination of whether to add a new edge. During training, the node augmentor and the relation augmentor are trained and updated based on the gold LSG and VSG, to learn the correct mapping between the LSG and VSG. Such supervised learning is also important for ensuring that the final hallucinated visual scenes are broadly consistent with the caption text, instead of being random or groundless visual scenes. Here we extend the contents of §3.2. As shown in Fig. , this is a similar process to the one in Eq. ( ). Then, an autoregressive sequential decoder (SeqDec) takes r_y and generates a tgt-side token over the tgt-side vocabulary at each step, sequentially: e_i = SeqDec(e_≤i, r_y), ŷ_i ← Softmax(e_i).
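A compact sketch of the node-augmentor decision is given below. The attention form, the single projection step, and the toy candidate vocabulary are illustrative assumptions rather than the exact parameterisation described above.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def node_augmentor(node_repr, neighbor_reprs, cand_embeddings, cand_labels):
    """Decide which new node (if any) to attach to an existing VSG node.

    node_repr: (d,) representation of the current object node v_i.
    neighbor_reprs: (k, d) representations of its k-order neighbours.
    cand_embeddings: (|D_o ∪ D_a| + 1, d) embeddings of candidate nodes,
        with the last row standing for the dummy token ϵ ("attach nothing").
    cand_labels: candidate names aligned with cand_embeddings.
    """
    # Attention over the neighbourhood to build the contextual feature h_i.
    attn = softmax(neighbor_reprs @ node_repr)
    h_i = node_repr + attn @ neighbor_reprs
    # Score each candidate (including ϵ) against the contextualised node.
    scores = softmax(cand_embeddings @ h_i)
    return cand_labels[int(np.argmax(scores))]

# Toy example with a 3-word candidate vocabulary plus the dummy token.
rng = np.random.default_rng(4)
d = 16
labels = ["shirt", "grass", "red", "<eps>"]
print(node_augmentor(rng.normal(size=d), rng.normal(size=(3, d)),
                     rng.normal(size=(4, d)), labels))
```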
1,017
908
1,017
Improving Topic Quality by Promoting Named Entities in Topic Modeling
News-related content has been extensively studied in both topic modeling research and named entity recognition. However, the expressive power of named entities and their potential for improving the quality of discovered topics has not received much attention. In this paper we use named entities as domain-specific terms for news-centric content and present a new weighting model for Latent Dirichlet Allocation. Our experimental results indicate that involving more named entities in topic descriptors positively influences the overall quality of topics, improving their interpretability, specificity and diversity.
News-centric content conveys information about events, individuals and other entities. Analysis of news-related documents includes identifying hidden features for classifying them or summarizing the content. Topic modeling is the standard technique for such purposes, and Latent Dirichlet Allocation (LDA) . The main contribution of this work is improving topic quality with LDA by increasing the importance of named entities in the model. The idea is to adapt the topic model to include more domain-specific terms (NE) in the topic descriptors. We designed our model to be flexible, in order to be used in different variations of LDA. We ultimately employ a term-weighting approach for the LDA input. Our results show that: i) named entities can serve as favorable candidates for high-quality topic descriptors, and ii) a weighting model based on pseudo term frequencies is able to improve overall topic quality without the need to interfere with LDA's generative process, which makes it adaptable to other LDA variations. The paper is organized in the following manner: in Section 2 we present the related work; Section 3 describes the proposed solution and is followed by Section 4, where the details of the evaluation process and results are outlined. We finish with Section 5, concluding the results and next steps.
This section describes the related work in the area of topic modeling, specifically LDA. Several works have explored the relation between LDA and named entities in recent years. The most famous model is CorrLDA2 . Traditionally, the input of LDA is a document-term matrix of term frequencies (TF), according to the bag-of-words model (BoW). However, . 3 Proposed model. The LDA model has been criticized for favoring highly frequent, general words in topic descriptors . NE Independent model. This model assumes that all named entities in the corpus are α times more important than their initial weights (TF), i.e., they may not be the most important terms in the corpus, but they should weigh α times more than they do now. Therefore, for each column m_w of the document-term matrix M, we apply scalar multiplication. By varying α, we can set the importance of named entities in the corpus and impact the outcome of topic modeling. The value need not be an integer, since typical LDA implementations can deal with non-integer values. In Section 4 we provide results for several tested values of the α parameter and discuss our findings. While we want the topics produced by LDA to include more named entities as domain-specific words, we may also assume that NE should, in fact, be the most important, i.e. the most frequent, terms in each document. In order to set the weights accordingly, the maximum term frequency per document is calculated and added to each named entity's weight in each document. This weighting scheme obliges named entities to be the "heaviest" terms in each document. At the same time, we do not change the weights of other frequent terms, so they still have a high probability of making the top-terms list. We designed a series of tests to evaluate our proposed model: a) Baseline Unigram: the basic model on the corpus consisting of single tokens (no named entities involved); b) Baseline NE: the basic model on the corpus with named entities (the strategy of injecting NE in all tests is replacement instead of supplementation, as suggested by ). Our test corpora consist of news-related, publicly available datasets: 1) 20 Newsgroups ; 2) Reuters-2013: a set of 14,595 news articles from Reuters for the year 2013, obtained from the Financial News Dataset . The term "topic coherence" covers a set of measures describing the quality of topics with regard to their interpretability by a human. The most widely used measures are based on PMI (or its normalized variant, NPMI) and log conditional probability, both of which rely on the co-occurrence of terms. In Equation (3), N is the number of topics, W_t is the set of top N_t terms in topic t, the term vectors are defined over these top terms, and the underlying measure is NPMI with probability P_sw estimated over a sliding window. We use C_v with a sliding window of 110 words . Coherence measures tend to favor topics with general, highly frequent terms; as a result, we end up with well-understandable but quite generic topics. A good topic should also be specific enough to distinguish documents . Exclusivity: represents the degree of overlap between topics, based on the appearance of terms in multiple descriptors . Lift: generally used for re-ranking the terms in descriptors . The results reported in the tables indicate that, firstly, our proposed model is capable of improving topic quality by only modifying the TF scores in the input of LDA in favor of named entities. This makes it applicable to any LDA-based models relying on the same input. Secondly, we have shown that named entities are well suited to be used as domain-specific terms and produce high-quality topics in news-related texts.
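The two weighting schemes can be implemented as a simple transformation of the document-term count matrix before it is handed to LDA. The sketch below uses scikit-learn's CountVectorizer and a toy NE list; both are illustrative choices, and boosting only NEs that actually occur in a document (in the second scheme) is an assumption on my part.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "reuters reported that john smith met investors in london",
    "london markets rallied while john smith stayed silent",
]
named_entities = {"john", "smith", "london", "reuters"}  # toy NE list

vectorizer = CountVectorizer()
M = vectorizer.fit_transform(docs).toarray().astype(float)  # document-term TF matrix
vocab = vectorizer.get_feature_names_out()
ne_cols = np.array([w in named_entities for w in vocab])

# Scheme 1 (NE Independent): scale every NE column by a factor alpha.
alpha = 3.0
M_independent = M.copy()
M_independent[:, ne_cols] *= alpha

# Scheme 2 (NE max-TF): add each document's maximum TF to the counts of the
# NEs occurring in it, so named entities become the "heaviest" terms there.
M_max = M.copy()
doc_max = M.max(axis=1, keepdims=True)
M_max[:, ne_cols] += doc_max * (M[:, ne_cols] > 0)

# Either matrix can now be fed to an LDA implementation that accepts
# real-valued term weights (e.g., sklearn.decomposition.LatentDirichletAllocation).
print(M_independent.shape, M_max.shape)
```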
Next steps in our research include experimenting with different weights for different categories of named entities, as well as adding new coherence measures, such as a word2vec-based one, as used by
611
1,312
611
A General-Purpose Algorithm for Constrained Sequential Inference
Inference in structured prediction involves finding the best output structure for an input, subject to certain constraints. Many current approaches use sequential inference, which constructs the output in a left-to-right manner. However, there is no general framework to specify constraints in these approaches. We present a principled approach for incorporating constraints into sequential inference algorithms. Our approach expresses constraints using an automaton, which is traversed in lockstep during inference, guiding the search to valid outputs. We show that automata can express commonly used constraints and are easily incorporated into sequential inference. When it is more natural to represent constraints as a set of automata, our algorithm uses an active set method for demonstrably fast and efficient inference. We experimentally show the benefits of our algorithm on constituency parsing and semantic role labeling. For parsing, unlike unconstrained approaches, our algorithm always generates valid output, incurring only a small drop in performance. For semantic role labeling, imposing constraints using our algorithm corrects common errors, improving F1 by 1.5 points. These benefits increase in low-resource settings. Our active set method achieves a 5.2x relative speedup over a naive approach.
The key challenge in structured prediction problems (like sequence tagging and parsing) is inference (also known as decoding), which involves identifying the best output structure y for an input instance x from an exponentially large search space Y. At present, inference algorithms are designed to handle task-specific constraints, and there is no general formulation for constrained sequential inference. This contrasts with the state of affairs in NLP before deep learning, when constrained inference approaches used general formulations like Integer Linear Programming (ILP). We present a simple, general-purpose sequential inference algorithm that takes a model and an automaton expressing the constraints as input, and outputs a structure that satisfies the constraints (§2.2). The automaton guides the inference to always produce a valid output by reshaping the model's probability distribution such that actions deemed invalid by the automaton are not taken. In some situations, it is more natural to express the constraints as a set of automata. However, naively enforcing multiple automata by fully intersecting them is potentially expensive. Instead, our algorithm lazily intersects the automata using an efficient active set method, reminiscent of the cutting-plane algorithm. The choice of using automata to express constraints has several benefits. First, automata are capable of expressing constraints used in a wide variety of NLP tasks. Indeed, in §3, we show that task-specific constrained inference approaches implicitly use an automaton. Second, automata can be naturally incorporated into any sequential inference algorithm such as beam search. Finally, automata make enforcing multiple constraints straightforward: only the automata for individual constraints need to be specified, which are then intersected at inference time. Our algorithm is a principled approach for enforcing constraints and has many desirable properties. It decouples the constraints from the inference algorithm, making it generally applicable to many problems. Further, it guarantees valid output and allows for the seamless addition of constraints at inference time without modifying the inference code. We experimentally demonstrate the benefits of our algorithm on two structured prediction tasks, constituency parsing (§5.1) and semantic role labeling (§5.2). Our results in constituency parsing show that our algorithm always outputs valid parses, incurring only a small drop in F1. In SRL, constrained inference using our algorithm corrects common errors produced by unconstrained inference, resulting in a 1.5 F1 improvement. This increase in performance is more prominent in low-resource settings. Finally, the active set method for enforcing multiple constraints achieves a 5.2x speed-up over the naive approach of fully intersecting the relevant automata.
We briefly review automata that we use for representing constraints in our algorithm. For the purposes of this work, an automaton is a (possibly weighted) directed graph that compactly encodes a set of strings, known as its language. The two types of automata used in this work are finite-state automata (FSA) and push-down automata (PDA). Our inference algorithm views an automaton as an abstract stateful function, denoted as A, which accepts strings from its language L(A). After consuming the prefix y_{1:i} of a string y, A provides a score A(y_{i+1} | y_{1:i}) for every symbol in Σ. Invoking A.accepts(s) tests if a string s is in L(A). The traditional inference problem for structured prediction can be formalized as solving Equation 1, where x and y are the input and output structures, Y_x is the set of valid output structures for x, and θ are the parameters of the model p_θ(y | x). A common way to solve this problem is to decompose the objective into per-step decisions (Equation 2). This decomposition is adopted by popular approaches, such as seq2seq models and SEARN . We are interested in versions of Equation 2 in which the output space is described by the language of an automaton. Formally, in Equation 3, L(A_x) is the language of an automaton A_x describing the valid output space for instance x. This framework is capable of expressing many constraints which are common in NLP applications, described in detail in §3. Equation 3 can be rewritten as ŷ = argmax_y Σ_i log p̃_θ(y_i | y_{1:i-1}, x), where A_x reshapes p_θ to p̃_θ at each time step. To impose hard constraints, we set this score to -∞ for invalid y_i and to a constant for all valid y_i. The formulation above assumes that the constraints are described using a single automaton. However, in some scenarios, it is more natural to impose multiple constraints by representing them as a set of automata. A Motivating Example. Consider Figure . Issues with Naive Approaches. Naively extending sequential inference algorithms to impose a set of constraints by traversing multiple automata in parallel fails: there is no guarantee that a valid structure will be found, even if the intersection of the automata's languages is non-empty (a proof by counter-example is provided in Appendix A). The alternative solution of intersecting the automata into a single automaton may be intractable, as the size of the intersected automaton grows exponentially in the number of constraints. An Active Set Method. Intuitively, intersecting all of the automata may not be necessary, because it is possible for a constraint to be satisfied without it being enforced. This is the basis for active set methods (such as the cutting-plane algorithm). We present an active set method for imposing multiple constraints, represented by a set of automata S_x, in Algorithm 2. Our algorithm is inspired by the active set algorithm of . For an instance x, Algorithm 2 maintains an active set W (also known as a working set) corresponding to all constraints violated so far. W is represented by the intersection A_W of the relevant automata, which is initialized with an automaton Σ* that accepts any sequence (line 1). On each iteration, the algorithm runs a constrained inference algorithm (such as Algorithm 1) that uses A_W (line 3) to find an output ŷ. Then, FIND-VIOLATION checks if ŷ violates any of the constraints that are not currently in the active set, S_x \ W (line 4). If ŷ is accepted by all of the automata (line 5), it is valid and subsequently returned (line 6).
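A minimal sketch of this reshaping step is shown below: at each step the automaton scores every candidate symbol given the prefix (0 for valid, -∞ for invalid), and the scores are added to the model's log-probabilities before the next symbol is chosen. The greedy loop, the toy no-immediate-repetition constraint, and the toy model are all illustrative assumptions rather than the paper's Algorithm 1.

```python
import math

class NoImmediateRepeat:
    """Toy constraint automaton: the same symbol may not appear twice in a row."""
    def score(self, prefix, symbol):
        return float("-inf") if prefix and prefix[-1] == symbol else 0.0

def constrained_greedy_decode(model_logprob, automaton, vocab, length):
    """Greedy decoding over the reshaped distribution p̃(y_i | y_<i, x)."""
    prefix = []
    for _ in range(length):
        # Reshape: add the automaton score (0 or -inf) to each model score.
        scored = [(model_logprob(prefix, s) + automaton.score(prefix, s), s)
                  for s in vocab]
        best_score, best_symbol = max(scored)
        prefix.append(best_symbol)
    return prefix

# A toy model that always prefers "the"; the constraint blocks "the the".
def toy_model(prefix, symbol):
    return {"the": math.log(0.7), "cat": math.log(0.2), "sat": math.log(0.1)}[symbol]

print(constrained_greedy_decode(toy_model, NoImmediateRepeat(),
                                ["the", "cat", "sat"], 4))
# ['the', 'cat', 'the', 'cat']
```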
Otherwise, the first violated constraint is added to W (line 8), its automaton A is intersected with A_W (line 9), and constrained inference is re-run (line 3). (These constraints are described in detail in §4.) The relevant fragment of Algorithm 2 is:

9:  A_W ← A_W ∩ A                         ▷ automata intersection
10: procedure FIND-VIOLATION(K, y)
11:     for each A in K do
12:         if not A.accepts(y) then
13:             return A                  ▷ the first violated constraint
14:     return null

Algorithm 2 is guaranteed to terminate with a valid output. In the worst case, all of the constraints will eventually enter the active set and inference will run with the fully intersected automaton. Although the cost of this worst case is exponential in the number of constraints, it occurs infrequently in practice. Moreover, we found that Algorithm 2 is faster than naively computing the full intersection, despite running inference multiple times. We now illustrate the expressibility of automata by showing how they can represent commonly used constraints in various NLP applications. Text Generation. Text generation tasks like image captioning, machine translation, sentence simplification, etc., often require that the output contain specific words or phrases . Sequence Tagging. Many sequence tagging problems (such as NER, shallow parsing, etc.) require that the output tags are valid under the specific tagging scheme, such as BIO, which marks each token as the beginning, inside, or outside of a phrase. One can easily write an FSA that recognizes the language of valid tag sequences for these schemes, such as the automaton in Figure (a Python sketch of such an acceptor is given below). Other constraints commonly applied to sequence tagging are specific to the particular task. In SRL, each argument type can appear exactly once. For instance, for the label A0, an FSA with 3 states (before-A0, emitting-A0 and after-A0) can be written to enforce this constraint. See . Syntactic Parsing. Syntactic parsing (dependency or constituency) tasks require that the output forms a valid tree, a constraint commonly enforced using the shift-reduce algorithm . Semantic Parsing and Code Generation. In semantic parsing and code generation, constraints ensure both the syntactic validity and the executability of the output. For instance, for the predicate ISADJACENT (which compares two countries for adjacency), the syntactic constraint ensures that the predicate receives two arguments, whereas the executability constraint ensures that they are properly typed (e.g., that they are countries). We elaborate on two constrained inference tasks from the previous section, constituency parsing and semantic role labeling, which serve as case studies for showing the applicability and versatility of our approach. Our goal is not to beat the state of the art, but to illustrate the practicality and benefits of our approach. Constituency Parsing. We follow the experimental setup of . We compare our approach (CONSTRAINED) to unconstrained inference (UNCONSTRAINED), which runs beam search and selects the highest-scoring output. We also compare to . Semantic Role Labeling. For SRL, the unconstrained model is an off-the-shelf implementation of . Constituency Parsing. The constraints used in parsing disallow invalid parses, such as the examples in Figure . Semantic Role Labeling. The constraints used in SRL disallow invalid sequences, such as the ones in Figure . Ease of Implementation. All of the constraints were expressed using an FSA, with the exception of BAL, which requires a PDA.
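Returning to the BIO constraint above, the sketch below implements a small acceptor with the same accepts() interface that FIND-VIOLATION relies on. The two-label tag set and the state bookkeeping are illustrative assumptions, not the paper's Pynini-based implementation.

```python
class BIOAcceptor:
    """FSA accepting BIO tag sequences where I-X only follows B-X or I-X."""

    def __init__(self, labels=("PER", "LOC")):
        self.labels = set(labels)

    def accepts(self, tags):
        state = "O"                       # current chunk type, "O" = outside
        for tag in tags:
            if tag == "O":
                state = "O"
            elif tag.startswith("B-") and tag[2:] in self.labels:
                state = tag[2:]           # start a new chunk of this type
            elif tag.startswith("I-") and tag[2:] == state:
                pass                      # continue the current chunk
            else:
                return False              # invalid transition
        return True

fsa = BIOAcceptor()
print(fsa.accepts(["B-PER", "I-PER", "O", "B-LOC"]))   # True
print(fsa.accepts(["O", "I-PER", "B-LOC"]))            # False: I-PER without B-PER
```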
The automata were implemented using Pynini . We show the practicality of our approach and the benefits over unconstrained inference in the experiments below. In order to illustrate the benefits of constrained inference in low-resource settings, we simulate different levels of supervision. We experimentally show the need for constraints and compare inference strategies for parsing. Necessity of Constraints. We first demonstrate that constraints are necessary to guarantee valid output. Comparing Inference Approaches. Figure compares the inference approaches. At first glance, it appears that UNCONSTRAINED is slightly better than both POSTHOC and CONSTRAINED at 100% supervision. However, EVALB ignores any invalid parse trees when computing F1, and therefore it is necessary to take coverage (the percentage of valid output parses) into account when comparing inference approaches. This is evident from Figure . The increased coverage explains the apparent drop in F1 for the constrained approaches. Intuitively, longer sentences are inherently harder to parse. CONSTRAINED and POSTHOC produce a valid, but potentially incorrect, parse for these sentences and get penalized. In contrast, UNCONSTRAINED is more likely to produce invalid parses for these sentences, and is thus effectively evaluated on a test set containing shorter sentences. To verify this, we re-evaluated the inference approaches on test sentences of ≥30 tokens (Figure ). We show that we can improve the performance of a trained model by incorporating constraints at inference time which address common errors. Correcting Common Errors. We now show how some common SRL errors, like the ones discussed in Figure , can be corrected. Starting with an unconstrained baseline model (UNCONSTRAINED), we successively add the NODUP, LEGALARGS and SPANLABEL constraints, in that order; Figure reports the results. Active Set Size and Efficiency. For SRL, the maximum possible size of the active set W is 7 for any test instance. Intersecting all 7 automata would lead to an automaton with 1043 states and 2022 arcs. We now measure the size of the active set observed in practice (Figure ). To evaluate the efficiency of Algorithm 2, Figure compares the running times. Factors Affecting Speed-up. In general, the amount of speed-up provided by the active set method depends on several factors, including the number of constraints, the size of the constraint automata, and the cost of computing the softmax during inference. The largest gains come when the former two factors are most expensive, as the active set method only incurs the intersection cost as needed. If the output vocabulary is large, softmax computation may outweigh the cost of fully intersecting the constraint automata. Traditional Constrained Inference. Traditional constrained inference approaches enforced constraints using general combinatorial optimization frameworks, for instance linear programming . Data-driven Approaches. Many sequential inference approaches do not enforce constraints at all, in the hope that they will be learned from data . Post-Hoc Constraint Satisfaction. Some approaches first run unconstrained inference to find the top-k structures and then identify valid structures in a post-hoc manner . Our Work. We draw from work in NLP that uses automata . We presented a principled, general-purpose constrained sequential inference algorithm. Key to our algorithm is using automata to represent constraints, which we showed are capable of expressing commonly used constraints in NLP. Our approach is an attractive alternative to the task-specific constrained inference approaches currently in use.
Using a fast active set method, we can seamlessly incorporate multiple constraints at inference time without modifying the inference code. The experimental results showed the value of our approach over unconstrained inference, with the gains becoming more prominent in low-resource settings.
1,318
2,867
1,318
Event Embeddings for Semantic Script Modeling
Semantic scripts are a conceptual representation which defines how events are organized into higher-level activities. Practically all previous approaches to inducing script knowledge from text have relied on count-based techniques (e.g., generative models) and have not attempted to model events compositionally. In this work, we introduce a neural network model which relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by more effectively dealing with data sparsity) and outperforms count-based counterparts on the narrative cloze task.
It is generally believed that the lack of knowledge of how individual events are organized into higher-level scenarios is one of the major obstacles for natural language understanding. Texts often do not provide a detailed specification of the underlying events, as writers rely on the ability of humans to read between the lines or, more specifically, on their common-sense knowledge of the underlying scenarios. For example, going to a restaurant involves entering the restaurant, getting seated, making an order and so on. Consequently, when describing a visit to a restaurant, a writer will not specify all the events, as they are obvious to the reader. This kind of knowledge is typically referred to as semantic scripts . Early work on scripts focused on manual construction of knowledge bases and rule-based systems for inference using these knowledge bases . Most of these methods represent events as verbal predicates along with tuples of their immediate arguments (i.e., syntactic dependents of the predicate). These approaches model statistical dependencies between events (or, more formally, mentions of events) in a document, often restricting their model to capturing dependencies only between events sharing at least one entity (a common protagonist). We generally follow this tradition in our approach. Much of this previous work has focused on count-based techniques using, for example, either the generative framework . In this work our goal is to overcome the shortcomings of the count-based methods described above by representing events as real-valued vectors (event embeddings), with the embeddings computed in a compositional way relying on the predicate and its arguments. These embeddings capture semantic properties of events: events which differ in the surface forms of their constituents but are semantically similar will get similar embeddings. The event embeddings are used and estimated within our probabilistic model of semantic scripts. We evaluate our model on predicting left-out events (the narrative cloze task), where it outperforms existing count-based methods.
The general idea in the previous count-based methods is to collect event sequences for an entity from the corpus (referred to as a script). An entity is typically a noun/pronoun describing a person, location or temporal construct mentioned in a document. A document is parsed using a statistical dependency parser. Then, the document is processed with a coreference resolution system, linking all the mentions of an entity in the document. Information from the parser and the coreference system is used to collect all the events corresponding to an entity. Different systems differ in how they represent an event; we explain these representation differences in detail later. The process described above is repeated for all the documents in the corpus to collect event chains for each of the entities. The collected event sequences are used to build different statistical script models. These script models are typically evaluated using a narrative cloze test, as explained in Section 3. In the cloze test, an event is removed from an event chain and the task is to predict the missing event. As described above, different script models differ in how they represent an event. One of the disadvantages of the count-based models described above is their poor event representations. Due to these impoverished representations, these models fail to take into account the compositional nature of an event and suffer from sparsity issues. These models treat a verb-argument pair as one unit and collect chains of verb-argument pairs observed during training. Verb-argument combinations never observed during training are assigned zero (or, if the model is smoothed, very small) probability, even if they are semantically similar to the ones seen in training. These models fail to account for semantic similarity between the individual components (verbs and arguments) of an event. For example, the events cook(John,spaghetti,dinner) and prepared(Mary,pasta,dinner) are semantically very similar, but count-based models would not take this into account unless both events occur in similar contexts. Due to sparsity issues, these models can fail. This can be exemplified as follows. Suppose the following text is observed during model training: John cooked spaghetti for dinner. Later, John ate dinner with his girlfriend. After dinner, John took a dog for a walk. After 30 minutes, John came home. After a while, John slept on the bed. The event sequence (script) corresponding to the above story is:
cook(john,spaghetti,dinner) → eat(john,dinner,girlfriend) → take(john,dog,walk) → come(john,home) → sleep(john,bed)
Suppose that during testing the following event sequence is observed:
prepared(mary,pasta,dinner) → eat(mary,dinner,boyfriend) → take(mary,cat,walk) → ? → sleep(mary,couch)
The model is required to guess the missing event, marked with '?'. A count-based model would fail if it never encountered the same events during training: it would fail to take into account the semantic similarity between the words prepared and cook, or dog and cat. A related disadvantage of count-based script models is that they suffer from the curse of dimensionality . To counter the shortcomings of count-based script models, we propose a script model based on distributed representations . One of the standard tasks used for evaluating script models is narrative cloze . The narrative cloze task evaluates models for exact correctness of the prediction; it penalizes predictions even if they are semantically plausible.
It would be more realistic to evaluate script models on a task that also gives credit for predicting semantically plausible alternatives. We propose the adversarial narrative cloze task. In this task, the model is shown two event sequences: one is the correct event sequence and the other is the same sequence but with one event replaced by a random event. The task is to guess which of the two is the correct event sequence. For example, given the two sequences below, the model should be able to distinguish the correct event sequence from the incorrect one:
cook(john,spaghetti,dinner) → eat(john,dinner,girlfriend) → take(john,dog,walk) → play(john,tennis) → sleep(john,bed)
We propose a probabilistic model for learning a sequence of events corresponding to a script. The proposed model predicts an event incrementally. It first predicts the verbal predicate, followed by the protagonist position (since the protagonist argument is already known), followed by the remaining arguments. We believe this is a more natural way of predicting the event, as opposed to predicting the complete event by treating it as an atomic unit. The information about the predicate influences the possible arguments that could come next, due to the selectional preferences of the verb. As done in previous work, . Figure illustrates the setting. Here, we are given a sequence of events e_1, e_2, ..., e_{k-1}, e_k, e_{k+1}. Event e_k is removed from the sequence and is predicted incrementally. For event representation, one could use a sophisticated compositional model based on recursive neural networks . The event model is a simple compositional model representing an event; the model is shown in Figure . A good script model should capture the meaning as well as the statistical dependencies between events in an event sequence. More importantly, the model should be able to learn these representations from unlabeled script sequences, which are available in abundance. We propose a neural network based probabilistic model for event sequences, learning the event sequences as well as the event representations. The model is shown in Figure . In order to get an intuition of how our model predicts an event, consider the following event sequence in a script, with a missing event: (e_1 → e_2 ··· → e_{k-1} → ? → e_{k+1} → ... e_n). We would like to predict the missing event, say e_k. The event model is used to obtain event representations for each event in the context. These event representations are then composed into a context representation by summing the representations of all events in the context. We sum the representations, as this formulation works well in practice. The desired event e_k is predicted incrementally, beginning with the predicate p of e_k. The context embedding is used to predict the verbal predicate via a hidden layer followed by a multiclass logistic regression (softmax) classification. Next, the protagonist position d (subject, object, etc.) is predicted. For predicting d, the context embedding and the predicate embedding (corresponding to the predicate predicted in the previous step) are linearly combined and given as input to a hidden layer, followed by a regular softmax prediction. The arguments are predicted similarly: for each argument, the predicate embedding and the previous prediction (position or argument) are linearly combined with the context embedding. If at each prediction stage we used the gold predicate/position/argument embedding for the linear combination with the context embedding, our model would not be robust to wrong predictions during testing.
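A heavily simplified sketch of the compositional event model and the incremental predicate prediction is given below. The specific composition (a tanh over concatenated embeddings), the summation of context events, and the toy dimensions are assumptions made for illustration, not the exact architecture described above.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, HID_DIM, VOCAB = 50, 50, ["cook", "eat", "take", "come", "sleep"]
word_emb = {w: rng.normal(scale=0.1, size=EMB_DIM)
            for w in VOCAB + ["john", "spaghetti", "dinner", "girlfriend",
                              "dog", "walk", "home", "bed"]}
W_event = rng.normal(scale=0.1, size=(EMB_DIM, 4 * EMB_DIM))   # event composition
W_hid = rng.normal(scale=0.1, size=(HID_DIM, EMB_DIM))          # hidden layer
W_out = rng.normal(scale=0.1, size=(len(VOCAB), HID_DIM))       # predicate softmax

def embed_event(predicate, args):
    """Compose predicate and (up to three) argument embeddings into one vector."""
    slots = [word_emb[predicate]] + [word_emb[a] for a in args]
    slots += [np.zeros(EMB_DIM)] * (4 - len(slots))              # pad missing args
    return np.tanh(W_event @ np.concatenate(slots))

def predict_predicate(context_events):
    """Sum the context event embeddings and predict the missing event's predicate."""
    context = sum(embed_event(p, a) for p, a in context_events)
    hidden = np.tanh(W_hid @ context)
    scores = np.exp(W_out @ hidden)
    probs = scores / scores.sum()
    return VOCAB[int(np.argmax(probs))], probs

context = [("cook", ["john", "spaghetti", "dinner"]),
           ("eat", ["john", "dinner", "girlfriend"]),
           ("take", ["john", "dog", "walk"])]
print(predict_predicate(context)[0])   # untrained, so the prediction is arbitrary
```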
Using the embeddings of the predicted units instead makes the model robust to noise and helps it partially recover from wrong predictions during testing. We train the model by minimizing the negative log-likelihood of the event prediction; formally, we minimize the objective function -J(Θ) shown in equations 1 and 2. As shown in equation 3, we factorize the event distribution into its constituents, making the independence assumptions explained earlier; each factor is a multiclass logistic regression (softmax) function. Equation 4 gives the probability distribution of the predicate given the context, where u_{v_i} is the word embedding of predicate v_i, E is the context embedding, and b_{v_i} is the bias. The probability distributions for the arguments have a similar form and are not shown here due to space constraints. Θ = {..., B} is the parameter vector to be learned. Parameters are learned using mini-batch (size 1000) stochastic gradient descent with adagrad. We regularize the parameters of the model using L2 regularization (regularization parameter 0.01), and all hidden layers use a dropout factor of 0.5. We trained a word2vec model on the training-set documents to learn word embeddings, which are used to initialize the predicate and argument vectors. Predicate and argument embeddings, as well as the hidden layers, have dimensionality 50. All hyper-parameters were tuned on a dev set. 5 Experiments and Analysis There is no standard dataset for evaluating script models; we experimented with a movie summary corpus. We compare our model against two baselines: a Unigram model and the MultiProtagonist model. The unigram model is a simple but competitive script model that predicts an event by sampling from the unigram event frequency distribution of the training set, independently of the context. MultiProtagonist (M-Pr) is the model proposed by . We evaluated the models on the narrative cloze task with three metrics: Recall@50, Accuracy, and Event Perplexity. Recall@50 is the standard metric used for evaluating script models. Since both the baselines and our model are probabilistic, we also propose a new metric, Event Perplexity, inspired by the language modeling community. We define event perplexity as 2^(-(1/N) Σ_i log2 p(e_i | e_(context,i))). Like accuracy, the perplexity measure takes the constituents of an event into account and is a good indicator of the quality of the model's predictions (a minimal implementation is sketched below). The narrative cloze task was tested on 29,943 test-set scripts; the results are shown in Table . Similarly, the adversarial narrative cloze task was evaluated on 29,943 test-set scripts, replacing one event in each sequence with a random event; the results for this task are shown in Table . Work on scripts dates back to the 70's, beginning with the introduction of Frames by . As mentioned previously, in the past few years a number of count-based systems have been proposed for learning script knowledge in an unsupervised fashion. In this paper we proposed a probabilistic compositional model for scripts. As shown in the experiments, our model outperforms the existing co-occurrence count-based methods, which further reinforces our hypothesis that richer compositional representations of events are beneficial. Current tasks for evaluating script models are crude, in the sense that they penalize semantically plausible events.
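As defined above, event perplexity is 2^(-(1/N) Σ_i log2 p(e_i | e_(context,i))); a minimal implementation follows, where the probabilities in the usage line are made up for illustration.

```python
import math

def event_perplexity(log2_probs):
    """Event perplexity 2^(-(1/N) * sum_i log2 p(e_i | context_i)).

    `log2_probs` holds the model's log2-probability for each held-out event
    given its context; these would come from the factorised event
    distribution of whichever script model is being evaluated."""
    n = len(log2_probs)
    return 2 ** (-sum(log2_probs) / n)

# Toy usage with made-up probabilities:
print(event_perplexity([math.log2(0.25), math.log2(0.1), math.log2(0.5)]))  # ~4.31
```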
In the future, we propose to create a standard dataset of event sequence pairs (correct sequence vs. incorrect sequence), where the replaced event in the incorrect sequence is not a random event but a semantically close yet incorrect one. Models evaluated on such a dataset would give a better indication of their script learning capability. Another area that needs further investigation is developing models that can learn long-tail event distributions; current models do not capture this well and hence do not perform better than the most-frequent-event baseline on the accuracy task. In this paper, we proposed a very simple compositional feed-forward neural network model. In the future we plan to explore more sophisticated recurrent neural network (RNN) based models, which have recently shown success in a variety of applications.
686
2,079
686
Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span Selection
We present a simple and unified approach for both continuous and discontinuous constituency parsing via autoregressive span selection. Constituency parsing aims to produce a set of non-crossing spans so that they can form a constituency parse tree. We sort gold spans in a predefined order and train a pointer network to autoregressively select spans by that order. To deal with a discontinuous span, we consecutively select its subspans from left to right, label all but the last subspans with a special discontinuous label, and label the last subspan with the whole discontinuous span's label. We use a simple heuristic to output valid trees from selected spans so that our approach is able to predict all possible continuous and discontinuous constituency trees without sacrificing data coverage and without the need to use expensive chart-based parsing algorithms. Extensive experiments show that our model achieves state-of-the-art or competitive performance on all benchmarks of continuous and discontinuous constituency parsing.
Constituency parsing is a fundamental task in natural language processing, having many applications in downstream tasks such as language modeling Both continuous and discontinuous parsing can be framed as span prediction problems. In continuous parsing, each span corresponds to a single interval (of the observed sentence), while in discontinuous parsing a discontinuous span could correspond to multiple nonadjacent intervals, the number of which we refer to as fan-out. Fig. Current span-based discontinuous parsers can only deal with discontinuous spans of fan-out of at most two due to high parsing time complexity, thereby having limited data coverage. For instance, the most expressive variant of LCFRS-2 2 Scoring all fan-out-2 spans needs O(n 3 For example, In this work, we present a simple yet effective approach to address all aforementioned problems via autogressive span selection. 4 To solve the first issue, we sort gold spans by a predefined order and train a pointer network We conduct extensive experiments on benchmarks, achieving state-of-the-art performance on PTB and competitive performance on CTB for continuous parsing; and state-of-the-art performance on three benchmarks: Tiger, NeGra, and DPTB for discontinuous parsing. parsing accuracy, the discontinuous F1 is much lower than other comparable methods due to the restriction in search space.
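As a concrete illustration of the fan-out notion used above, the following hedged sketch (our own helper, with illustrative names, using inclusive token-index intervals rather than the fencepost boundaries introduced later) merges a node's yield into maximal contiguous intervals; a continuous span yields one interval, a discontinuous span several, and the interval count is the fan-out.

```python
def yield_to_intervals(token_indices):
    """E.g. [0, 1, 4, 5, 6] -> [(0, 1), (4, 6)], a discontinuous span of fan-out 2."""
    intervals = []
    start = prev = token_indices[0]
    for i in token_indices[1:]:
        if i != prev + 1:                 # a gap closes the current interval
            intervals.append((start, prev))
            start = i
        prev = i
    intervals.append((start, prev))
    return intervals

print(yield_to_intervals([0, 1, 4, 5, 6]))   # [(0, 1), (4, 6)] -> fan-out 2
print(yield_to_intervals([2, 3]))            # [(2, 3)]         -> continuous span
```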
We formally frame (dis)continuous constituency parsing as a span selection problem. A (dis)continuous constituency parse tree t comprises a set of nodes and for each node s we have yield(s) = {s 1 , ..., s l } which is the set of sorted token indices in the yield of s in t with s 1 < • • • < s l . The bidirectional conversion between spans and trees are shown as follows. Tree to spans. It is somewhat trivial to derive the span representation of a tree node s based on yield(s) by merging consecutive token indices into continuous intervals represented by left and right boundary indices. We do not allow two resulting intervals sharing any boundaries for eliminating potential ambiguities. If a single interval is obtained, then it is a continuous span; if multiple intervals are obtained, then it is a discontinuous span. Fig. Spans to tree. We say a set of spans S is consistent (to form a valid tree) iff ∀s, t ∈ S, yield(s) ∩ yield(t) = ∅ or yield(s) ⊂ yield(t) or yield(t) ⊂ yield(s). 5 We can build a tree from any consistent S as follows. For each s with | yield(s)| < n, we define P s ⊂ S as a set of spans in which s is properly contained. We say t is parent node of s if t = arg min t ′ ∈Ps len(t ′ ). We can thus determine the parent node of each span and thereby construct the whole tree. We find it convenient to reconstruct the tree if all spans are sorted in postorder tree traversal order , i.e., spans of smaller end position (i.e., max yield(s)) come first, and for tie breaking, spans of smaller width come first. As such, we only need to sequentially scan the sorted spans to build the tree from the bottom up: for a scanned span we build a node for it by looking up prior decoded nodes to find all its children nodes. Finally, we add a <TOP> node spanning the whole sentence to connect all unconnected nodes for obtaining the final parse tree. It also gives us a simple heuristic to build a tree from a set of inconsistent spans: sort spans by postorder, scan sorted spans and if a span crosses to any prior decoded spans, simply discard it, otherwise build a node for it likewise the aforementioned procedure. We adopt this strategy in the 5 Here we slightly abuse the notation yield. post-processing stage to build trees. Handling discontinuous spans. We use a simple strategy to deal with discontinuous spans. For a given discontinuous span of fan-out f {(l 1 , r 1 ), ..., (l f , r f )}, we label all but the last subspans (i.e., {(l 1 , r 1 ), ..., (l f -1 , r f -1 )}) with a special discontinuous label <dis> and the last subspan (i.e., (l f , r f )) with the label of the entire discontinuous span. For example, a discontinuous span (( Example. We take the discontinuous parse tree from Fig. Input. Given a sentence w = w 1 , • • • , w n , we add <bos> at w 0 and <eos> at w n+1 , and sort the gold spans (for discontinuous parsing we first do the transformation descried earlier) by post-order to get S = {(l i , r i , y i )} i where l i , r i , y i are the left boundary index, right boundary index, and the label index of the i-th span, respectively. Since we adopt an autoregressive selection strategy, we need to inform the model when to stop. We achieve it by appending a special indicator span (0, 0, <end>) to S, and denote the resulting size of S as m. Encoder. We tokenize the input sentence and feed it into pre-trained language models such as BERT We use the fencepost representation and obtain span embedding e i,j for (i, j) using the concatenation of the LSTM-minus feature Decoder. 
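Below is a hedged sketch of the spans-to-tree heuristic described above, restricted to continuous spans with fencepost boundaries for clarity; the discontinuous case additionally merges the <dis>-labelled subspans as described. The function and node-dictionary names are ours, not the paper's.

```python
def crosses(a, b):
    """True if two intervals overlap without one containing the other."""
    (l1, r1), (l2, r2) = a, b
    return l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1

def build_tree(spans, n):
    """spans: iterable of (l, r, label), 0 <= l < r <= n.  Returns a nested node
    dict; crossing spans are simply discarded, as in the post-processing step."""
    # Post-order: smaller end position first, ties broken by smaller width.
    ordered = sorted(spans, key=lambda s: (s[1], s[1] - s[0]))
    accepted = []   # (l, r) of every decoded node, used for the crossing check
    roots = []      # decoded nodes not yet attached to a parent
    for l, r, label in ordered:
        if any(crosses((l, r), span) for span in accepted):
            continue                                   # discard inconsistent spans
        children = [nd for nd in roots if l <= nd["l"] and nd["r"] <= r]
        roots = [nd for nd in roots if nd not in children]
        node = {"label": label, "l": l, "r": r, "children": children}
        accepted.append((l, r))
        roots.append(node)
    # Connect whatever is left to a <TOP> node spanning the whole sentence.
    return {"label": "<TOP>", "l": 0, "r": n, "children": roots}

tree = build_tree([(0, 2, "NP"), (2, 4, "VP"), (0, 4, "S"), (1, 3, "X")], n=4)
print([c["label"] for c in tree["children"]])   # ['S']  ((1, 3, 'X') crosses (0, 2) and is dropped)
```

Because spans are scanned in post-order, every accepted node's children have already been decoded by the time the node itself is built.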
For the decoder we use a unidirectional LSTM network, where d t is the hidden state of the LSTM decoder at time step t; d 0 , e start are randomly initialized trainable vectors; E is the label embedding matrix. Fig. For each step, the decoder hidden state is used as a query to select the target spans and then the decoder hidden state and the selected span are used together to predict the label of the selected span. For span selection, we use deep triaffine attention 6 For each word we take its last subtoken's representation from the last output layer of BERT or XLNet For span labeling, we compute the label score g t ∈ R L as, where L is the size of label set, E is the label embedding matrix. We remark that it is crucial to incorporate decoder state embedding into label prediction in discontinuous parsing. A span can be both a standalone continuous span and a continuous subspan of a discontinuous parent span (e.g, (0, 2) and (4, 7) in Fig. We decompose the training loss L as the span selection loss and the span labeling loss, For inference, we autoregressively decode spans with no constraint until the special span (0, 0) is selected, and use the simple heuristic (introduced in Sect. 2.1) to build the final parse. 3 Experiments Data. For continuous parsing, we conduct experiments on Penn Treebank (PTB, Evaluation. We use the script evalb Implementation details. We refer readers to Appendix A for details. Main result. We re-implement the neural TreeCRF parser Table Table We conduct ablation studies on PTB (with BERT large ) and show the results on Table The influence of the decoder. As previously mentioned, span-based parsing amounts to nonautoregreesive span selection with independent assumptions. To show the importance of incorporating an autoregressive neural decoder, we train a span-based parser with local span binary classification loss The effect of span selection order. It is also possible to use other (span) sorting orders e.g. the pre-order tree traversal order as used in Continuous parsing. Continuous parsing is wellstudied in the literature. Span-based parsing is the most popular paradigm and achieves great success. The main issue of span-based parsing stems from the unreasonable conditional independence assumptions imposed in local span scoring. To resolve this issue, one line of work aims to enhance span representation learning. Constituency parsing with pointer nets. Our model has a very similar neural architecture compared to prior pointer network-based constituency parsers: Seq2seq constituency parsing. Our work is also closely connected to the recent work of In this work, we have presented a simple yet effective method for both continuous and discontinuous constituency parsing. We showed that an autoregressive decoder is more desirable than global CKY decoding in span-based continuous parsing, and with a simple labeling mechanism, we obtained state-of-the-art performance in discontinuous parsing. Though autoregressive span selection effectively weakens the conditional independence assumptions imposed by current span-based parsing methods, this strategy imposes another arguably unreasonable inductive bias of forcing a predefined span selection order. We find using post order performs fairly well but this does not necessarily means it is the best order for span selection, and this work might leave other potentially better order unexplored. Future works might consider using set prediction methods Another issue is regarding time complexity. 
Though our model needs only a linear number of steps (in sentence length) for parsing, each step takes O(n^2) time to select a single span over O(n^2) total spans, making the overall time complexity cubic. We remark that the O(n^2) operation at each step is parallelizable and, with full GPU parallelization, fairly fast in practice, but it could still be problematic when the sentence is extremely long.
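To make the per-step cost concrete, here is a hedged PyTorch sketch of the autoregressive selection loop from the decoder section above. The module layout, dimensions, and the simplified pointer-style scorer are illustrative stand-ins (the paper uses deep triaffine attention and a label embedding matrix), and since the weights are untrained the printed output is arbitrary and may even be empty.

```python
import torch
import torch.nn as nn

class SpanSelector(nn.Module):
    def __init__(self, d_span=256, d_dec=256, n_labels=30):
        super().__init__()
        self.decoder = nn.LSTMCell(d_span, d_dec)            # unidirectional LSTM decoder
        self.q_proj = nn.Linear(d_dec, d_span)                # query for span scoring
        self.labeler = nn.Linear(d_dec + d_span, n_labels)    # label from decoder state + span

    def greedy_decode(self, span_embs, end_idx=0, max_steps=100):
        """span_embs: (num_spans, d_span); row `end_idx` is the special (0, 0) span."""
        h = c = torch.zeros(1, self.decoder.hidden_size)
        prev = torch.zeros(1, span_embs.size(1))               # e_start
        selected = []
        for _ in range(max_steps):
            h, c = self.decoder(prev, (h, c))
            # Every step scores all candidate spans: this is the O(n^2) operation.
            scores = span_embs @ self.q_proj(h).squeeze(0)
            idx = int(scores.argmax())
            if idx == end_idx:                                 # (0, 0, <end>) stops decoding
                break
            feats = torch.cat([h.squeeze(0), span_embs[idx]], dim=-1)
            selected.append((idx, int(self.labeler(feats).argmax())))
            prev = span_embs[idx:idx + 1]                      # feed the selected span back
        return selected

spans = torch.randn(50, 256)   # toy embeddings of candidate spans
print(SpanSelector().greedy_decode(spans)[:3])
```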
1,037
1,372
1,037
Dialogue Natural Language Inference
Consistency is a long standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.
A long standing issue faced by dialogue models is consistency One approach to increasing the consistency of a chit-chat dialogue model was proposed in Separately, the framework of Natural Language Inference (NLI) Despite this expectation, leveraging an NLI model for a downstream task remains an underexplored research direction. An NLI model may improve downstream task performance if properly used, while downstream tasks may yield new datasets or identify issues with existing NLI models, thus expanding the NLI research domain. In this paper, we reduce the problem of consistency in dialogue to natural language inference. We first create a dataset, Dialogue NLI, Then, we demonstrate that NLI can be used to improve the consistency of dialogue models using a simple method where utterances are re-ranked using a NLI model trained on Dialogue NLI. The method results in fewer persona contradictions on three evaluation sets. The evaluation sets can be used independently to automatically evaluate a dialogue model's persona consistency, reducing the need for human evaluation. We discuss several future research directions involving this approach.
Language Inference First, we review the dialogue generation and natural language inference problems as well as the notions of consistency used throughout. Dialogue Generation Dialogue generation can be framed as next utterance prediction, in which an utterance (a sequence of tokens representing a sentence) u t+1 is predicted given a conversation prefix u ≤t . A sequence of utterances is interpreted as a dialogue between agents. For instance, an alternating two-agent dialogue which starts with agent A and ends with agent B is written as Persona-Based Dialogue In persona-based dialogue, each agent is associated with a persona, P A and P B . An utterance is now predicted using the conversation prefix u ≤t and the agents own persona, e.g. P A for agent A. It is assumed that an agent's utterances are conditionally dependent on its persona, which can be interpreted as the utterances being representative of, or reflecting, the persona. A typical approach for representing the persona is to use a set of sentences P = {p 1 , ..., p m }. Consistency A consistency error, or contradiction, occurs when an agent produces an utterance that contradicts one of their previous utterances. Similarly, a persona consistency error, or persona contradiction, occurs when an agent produces an utterance that contradicts a subset of its persona. A contradiction may be a clear logical contradiction, e.g. I have a dog vs. I do not have a dog, but in general is less clearly defined. As a result, in addition to logical contradictions, we interpret a consistency error as being two utterances not likely to be said by the same persona. For instance, "i'm looking forward to going to the basketball game this weekend!" vs. "i don't like attending sporting events", as well as "i'm a lawyer" vs. "i'm a doctor" would be viewed here as con-tradictions, although they are not strict logical inconsistencies. Similarly, a persona consistency error is interpreted here as an utterance which is not likely to be said given a persona described by a given set of persona sentences, in addition to logical contradictions. Natural Language Inference Natural Language Inference (NLI) assumes a dataset i=1 which associates an input pair (s 1 , s 2 ) to one of three classes y ∈ {entailment, neutral, contradiction}. Each input item s j comes from an input space S j , which in typical NLI tasks is the space of natural language sentences, i.e. s j is a sequence of words (w 1 , ..., w K ) where each word w k is from a vocabulary V. The input (s 1 , s 2 ) are referred to as the premise and hypothesis, respectively, and each label is interpreted as meaning the premise entails the hypothesis, the premise is neutral with respect to the hypothesis, or the premise contradicts the hypothesis. The problem is to learn a function f NLI (s 1 , s 2 ) → {E, N, C} which generalizes to new input pairs. Reducing Dialogue Consistency to NLI Identifying utterances which contradict previous utterances or an agent's persona can be reduced to natural language inference by assuming that contradictions are contained in a sentence pair. That is, given a persona T , it is assumed that a dialogue contradiction for agent A is contained in an utterance pair (u A i , u A j ), and a persona contradiction is contained in a pair (u A i , p A k ). Similarly, we assume that entailments and neutral interactions, defined in Section 3, are contained in sentence pairs. We do not consider relationships which require more than two sentences to express. 
Under this assumption, we can use a natural language inference model f NLI to identify entailing, neutral, or contradicting utterances. Section 3 proposes a dialogue-derived dataset for training f NLI , and Section 4 proposes a method which incorporates f NLI with a dialogue model for next utterance prediction. The Dialogue NLI dataset consists of sentence pairs labeled as entailment (E), neutral (N), or contradiction (C). Sentences Sentences originate from a two-agent persona-based dialogue dataset. A dialogue between agents A and B consists of a sequence of utterances u A 1 , u B 2 , u A 3 , u B 4 , ..., u B T , and each agent has a persona represented by a set of persona sentences {p A 1 , ..., p A m A } and {p B 1 , ..., p B m B }. The Dialogue NLI dataset consists of (u i , p j ) and (p i , p j ) pairs In order to determine labels for our dataset, we require human annotation of the utterances and persona sentences in PersonaChat, as the original dataset does not contain this information. We perform such annotation by first associating a human-labeled triple (e 1 , r, e 2 ) with each persona sentence, and a subset of all the utterances, detailed in 3.2. Each triple contains the main fact conveyed by a persona sentence, such as (i, have pet, dog) for the persona sentence I have a pet dog, or a fact mentioned in an utterance, such as No, but my dog sometimes does. Persona sentences and utterances are grouped by their triple (e.g. see Figure Entailment Each unique pair of sentences that share the same triple are labeled as entailment. Neutral Neutral pairs are obtained with three different methods. First, a miscellaneous utterance is a (u, p) pair of which u is not associated with any triple. This includes greetings (how are you today?) and sentences unrelated to a persona sentence (the weather is ok today), so such utterances are assumed to be neutral with respect to persona sentences. The second method, persona pairing, takes advantage of the fact that each ground-truth persona is typically neither redundant nor contradictory. A persona sentence pair (p, p ) is first selected from a persona if p and p do not share the same triple. Then each sentence associated with the same triple as p is paired with each sentence associated with the same triple as p . Lastly, we specify relation swaps (r, r ) for certain relations (see Appendix A.2) whose triples are assumed to represent independent facts, such as have vehicle and have pet. A sentence pair, whose first sentence is associated with a triple (•, r, •) and whose second sentence has triple (•, r , •), is labeled as neutral. See Table Contradiction We obtain contradictions using three methods. See Figure First, the relation swap method is used by specifying contradicting relation pairs (r, r ) (see Appendix A.2), such as (like activity, dislike), then pairing each sentence associated with the triple (e 1 , r, e 2 ) with each sentence associated with (e 1 , r , e 2 ). Similarly, an entity swap consists of specifying relations, e.g., physical attribute, that would yield a contradiction when the value of e 2 is changed to a different value e 2 , e.g., short → tall (see Appendix A.3). Sentences associated with (e 1 , r, e 2 ) are then paired with sentences associated with (e 1 , r, e 2 ). Finally, a numeric contradiction is obtained by first selecting a sentence which contains a number that appears in the associated triple (see Table Each persona sentence is annotated with a triple (e 1 , r, e 2 ) using Amazon Mechanical Turk task. 
We first define a schema consisting of category relation category rules, such as person have pet animal , where the relation comes from a fixed set of relation types R, listed in Appendix A.1. Given a sentence, the annotator selects a relation r from a drop-down populated with the values in R. The annotator then selects the categories and values of the entities e 1 and e 2 using drop-downs that are populated based on the schema rules. An optional drop-down contains numeric values for annotating entity quantities (e.g., 3 brothers). If selected, the numeric value is concatenated to the front of the entity value. The annotator can alternatively input an out-of-schema entity value in a text-box. Using this method, each of the 10,832 persona sentences is annotated with a triple (e 1 , r, e 2 ), where r ∈ R, e 1 ∈ E 1 , and e 2 ∈ E 2 . Here E 1 is the set of all annotated e 1 from the drop-downs or the text-box, and E 2 is similarly defined. Finally, utterances are associated with a triple as follows. Let p be a persona sentence with triple (e 1 , r, e 2 ). We start with all utterances, U , from agents that have p in their persona. An utterance u ∈ U is then associated with the triple (e 1 , r, e 2 ) and persona sentence p when e 2 is a sub-string of u, or word similarity Table We now present a method which demonstrates that natural language inference can used to improve the consistency of dialogue agents. Candidate utterances are re-ranked based on whether the candidate is predicted to contradict a persona sentence. If the NLI model predicts that a candidate contradicts a persona sentence, the candidate's score is penalized, with the penalty weighted by the NLI model's confidence The NLI model is then run on each (u i , p j ) pair, predicting a label y i,j ∈ {E, N, C} with confidence c i,j . A contradiction score is computed for each candidate as: is the highest confidence, c i,j , out of the contradicting (u i , p j ). and the candidates are sorted according to s re-rank . Hyper-parameters λ and k control the NLI model's influence in re-ranking. For example, if the top candidate has a contradiction score of 1.0, then with λ = 1, it will be moved to the k'th position in the ranking. λ = 0 corresponds to no re-ranking. 5.1 Experiment 1: NLI Models Many recently proposed NLI models can be categorized into sentence encoding based methods of the form f MLP (g enc (s 1 ), g enc (s 2 )), and attention-based methods of the form f MLP (g attn (s 1 , s 2 )) For the sentence encoding method, we use In-ferSent This experiment evaluates the effect of the reranking method from Section 4 on the dialogue model's persona consistency. To study the effect of re-ranking on persona consistency, we form evaluation sets which contain next-utterances which are likely to yield persona contradiction or entailment, as follows. Evaluation Sets Each example is formed by first finding a next-utterance u t+1 in the Persona-Chat validation set which has an associated triple (e 1 , r, e 2 ) of interest, e.g. (i, like music, country). If a sentence in the agent's profile P has triple (e 1 , r, e 2 ), we form the validation example (P, u ≤t , u t+1 ). Figure Each example is associated with candidates U , consisting of the ground-truth utterance u t+1 , 10 entailment candidates with the same triple as u t+1 , 10 contradicting candidates with a different triple than that of u t+1 , and 10 random candidates. The dialogue model must avoid ranking a contradicting candidate highly. 
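Since the scoring function in Section 4 is abridged in the text, the following is only one plausible, hedged reading of the re-ranking rule: a candidate at original rank i with contradiction score c is re-scored as i + λ·k·c and the candidates are re-sorted in ascending order, which is consistent with the example given above (a top-ranked candidate with c = 1.0 and λ = 1 drops to roughly the k-th position). Here `nli_model` is a placeholder for any trained f_NLI returning a label and a confidence.

```python
def rerank(candidates, persona, nli_model, lambda_=1.0, k=10):
    """candidates: utterances in the dialogue model's original ranking order.
    persona: the agent's persona sentences.
    nli_model(premise, hypothesis) -> (label, confidence): a trained f_NLI."""
    rescored = []
    for rank, cand in enumerate(candidates):
        # Contradiction score: highest confidence among contradicting (u_i, p_j) pairs.
        c = 0.0
        for p in persona:
            label, conf = nli_model(premise=cand, hypothesis=p)
            if label == "contradiction":
                c = max(c, conf)
        rescored.append((rank + lambda_ * k * c, cand))
    rescored.sort(key=lambda x: x[0])
    return [cand for _, cand in rescored]
```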
Specifically, suppose the ground-truth nextutterance u t+1 is associated with triple (e 1 , r, e 2 ), e.g., (i, have pet, dog). Entailment candidates are utterances u from the validation or training sets such that u is associated with triple (e 1 , r, e 2 ). Since by construction a sentence in the profile also has triple (e 1 , r, e 2 ), these candidates entail a profile sentence. A contradicting candidate is an utterance associated with a specified contradicting triple (e 1 , r , e 2 ), e.g., (i, not have, dog). We construct three evaluation sets, Haves, Likes, and Attributes using this process. Metrics We introduce variants of the ranking metric Hits@k, called Contradict@k and Entail@k. Contradict@k measures the proportion of top-k candidates returned by the model which contradict candidates, averaged over examples. This measures the propensity of a model to highly rank contradictions. Contradiction@1 is the proportion of consistency errors made by the model. For this metric lower values are better, in contrast to Hits@k. Entail@k measures the ment candidates share the same underlying triple as the ground-truth next utterance, so this metric rewards highly ranked candidates that convey similar meaning and logic to the ground-truth utterance. Thus it can be interpreted as a more permissive version of Hits@k. Results Table This experiment evaluates the effect of the proposed NLI re-ranking method on a dialogue model's consistency, where consistency is judged by human annotators in an interactive personabased dialogue setting. Experiment Setup We use ParlAI Scoring and Calibration Following a conversation, an annotator is shown the conversation and the model's persona, and assigns three scores: an overall score of how well the model represented its persona ({1,2,3,4,5}), a marking of each model utterance that was consistent with the model's persona ({0,1}), and a marking of each model utterance that contradicted a previous utterance or the model's persona ({0,1}). We use Bayesian calibration to adjust for annotator bias, following Table In this paper, we demonstrated that natural language inference can be used to improve performance on a downstream dialogue task. To do so, we created a new dialogue-derived dataset called Dialogue NLI, a re-ranking method for incorporating a Dialogue NLI model into a dialogue task, and an evaluation set which measures a model's persona consistency. The dataset offers a new domain for natural language inference models, and suggests avenues such as devising alternative methods for using natural language inference components in downstream tasks. Future work may also incorporate contradiction information into the dialogue model itself, and extend to generic contradictions. Neutral relation swaps include (have x, have y), e.g. have pet, have sibling. Additional (have * A, not have B) swaps were defined for entities A which are a super-type of B, namely (A,B) pairs ({pet, animal}, {dog, cat}), ({sibling}, {brother, sister}), ({child, kid}, {son, daughter}), ({vehicle}, {car, truck}); this includes sentence pairs such as "i have a sibling", "i do not have a sister". Similarly, (not have B, have * A) swaps were defined using the (A, B) pairs above. 
For contradictions, swapping entities for the following relation types was assumed to yield a contradiction: attend school, employed by company, employed by general, favorite animal, favorite book, favorite color, favorite drink, favorite food, favorite hobby, favorite movie, favorite music, favorite music artist, favorite place, favorite season, favorite show, favorite sport, gender, has profession, job status, live in citystatecountry, marital status, nationality, place origin, previous profession, school status, want job. Additionally, for physical attribute, misc attribute, or other relations, an en-tity swap was done using all WordNet antonym pairs in the personality trait and person attribute entity categories, as well as the swaps ({blonde}, {brunette}), ({large}, {tiny}), ({carnivore, om-nivore}, {vegan, vegetarian}), ({depressed}, {happy, cheerful}), Experiment 1 The InferSent model used the Adam (Kingma and Lei Ba, 2014) optimizer with learning rate 0.001, and otherwise used the hyperparameters from the open source implementation 1-5 star rating Let M i ∼ N (µ i , 1 2 ) be the unobserved, underlying quality of the i-th approach, where µ i ∼ U(1, 5). Let A j ∼ N (0, 1 2 ) be the unobserved annotator bias, indicating whether the j-th annotator is more or less generous. We observe a score given the j-th annotator to the i-th approach, and this score follows a normal distribution with its mean given by the sum of the underlying model score and annoator bias, i.e., S ij ∼ N (M i + A j , 1 2 ). We observe some of these scores, and given these scores, the goal is to infer E[M i ] and V[M i ] for all i. Utterance-pair selection Each annotator is asked to label each utterance-pair as consistent and/or contradictory with respect to the personas. In this case, the unobserved, underlying model score is modelled as a pre-sigmoid normal variable, i.e., M i ∼ N (0, 1 2 ), and the annotator bias as a usual normal variable, i.e., A j ∼ N (0, 1 2 ), similarly to the 1-5 star rating case above. We however also introduce a turn bias T k ∼ N (0, 1 2 ) to incorporate the potential degradation of a neural dialogue model as the conversation lengthens. An observed score for each utterance pair then follows a Bernoulli distribution with its mean given as the sigmoid of the sum of these three latent variables, i.e., S ijk ∼ B(sigmoid(M i +A j +T k )). The
521
1,151
521
Beyond the Granularity: Multi-Perspective Dialogue Collaborative Selection for Dialogue State Tracking
In dialogue state tracking, dialogue history is a crucial source of information, and its utilization varies between different models. However, no matter how the dialogue history is used, each existing model uses its own fixed dialogue history throughout the entire state tracking process, regardless of which slot is being updated. Clearly, updating different slots in different turns requires different parts of the dialogue history. Using the same dialogue contents for every slot may therefore provide insufficient or redundant information for particular slots, which affects the overall performance. To address this problem, we devise DiCoS-DST, which dynamically selects the relevant dialogue contents corresponding to each slot for state updating. Specifically, it first retrieves turn-level utterances of the dialogue history and evaluates their relevance to the slot from a combination of three perspectives: (1) its explicit connection to the slot name; (2) its relevance to the current turn dialogue; (3) Implicit Mention Oriented Reasoning. These perspectives are then combined to yield a decision, and only the selected dialogue contents are fed into the State Generator, which explicitly minimizes the distracting information passed to the downstream state prediction. Experimental results show that our approach achieves new state-of-the-art performance on MultiWOZ 2.1 and MultiWOZ 2.2, and achieves superior performance on multiple mainstream benchmark datasets (including Sim-M, Sim-R, and DSTC2).
Task-oriented dialogue systems have recently attracted growing attention and achieved substantial progress. Dialogue state tracking (DST) is a core component, where it is responsible for interpreting user goals and intents and feeding hotel-type:
... Figure In dialogue state tracking, dialogue history is a crucial source material. Recently, granularity has been proposed to quantify the utilization of dialogue history Furtherly, granularity means directly working on all dialogue contents from a particular turn to the current turn, regardless of the fact that there are still dialogue contents that are not relevant to the slot. Therefore, if it is possible to break the limitation of granularity and to dynamically select relevant dialogue contents corresponding to each slot, the selected dialogue contents as input will explicitly minimize distracting information being passed to the downstream state prediction. To achieve this goal, we propose a DiCoS-DST to fully exploit the utterances and elaborately select the relevant dialogue contents corresponding to each slot for state updating. Specifically, we retrieve turn-level utterances of dialogue history and evaluate their relevance to the slot from a combination of three perspectives. First, we devise an SN-DH module to touch on the relation of the dialogue and the slot name, which straightforward reflects the relevance. Second, we propose a CT-DH module to explore the dependency between each turn in the dialogue history and the current turn dialogue. The intuition behind this design is that the current turn dialogue is crucial. If any previous turn is strongly related to the current turn dialogue, it can be considered useful as dependency information for slot updating. Third, we propose an Implicit Mention Oriented Reasoning module to tackle the implicit mention (i.e., coreferences) problem that commonly exists in complex dialogues. Specifically, we build a novel graph neural network (GNN) to explicitly facilitate rea-soning over the turns of dialogue and all slot-value pairs for better exploitation of the coreferential relation information. After the evaluation of these three modules, we leverage a gate mechanism to combine these perspectives and yield a decision. Finally, the selected dialogue contents are fed into State Generator to enhance their interaction, form a new contextualized sequence representation, and generate a value using a hybrid method. We evaluate the effectiveness of our model on most mainstream benchmark datasets on taskoriented dialogue. Experimental results show that our proposed DiCoS-DST achieves new state-ofthe-art performance on both two versions of the most actively studied dataset: MultiWOZ 2.1 Our contributions in this work are three folds: • We propose a Multi-Perspective Dialogue Collaborative Selector module to dynamically select relevant dialogue contents corresponding to each slot from a combination of three perspectives. This module can explicitly filter the distracting information being passed to the downstream state prediction. • We propose Implicit Mention Oriented Reasoning and implement it by building a GNN to explicitly facilitate reasoning and exploit the coreferential relation information in complex dialogues. • Our DiCoS-DST model achieves new stateof-the-art performance on the MultiWOZ 2.1, MultiWOZ 2.2, Sim-M, and Sim-R datasets. There has been a plethora of research on dialogue state tracking. Traditional dialogue state trackers relied on a separate Spoken Language Understanding (SLU) module the current turn dialogue " " " " " " " " The selected dialogue content will be utilized to jointly update the dialogue state. 
Cascaded Context Refinement After acquiring a nearly noise-free set U D of selected dialogue turns, we consider that directly using their representations as inputs may ignore the cross attention between them since they are used as a whole. As a result, we concatenate these dialogue utterances together to form a new input sequence Especially, we inject an indicator token "⟨t⟩" before each turn of dialogue utterance to get aggregated turn embeddings for the subsequent classification-based state prediction. Then we feed this sequence into a single PrLM to obtain the contextualized output representation. Multi-Head Self-Attention history. Some DST models obtain each slot value in the dialogue state by inquiring about a part or all of the dialogue history On the other hand, dialogue state tracking and machine reading comprehension (MRC) have similarities in many aspects The architecture of DiCoS-DST is illustrated in Figure We employ the representation of the previous turn dialogue state B T -1 concatenated to the representation of each turn dialogue utterances D t as input: where [CLS] t is a special token added in front of every turn input. The representation of the previous turn dialogue state is T -1 and [VALUE] j T -1 are special tokens that represent the slot name and the slot value at turn T -1, respectively. We donate the representation of the dialogue at turn t as , where R t is the system response and U t is the user utterance. ; is a special token used to mark the boundary between R t and U t , and [SEP] is a special token used to mark the end of a dialogue turn. Then a pre-trained language model (PrLM) will be adopted to obtain contextualized representation for the concatenated input sequence E t . We attach a two-way classification module to the top of the Encoder output. It predicts which slots require to be updated in the current turn. The subsequent modules will only process the selected slots, while the other slots will directly inherit the slot values from the previous turn. We inject this module because whether a slot requires to be updated indicates whether the current turn dialogue is significant for this slot. For CT-DH of the subsequent Multi-Perspective Collaborative Selector, the great importance of the current turn dialogue is a prerequisite. A more detailed explanation will be given in Section 3.3. We employ the same mechanism as We define the set of the selected slot indices as U s = {j|SUP(S j ) = update}. For each slot S j (j ∈ U s ) selected to be updated, SN-DH, CT-DH, and Implicit Mention Oriented Reasoning modules are proposed to evaluate dialogue relevance and aggregate representations from three perspectives. Then a gated fusion mechanism is implemented to perform the dialogue selection. SN-DH SN-DH (Slot Name -Dialogue History) aims to explore the correlation between slot names and each turn of the dialogue history. For slot S j , the slot name is straightforward explicit information. Therefore, the correlation with the slot name directly reflects the importance of the dialogue turn. We take the slot name presentation [SLOT] j T -1 as the attention to the t-th turn dialogue representation D t . The output α j t = softmax(D t ([SLOT] j T -1 ) ⊺ ) represents the correlation between each position of D t and the j-th slot name at turn t. Then we get the aggregated dialogue representation h t SN-DH = (α j t ) ⊺ D t , which will participate in the subsequent fusion as the embedding of the t-th turn dialogue in this perspective. 
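A minimal numpy sketch of the SN-DH aggregation just described follows; shapes are illustrative and the representations are random stand-ins for the learned encoder outputs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sn_dh(D_t, slot_embedding):
    """D_t: (seq_len, d) token representations of dialogue turn t.
    slot_embedding: (d,) representation of [SLOT]_j from the previous state.
    Returns h_t^{SN-DH}, the slot-aware aggregated turn representation of shape (d,)."""
    alpha = softmax(D_t @ slot_embedding)   # relevance of each position to the slot name
    return alpha @ D_t                      # attention-weighted sum over positions

rng = np.random.default_rng(0)
h = sn_dh(rng.normal(size=(20, 64)), rng.normal(size=64))
print(h.shape)   # (64,)
```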
CT-DH As aforementioned, a slot that needs to be updated in the current turn means that the current turn dialogue is most relevant to this slot. In this case, if the dialogue content of any other turn contains the information that the current turn dialogue highly depends on, it can also be considered useful. Based on this consideration, we devise a CT-DH (Current Turn -Dialogue History) module to explore this association. Specifically, we build a multi-head self-attention (MHSA) layer on top of the [CLS] tokens generated from different turns of dialogue to enhance inter-turn interaction. The MHSA layer is defined as: where Q, K, and V are linear projections from [CLS] embeddings of each turn of dialogue, representing attention queries, key and values. We then append an attention layer between the output representation of the current turn dialogue and each turn of dialogue history to capture interactions between them: h t CT-DH will participate in the subsequent fusion as an aggregated representation of the t-th dialogue in this perspective. Implicit Mention Oriented Reasoning Handling a complex dialogue usually requires addressing implicit mentions (i.e., coreferences). As shown in Figure , respectively. Then we design four types of edges to build the connections among graph nodes: 1) Add an edge between N j S-V and N T D (red line in Figure The motivation for this design is that we first explore the relation between the slot to be updated and other slot-value pairs based on the current turn dialogue. Then we use other slot-value pairs as media to establish relations to their corresponding dialogue turns. We add the fourth type of edges to represent the auxiliary relationship of slots that belong to the same domain. We use multi-relational GCN with gating mechanism as in N r i is the neighbors of node i with edge type r, R is the set of all edge types, and h l n is the node representation of node n in layer l. |• | indicates the size of the neighboring set. Each of f r , f s , f g can be implemented with an MLP. Gate control g l i is a vector consisting of values between 0 and 1 to control the amount information from computed update u l i or from the original h l i . Function σ denotes a non-linear activation function. After the message passes on the graph with L hops, we take the final representation of the t-th turn dialogue node N t D as the aggregated representation h t IMOR in this perspective. The representations h t SN-DH , h t CT-DH , and h t IMOR of the t-th turn dialogue enter this module for fusion and ranking. To balance the information from multiple perspectives, we leverage a gate mechanism to compute a weight to decide how much information from each perspective should be combined. It is defined as follows: After the fusion, an MLP layer is followed, and then we take the dialogues of the top k ranked turns as the selected dialogue contents. It is worth mentioning that, unlike the state update predictor, since there is no ground-truth label of the dialogue turns that should be selected corresponding to each slot, we take this module and the following state generator as a whole and train it under the supervision of the final dialogue state label. We mark each selected dialogue turn to make the gradient of the state generator losses only backpropagate to the marked turns to ensure the effectiveness of supervision. 
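Since the gate definition is abridged above, the following is a hedged, generic sketch of one common form of gated fusion over the three perspective representations, followed by the MLP scoring and top-k turn selection; all weights are random placeholders for the learned parameters, and the exact formulation in the model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 64                                   # number of dialogue turns, hidden size
h_sn, h_ct, h_im = (rng.normal(size=(T, d)) for _ in range(3))   # per-perspective turn reps

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One scalar gate per perspective and per turn, normalised across perspectives.
w_g = rng.normal(size=(3, d))
gates = sigmoid(np.stack([h_sn @ w_g[0], h_ct @ w_g[1], h_im @ w_g[2]], axis=1))  # (T, 3)
gates = gates / gates.sum(axis=1, keepdims=True)

fused = gates[:, 0:1] * h_sn + gates[:, 1:2] * h_ct + gates[:, 2:3] * h_im        # (T, d)

# MLP scoring head, then keep the top-k turns as the selected dialogue contents.
w1, w2 = rng.normal(size=(d, d)), rng.normal(size=d)
scores = np.tanh(fused @ w1) @ w2                                                 # (T,)
k = 2
selected_turns = np.argsort(-scores)[:k]
print(sorted(selected_turns.tolist()))
```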
We first attempt to obtain the value using the extractive method from representation The position of the maximum value in p and q will be the start and end predictions of the slot value. If this prediction does not belong to the candidate value set of S , we use the representation of C C = ⟨t⟩ 1 ⊕ ⟨t⟩ 2 ⊕ . . . ⊕ ⟨t⟩ T _S ⊕ ⟨t⟩ T to get the distribution and choose the candidate slot value corresponding to the maximum value: We define the training objectives of two methods as cross-entropy loss: where p and q are the targets indicating the proportion of all possible start and end, and ŷ is the target indicating the probability of candidate values. We conduct experiments on most of the mainstream benchmark datasets on task-oriented dialogue, including MultiWOZ 2.1, MultiWOZ 2.2, Sim-R, Sim-M, and DSTC2. MultiWOZ 2.1 and Multi-WOZ 2.2 are two versions of a large-scale multidomain task-oriented dialogue dataset. It is a fullylabeled collection of human-human written dialogues spanning over multiple domains and topics. Sim-M and Sim-R are multi-turn dialogue datasets in the movie and restaurant domains, respectively. DSTC2 is collected in the restaurant domain. We use joint goal accuracy and slot accuracy as evaluation metrics. Joint goal accuracy refers to the accuracy of the dialogue state in each turn. Slot accuracy only considers slot-level accuracy. We compare the performance of DiCoS-DST with the following baselines: TRADE encodes the dialogue and decodes the value using a copyaugmented decoder We employ a pre-trained ALBERT-large-uncased model Table Different PrLMs We employ different pretrained language models with different scales as the backbone for training and testing on MultiWOZ 2.2. combination of perspectives is the combination of SN-DH and CT-DH. Despite the simplicity of the mechanism of SN-DH, the association with the slot name straightforward reflects the importance of the dialogue. To solve the common problem of coreferences in complex dialogues, the Implicit Mention Oriented Reasoning module improves the performance close enough to the CT-DH. We investigate the effect of the different edges in the GNN. As shown in Table DiCoS-DST filters out some distracting information by selecting relevant dialogues, but is it really beyond the granularity? To investigate it, we simulate the granularity and compare it with DiCoS-DST. Specifically, we use the maximum granularity (i.e., the number of dialogue turns spanning from the selected furthest dialogue turn to the current turn) and capture the corresponding dialogue contents as input to State Generator. As shown in Table Table We introduce an effective DiCoS-DST that dynamically selects the relevant dialogue contents corresponding to each slot from a combination of three perspectives. The dialogue collaborative selector module performs a comprehensive selection for each turn dialogue based on its relation to the slot name, its connection to the current turn dialogue, and the implicit mention oriented reasoning. Then only the selected dialogue contents are fed into State Generator, which explicitly minimizes the distracting information passed to the downstream state prediction. Our DiCoS-DST model achieves new state-of-the-art performance on the MultiWOZ benchmark, and achieves competitive performance on most other DST benchmark datasets. The potential relationship among the above perspectives is a promising research direction, and we will explore it for more than dialogue selection in the future.
1,466
246
1,466
Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation
Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness. In this paper, we show that each embedding model captures more information than directly apparent. A linear transformation that adjusts the similarity order of the model without any external resource can tailor it to achieve better results in those aspects, providing a new perspective on how embeddings encode divergent linguistic information. In addition, we explore the relation between intrinsic and extrinsic evaluation, as the effect of our transformations in downstream tasks is higher for unsupervised systems than for supervised ones.
Word embeddings have recently become a central topic in natural language processing. Several unsupervised methods have been proposed to efficiently train dense vector representations of words While there is still an active research line to better understand these models from a theoretical perspective Nevertheless, the above argument does not formalize what "similar words" means, and it is not entirely clear what kind of relationships an embedding model should capture in practice. For instance, some authors distinguish between genuine similarity In this paper, we propose a new method to tailor any given set of embeddings towards a specific end in these axes. Our method is inspired by the work on first order and second order cooccurrences 1. We propose a linear transformation with a free parameter that adjusts the perfor-mance of word embeddings in the similarity/relatedness and semantics/syntax axes, as measured in word analogy and similarity datasets. 2. We show that the performance of embeddings as used currently is limited by the impossibility of simultaneously surfacing divergent information (e.g. the aforementioned axes). Our method uncovers the fact that embeddings capture more information than what is immediately obvious. 3. We show that standard intrinsic evaluation offers a static and incomplete picture, and complementing it with the proposed method can offer a better understanding of what information an embedding model truly encodes. 4. We show that the effect of our method also carries out to downstream tasks, but its effect is larger in unsupervised systems directly using embedding similarities than in supervised systems using embeddings as input features, as the latter have enough expressive power to learn the optimal transformation themselves. All in all, our work sheds light in how word embeddings represent divergent linguistic information, analyzes the role that this plays in intrinsic evaluation and downstream tasks, and opens new opportunities for improvement. The remaining of this paper is organized as follows. We describe our proposed post-processing in Section 2. Section 3 and 4 then present the results in intrinsic and extrinsic evaluation, respectively. Section 5 discusses the implications of our work on embedding evaluation and their integration in downstream tasks. Section 6 presents the related work, and Section 7 concludes the paper.
Let X be the matrix of word embeddings in a given language, so that X i * is the embedding of the ith word in the vocabulary. Such embeddings are meant to capture the meaning of their corresponding words in such a way that the dot product sim(i, j) = X i * • X j * gives some measure of the similarity between the ith and the jth word Inspired by first order and second order cooccurrences More formally, we define the second order similarity matrix M 2 (X) = XX T XX T , so that sim 2 (i, j) = M 2 (X) ij . Note that M 2 (X) = M (M (X)), so second order similarity can be seen as the similarity of the similarities across all words, which is in line with the intuitive definition given above. More generally, we could define the nth order similarity matrix as M n (X) = (XX T ) n , so that sim n (i, j) = M n (X) ij . We next show that, instead of changing the similarity measure, one can change the word embeddings themselves through a linear transformation so they directly capture this second or nth order similarity. Let X T X = QΛQ T be the eigendecomposition of X T X, so that Λ is a positive diagonal matrix whose entries are the eigenvalues of X T X and Q is an orthogonal matrix with their respective eigenvectors as columns More generally, we can define W α = QΛ α , where α is a parameter of the transformation that adjusts the desired similarity order. Following the above definitions, such transformation would lead to first order similarity as defined for the original embeddings when α = 0, second order similarity when α = 0.5 and, in general, nth order similarity when α = (n-1)/2, that is, M (XW 0 ) = M (X), M (XW 0.5 ) = M 2 (X) and M (XW (n-1)/2 ) = M n (X). Note that the proposed transformation is relative in nature (i.e. it does not make any assumption on the similarity order captured by the embeddings it is applied to) and, as such, negative values of α can also be used to reduce the similarity order. For instance, let X be the second order transformed embeddings of some original embeddings Z, so X = ZW 0.5 , where W 0.5 was computed over Z. It can be easily verified that W -0.25 , as computed over X, would recover back the original embeddings, that is, M (XW -0.25 ) = M (Z). In other words, assuming that the embeddings X capture some second order similarity, it is possible to transform them so that they capture the corresponding first order similarity, and one can easily generalize this to higher order similarities by simply using smaller values of α. All in all, this means that the parameter α can be used to either increase or decrease the similarity order that we want our embeddings to capture. Moreover, even if the similarity order is intuitively defined as a discrete value, the parameter α is continuous, meaning that the transformation can be smoothly adjusted to the desired level. In order to better understand the effect of the proposed post-processing in the two similarity axes introduced in Section 1, we adopt the widely used word analogy and word similarity tasks, which offer specific benchmarks for semantics/syntax and similarity/relatedness, respectively. More concretely, word analogy measures the accuracy in answering questions like "what is the word that is similar to France in the same sense as Berlin is similar to Germany?" (semantic analogy) or "what is the word that is similar to small in the same sense as biggest is similar to big?" 
(syntactic analogy) using simple word vector arithmetic On the other hand, word similarity measures the correlation So as to make our evaluation more robust, we run the above experiments for three popular embedding methods, using large pre-trained models released by their respective authors as follows: Word2vec Table The graphs in Figure Apart from that, the results also show that, while the general trend is the same for all embedding models, their axes seem to be centered at different points. This is clearly reflected in the optimal values of α for semantic and syntactic analogies (-0.65 and 0.10 for word2vec, -0.85 and -0.10 for glove, and -0.45 and 0.25 for fasttext): the distance between them is very similar in all cases (either 0.70 or 0.75), yet they are centered at different points. This suggests that different embedding models capture a different similarity order and, therefore, obtain a different balance between semantic and syntactic information in the original setting (α = 0), yet our method is able to adjust it to the desired level in a post-processing step. As the results in Table Following the discussion in Section 3.1, this behavior seems clearly connected with the differences in the default similarity order captured by different embedding models. In fact, the optimal word2vec glove fasttext word2vec glove fasttext values of α reflect the same trend observed for word analogy, with glove having the smallest values with -0.85 and -0.45, followed by word2vec with -0.70 and -0.30, and fasttext with -0.25 and -0.15. Moreover, the effect of this phenomenon is more dramatic in this case: fasttext achieves significantly better results than glove for the original embeddings (a difference of nearly 10 and 3.5 points for SimLex-999 and MEN, respectively), but this proves to be an illusion after adjusting the similarity order with our post-processing, as both models get practically the same results with differences below 0.1 points. At the same time, although less pronounced than with semantic/syntactic analogies In order to better understand the effect of the proposed post-processing in downstream systems, we adopt the STS Benchmark dataset on semantic textual similarity As the results in Table Our experiments reveal that standard word embeddings encode more information than what is immediately obvious, yet their potential performance is limited by the impossibility of optimally surfacing divergent linguistic information at the same time. This can be clearly seen in the word analogy experiments in Section 3.1, where we are able to achieve significant improvements over the original embeddings, yet every improvement in semantic analogies comes at the cost of a degradation in syntactic analogies and vice versa. At the same time, our work shows that the effect of this phenomenon is different for unsupervised systems that directly use embedding similarities and supervised systems that use pre-trained embeddings as features, as the latter have enough expressive power to learn the optimal balance themselves. We argue that our work thus offers a new perspective on how embeddings encode divergent linguistic information and its relation with intrinsic and extrinsic evaluation as follows: • Standard intrinsic evaluation offers a static and incomplete picture of the information encoded by different embedding models. 
This can be clearly seen in the word similarity experiments in Section 3.2, where fasttext achieves significantly better results than glove for the original embeddings, yet the results for their best post-processed embeddings are at par. As a consequence, if one simply looks at the results of the original embeddings, they might wrongly conclude that fasttext is vastly superior to glove at encoding semantic similarity information, but this proves to be a mere illusion after applying our post-processing. As such, intrinsic evaluation combined with our post-processing provides a more complete and dynamic picture of the information that is truly encoded by different embedding models. • Supervised systems that use pre-trained em-beddings as features have enough expressive power to learn the optimal similarity order for the task in question. While there are practical aspects that interfere with this theoretical consideration, our experiments confirm that the proposed post-processing has a considerably smaller effect in a prototypical deep learning system. This reinforces the previous point that standard intrinsic evaluation offers an incomplete picture, as it is severely influenced by an aspect that has a much smaller effect in typical downstream systems. For that reason, using our proposed post-processing to complement intrinsic evaluation offers a better assessment of how each embedding model might perform in a downstream task. • Related to the previous point, while our work shows that the default similarity order captured by embeddings has a relatively small effect in larger learning systems as they are typically used, this is not necessarily the best possible integration strategy. If one believes that a certain similarity order is likely to better suit a particular downstream task, it would be possible to design integration strategies that encourage it to be so during training, and we believe that this is a very interesting research direction to explore in the future. For instance, one could design regularization methods that penalize large deviations from this predefined similarity order. There have been several proposals to learn word embeddings that are specialized in certain linguistic aspects. For instance, Other authors have also proposed postprocessing methods for word embeddings with different motivations. For instance, Finally, Labutov and Lipson (2013) perform unconstrained optimization with proper regularization to specialize embeddings in a supervised task. The proposed method is also connected to a similar parameter found in traditional count-based distributional models as introduced by Caron (2001) and further analyzed by Finally, there are others authors that have also pointed limitations in the intrinsic evaluation of word embeddings. For instance, In this paper, we propose a simple post-processing to tailor word embeddings in the semantics/syntax and similarity/relatedness axes without the need of additional resources. By measuring the effect of our post-processing in word analogy and word similarity, we show that standard embedding models are able to encode more information than what is immediately obvious, yet their potential performance is limited by the impossibility of optimally surfacing divergent linguistic information. 
We analyze the different roles that this phenomenon plays in intrinsic and extrinsic evaluation, concluding that intrinsic evaluation offers a static picture that can be complemented with the proposed post-processing, and calling for better integration strategies in downstream tasks. We release our implementation at In the future, we would like to explore better integration strategies for machine learning systems that use pre-trained embeddings as features, so that downstream systems can better benefit from first adjusting the embeddings in the semantics/syntax and similarity/relatedness axes. At the same time, we would like to extend our analysis to more specialized embedding models.
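To make the transformation itself concrete, the snippet below is a minimal NumPy sketch of the similarity-order post-processing: it computes the eigendecomposition X^T X = QΛQ^T and applies W_α = QΛ^α to the embedding matrix. This is an illustration under assumed toy data rather than the released implementation; the function name, the random matrix, and the seed are ours.

```python
import numpy as np

def similarity_order_transform(X, alpha):
    """Return X @ W_alpha with W_alpha = Q Lambda^alpha, where X^T X = Q Lambda Q^T."""
    eigvals, Q = np.linalg.eigh(X.T @ X)      # eigendecomposition of the symmetric Gram matrix
    eigvals = np.clip(eigvals, 0.0, None)     # guard against tiny negative round-off
    W = Q * (eigvals ** alpha)                # same as Q @ np.diag(eigvals ** alpha)
    return X @ W

# Illustrative check: alpha = 0.5 yields second-order similarity M2(X) = X X^T X X^T.
X = np.random.default_rng(0).normal(size=(1000, 50))   # toy "embedding matrix"
X2 = similarity_order_transform(X, alpha=0.5)
assert np.allclose(X2 @ X2.T, X @ X.T @ X @ X.T)
# Note: negative alpha (lowering the similarity order) requires strictly positive eigenvalues.
```

As in the experiments above, α can then be swept over a range of values to adjust the embeddings along the semantics/syntax and similarity/relatedness axes.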
The GATE Crowdsourcing Plugin: Crowdsourcing Annotated Corpora Made Easy
Crowdsourcing is an increasingly popular, collaborative approach for acquiring annotated corpora. Despite this, reuse of corpus conversion tools and user interfaces between projects is still problematic, since these are not generally made available. This demonstration will introduce the new, open-source GATE Crowdsourcing plugin, which offers infrastructural support for mapping documents to crowdsourcing units and back, as well as automatically generating reusable crowdsourcing interfaces for NLP classification and selection tasks. The entire workflow will be demonstrated on: annotating named entities; disambiguating words and named entities with respect to DBpedia URIs; annotation of opinion holders and targets; and sentiment.
Annotation science A big outstanding challenge for crowdsourcing projects is that the cost to define a single annotation task remains quite substantial. This demonstration will introduce the new, open-source GATE Crowdsourcing plugin, which offers infrastructural support for mapping documents to crowdsourcing units, as well as automatically generated, reusable user interfaces
Conceptually, the process of crowdsourcing annotated corpora can be broken down into four main stages, within which there are a number of largely infrastructural steps. In particular, data preparation and transformation into CrowdFlower units, creation of the annotation UI, creation and upload of gold units for quality control, and finally mapping judgements back into documents and aggregating all judgements into a finished corpus. The rest of this section discusses in more detail where reusable components and infrastructural support for automatic data mapping and user interface generation are necessary, in order to reduce the overhead of crowdsourcing NLP corpora. An important part of project definition is the mapping of the NLP problem into one or more crowdsourcing tasks, which are sufficiently simple to be carried out by non-experts and with a good quality. What are helpful here are reusable patterns for how best to crowdsource different kinds of NLP corpora. The GATE Crowdsourcing plugin currently provides such patterns for selection and classification tasks. This stage also focuses on setup of the task parameters (e.g. number of crowd workers per task, payment per task) and piloting the project, in order to tune in its design. With respect to task parameters, infrastructural support is helpful, in order to enable automatic splitting of longer documents across crowdsourcing tasks. This stage, in particular, can benefit significantly from infrastructural support and reusable components, in order to collect the data (e.g. crawl the web, download samples from Twitter), preprocess it with linguistic tools (e.g. tokenisation, POS tagging, entity recognition), and then map automatically from documents and sentences to crowdsourcing micro-tasks. This is the main phase of each crowdsourcing project. It consists of three kinds of tasks: task workflow and management, contributor management (including profiling and retention), and quality control. Paid-for marketplaces like Amazon Mechanical Turk and CrowdFlower already provide this support. As with conventional corpus annotation, quality control is particularly challenging, and additional NLP-specific infrastructural support can help. In this phase, additional NLP-specific, infrastructural support is needed for evaluating and aggregating the multiple contributor inputs into a complete linguistic resource, and in assessing the resulting overall quality. Next we demonstrate how these challenges have been addressed in our work. To address these NLP-specific requirements, we implemented a generic, open-source GATE Crowdsourcing plugin, which makes it very easy to set up and conduct crowdsourcing-based corpus annotation from within GATE's visual interface. Documents and their annotations are encoded in the GATE stand-off XML format (Cunningham The plugin expects documents to be presegmented into paragraphs, sentences and word tokens, using a tokeniser, POS tagger, and sentence splitter -e.g. those built in to GATE The User Interfaces (UIs) applicable to various task types tend to fall into a set of categories, the most commonly used being categorisation, selection, and text input. The GATE Crowdsourcing plugin provides generalised and re-usable, automatically generated interfaces for categorisation In the first step, task name, instructions, and classification choices are provided, in a UI configuration dialog (see Figure For some categorisation NLP annotation tasks (e.g. 
classifying sentiment in tweets into positive, negative, and neutral), fixed categories are sufficient. In others, where the available category choices depend on the text that is being classified (e.g. the possible disambiguations of Paris are different from those of London), choices are defined through annotations on each of the classification targets. In this case case, the UI generator then takes these annotations as a parameter and automatically creates the different category choices, specific to each crowdsourcing unit. Figure Figure Since the text may not contain a sequence to be annotated, we also generate an explicit confirmation checkbox. This forces annotators to declare that they have made the selection or there is nothing to be selected in this text. CrowdFlower can then use gold units and test the correctness of the selections, even in cases where no sequences are selected in the text. In addition, requiring at least some worker interaction and decision-making in every task improves overall result quality. The key mechanism for spam prevention and quality control in CrowdFlower is test data, which we also refer to as gold units. These are completed examples which are mixed in with the unprocessed data shown to workers, and used to evaluate worker performance. The GATE Crowdsourcing plugin supports automatic creation of gold units from GATE annotations having a feature correct. The value of that feature is then taken to be the answer expected from the human annotator. Gold units need to be 10%-30% of the units to be annotated. The minimum performance threshold for workers can be set in the job configuration. On completion, the plugin automatically imports collected multiple judgements back into GATE and the original documents are enriched with the crowdsourced information, modelled as multiple annotations (one per contributor). Figure This paper described the GATE Crowdsourcing plugin Future work will focus on expanding the number of reusable components, the implementation of reusable automatic adjudication algorithms, and providing support for crowdsourcing through games-with-a-purpose (GWAPs).
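Although the plugin itself is implemented in Java inside GATE, the data flow described above (pre-segmented documents mapped to crowdsourcing units, with gold units derived from annotations carrying a correct feature, and judgements later mapped back) can be illustrated with a small Python sketch. All field names below are assumptions made for illustration; they do not reflect the plugin's actual classes or the CrowdFlower API.

```python
# Illustrative mapping of pre-segmented, annotated sentences to crowdsourcing units.
def to_units(sentences):
    """sentences: list of dicts with 'text', 'target', 'choices' and an optional 'correct' feature."""
    units = []
    for sent in sentences:
        unit = {
            "text": sent["text"],            # snippet shown to the worker
            "target": sent["target"],        # highlighted span to classify
            "choices": sent["choices"],      # classification options for this unit
            "is_gold": "correct" in sent,    # gold units are mixed in for quality control
        }
        if unit["is_gold"]:
            unit["expected_answer"] = sent["correct"]
        units.append(unit)
    return units

example = [
    {"text": "Paris is the capital of France.", "target": "Paris",
     "choices": ["dbpedia:Paris", "dbpedia:Paris,_Texas"], "correct": "dbpedia:Paris"},
    {"text": "She flew to Paris last week.", "target": "Paris",
     "choices": ["dbpedia:Paris", "dbpedia:Paris,_Texas"]},
]
print(to_units(example))
```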
Latent Variable Model for Multi-modal Translation
In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach
Multi-modal machine translation (MMT) is an exciting novel take on machine translation (MT) where we are interested in learning to translate sentences in the presence of visual input (mostly images). In the last three years there have been shared tasks Most MMT models expand neural machine translation (NMT) architectures In this work, we also aim at translating without images at test time, yet learning a visually grounded translation model. To that end, we resort to probabilistic modelling instead of multi-task learning and estimate a joint distribution over translations and images. In a nutshell, we propose to model the interaction between visual and textual features through a latent variable. This latent variable can be seen as a stochastic embedding which is used in the target-language decoder, as well as to predict image features. Our experiments show that this joint formulation improves over an MTL approach The main contributions of this paper are: • we propose a novel multi-modal NMT model that incorporates image features through latent variables in a deep generative model. • our latent variable MMT formulation improves considerably over strong baselines, and compares favourably to the state-of-the-art. • we exploit correlations between both modalities at training time through a joint generative approach and do not require images at prediction time. The remainder of this paper is organised as follows. In §2, we describe our variational MMT models. In §3, we introduce the data sets we used and report experiments and assess how our models compare to prior work. In §4, we position our approach with respect to the literature. Finally, in §5 we draw conclusions and provide avenues for future work.
Similarly to standard NMT, in MMT we wish to translate a source sequence The main difference is the presence of an image v which illustrates the sentence pair x m 1 , y n 1 . We do not model images directly, but instead an 2048dimensional vector of pre-activations of a ResNet-50's pool5 layer In our variational MMT models, image features are assumed to be generated by transforming a stochastic latent embedding z, which is also used to inform the RNN decoder in translating source sentences into a target language. We propose a generative model of translation and image generation where both the image v and the target sentence y n 1 are independently generated given a common stochastic embedding z. The generative story is as follows. We observe a source sentence x m 1 and draw an embedding z from a latent Gaussian model, where f µ (•) and f σ (•) map from a source sentence to a vector of locations µ ∈ R c and a vector of scales σ ∈ R c >0 , respectively. We then proceed to draw the image features from a Gaussian observation model, where f ν (•) maps from z to a vector of locations ν ∈ R o , and ς ∈ R >0 is a hyperparameter of the model (we use 1). Conditioned on z and on the source sentence x m 1 , and independently of v, we generate a translation by drawing each target word in context from a Categorical observation model, where f π (•) maps z, x m 1 , and a prefix translation y <j to the parameters π j of a categorical distribution over the target vocabulary. Functions f µ (•), f σ (•), f ν (•), and f π (•) are implemented as neural networks whose parameters are collectively denoted by θ. In particular, implementing f π (•) is as simple as augmenting a standard NMT architecture consisting of three components which we parameterise directly. As there are no observations for z, we cannot estimate these components directly. We must instead marginalise z out, which yields the marginal (5) An important statistical consideration about this model is that even though y n 1 and v are conditionally independent given z, they are marginally dependent. This means that we have designed a data generating process where our observations 1 , we model the joint likelihood of the translation y n 1 , the image (features) v, and a stochastic embedding z sampled from a conditional latent Gaussian model. Note that the stochastic embedding is the sole responsible for assigning a probability to the observation v, and it helps assign a probability to the translation. y n 1 , v|x m 1 are not assumed to have been independently produced. 3 This is in direct contrast with multi-task learning or joint modelling without latent variables-for an extended discussion see )) (6) on the log-likelihood function. This evidence lowerbound (ELBO) is expressed in terms of an inference model q λ (z|x m 1 , y n 1 , v) which we design having tractability in mind. In particular, our ap-3 This is an aspect of the model we aim to explore more explicitly in the near future. proximate posterior is a Gaussian distribution parametrised by an inference network, that is, an independently parameterised neural network (whose parameters we denote collectively by λ) which maps from observations, in our case a sentence pair and an image, to a variational location u ∈ R c and a variational scale s ∈ R c >0 . Figure Location-scale variables (e.g. Gaussians) can be reparametrised, i.e. 
we can obtain a latent sample via a deterministic transformation of the variational parameters and a sample from the standard Gaussian distribution: This reparametrisation enables backpropagation through stochastic units (Kingma and Welling, 2014; Architecture All of our parametric functions are neural network architectures. In particular, f π is a standard sequence-to-sequence architecture with attention and a softmax output. We build upon OpenNMT We let the inference model condition on sourcelanguage encodings without updating them, and we use a target-language bidirectional LSTM encoder in order to also condition on the complete target sentence. Then g u and g s transform a concatenation of the average source-language encoder hidden state, the average target-language bidirectional encoder hidden state, and the image features. Fixed Gaussian prior We have just presented our variational MMT model in its full generalitywe refer to that model as VMMT C . However, keeping in mind that MMT datasets are rather small, it is desirable to simplify some of our model's components. In particular, the estimated latent Gaussian model ( Our encoder is a 2-layer 500D bidirectional RNN with GRU, the source and target word embeddings are 500D, and all are trained jointly with the model. We use OpenNMT to implement all our models Visual features are obtained by feeding images to the pre-trained ResNet-50 and using the activations of the pool5 layer All models are trained using the Adam optimiser (Kingma and Ba, 2014) with an initial learning rate of 0.002 and minibatches of size 40, where each training instance consists of one English sentence, one German sentence and one image (MMT). Models are trained for up to 40 epochs and we perform model selection based on BLEU4, and use the best performing model on the validation set to translate test data. Moreover, we halt training if the model does not improve BLEU4 scores on the validation set for 10 epochs or more. We report mean and standard deviation over 4 independent runs for all models we trained ourselves (NMT, VMMT F , VMMT C ), and other baseline results are the ones reported in the authors' publications We preprocess our data by tokenizing, lowercasing, and converting words to subword tokens using a bilingual BPE model with 10k merge operations The Flickr30k dataset Since this dataset is very small, we also investigate the effect of including more in-domain data to train our models. To that purpose, we use addi- 37.5 (0.3) ↑ 0.7 55.7 (0.1) ↓ 0.3 61.9 (0.1) ↑ 0.9 66.5 (0.1) ↑ 1.3 tional 145K monolingual German descriptions released as part of the Multi30k dataset to the task of image description generation We refer to this dataset as comparable Multi30k (M30k C ). Descriptions in the comparable Multi30k were collected independently of existing English descriptions and describe the same 29K images as in the M30k T dataset. In order to obtain features for images, we use ResNet-50 In order to investigate how well our models generalise, we also evaluate our models on the ambiguous MSCOCO test set Finally, we use a 50D latent embedding z in our experiments with the translated Multi30k data, whereas in our ablative experiments and experiments with the comparable Multi30k data, we use a 500D stochastic embedding z. We compare our work against three different baselines. 
The first one is a standard text-only sequenceto-sequence NMT model with attention We now report on experiments conducted with models trained to translate from English into German using the translated Multi30k data set (M30k T ). In Table In Table Number the true posterior collapse to the prior and the KL term in the ELBO vanish to zero. In practice, that would mean the model has virtually not used the latent variable z to predict image features v, but mostly as a source of stochasticity in the decoder. This can happen because the model has access to informative features from the source bi-LSTM encoder and need not learn a difficult mapping from observations to latent representations predictive of image features. For that reason, we wish to measure how well can we train latent variable MMT models while ensuring that the KL term in the loss (Equation ( In Table Since the translated Multi30k dataset is very small, we also investigate the effect of including more in-domain data to train our models. For that purpose, we use additional 145K monolingual German descriptions released as part of the comparable Multi30k dataset (M30k C ). We train a text-only NMT model to translate from German into English using the original 29K parallel sentences in the translated Multi30k (without images), and apply this model to back-translate the 145K German descriptions into English In this set of experiments, we explore how pretraining models NMT, VMMT F and VMMT C using both the translated and back-translated comparable Multi30k affects results. Models are pre-trained on mini-batches with a one-to-one ratio of translated and back-translated data. tuned on the translated Multi30k until convergence, and model selection using BLEU is only applied during fine-tuning and not at the pre-training stage. In Figure to be sensitive to whether x is gold-standard or synthetic, whereas p(z) cannot; (ii) in the conditional case the posterior approximation q(z|x, y, v) can directly exploit different patterns arising from a gold-standard versus a synthetic x, y pair; and finally (iii) our synthetic data is made of targetlanguage gold-standard image descriptions, which help train the inference network's target-language BiLSTM encoder. In Table In our ablation we are interested in finding out to what extent the model makes use of the latent space, i.e. how important is the latent variable. KL free bits A common issue when training latent variable models with a strong decoder is having 5 There are no additional images because the comparable Multi30k consists of additional German descriptions for the same 29K images already in the translated Multi30k. In Table additional back-translated data. Even though there has been growing interest in variational approaches to machine translation Fully supervised MMT models. All submissions to the three runs of the multi-modal MT shared tasks Perhaps the first MMT model proposed prior to these shared tasks is that of Finally, Multi-task MMT models. Multi-task learning MMT models are easily applicable to translate sentences without images (at test time), which is an advantage over the above-mentioned models. Variational MMT models. We have proposed a latent variable model for multimodal neural machine translation and have shown benefits from both modelling images and promoting use of latent space. 
We also show that, in the absence of enough data to train a more complex inference network, a simple fixed prior suffices, whereas when more training data is available (even noisy data) a conditional prior is preferable. Importantly, our models compare favourably to the state-of-the-art. In future work we will explore other generative models for multi-modal MT, as well as different ways to directly incorporate images into these models. We are also interested in modelling different views of the image, such as global vs. local image features, and also in using larger image collections and modelling images directly, i.e. pixel intensities.
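To make the latent-variable machinery of §2 concrete, the following PyTorch-style sketch shows the reparametrised sampling z = u + s ⊙ ε performed by the inference network, the Gaussian image-feature term, and the KL term to a standard Gaussian prior (the fixed-prior VMMT F variant). It is an illustrative reconstruction rather than the authors' code; the module names, the use of a standard-normal prior, and all dimensionalities are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMMTSketch(nn.Module):
    """Inference network and image observation model only; the seq2seq decoder
    that additionally consumes z is omitted for brevity."""
    def __init__(self, enc_dim=500, latent_dim=50, img_dim=2048):
        super().__init__()
        in_dim = 2 * enc_dim + img_dim                   # [avg src enc; avg tgt enc; image feats]
        self.to_loc = nn.Linear(in_dim, latent_dim)      # variational location u
        self.to_scale = nn.Linear(in_dim, latent_dim)    # variational scale s (via softplus)
        self.to_img = nn.Linear(latent_dim, img_dim)     # f_nu(z): location of the image Gaussian

    def forward(self, src_enc, tgt_enc, img_feats):
        h = torch.cat([src_enc, tgt_enc, img_feats], dim=-1)
        u = self.to_loc(h)
        s = F.softplus(self.to_scale(h)) + 1e-6
        z = u + s * torch.randn_like(u)                  # reparametrised sample
        img_loc = self.to_img(z)
        # Gaussian image log-likelihood with unit scale (up to an additive constant).
        img_ll = -0.5 * ((img_feats - img_loc) ** 2).sum(-1)
        # KL( N(u, s^2) || N(0, I) ) for the fixed standard-Gaussian prior.
        kl = 0.5 * (u ** 2 + s ** 2 - 2 * torch.log(s) - 1).sum(-1)
        return z, img_ll, kl

# Toy usage with random inputs; the ELBO would add the decoder's log-likelihood of y.
model = LatentMMTSketch()
z, img_ll, kl = model(torch.randn(4, 500), torch.randn(4, 500), torch.randn(4, 2048))
```

The negative ELBO to be minimised would then combine the decoder log-likelihood, the image term img_ll, and the KL term, with the KL optionally constrained via free bits as discussed in §3.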
Overlap-based Vocabulary Generation Improves Cross-lingual Transfer Among Related Languages
Pre-trained multilingual language models such as mBERT and XLM-R have demonstrated great potential for zero-shot cross-lingual transfer to low web-resource languages (LRL). However, due to limited model capacity, the large difference in the sizes of available monolingual corpora between high web-resource languages (HRL) and LRLs does not provide enough scope of co-embedding the LRL with the HRL, thereby affecting the downstream task performance of LRLs. In this paper, we argue that relatedness among languages in a language family along the dimension of lexical overlap may be leveraged to overcome some of the corpora limitations of LRLs. We propose Overlap BPE (OBPE), a simple yet effective modification to the BPE vocabulary generation algorithm which enhances overlap across related languages. Through extensive experiments on multiple NLP tasks and datasets, we observe that OBPE generates a vocabulary that increases the representation of LRLs via tokens shared with HRLs. This results in improved zero-shot transfer from related HRLs to LRLs without reducing HRL representation and accuracy. Unlike previous studies that dismissed the importance of token-overlap, we show that in the low-resource related language setting, token overlap matters. Synthetically reducing the overlap to zero can cause as much as a four-fold drop in zero-shot transfer accuracy.
Zero-shot cross-lingual transfer is the ability of a model to learn from labeled data in one language and transfer the learning to another language without any labeled data. Transformer Vocabulary generation is an important step in multilingual model training, where vocabulary size directly impacts model capacity. Usually, the vocabulary is generated from a union of HRL and LRL data. This often results in under-allocation of vocabulary bandwidth to LRLs, as LRL data is significantly smaller in size compared to HRL. This under-allocation of model capacity results in lower LRL performance In this paper, we hypothesize that exploiting language relatedness can result in an overall more effective vocabulary, which is also better representative of LRLs. Closely related languages (e.g., languages belonging to a single family) have common origins for words with similar meanings. We show some examples across three different families of related languages in Table ing the correct granularity of sharing automatically is tricky. On one extreme, we can choose a vocabulary which favours longer units frequent in HRL without regard for sharing, thereby leading to better semantic representation of the tokens but no cross-lingual transfer. On the other extreme, we can choose character-level vocabulary In this paper, we propose Overlap BPE (OBPE). OBPE chooses a vocabulary by giving token overlap among HRL and LRLs a primary consideration. OBPE prefers vocabulary units which are shared across multiple languages, while also encoding the input corpora compactly. Thus, OBPE tries to balance the trade-off between cross-lingual subword sharing and the need for robust representation of individual languages in the vocabulary. This re-sults in a more balanced vocabulary, resulting in improved performance for LRLs without hurting HRL accuracy. Table Recently This paper offers the following contributions • We present OBPE, a simple yet effective modification to the popular BPE algorithm to promote overlap between LRLs and a related HRL during vocabulary generation. OBPE uses a generalized mean based formulation to quantify token overlap among languages. • We evaluate OBPE on twelve languages across three related families, and show consistent improvement in zero-shot transfer over state-of-the art baselines on four NLP tasks. We analyse the reasons behind the gains obtained by OBPE and show that OBPE increases the percentage of LRL tokens in the vocabulary without reducing HRL tokens. This is unlike over-sampling strategies where increasing one reduces the other. • Through controlled experiments on the amount of token overlap on a related HRL-LRL pair, we show that token overlap is extremely important in the low-resource, related language setting. Recent literature which conclude that token overlap is unimportant may have overlooked this important setting. The source code for our experiments is available at
Transformer-based multilingual language models such as mBERT Input Data In the data creation stage, Tokenization Vocabulary Generation We are not aware of any prior work that explicitly promotes overlapping tokens between LRLs and HRLs in the vocabulary of multilingual models. We are given monolingual data D 1 , ..., D n in a set of n languages L = {L 1 , ..., L n } and a vocabulary budget V. Our goal is to generate a vocabulary V that when used to tokenize each D i in a multilingual model would provide cross-lingual transfer to LRLs from related HRLs. We use L LRL to denote the subset of the n languages that are low-resource, the remaining languages L -L LRL are denoted as the set L HRL of high resource languages. Existing methods of vocabulary creation start with a union D of monolingual data D 1 , ..., D n , and choose a vocabulary V that most compactly represents D. We first present an overview of BPE, a popular algorithm for vocabulary generation. Byte Pair Encoding (BPE) The size of the encoding |encode(D i , S)| can be alternately expressed as the sum of frequency of tokens in S when D i is tokenized using S. This motivates the following efficient greedy algorithm to implement the above optimization for i ∈ {1, 2, ..., n} do Split words in Di into characters Ci with a special marker after every word end for Update token and pair frequency on {Di}, V Add to V token k formed by merging pairs u, v ∈ V with the largest value of end while 2016). Let f ki denote the frequency of a candidate token k in the corpus D i of language L i . The BPE algorithm grows V incrementally. Initially, V comprises of characters in D. Then, until |V| ≤ V, it chooses the token k obtained by merging two existing tokens in V for which the frequency in D is maximum. A limitation of BPE on multilingual data is that tokens that appear largely in low-resource D i may not get added to V, leading to sentences in L i being over-tokenized. For a low resource language, the available monolingual data D i is often orders of magnitude smaller than another high-resource language. Models like mBERT and XLM-R address this limitation by over-sampling documents of lowresource languages. However, over-sampling LRLs might compromise learned representation of HRLs where task-specific labeled data is available. We propose an alternative strategy of vocabulary generation called OBPE that seeks to maximize transfer from HRL to LRL. The key idea in OBPE is to maximize the overlap between an LRL and a closely related HRL while simultaneously encoding the input corpora compactly as in BPE. When labeled data D T h for a task T is available in an HRL L h , then a multilingual model fine-tuned with D T h is likely to transfer better to a related LRL L i when L i and L h share several tokens in common. Thus, the objective that OBPE seeks to optimize when creating a vocabulary is: where 0 ≤ α ≤ 1 determines importance of the two terms. The first term in the objective compactly represents the total corpus, as in BPE's (Eq (1)). The second term additionally biases towards vocabulary with greater overlap of each LRL to one HRL where we expect task-specific labeled data to be present. There are several ways in which we can measure the overlap between two languages with respect to a current vocabulary. First, we encode each of D i and D j using the vocabulary S, which then yields a multiset of tokens in each corpus. Inspired by the literature on fair allocation (4) where f ki denotes the frequency of token k when D i is encoded with S. 
For different values of p, we get different tradeoffs between fairness to each language and overall goodness. When p = -∞, generalized mean reduces to the minimum function, and we get the most egalitarian allocation. However, this ignores the larger of the two frequencies. When p = 1, we get a simple average which is what the first term in Equation (3) already covers. For p = 0, -1, we get the geometric and harmonic means respectively. Due to smaller size of LRL monolingual data, the frequency of a token which is shared across languages is likely to be much higher in HRL monolingual data as compared to that in LRL monolingual data, Hence, setting p to large negative values will increase the weight given to LRLs and thus increase overlap. We will present an exploration of the effect of p on zero-shot transfer in the experiment section. The greedy version of the above objective that controls the candidate vocabulary item to be in- ducted in each iteration of OBPE is thus: The data structure maintained by BPE to efficiently conduct such merges can be applied with little changes to the OBPE algorithm. The only difference is that we need to separately maintain the frequency in each language in addition to overall frequency. Since the time and resources used to create the vocabulary is significantly smaller than the model pre-training time, this additional overhead to the pre-training step is negligible. We evaluate by measuring the efficacy of zeroshot transfer from the HRL on four different tasks: named entity recognition (NER), part of speech tagging (POS), text classification(TC), and Cross-lingual Natural Language Inference (XNLI). Through our experiments, we evaluate the following questions: 1. Is OBPE more effective than BPE for zeroshot transfer? (Section 4.2) 2. What is the effect of token overlap on overall accuracy? (Section 4.3) 3. How does increased LRL representation in the vocabulary impact accuracy? (Section 4.4) We report additional ablation and analysis experiments in Section 4.5. Pre-training Data and Languages As our pretraining dataset {D i }, we use the Wikipedia dumps of all the languages as used in mBERT. We pretrain with 12 languages grouped into three families of four related languages as shown in Table • BALANCED : all three HRLs get 160K documents each • SKEWED : English gets one million, French half million, and Hindi 160K documents We evaluate twelve-language models in each of these settings, and present results for separate four language models per family in Table To ensure that LRLs are not under-represented, we over-sample using exponentially smoothed weighting similar to multilingual BERT Task-specific Data We evaluate on four downstream tasks: (1) NER: data from WikiANN Task-specific fine-tuning details We perform taskspecific fine-tuning of pre-trained BERT on the task-specific training data of HRL and evaluate on all languages in the same family. Here we used learning-rate 2e-5 and batch size 32, with training duration as 16 epochs for NER, 8 epochs for POS and 3200 iterations for Text Classification and XNLI. The models were evaluated on a separate validation dataset of the HRL and the model with the minimum validation loss, maximum F1-score, accuracy and minimum validation loss was selected for final evaluation for XNLI, NER, POS and Text Classification respectively. All fine-tuning experiments were performed on Google Colaboratory. The results reported for all the experiments are an average of 3 independent runs. 
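Before turning to the evaluation, the sketch below makes the merge-scoring idea behind OBPE concrete: a candidate merged token is scored by a weighted combination of its overall frequency (the usual BPE criterion) and, for each LRL, a generalized mean of its frequency in that LRL and in the best-matching HRL. This is an illustrative reconstruction of the criterion described above, not the released code; the toy frequencies and the particular α and p values are assumptions.

```python
def generalized_mean(a, b, p):
    """Generalized (power) mean of two non-negative frequencies."""
    if a == 0 or b == 0:
        return 0.0                      # no overlap credit if the token is unseen in one language
    return ((a ** p + b ** p) / 2.0) ** (1.0 / p)

def obpe_score(freq_per_lang, lrls, hrls, alpha=0.5, p=-2.0):
    """freq_per_lang: dict lang -> frequency of the candidate merged token."""
    compactness = sum(freq_per_lang.values())            # plain BPE term
    overlap = sum(
        max(generalized_mean(freq_per_lang.get(l, 0), freq_per_lang.get(h, 0), p)
            for h in hrls)
        for l in lrls
    )
    return (1 - alpha) * compactness + alpha * overlap

# Toy example: a token shared between Hindi (HRL) and Marathi (LRL) outscores a
# slightly more frequent token seen only in Hindi.
shared = {"hi": 900, "mr": 120}
hi_only = {"hi": 1100}
print(obpe_score(shared, lrls=["mr"], hrls=["hi"]),
      obpe_score(hi_only, lrls=["mr"], hrls=["hi"]))
```

With p pushed towards large negative values the overlap term approaches the minimum of the two frequencies, which is dominated by the smaller LRL count and therefore rewards tokens genuinely shared with the LRL.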
We evaluate the impact of OBPE on improving zero-shot transfer from HRLs to LRLs within the same family across four different tasks. We compare with four existing methods that represent different methods of vocabulary creation and allocation of budget across languages: Methods compared 1. BPE In addition to improving zero-shot transfer from HRLs to LRLs on downstream tasks, OBPE also leads to better intrinsic representation of LRLs. We validate that by measuring the pseudoperplexity In order to investigate the reasons behind the OBPE gains, we first inspected the percentage of tokens in the vocabulary that belong to LRLs, HRLs, and in their overlap. We find that with OBPE both LRL tokens and overlapping tokens increase. Either of these could have led to the observed gains. We analyze the effect of each of these factors in the following two sections. We present the impact of token overlap via two sets of experiments: first, a controlled setup where we synthetically vary the fraction of overlap and second where we measure correlation between overlap and gains of OBPE on the data as-is. For the controlled setup we follow in both languages as overlapping tokens. We then incrementally sample 10%, 40%, 50%, 90% of the tokens from this set. We shift the Unicode of the entire Hindi monolingual data except the set of sampled tokens so that there are no overlapping tokens between Hindi (hi) and Marathi (mr) monolingual data other than the sampled tokens. Let us call this Hindi data SynthHindi. We then run OBPE on SynthHindi-Marathi language pair to generate a vocabulary to pretrain the model. The task-specific Hindi data is also converted to SynthHindi during fine-tuning and testing of the model. Figure Our results contradict the conclusions of Thus, we conclude that as long as languages are related, token overlap is important and the benefit from overlap is higher in the low resource setting. Overlap Vs Gain: Real data setup We further substantiate our hypothesis that the shared tokens across languages favoured by OBPE enable transfer of supervision from HRL to LRL via statistics on real-data. In Table We next investigate the impact of increased representation of LRL tokens in the vocabulary. OBPE increases LRL representation by favoring overlapping tokens, but LRL tokens can also be increased by just over-sampling LRL documents. We train another BALANCED12 model but with further oversampling LRLs with exponentiation factor of 0.5 instead of 0.7. We observe in Figure We conducted experiments for different values of p that controls the amount of overlap in the generalized mean function (Equation ( In this paper, we address the problem of crosslingual transfer from HRLs to LRLs by exploiting relatedness among them. We focus on lexical overlap during the vocabulary generation stage of multilingual pre-training. We propose Overlap BPE (OBPE), a simple yet effective modification to the BPE algorithm, which chooses a vocabulary that maximizes overlap across languages. OBPE encodes input corpora compactly while also balancing the trade-off between cross-lingual subword sharing and language-specific vocabularies. We focus on three sets of closely related languages from diverse language families. Our experiments provide evidence that OBPE is effective in leveraging overlap across related languages to improve LRL performance. 
In contrast to prior work, through controlled experiments on the amount of token overlap between two related HRL-LRL language pairs, we establish that token overlap is important when an LRL is paired with a related HRL. Table • Our approach is expected to improve cross-lingual transfer from HRL to LRL only when the HRL and LRL are related linguistically, since it relies on the presence of lexically overlapping tokens. • It requires transliteration of the LRL data into the script of its related HRL if the LRL does not share that script. Language models may amplify bias in data and also introduce new ones. Multilingual models explored in the paper are not immune to such issues. Detecting such biases and mitigating them is a topic of ongoing research. We are hopeful that our focus on better representation of LRLs in the vocabulary is a step towards more inclusive models. "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30000. All the task-specific fine-tuning experiments are done using GPUs on Google Colaboratory, where each fine-tuning experiment requires 2 GPU hours. The Tatoeba data, GLUE data, and Wikipedia dumps use Creative Commons licenses. The TDIL data used for Indic languages uses a research license, and the Xtreme dataset uses the Apache License 2.0. To the best of our knowledge, the use of scientific artifacts in this work is consistent with their intended use. Performance is measured on four tasks: NER (F1), Text Classification (Accuracy), POS (Accuracy), and XNLI (Accuracy). For all metrics, higher is better. Zero-shot transfer to LRL improves without hurting HRL accuracy. Average results across HRLs and LRLs are presented in Table We have used the standard Wikipedia corpus, and there have been some studies on bias in such corpora. Table
Deriving Generalized Knowledge from Corpora using WordNet Abstraction
Existing work in the extraction of commonsense knowledge from text has been primarily restricted to factoids that serve as statements about what may possibly obtain in the world. We present an approach to deriving stronger, more general claims by abstracting over large sets of factoids. Our goal is to coalesce the observed nominals for a given predicate argument into a few predominant types, obtained as WordNet synsets. The results can be construed as generically quantified sentences restricting the semantic type of an argument position of a predicate.
Our interest is ultimately in building systems with commonsense reasoning and language understanding abilities. As is widely appreciated, such systems will require large amounts of general world knowledge. Large text corpora are an attractive potential source of such knowledge. However, current natural language understanding (NLU) methods are not general and reliable enough to enable broad assimilation, in a formalized representation, of explicitly stated knowledge in encyclopedias or similar sources. As well, such sources typically do not cover the most obvious facts of the world, such as that ice cream may be delicious and may be coated with chocolate, or that children may play in parks. Methods currently exist for extracting simple "factoids" like those about ice cream and children just mentioned (see in particular The work reported here is aimed at deriving generalizations of the latter sort from large sets of weaker propositions, by examining the hierarchical relations among sets of types that occur in the argument positions of verbal or other predicates. The generalizations we are aiming at are certainly not the only kinds derivable from text corpora (as the extensive literature on finding isa-relations, partonomic relations, paraphrase relations, etc. attests), but as just indicated they do seem potentially useful. Also, thanks to their grounding in factoids obtained by open knowledge extraction from large corpora, the propositions obtained are very broad in scope, unlike knowledge extracted in a more targeted way. In the following we first briefly review the method developed by Schubert and collaborators to abstract factoids from text; we then outline our approach to obtaining strengthened propositions from such sets of factoids. We report positive results, while making only limited use of standard corpus statistics, concluding that future endeavors exploring knowledge extraction and WordNet should go beyond the heuristics employed in recent work.
Rilly or Glendora had entered her room while she slept, bringing back her washed clothes. (: Here the upper-case sentences are automatically generated verbalizations of the abstracted LFs shown beneath them. The goal in this work, with respect to the example given, would be to derive with the use of a large collection of KNEXT outputs, a general statement such as If something may sleep, it is probably either an animal or a person. While the community continues to make gains in the automatic construction of reliable, general ontologies, the WordNet sense hierarchy The use of WordNet raises the challenge of dealing with multiple semantic concepts associated with the same word, i.e., employing Word-Net requires word sense disambiguation in order to associate terms observed in text with concepts (synsets) within the hierarchy. In their work on determining selectional preferences, both As will be seen, our algorithm does not select word senses prior to generalizing them, but rather as a byproduct of the abstraction process. Moreover, it potentially selects multiple senses of a word deemed equally appropriate in a given context, and in that sense provides coarse-grained disambiguation. This also prevents exaggeration of the contribution of a term to the abstraction, as a result of being lexicalized in a particularly finegrained way. While the procedure given here is not tied to a particular formalism in representing semantic con-text, in our experiments we make use of propositional templates, based on the verbalizations arising from KNEXT logical forms. Specifically, a proposition F with m argument positions generates m templates, each with one of the arguments replaced by an empty slot. Hence, the statement, A MAN MAY GIVE A SPEECH, gives rise to two templates, A MAN MAY GIVE A , and A MAY GIVE A SPEECH. Such templates match statements with identical structure except at the template's slots. Thus, the factoid A POLITICIAN MAY GIVE A SPEECH would match the second template. The slot-fillers from matching factoids (e.g., MAN and POLITICIAN form the input lemmas to our abstraction algorithm described below. Additional templates are generated by further weakening predicate argument restrictions. Nouns in a template that have not been replaced by a free slot can be replaced with an wild-card, indicating that anything may fill its position. While slots accumulate their arguments, these do not, serving simply as relaxed interpretive constraints on the original proposition. For the running example we would have; A MAY GIVE A ?, and, A ? MAY GIVE A , yielding observation sets pertaining to things that may give, and things that may be given. Our method for type derivation assumes access to a word sense taxonomy, providing: W : set of words, potentially multi-token N : set of nodes, e.g., word senses, or synsets L is a distance function based on P that gives the length of the shortest path from a node to a dominating node, with base case: L(n, n) = 1. When appropriate, we write L(w, n) to stand for the arithmetic mean over L(n , n) for all senses of w that are dominated by n. We refer to a given predicate argument position for a specified propositional template simply as a slot. W ⊆ W will stand for the set of words found to occupy a given slot (in the corpus employed), and D : N →W * is a function mapping a node to the words it (partially) sense dominates. 
That is, for all n ∈ N and w ∈ W , if w ∈ D(n) then there is at least one sense n ∈ S(w) such that n is an ancestor of n as determined through use of P. For example, we would expect the word bank to be dominated by a node standing for a class such as company as well as a separate node standing for, e.g., location. Based on this model we give a greedy search algorithm in Figure For a given slot we start with a set of observed words W , an upper bound m on the number of types allowed in the result R, and a parameter p setting a lower bound on the fraction of items in W that a valid solution must dominate. For example, when m = 3 and p = 0.9, this says we require the solution to consist of no more than 3 nodes, which together must dominate at least 90% of W . The search begins with initializing the cover set C, and the result set R as empty, with the variable α set to 1. Observe that at any point in the execution of DERIVETYPES, C represents the set of all words from W with at least one sense having as an ancestor a node in R. While C continues to be smaller than the percentage required for a solution, nodes are added to R based on whichever element of N has the smallest score. The SCORE function first computes the modified coverage of n, setting C to be all words in W that are dominated by n that haven't yet been "spoken for" by a previously selected (and thus lower scoring) node. SCORE returns the sum of the path lengths between the elements of the modified set of dominated nodes and n, divided by that set's size, scaled by the exponent α. Note when α = 1, SCORE simply returns the average path length of the words dominated by n. If the size of the result grows beyond the specified threshold, R and C are reset, α is incremented by some step size δ, and the search starts again. As α grows, the function increasingly favors the coverage of a node over the summed path length. Each iteration of DERIVETYPES thus represents a further relaxation of the desire to have the returned nodes be as specific as possible. Eventually, α will be such that the minimum scoring nodes will be found high enough in the tree to cover enough of the observations to satisfy the threshold p, at which point R is returned. As can be observed, our approach makes no use of the relative or absolute frequencies of the words in W , even though such frequencies could be added as, e.g., relative weights on length in SCORE. This is a purposeful decision motivated both by practical and theoretical concerns. Practically, a large portion of the knowledge observed in KNEXT output is infrequently expressed, and yet many tend to be reasonable claims about the world (despite their textual rarity). For example, a template shown in Section 5, A MAY WEAR A CRASH HELMET, was supported by just two sentences in the BNC. However, based on those two observations we were able to conclude that usually If something wears a crash helmet, it is probably a male person. Initially our project began as an application of the closely related MDL approach of Li and Abe (1998), but was hindered by sparse data. We observed that our absolute frequencies were often too low to perform meaningful comparisons of relative frequency, and that different examples in development tended to call for different trade-offs between model cost and coverage. This was due as much to the sometimes idiosyncratic structure of WordNet as it was to lack of evidence. 
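As a concrete illustration of the DERIVETYPES procedure described above, the sketch below implements the greedy search over an abstract sense taxonomy: nodes are repeatedly added by minimum score, and whenever more than m types would be needed the result is reset and α is incremented so that coverage is weighted more heavily. The taxonomy interface and the toy example are assumptions for illustration only; the original system operates over WordNet synsets and KNEXT slot fillers.

```python
def derive_types(words, nodes, dominated, path_len, m=3, p=0.9, delta=1.0):
    """Greedily pick at most m nodes covering at least a fraction p of `words`.
    dominated(node) -> set of observed words the node sense-dominates;
    path_len(word, node) -> shortest-path length from the word's senses to the node.
    Assumes some candidate node (e.g. a root) dominates every observed word."""
    def score(node, uncovered, alpha):
        newly = dominated(node) & uncovered
        if not newly:
            return None
        # Average specificity, increasingly traded off for coverage as alpha grows.
        return sum(path_len(w, node) for w in newly) / (len(newly) ** alpha)

    alpha = 1.0
    result, covered = set(), set()
    while len(covered) < p * len(words):
        uncovered = set(words) - covered
        scored = [(s, n) for n in nodes if (s := score(n, uncovered, alpha)) is not None]
        best_score, best = min(scored)
        result.add(best)
        covered |= dominated(best) & set(words)
        if len(result) > m:                 # too many types: relax specificity and restart
            result, covered = set(), set()
            alpha += delta
    return result

# Toy usage: words observed in one slot, two candidate WordNet-like nodes.
words = {"dog", "cat", "politician", "teacher"}
dom = {"animal": {"dog", "cat"}, "person": {"politician", "teacher", "dog"}}
dist = {("dog", "animal"): 2, ("cat", "animal"): 2, ("dog", "person"): 4,
        ("politician", "person"): 2, ("teacher", "person"): 2}
print(derive_types(words, dom.keys(), lambda n: dom[n],
                   lambda w, n: dist[(w, n)], m=2, p=1.0))
```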
From the entire set of BNC-derived KNEXT propositional templates, evaluations were performed on a set of 21 manually selected examples, together representing the sorts of knowledge for which we are most interested in deriving strengthened argument type restrictions. All modification of the system ceased prior to the selection of these templates, and the authors had no knowledge of the underlying words observed for any particular slot. Further, some of the templates were purposefully chosen as potentially problematic, such as A ? MAY OBSERVE A , or A PERSON MAY PAINT A . Without additional context, templates such as these were expected to allow for exceptionally broad sorts of arguments. For these 21 templates, 65 types were derived, giving an average of 3.1 types per slot, and allowing for statements such as the following:
If something is famous, it is probably a person1, an artifact1, or a communication2
If ? writes something, it is probably a communication2
If a person is happy with something, it is probably a communication2, a work1, a final result1, or a state of affairs1
If a fish has something, it is probably a cognition1, a torso1, an interior2, or a state2
If something is fast growing, it is probably a group1 or a business3
If a message undergoes something, it is probably a message2, a transmission2, a happening1, or a creation1
If a male builds something, it is probably a structure1, a business3, or a group1
One way in which to measure the quality of an argument abstraction is to go back to the underlying observed words and evaluate the resultant sense(s) implied by the chosen abstraction. We say senses plural, as the majority of KNEXT propositions select senses that are more coarse-grained than WordNet synsets. Thus, we wish to evaluate these more coarse-grained sense disambiguation results entailed by our type abstractions. We performed this evaluation using as comparisons the first-sense and all-senses heuristics. The first-sense heuristic can be thought of as striving for maximal specificity at the risk of precluding some admissible senses (reduced recall), while the all-senses heuristic insists on including all admissible senses (perfect recall) at the risk of including inadmissible ones. (Allowing for multiple fine-grained senses to be judged as appropriate in a given context goes back at least to .) Table reports the results: in all cases our method gives precision results comparable or superior to the first-sense heuristic, while at all times giving higher recall. In particular, for the case of Primary type, corresponding to the derived type that accounted for the largest number of observations for the given argument slot, our method shows strong performance across the board, suggesting that our derived abstractions are general enough to pick up multiple acceptable senses for observed words, but not so general as to allow unrelated senses.
We designed an additional test of our method's performance, aimed at determining whether the distinction between admissible senses and inadmissible ones entailed by our type abstractions was in accord with human judgement. To this end, we automatically chose for each template the observed word that had the greatest number of senses not dominated by a derived type restriction. An example item, for the template A MAY HAVE A BROTHER, lists the senses of WOMAN:
1 WOMAN: an adult female person (as opposed to a man); "the woman kept house while the man hunted"
2 WOMAN: a female person who plays a significant role (wife or mistress or girlfriend) in the life of a particular man; "he was faithful to his woman"
3 WOMAN: a human female employed to do housework; "the char will clean the carpet"; "I have a woman who comes in four hours a day while I write"
*4 WOMAN: women as a class; "it's an insult to American womanhood"; "woman is the glory of creation"; "the fair sex gathered on the veranda"
For each of these alternative (non-dominated) senses, we selected the ancestor lying at the same distance towards the root from the given sense as the average distance from the dominated senses to the derived type restriction. In the case where going this far from an alternative sense towards the root would reach a path passing through the derived type and one of its subsumed senses, the distance was cut back until this was no longer the case. These alternative senses, guaranteed not to be dominated by derived type restrictions, were then presented along with the derived type and the original template to two judges, who were given the same instructions as used by . Results for this evaluation are found in Table : judges tended to reject the alternative abstracted types that were possible based on the given word. Achieving even stronger rejection of alternative types would be difficult, since KNEXT templates often provide insufficient context for full disambiguation of all their constituents, and judges were allowed to base their assessments on any interpretation of the verbalization that they could reasonably come up with.
Our method as described thus far is not tied to a particular word sense taxonomy. Experiments reported here relied on the following model adjustments in order to make use of WordNet (version 3.0). The function P was set to return the union of a synset's hypernym and instance hypernym relations. Regarding the function L, WordNet is constructed such that always picking the first sense of a given nominal tends to be correct more often than not (see discussion by ). Parameters were set for our data based on manual experimentation using the templates seen in Table . In addition, we found it desirable to add a few hard restrictions on the maximum level of generality: nodes corresponding to the word sense pairs given in Table were disallowed as abstraction candidates.
Our method assumes that if multiple words occurring in the same slot can be subsumed under the same abstract class, then this information should be used to bias sense interpretation of these observed words, even when it means not picking the first sense. In general this bias is crucial to our approach, and tends to select correct senses of the words in an argument set W. But an example where this strategy errs was observed for the template A MAY BARK, which yielded the generalization that If something barks, then it is probably a person. This was because there were numerous textual occurrences of various types of people "barking" (speaking loudly and aggressively), and so the occurrences of dogs barking, which showed no type variability, were interpreted as involving the unusual sense of dog as a slur applied to certain people. The template A CAN BE WHISKERED had observations including both face and head. This prompted experiments in allowing part holonym relations (e.g., a face is part of a head) as part of the definition of P, with the final decision being that such relations lead to less intuitive generalizations rather than more, and thus these relation types were not included.
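For concreteness, a small sketch of the two WordNet-specific choices just described, the parent function P (union of hypernym and instance-hypernym links) and a first-sense word-to-sense mapping, using NLTK's WordNet interface; this is illustrative and assumes WordNet access through NLTK rather than reproducing the authors' code.

```python
from nltk.corpus import wordnet as wn

def P(synset):
    """Parent function: union of a synset's hypernym and instance-hypernym relations."""
    return synset.hypernyms() + synset.instance_hypernyms()

def S(word):
    """All nominal senses of a word."""
    return wn.synsets(word, pos=wn.NOUN)

def L(word):
    """First-sense heuristic: WordNet orders senses so the first is most often correct."""
    senses = S(word)
    return senses[0] if senses else None

# Example: walk from the first sense of "bank" towards the root via P.
node = L("bank")
while node is not None:
    print(node.name())
    parents = P(node)
    node = parents[0] if parents else None
```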
The remaining relation types within WordNet were individually examined via inspection of randomly selected examples from the hierarchy. As with holonyms we decided that using any of these additional relation types would degrade performance. A shortcoming was noted in WordNet, regarding its ability to represent binary valued attributes, based on the template, A CAN BE PREGNANT. While we were able to successfully generalize to female person, there were a number of words observed which unexpectedly fell outside that associated synset. For example, a queen and a duchess may each be a female aristocrat, a mum may be a female parent, There is a wealth of existing research focused on learning probabilistic models for selectional restrictions on syntactic arguments. In describing ALICE, a system for lifelong learning, Minimizing word sense ambiguity by focusing on a specific domain was later seen in the work of Assigning pre-compiled instances to their firstsense reading in WordNet, Pas ¸ca (2008) then generalized class attributes extracted for these terms, using as a resource Google search engine query logs. As the volume of automatically acquired knowledge grows, it becomes more feasible to abstract from existential statements to stronger, more general claims on what usually obtains in the real world. Using a method motivated by that used in deriving selectional preferences for verb arguments, we've shown progress in deriving semantic type restrictions for arbitrary predicate argument positions, with no prior knowledge of sense information, and with no training data other than a handful of examples used to tune a few simple parameters. In this work we have made no use of relative term counts, nor corpus-wide, distributional frequencies. Despite foregoing these often-used statistics, our methods outperform abstraction based on a strict first-sense heuristic, employed in many related studies. Future work may include a return to the MDL approach of
What to Read in a Contract? Party-Specific Summarization of Legal Obligations, Entitlements, and Prohibitions
Reviewing and comprehending key obligations, entitlements, and prohibitions in legal contracts can be a tedious task due to their length and domain-specificity. Furthermore, the key rights and duties requiring review vary for each contracting party. In this work, we propose a new task of party-specific extractive summarization for legal contracts to facilitate faster reviewing and improved comprehension of rights and duties. To facilitate this, we curate a dataset comprising of party-specific pairwise importance comparisons annotated by legal experts, covering ∼293K sentence pairs that include obligations, entitlements, and prohibitions extracted from lease agreements. Using this dataset, we train a pairwise importance ranker and propose a pipeline-based extractive summarization system that generates a party-specific contract summary. We establish the need for incorporating domain-specific notion of importance during summarization by comparing our system against various baselines using both automatic and human evaluation methods 1 .
A contract is a legally binding agreement that defines and governs the rights, duties, and responsibilities of all parties involved in it. To sign a contract (e.g., lease agreements, terms of services, and privacy policies), it is important for these parties to precisely understand their rights and duties as described in the contract. However, understanding and reviewing contracts can be difficult and tedious due to their length and the complexity of legalese. Having an automated system that can provide an "at a glance" summary of rights and duties can be useful not only to the parties but also to legal professionals for reviewing contracts. While existing works generate section-wise summaries of unilateral contracts, such as terms of services However, we argue that a single summary may not serve all the parties as they may have different rights and duties Existing summarization systems perform poorly on legal contracts due to large compression ratios and the unavailability of public datasets We break down the contract-level task of partyspecific summarization into two sentence-level subtasks: (1) Content Categorization -identifying the modal categories expressed in the sentences of a contract for a specified party, and (2) Importance Ranking -ranking the sentences based on their importance for a specified party. This approach has three benefits: (a) enabling us to use an existing corpus for identifying deontic modalities in contract, (b) cognitively simplifying the annotation task for experts who only need to compare a few sentences (resulting in higher agreement) at a time rather than read and summarize a full contract (spanning 10 -100 pages), and (c) reducing the cost of annotation as a contract-level end-to-end summarization system requires more data to train than does a sentence-level categorizer and ranker. This work makes the following contributions: (a) we introduce a new task of party-specific extractive summarization of important obligations, entitlements, and prohibitions in legal contracts; (b) we are the first to curate a novel legal expert annotated dataset ( §4) (using best-worst scaling) consisting of party-specific pairwise importance comparisons for sentence pairs (that include obligations, entitlements, or prohibitions) from lease agreements; and (c) we train a pairwise importance ranker using the curated dataset to build a pipeline-based extractive summarization system ( §5) for the proposed task, and show the effectiveness of the system as compared to several unsupervised ranking-based summarization baselines, under both automatic and human evaluations ( §8); underscoring the domainsensitive nature of "importance" in legal settings.
Summarization of Legal Text Existing works focus on summarizing legal case reports Existing works either propose rule-based methods We formally define the new task as: given a contract C consisting of a sequence of sentences (c 1 , c 2 , . . . , c n ) and a party P , the task is to generate an extractive summary S consisting of the most important m o obligations, m e entitlements, and m p prohibitions (where m i < n ∀i ∈ {o, e, p}) for the specified party. As previously mentioned, we break this task into two sub-tasks: (1) Content Categorization to identify the categories expressed in a sentence of a contract for a given party, and (2) Importance Ranking to rank the sentences based on their importance to a given party. The LEXDEMOD dataset Dataset Source We use a subset of contracts from the LEXDEMOD dataset to collect importance annotations as it enables us to create reference summaries for which we need both the category and importance annotations. LEXDEMOD contains lease agreements crawled from Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system which is maintained by the U.S. Securities and Exchange Commission (SEC). The documents filed Annotation Task Rating the party-specific importance of a sentence in a contract on an absolute scale requires well-defined importance levels to obtain reliable annotations. However, defining each importance level can be subjective and restrictive. Moreover, rating scales are also prone to difficulty in maintaining inter-and intra-annotator consistency From a list of N = 3, 300 sentences spanning 11 lease agreements from LEXDEMOD, we generate Annotation Aggregation Annotation for each 4-tuple provides us 5 pairwise inequalities. E.g., if a is marked as the most important and d as the least important, then we know that a ≻ b, a ≻ c, Annotation Reliability A commonly used measure of quality and reliability for annotations producing real-valued scores is split-half reliability (SHR) Dataset Analysis We obtain a total of 293, 368 paired comparisons after applying the BT model. Table As we do not explicitly define the notion of "importance" during annotation. We learned from feedback and interaction with the annotators about several factors they considered when determining importance. These factors included the degree of liability involved (e.g., 'blanket indemnifications' were scored higher than 'costs incurred in alterations' by a tenant since blanket rights can have an uncapped amount of liability), and liabilities incurred by a more probable event were rated as more important than those caused by less probable events, among others. As is evident from these examples, the factors that can influence importance may be complex, multi-faceted, and difficult to exhaustively identify, which is why our approach in this work was to allow our annotators to use their legal knowledge to inform a holistic judgment. We build a pipeline-based extractive summarization system, CONTRASUM (Figure The Content Categorizer takes in a sentence c i from C and a party to output all the categories (such as, obligations, entitlements, prohibitions) mentioned in c i . Such categorization helps in partitioning the final summary as per the categories of interest. We use the multi-label classifier introduced in A contract may contain a large number of obligations, entitlements, and prohibitions for each party; however, not all instances of each category may be equally important. Also, the importance of sentences within each category may vary for each party. 
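As a concrete illustration of the aggregation described above: a best-worst annotated 4-tuple with a most important and d least important yields the five pairwise inequalities a ≻ b, a ≻ c, a ≻ d, b ≻ d, and c ≻ d, and the pooled pairs are turned into per-sentence scores with a Bradley-Terry fit. The simple MM-style update below is a standard way of fitting Bradley-Terry; it is a sketch, not the exact implementation used for the dataset.

```python
from collections import defaultdict
from itertools import combinations

def tuple_to_pairs(items, best, worst):
    """A best-worst annotated 4-tuple yields 5 pairwise preferences (winner, loser)."""
    pairs = []
    for x, y in combinations(items, 2):
        if x == best or y == worst:
            pairs.append((x, y))
        elif y == best or x == worst:
            pairs.append((y, x))
    return pairs              # the one pair involving neither best nor worst stays unknown

def bradley_terry(pairs, iters=100):
    """Fit Bradley-Terry strengths from (winner, loser) pairs with MM-style updates."""
    wins, games, items = defaultdict(int), defaultdict(int), set()
    for w, l in pairs:
        wins[w] += 1
        games[frozenset((w, l))] += 1
        items |= {w, l}
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            denom = sum(games[frozenset((i, j))] / (p[i] + p[j])
                        for j in items if j != i and games[frozenset((i, j))])
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values()) or 1.0
        p = {i: v / total for i, v in new_p.items()}
    return p                  # higher score = judged more important

# One annotated tuple: a marked most important, d least important.
pairs = tuple_to_pairs(["a", "b", "c", "d"], best="a", worst="d")
scores = bradley_terry(pairs)
ranking = sorted(scores, key=scores.get, reverse=True)   # a ranks first, d last
```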
Therefore, this module aims to rank the sentences belonging to each category based on their level of importance for the specified party. As indicated in §4, we do not define the notion of "importance"; instead, we rely on the annotations from legal experts based on their understanding from contract review and compliance perspective to build the ranker. This module (Figure Recall that the Content Categorizer and Importance Ranker work at the sentence-and sentence-pairlevel, respectively. Therefore, to produce the desired party-specific category-based summary for a contract, we obtain (1) categories for each sentence in the contract (using the Content Categorizer), and (2) a ranking of the sentence pairs within each category according to their relative importance (using the Importance Ranker) with respect to the party. We do not explicitly account for the diversity within each category (although organization by category helps ensure some degree of diversity across categories). As the ranker provides ranking at a sentence pair level, to obtain an importance-ranked list of sentences from all the pairwise predictions for each category, we use the Bradley-Terry (BT) model as described in §4. We produce the final summary by selecting the m o , m e , and m p most important sentences predicted as obligations, entitlements, and prohibitions, respectively. CONTRASUM is a pipeline-based system consisting of two modules. The two modules are trained (and evaluated) separately and pipelined for generating end-to-end summaries as described in §5.3. Datasets We use the category annotations in LEXDEMOD dataset For training the Importance Ranker, we need sentence pairs from contracts ordered by relative importance to a particular party. We use the pairwise importance comparison annotations collected in §4 to create a training dataset. If, for a pair of sentences (a,b), a ≻ b (a is more important than b), then the label for binary classification is positive, and negative otherwise. We retain the same train/dev/test splits as LEXDEMOD to avoid any data leakage. Table As mentioned earlier, the two modules are separately trained and evaluated at a sentence-or sentence pair-level. However, CONTRASUM generates party-specific summaries at a contract-level. Therefore, for evaluating the system for the end-toend summarization task, we need reference summaries. To obtain reference summaries, we need ground-truth category and importance ranking between sentences in a contract. Since we collected importance comparison annotations for the same sentences for which LEXDEMOD provides category annotations, we group the sentences belonging to a category (ground-truth) and then derive the ranking among sentences within a category using the gold importance annotations and BT model (as described earlier) for each party. We obtain reference summaries at different compression ratios (CR) (5%, 10%, 15%) with a maximum number of sentences capped at 10 per category to evaluate the output summaries against different reference summaries. CR is defined as the % of total sentences included in the summary for a contract. Please note that the reference summaries are extractive. Training Details We use the RoBERTalarge Implementation Details We use HuggingFace's Transformers library We perform an extrinsic evaluation of CONTRA-SUM to assess the quality of the generated summaries as well as an intrinsic evaluation of the two modules separately to assess the quality of categorization and ranking. 
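Putting the pieces together, a hedged sketch of the end-to-end selection step described above: sentences are bucketed by predicted category, pairwise ranker decisions within each bucket are aggregated into a single ordering (e.g., with a Bradley-Terry fit like the one sketched earlier), and the top sentences per category are emitted. `categorize` and `rank_pair` stand in for the trained Categorizer and Ranker; names and signatures are illustrative.

```python
from itertools import combinations

def summarize(sentences, party, categorize, rank_pair, bradley_terry, m_per_category=10):
    """Party-specific extractive summary: top-m sentences per deontic category."""
    categories = ("obligation", "entitlement", "prohibition")
    # 1) Content categorization (multi-label): bucket sentences by predicted category.
    buckets = {cat: [] for cat in categories}
    for sent in sentences:
        for cat in categorize(sent, party):
            if cat in buckets:
                buckets[cat].append(sent)

    # 2) Importance ranking within each bucket via pairwise comparisons (O(n^2) pairs).
    summary = {}
    for cat, sents in buckets.items():
        pairs = []
        for a, b in combinations(sents, 2):
            if rank_pair(a, b, party):        # True -> a judged more important than b
                pairs.append((a, b))
            else:
                pairs.append((b, a))
        scores = bradley_terry(pairs) if pairs else {s: 0.0 for s in sents}
        ranked = sorted(sents, key=lambda s: scores.get(s, 0.0), reverse=True)
        summary[cat] = ranked[:m_per_category]   # cap the summary per category
    return summary
```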
Intrinsic evaluation of the Categorizer and the Ranker is done using the test sets of the datasets ( §6) used to train the models. Although the size of datasets used to evaluate the two modules at a sentence-(∼1.5K for Categorizer) or sentence pair-level (∼130K for Ranker) is large, it amounts to 5 contracts for the extrinsic evaluation of CONTRASUM for the end-to-end summarization task at a contract-level. Since the number of contracts is small, we perform 3-fold validation of CONTRASUM. Each fold contains 5 contracts (in the test set) sampled from 11 contracts (remaining contracts are used for training the modules for each fold) for which both category and importance annotations are available to ensure the availability of reference summaries. CONTRASUM uses the best Categorizer and the best Ranker (RoBERTa-L models) trained on each fold (see Table Evaluation Measures We report macroaveraged Precision, Recall, and F1 scores for predicting the correct categories or importance order for the Content Categorizer and Importance Ranker. We also report the accuracy of the predicted labels. As both the reference and predicted summaries are extractive in nature, we use metrics from information retrieval and ranking literature for the end-to-end evaluation of CONTRASUM. We report Precision@k, Recall@k, F1@k (following We compare the Ranker against various pre-trained language models; BERT-BU We do not compare the categorizer against other baselines, instead directly report the scores in Table Extractive Summarization Baselines We compare CONTRASUM against several unsupervised baselines which use the same content categorizer but the following rankers. 1. Random baseline picks random sentences from the set of predicted sentences belonging to each category. We report average scores over 5 seeds. 2. KL-Sum Automatic Evaluation of the Categorizer and the Ranker We report the evaluation results for the Ranker in Table We report the automatic evaluation results for the end-to-end summarization of contracts in Table NDCG computed against the gold-references at different compression ratios establishing the need for domain-specific notion of importance which is not captured in the other baselines. Surprisingly, Random baseline performs better than (LSA) or is comparable to other baselines (KL-Sum) when predicted categories are used. While PACSUM achieves better scores than LexRank at low compression ratios, using centrality does not help at CR=0.15. As expected, we observe a consistent increase in NDCG with the increase in the compression ratio as the number of sentences per category increases in the gold-references. While CONTRA-SUM outperforms all the baselines, it is still far away from the upper-bound performance which uses the gold importance ranking. This calls for a more sophisticated and knowledge-driven approach to learning the importance of different sentences. As CONTRASUM is a pipeline-based system where the sentences predicted as containing each of the categories are input to the importance ranker, erroneous category predictions may affect the final summaries. Thus, we present scores (last block in Table Figure Owing to the recent advancements and power of LLMs, we prompt ChatGPT (details in §A.5) to asses its performance on this task. We find that it is not straightforward for ChatGPT to perform the task with simple prompting due to hallucinations in the generated output and token limit. Further work is needed to look into how to best use LLMs for such tasks in domains such as legal. 
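A small sketch of the ranking-based measures reported above, treating a generated summary as a ranked list of sentence identifiers scored against a gold extractive reference. The log2 position discount in DCG and the binary relevance are the standard choices and are assumptions here where the text only states NDCG = DCG / IDCG.

```python
import math

def precision_recall_f1_at_k(predicted, gold, k):
    """predicted: ranked list of sentence ids; gold: set of reference sentence ids."""
    top_k = predicted[:k]
    tp = sum(1 for s in top_k if s in gold)
    precision = tp / k if k else 0.0
    recall = tp / len(gold) if gold else 0.0                 # TP@k / (TP@k + FN@k)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def ndcg_at_n(predicted, gold, n):
    """NDCG = DCG / IDCG with binary relevance (1 if a predicted sentence is in the reference)."""
    rel = [1 if s in gold else 0 for s in predicted[:n]]
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rel))
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(sorted(rel, reverse=True)))
    return dcg / idcg if idcg > 0 else 0.0
```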
Human Evaluation of Summaries In addition to automatic evaluation, we also give contracts along with their party-specific summaries from CONTRASUM, gold-references (CR=0.15), the best baseline-LexRank (PC), and Random (PC) baseline to legal experts for human evaluation. 16 summaries, each consisting of a maximum of 10 sentences per category, for 2 contracts are provided to 2 experts. They are asked to rate the summaries for each category on a 5-point scale (1 least; 5 most) as per: (1) informativeness; (2) usefulness; (3) accuracy of categorization; (4) redundancy, and (5) accuracy of importance ranking. In addition, we ask them to rate the overall quality of a summary on a 10-point scale. The average scores are presented in Table We introduced a new task to extract party-specific summaries of important obligations, entitlements, and prohibitions in legal contracts. Obtaining absolute importance scores for contract sentences can be particularly challenging, as we noted in pilot studies, thus indicating the difficulty of this task. Instead, we collected a novel dataset of legal expert-annotated pairwise importance comparisons for 293K sentence pairs from lease agreements to guide the Importance Ranker of CON-TRASUM built for the task. Automatic and human evaluations showed that our system that models domain-specific notion of "importance" produces good-quality summaries as compared to several baselines. However, there is a large gap between the performance of CONTRASUM and the upperbound, leaving scope for future work including the generation of abstractive summaries. We note the following limitations of this work: (1) We take a non-trivial step of generating partyspecific extractive summaries of a contract to ease contract reviewing and compliance. However, a simplified abstractive summary of key points will be valuable in improving the understanding of a contract. We leave further simplification of extractive summaries to future work due to the unavailability and difficulty in collecting abstractive summaries of long contracts and issues of hallucination and factual errors associated with the existing summarization and simplification systems We are committed to ethical practices and protecting the anonymity and privacy of the annotators who have contributed. We paid annotators at an hourly rate of >12.5 USD for their annotations. Societal Impact Advances in ML contract understanding and review, including agreement summarization, can reduce the costs of and increase the availability of legal services to small businesses and individuals. We believe that legal professionals would likely benefit from having auxiliary analysis provided by ML models in the coming years. However, we recognize and acknowledge that our work carries a possibility of misuse including malicious adulteration of summaries generated by our model and adversarial use of Categorizer and Ranker to mislead users. Such kind of misuse is common to any predictive model therefore, we strongly recommend coupling any such technology with external expert validation. The purpose of this work is to provide aid to legal professionals or laypersons dealing with legal contracts for a better understanding of them, and not to replace any experts. As contracts are long documents, a party-specific summary of key obligations, entitlements, and prohibitions can help significantly reduce the time spent on reading and understanding the contracts. 
A.1 More details on Importance Dataset
We hire 3 lawyers from Upwork (1 female, 2 males) based in India. We instructed the annotators to rate the importance of sentences with the vision of important sentences being part of a summary that can be used for contract review and compliance purposes. We also mentioned to them that these annotations will be used to train a machine learning model to produce a summary of a given contract with respect to each contracting party, and that their annotations will be shared anonymously for research purposes. We observed an increase in annotation reliability with the increase in the number of annotations done by the annotators. As the task is subjective, we also observed that the perception of importance changes with years of experience.
Challenges faced during data collection. We ran the pilot studies below with different annotation task designs to obtain reliable annotations.
• Rating scale importance for each sentence: We provided sentences from a contract to the annotators and asked them to rate the importance level of each sentence with respect to a given party on a scale of 0-5, where 0 denotes 'not at all important', 1 least important, and 5 most important. We asked the annotators to rate sentences from the preamble as least important, as our focus is on a scenario where a contract has already been signed. However, we faced inter- and intra-annotator rating consistency issues with this task design.
• Rating scale importance for a pair of sentences: We provided a pair of sentences and a party to the annotator and asked them to rate each sentence's importance level. Providing a pair of sentences provides information on the relative importance. However, we observed inconsistencies in terms of the same sentence being rated with different scores (≥ ±2).
Combining annotations. For combining the annotations, we also experimented with a simple counting-based method. Party-wise statistics of the frequency of sentences in each category per contract are presented in Table. We present the dataset statistics in Table.
The formula for each of the automatic measures is given below. We compute these scores for each category, for each party of a contract.
Precision@k = TP@k / k
Recall@k = TP@k / (TP@k + FN@k)
F1@k = 2 · Precision@k · Recall@k / (Precision@k + Recall@k)
NDCG = DCG / IDCG, with DCG = Σ_{i=1..n} rel_i / log2(i + 1) and IDCG = Σ_{i=1..n} rel_true_i / log2(i + 1)
where TP@k and FN@k are the true positives and false negatives among the top-k predicted sentences, rel_i is the relevance (1 or 0) of the predicted sentences, and rel_true_i is the relevance of the ideal ordering of sentences. We take n = min(1, number of predicted sentences) for NDCG. Q is the number of parties or contracts on which the measure is averaged: the average score for each metric m is computed as m_avg = (1/Q) Σ_q m_q over the N contracts in the test set.
Since m_i is capped at 10 for predicted summaries and one predicted summary is compared against references with different compression ratios, there are two cases when a predicted summary can have more sentences (capped at 10) than reference summaries (e.g., the number of prohibitions is < 10 at CR = 0.15 for a party): (1) when the categorizer makes false positive predictions, and (2) when predictions are correct and capped at 10 but the overall number of sentences belonging to a category is smaller, resulting in fewer than 10 sentences in the references for a compression ratio. We believe that keeping m_i fixed is justified, as there can be false positives or false negatives during category prediction, resulting in more or fewer sentences belonging to a category.
Also, setting m i to be the same as reference summaries is not a realistic choice as in a real-world setting reference summaries are not available. However, we also provide results (see Table Evaluation results of content categorizer against baselines. We report the results in Table categorizer and the importance ranker. ROUGE scores for end-to-end summarization task. We report ROUGE-1/2/L for the summarization task in Table End-to-end summarization results for each party. We present the summarization results with respect to each party averaged over the 3 folds in Table
Flexible Visual Grounding
Existing visual grounding datasets are artificially made, where every query regarding an entity must be able to be grounded to a corresponding image region, i.e., answerable. However, in real-world multimedia data such as news articles and social media, many entities in the text cannot be grounded to the image, i.e., unanswerable, due to the fact that the text is unnecessarily directly describing the accompanying image. A robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding.
Starting from conventional vision-and-language tasks such as image captioning VQA, it is crucial to understand to which image region the question is referring. Because of the importance of visual grounding, many research efforts have been dedicated to improve its accuracy Previous visual grounding work assume that a query must be able to be grounded to an image region and create many datasets such as the Flickr30k entities We name the case that a query can be grounded to an image region as answerable visual grounding; otherwise, unanswerable visual grounding from here. The ignorance of unanswerable visual grounding in previous work can lead to problems for downstream tasks. For instance, in VQA, if the VQA model cannot understand the case that entities in the question cannot be grounded to the image, it cannot deal with the case that a question cannot be answered given the image either. Therefore, a robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding. In this work, we study this flexible visual grounding problem. Figure To study flexible visual grounding, we construct two types of datasets. The first one is a pseudo dataset, which is constructed by randomly selecting queries from other images and combining it with a target image in the RefCOCO+ dataset Previous visual grounding models cannot handle unanswerable visual grounding. To give a model the ability to flexibly identify whether the input query can be grounded or not, we propose a novel method for unanswerable visual grounding by adding a pseudo region corresponding to a query that cannot be grounded. The model is then trained to ground to ground-truth regions for answerable queries and pseudo regions for unanswerable queries. Experiments conducted on both the pseudo and SMD4FVG datasets indicate that our model can flexibly process both answerable and unanswerable queries with high accuracy. In addition, we study the possibility of the usage of using the pseudo dataset to improve the accuracy on the SMD4FVG dataset. The contributions of this paper are in three-folds: • We propose a flexible visual grounding task that includes unanswerable visual grounding, where the unanswerable visual grounding problem has not been studied before. • We construct a pseudo dataset based on the RefCOCO+ dataset and a social media dataset based on tweets consisting of both images and text via crowdsourcing for studying the flexible visual grounding task. • We propose a flexible visual grounding model, which can deal with both answerable and unanswerable queries and achieves high accuracy on our datasets.
Previous visual grounding studies have been conducted on different datasets. In the Flickr30k entities dataset Regarding visual grounding models, Inspired by the success of pre-training language models such as BERT Because there are no existing visual grounding datasets where unanswerable queries are contained, we present two ways to construct two types of datasets to study the flexible visual grounding problem. As the construction of a new large-scale dataset is costive and time-consuming, firstly, we constructed a pseudo dataset based on the RefCOCO+ dataset Unanswerable visual grounding exists in real-world multimedia data consisting of both text and visual information such as news, TV dramas, and social media. Among these, social media is one typical case where there are many unanswerable visual grounding data because the text and visual information posted by users are not necessarily closely related to each other. Due to this characteristic, in social media, there could be more unanswerable visual grounding data than answerable ones. This might result in an unbalanced dataset, making training and evaluation difficult. In order to construct a balanced dataset, we propose a pipeline shown in Figure To construct the SMD4FVG dataset, we first crawled image and text pairs from Twitter. We will follow the fair use policy of Twitter regarding copyright of the crawled data. In order to construct a visual grounding dataset balanced on both answerable and unanswerable queries, we further conducted image filtering from the crawled tweets. For the image filtering process, we used EfficientnNet The EfficientNet model was pre-trained on the ImageNet dataset same purpose of inheriting previous visual grounding studies, from the ImageNet classes output by EfficientNet, we only chose the classes similar to RefCOCO+ classes and removed the others. When determining the similarities between the Re-fCOCO+ classes, we calculated the Wu & Palmer similarity (1) As a result of the image classification-based filtering, the crawled 20, 941 tweets decreased to 6, 813 tweets. For the next step, we filtered more tweets using the Yolov4 object detection model. The object detection model was pre-trained with the Microsoft COCO dataset In the crawled tweets, we found that many images consisted of mostly text and website information. As visual grounding is almost impossible for text/website-dominated images, we further filtered those images. To this end, we used the optical character recognition model of CRAFT. Based on the results of the optical character recognition model, we calculated a text proportion ratio in an image. We only kept images that had a proportion ratio lower than 0.05 with respective to the entire image. As a result, 3, 425 images were left. Due to the limitations of the above image processing models, advertisement, inappropriate, and duplicate images were still left in the dataset after the above filtering process. Therefore, we further manually checked the data and discarded them. As a result, 988 tweets were finally left. Tweets contain emoji, links, and mentions, which make query extraction difficult. Therefore, we preprocessed the data and eliminated those expressions. From the pre-processed text, we extracted sentences and used the chunking model From the 8, 827 pairs of image and query obtained, we annotated image regions that can be grounded by queries and finally constructed the SMD4FVG dataset. For the annotation, we used Amazon Mechanical Turk. The compensation was 8-9 dollars per hour. 
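To illustrate two of the filtering steps in the pipeline above, the sketch below keeps an image only if its predicted ImageNet class is Wu & Palmer-similar to some RefCOCO+ class (via NLTK's WordNet interface) and discards text-dominated images whose OCR-detected text covers more than 5% of the image area. The 0.05 ratio follows the text; the class-similarity threshold here is a placeholder, not the value used for the dataset.

```python
from nltk.corpus import wordnet as wn

def class_is_similar(predicted_class, refcoco_classes, threshold=0.5):
    """Keep an image whose ImageNet class is Wu & Palmer-similar to a RefCOCO+ class.
    The 0.5 threshold is a placeholder."""
    pred = wn.synsets(predicted_class.replace(" ", "_"), pos=wn.NOUN)
    if not pred:
        return False
    for ref in refcoco_classes:
        for ref_syn in wn.synsets(ref.replace(" ", "_"), pos=wn.NOUN):
            sim = pred[0].wup_similarity(ref_syn)
            if sim is not None and sim >= threshold:
                return True
    return False

def text_dominated(ocr_boxes, image_w, image_h, max_ratio=0.05):
    """Discard images where OCR-detected text (e.g., CRAFT boxes) covers > 5% of the area.
    ocr_boxes: list of (x, y, w, h) text boxes."""
    text_area = sum(w * h for _, _, w, h in ocr_boxes)
    return text_area / (image_w * image_h) > max_ratio
```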
The annotation process consists of two steps. In case 1, the query refers to an entity, but the image does not contain that entity. For instance, in the right part of Figure The second step is the "drawing the bounding box" task. In this step, the annotation was done for data that were not annotated as unanswerable in the first step. Workers were asked to draw a bounding box for an image region corresponding to a query. The difficult part of this process was when there were multiple instances that corresponded to one query in an image. In this case, we instructed the workers to annotate multiple instances to one bounding box if the instances are not clearly separated; otherwise, we annotate them with individual bounding boxes. Besides that, queries in social media data can contain proper nouns, which are special compared to previous datasets and could be interesting to study; thus, we asked workers to indicate if an answerable query belongs to these. In total, 1, 886 answerable queries were annotated, among which 576 queries belong to proper nouns. Finally, we manually checked the results of the two steps. We checked 100 unanswerable pairs and found that 7 of them were wrongly labeled. Most of them were simple misses where the entity that the query refers to does exist in an image, which we plan to improve as our future work. In addition, we checked and corrected the bounding boxes that were miss-labeled by workers of all answerable pairs. As a result, we obtained 8, 827 annotated query and image pairs for our SMD4FVG dataset. We propose to add a pseudo region to a visual grounding model to achieve flexible visual grounding for both answerable and unanswerable queries. An overview of our proposed model is shown in Figure Our visual grounding model follows In detail, after extracting a feature vector f v ∈ R dv for a region proposal by Faster RCNN, a spatial vector f s ∈ R 5 is incorporated to it. The spatial vector is encoded to a 5-d vector from normalized top-left and bottom-right coordinates as: where (x tl , y tl ) is the top-left coordinate, (w br , y br ) is the bottom-right coordinate, w and h are the the width and the height of the region, and W and H are the width and the height of the image, respectively. The spatial vector is then projected to match the dimension of the visual feature by a learnable weight matrix W s ∈ R 5×dv and then added to f v to generate the final region feature vector v r as: The query is given in both training and inference. It is denoted as q. Next, v r and q are input to the multi-task ViLBERT model, which generates a representation h i ∈ R d i for the ith region and the query as: h i is then used to calculate a similarity score for the ith region by: where W i ∈ R d i ×1 is a learnable weight matrix. The ground-truth label score is set to 1 if the IoU between a region proposal and the ground-truth region is larger than 0.5; otherwise, it is set to 0. The similarity score vector s ji and the ground-truth label vector l ji for the ith region in the jth image are then used to minimize a BCE loss as: where N is the number of image and query pairs in a dataset, and M is the number of region proposals for an image. To make our visual grounding model deal with unanswerable queries, we propose to incorporate a pseudo region corresponding to an unanswerable query into the region proposals. 
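A hedged PyTorch sketch of the grounding head described above. The exact form of the 5-d spatial vector is not reproduced in the text, so the normalized-corners-plus-relative-area encoding below is an assumption (a commonly used choice); `encoder` stands in for the multi-task ViLBERT encoder, and BCEWithLogitsLoss is used as the usual numerically stable form of sigmoid plus binary cross-entropy.

```python
import torch
import torch.nn as nn

class RegionScorer(nn.Module):
    """Score each spatially augmented region proposal against the query."""

    def __init__(self, d_v, d_h, encoder):
        super().__init__()
        self.W_s = nn.Linear(5, d_v, bias=False)   # project the 5-d spatial vector to d_v
        self.encoder = encoder                     # joint region/query encoder (stand-in)
        self.W_score = nn.Linear(d_h, 1)           # per-region similarity score

    def spatial(self, boxes, W, H):
        # boxes: (num_regions, 4) as (x_tl, y_tl, x_br, y_br); 5th component = relative area (assumed)
        x_tl, y_tl, x_br, y_br = boxes.unbind(-1)
        area = (x_br - x_tl) * (y_br - y_tl) / (W * H)
        return torch.stack([x_tl / W, y_tl / H, x_br / W, y_br / H, area], dim=-1)

    def forward(self, f_v, boxes, W, H, query_tokens):
        v_r = f_v + self.W_s(self.spatial(boxes, W, H))   # spatial vector added to f_v
        h = self.encoder(v_r, query_tokens)               # one representation per region
        return self.W_score(h).squeeze(-1)                # similarity logits s_i

# Training: labels are 1 for proposals with IoU > 0.5 against the ground-truth region, else 0.
loss_fn = nn.BCEWithLogitsLoss()
# loss = loss_fn(model(f_v, boxes, W, H, query), labels.float())
```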
An example is shown in Figure For this query, we add a pseudo region to the regions proposed by Faster The position of the pseudo region is set to the topleft of the input image, and all the x and y coordinate values of its spatial vector are set to 0 in Eq. (2). All components of the feature vector f v ∈ R dv for the pseudo region are set to +1. Our visual grounding model calculates the similarity score between the pseudo region incorporated region vectors and the query same as Section 4.1. The model is then trained to give the highest similarity score for the pseudo region when the query cannot be grounded. During inference, the model will output the region with the highest score as the prediction. For instance, in the example of Figure In our experiments, we verify the effectiveness of the proposed model on both the RefCOCO+ pseudo and SMD4FVG datasets. Here, we first describe the statistics of each dataset and settings, followed by training details. For the pseudo dataset, based on the RefCOCO+ dataset, we generated unanswerable data and com- bined them with the original dataset with the ratio of 1:2. The upper part of Table For the pseudo dataset, we investigated the performance of our model with the following settings: • RefCOCO+: A baseline that trained our visual grounding model in Section 4 on the original RefCOCO+ dataset to evaluate answerable visual grounding only, and compared the performance with • RefCOCO+Thres: A baseline based on the RefCOCO+ setting but sets a threshold according to the similarity score (Eq. ( • Pseudo: We directly trained and evaluated our model on the pseudo dataset. • SM→Pseudo: We first trained our model on the training data of the SMD4FVG dataset and then further fine-tuned it on the pseudo dataset. We hope that the annotated SMD4FVG dataset could boost the performance on the pseudo dataset. The lower part of Table • RefCOCO+Thres: A baseline similar to the RefCOCO+Thres setting on the pseudo dataset, but the threshold was tuned on the validation split of the SMD4FVG dataset. • Pseudo: Aiming to investigate the difference between the pseudo and SMD4FVG datasets, we trained our model on the training data of the pseudo dataset and evaluated it on the SMD4FVG dataset. • SM: This is a straightforward setting that directly trained and evaluated our visual grounding model on the SMD4FVG dataset. • Pseudo→SM: We first trained our model on the training data of the pseudo dataset and then further fine-tuned it on the SMD4FVG dataset. We hope that the large scale of the pseudo dataset could boost the performance on the SMD4FVG dataset. Visual features and region proposals were extracted from the ResNeXT-152 Faster-RCNN model 6 Results The upper part of Table For the pseudo setting, our model achieves an accuracy of 69.7% and 91.2% for answerable and unanswerable queries, respectively. Our model can ground unanswerable queries with high accuracy. However, it drops 2.6% point for answerable queries compared to the RefCOCO+ setting. We think the reason for this is due to the mixture of unanswerable queries to the original RefCOCO+ dataset, leading the judgment to answerable visual grounding be more complex. SM→Pseudo only slightly boots the All accuracy due to the smallscale of the SMD4FVG dataset. Some incorrect predictions for unanswerable queries are due to the randomness of the dataset, and qualitative examples can be found in Appendix C. 
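A short sketch of the pseudo-region mechanism just described: one extra proposal whose spatial coordinates are all zero and whose visual feature components are all +1 is appended to the Faster R-CNN proposals, and at inference the query is declared unanswerable when this region receives the highest score. Shapes and names are illustrative.

```python
import torch

def add_pseudo_region(f_v, spatial):
    """Append the unanswerable pseudo region to the proposal features.

    f_v:     (num_regions, d_v) visual features from Faster R-CNN
    spatial: (num_regions, 5)   spatial vectors
    """
    pseudo_feat = torch.ones(1, f_v.size(-1))          # all feature components set to +1
    pseudo_spatial = torch.zeros(1, spatial.size(-1))  # top-left position, all coordinates 0
    return (torch.cat([f_v, pseudo_feat], dim=0),
            torch.cat([spatial, pseudo_spatial], dim=0))

def decode(scores):
    """Return the predicted region index, or 'unanswerable' if the pseudo region wins."""
    best = int(scores.argmax(dim=-1))
    pseudo_index = scores.size(-1) - 1                 # pseudo region is appended last
    return "unanswerable" if best == pseudo_index else best
```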
The lower part of Table Among the other three settings, the pseudo setting achieves the highest accuracy of 49.7% for answerable queries. We think the reason for this is that there are only a few answerable queries in the SMD4FVG dataset, while both the amount and ratio for that are higher in the pseudo dataset, making the model learn answerable grounding well. However, the accuracy for unanswerable queries is only 65.6%, which is significantly worse than the other two settings that use the SMD4FVG dataset for training. We think this is due to the different characteristics of unanswerable queries in the pseudo and SMD4FVG datasets, wherein the pseudo dataset the unanswerable queries are unrelated to the images, but in the SMD4FVG dataset they are more complex. The SM setting achieves high accuracy of 95.0% for unanswerable queries and the best accuracy of 81.7% for all queries. The reason for this can be that our model is optimized in the SMD4FVG dataset directly with the SM setting. However, the accuracy for answerable queries with the SM setting is the lowest due to the small ratio of answerable queries and complex answerable queries in the SMD4FVG dataset. The Pseudo→SM setting achieves a trade-off between the pseudo and SM settings, where there is an improvement for answerable queries compared to the SM setting and a big improvement for unanswerable queries compared to the pseudo setting. We think the reason for this is that Pseudo→SM can take the balance between the pseudo and SM settings via fine-tuning the model pre-trained on the pseudo dataset to the SMD4FVG dataset. We also observe a 1% accuracy drop of all queries from SM to Pseudo→SM. We think it is caused by the big ratio of unanswerable queries in the SMD4FVG dataset. The SM model was more biased to unanswerable queries and thus performed better in accuracy for all queries because of the big ratio of unanswerable queries. Qualitative examples can be found in Appendix C. For both the pseudo and SMD4FVG datasets, we observe better performance on unanswerable queries than answerable queries besides Ref-COCO+Thres on the pseudo dataset. We think the reason could be that it is much easier to learn that a query is unrelated to an image (i.e., unanswerable) instead of finding the exact region that a query refers to (i.e., answerable) by our models. Previous studies on visual grounding ignored the case of unanswerable queries, which is common in real-world such as social media data. In this paper, we proposed flexible visual grounding to address both answerable and unanswerable visual grounding. To this end, we constructed a pseudo dataset based on the RefCOCO+ dataset and a social media dataset based on tweets consisting of both images and text via crowdsourcing. In addition, we proposed a flexible visual grounding model, which can deal with both answerable and unanswerable queries. Experiments on our datasets indicated that our model could achieve high accuracy, especially for unanswerable queries, but there is still room for further improvement. To make our social media dataset balanced, we constrained it to the RefCOCO+ classes, which may also limit the ability of our model on realworld data. In the future, we plan to construct a dataset without such constraints. Figure Figure Figure Figure Figure Figure Figure
Mixed-Lingual Pre-training for Cross-lingual Summarization
Cross-lingual Summarization (CLS) aims at producing a summary in the target language for an article in the source language. Traditional solutions employ a twostep approach, i.e. translate→summarize or summarize→translate. Recently, end-to-end models have achieved better results, but these approaches are mostly limited by their dependence on large-scale labeled data. We propose a solution based on mixed-lingual pretraining that leverages both cross-lingual tasks such as translation and monolingual tasks like masked language models. Thus, our model can leverage the massive monolingual data to enhance its modeling of language. Moreover, the architecture has no task-specific components, which saves memory and increases optimization efficiency. We show in experiments that this pre-training scheme can effectively boost the performance of cross-lingual summarization. In Neural Cross-Lingual Summarization (NCLS) (Zhu et al., 2019b) dataset, our model achieves an improvement of 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 scores over state-of-the-art results.
Text summarization can facilitate the propagation of information by providing an abridged version for long articles and documents. Meanwhile, the globalization progress has prompted a high demand of information dissemination across language barriers. Thus, the cross-lingual summarization (CLS) task emerges to provide accurate gist of articles in a foreign language. Traditionally, most CLS methods follow the twostep pipeline approach: either translate the article into the target language and then summarize it On the other hand, the pre-training strategy has proved to be very effective for language understanding Therefore, we leverage large-scale pre-training to improve the quality of cross-lingual summarization. Built upon a transformer-based encoderdecoder architecture Furthermore, based on a shared multi-lingual vocabulary, our model has a shared encoder-decoder architecture for all pre-training and finetuning tasks, whereas NCLS In the experiments, our model outperforms various baseline systems on the benchmark dataset NCLS
Pre-training language models Early literatures on cross-lingual summarization focus on the two-step approach involving machine translation and summarization (2018) presents a solution to zero-shot cross-lingual headline generation by using machine translation and summarization datasets. We propose a set of multi-task pre-training objectives on both monolingual and cross-lingual corpus. For monolingual corpus, we use the masked language model (MLM) from To leverage cross-lingual parallel corpus, we introduce the cross-lingual masked language model (CMLM). CMLM is an extension of MLM on the parallel corpus. The input is the concatenation of a sentence in language A and its translation in language B. We then randomly select one sentence and mask some of its tokens by sentinels. The target is to predict the masked tokens in the same way as MLM. Different from MLM, the masked tokens in CMLM are predicted not only from the context within the same language but also from their translations in another language, which encourages the model to learn language-invariant representations. Note that CMLM is similar to the Translation Language Model (TLM) loss proposed in Lample and Conneau (2019). The key differences are: 1) TLM randomly masks tokens in sentences from both languages, while CMLM only masks tokens from one language; 2) TLM is applied on encoderonly networks while we employ CMLM on the encoder-decoder network. In addition to CMLM, we also include standard machine translation (MT) objective, in which the input and output are the unchanged source and target sentences, respectively. The examples of inputs and targets used by our pre-training objectives are shown in Table While NCLS We empirically find that our model does not suffer from the phenomenon of forgetting target language controllability as in We conduct our experiment on NCLS dataset For pre-training, we obtain monolingual data for English and Chinese from the corresponding Wikipedia dump. There are 83 million sentences for English monolingual corpus and 20 million sentences for Chinese corpus. For parallel data between English and Chinese, we use the parallel corpus from Our transformer model has 6 layers and 8 heads in attention. The input and output dimensions d model for all transformer blocks are 512 and the inner dimension d f f is 2048. We use a dropout probability of 0.1 on all layers. We build a shared SentencePiece We first include a set of pipeline methods from Finally, we include the result of ATS from the concurrent work of Table Our model outperforms all baseline models in all metrics except for ROUGE-L in English-to-Chinese. For instance, our model achieves 2.82 higher ROUGE-1 score in Chinese to English summarization than the previously best result and 1.15 higher ROUGE-1 score in English to Chinese summarization, which shows the effectiveness of utilizing multilingual and multi-task data to improve cross-lingual summarization. Table As shown, the pre-training can improve ROUGE-1, ROUGE-2, and ROUGE-L by 2.38, 1.74, and 1.13 points respectively on Chinese-to-English summarization. Moreover, all pre-training objectives have various degrees of contribution to the results, and the monolingual unsupervised objectives (MLM and DAE) are relatively the most important. This verifies the effectiveness of leveraging unsupervised data in the pre-training. Low-resource scenario. We sample subsets of size 1K and 10K from the training data of crosslingual summarization and finetune our pre-trained model on those subsets. 
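A minimal sketch of how a CMLM training example can be built from a parallel sentence pair, following the description above: the two sides are concatenated, tokens from only one randomly chosen side are replaced by sentinels, and the target lists the sentinels with the tokens they hide. The sentinel naming, the separator token, and the 15% masking rate are assumptions for illustration, not the paper's exact configuration.

```python
import random

def cmlm_example(src_tokens, tgt_tokens, mask_prob=0.15):
    """Build one cross-lingual masked LM (CMLM) example for a seq2seq model."""
    mask_src = random.random() < 0.5                 # mask tokens of only one language
    side = src_tokens if mask_src else tgt_tokens
    masked, target, sid = [], [], 0
    for tok in side:
        if random.random() < mask_prob:
            sentinel = f"<extra_id_{sid}>"           # sentinel format is assumed
            masked.append(sentinel)
            target.extend([sentinel, tok])           # decoder predicts the masked tokens
            sid += 1
        else:
            masked.append(tok)
    sep = ["</s>"]                                   # separator token is assumed
    encoder_input = (masked + sep + tgt_tokens) if mask_src else (src_tokens + sep + masked)
    return encoder_input, target

# Example with an English-Chinese pair:
enc, dec = cmlm_example("Thank you very much .".split(), list("非常感谢。"))
```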
Figure shows the results in this low-resource scenario.

We present a mixed-lingual pre-training model for cross-lingual summarization. We optimize a shared encoder-decoder architecture for multi-lingual and multi-task objectives. Experiments on a benchmark dataset show that our model outperforms pipeline-based and other end-to-end baselines. Through an ablation study, we show that all pre-training objectives contribute to the model's performance.
Equipping Language Models with Tool Use Capability for Tabular Data Analysis in Finance
Large language models (LLMs) have exhibited an array of reasoning capabilities but face challenges like error propagation and hallucination, particularly in specialised areas like finance, where data is heterogeneous, and precision is paramount. We explore the potential of language model augmentation with external tools to mitigate these limitations and offload certain reasoning steps to external tools that are more suited for the task, instead of solely depending on the LLM's inherent abilities. More concretely, using financial domain questionanswering datasets, we apply supervised finetuning on a LLAMA-2 13B CHAT model to act both as a task router and task solver. The task router dynamically directs a question to either be answered internally by the LLM or externally via the right tool from the tool set. Our tool-equipped SFT model, RAVEN, demonstrates an improvement of 35.2% and 5.06% over the base model and SFT-only baselines, respectively, and is highly competitive with strong GPT-3.5 results. To the best of our knowledge, our work is the first that investigates tool augmentation of language models for the finance domain. 1
Augmenting Large Language Models (LLMs) with tools has emerged as a promising approach to further complement LLMs' capabilities with specialised mechanisms, leading to improved accuracy and reliability. This paradigm holds particular appeal in fields demanding precision, such as finance. A satisfying review of existing works on tool augmentation of LLMs is beyond the scope of this work; however, this space can be divided into two primary directions: (1) approaches that require an LLM at the center and use few-shot in-context learning to either provide tool and API documentation, or demonstrations that involve tool use, and (2) approaches that fine-tune the language model itself to use tools. In this work, our primary focus lies in demonstrating the potential of tool augmentation within the finance domain. Acknowledging the utmost significance of privacy concerns within the financial sector, we have chosen to adopt a fully offline approach, equipping a language model with diverse tool utilisation mechanisms. More concretely, we employ Parameter Efficient Fine-Tuning (PEFT) to fine-tune a LLAMA-2 13B CHAT model to act both as a task router and a task solver.
[Figure example: the question "How much was the included change in fair value of the company's servicing asset included in its servicing fees?" posed over a note from GreenSky, Inc.'s consolidated financial statements (United States Dollars in thousands, except per share amounts), with the accompanying table serialised as JSON ("header" / "rows").]
Our model, RAVEN, achieves significant improvements in reasoning over structured data. For example, compared to the base model we demonstrate a lift in exact match accuracy of 63.8% (21.68% → 85.52%) on the WIKI-SQL dataset.
We use the LLAMA 2 13B CHAT We use a mixture of four financial and generic structured and unstructured question-answering datasets. We provide a brief summary in below. TAT-QA. Consists of questions generated by financial experts associated with hybrid contexts drawn from real-world financial reports Financial PhraseBank. Consists of phrases derived from English news on listed companies in OMX Helsinki Wiki-SQL. Consists of manually annotated crowd sourced examples of natural language questions and SQL queries over tables found on Wikipedia OTT-QA. Similar to TAT-QA, this dataset consists of questions over tabular data and unstructured text across diverse domains Data splits. Among the four datasets, FPB 2 and OTT-QA RAVEN is equipped with two external offline tools: a calculator and a SQL engine. The Calculator is instantiated in a python interpreter and is used to evaluate well-formed arithmetic expressions. The API expects one input representing the arithmetic expression and returns the evaluated result. The Lightweight SQL engine is an API capable of executing SQL scripts on relational data. The API expects two inputs, (1) a string representation of the structured data and (2) a SQL script. The API's lightweight database engine converts structured data from its textual form to the engine's relational representation and converts data types where applicable. The SQL script is executed on this representation and the API returns the result. Inspired by To ensure training diversity, our model is trained on a combination of all available training data. Based on the data, we craft different templates depending on which tool the model should choose or if the model should directly answer the question on its own (i.e., to train the Task Solver in Figure During inference, we follow a two-step process with RAVEN. First, we employ a specialised template choice prompt to determine the most suitable prompt template (from "arithmetic," "classification," "script," or "information extraction") based on the input. Next, we wrap the instruction, including the input and relevant data, in the inferred prompt template and send it to RAVEN for generating the subsequent output. Depending on the selected template, the Task Solver either activates a tool to fulfil the request or directly produces the response. We discuss the inference behaviour when each of these templates are used. For Script the model is expected to produce a well-structured SQL script. In this scenario, the structured data table provided in the prompt is temporarily loaded in memory using a lightweight database engine, and the script execution on the table produces the output. For Arithmetic the model is expected to predict a well formed arithmetic expression. This expression is evaluated by a calculator and the resulting value passed as output. The Information Extraction template instructs the model that there is information included in structured form that needs to be considered before producing the answer. In this case no tool is used and the model is expected to infer the correct output based solely on the information in the prompt. The Classification template is used when the prediction of the model should be taken as-is. We compare with the base LLAMA 2 13B CHAT with and without SFT The results are summarised in Table We see a similar pattern on the TAT-QA benchmark with the tool augmented model achieving a 5-fold improvement on the base model. 
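A sketch of the two offline tools and the dispatch step described above: a calculator that safely evaluates a well-formed arithmetic expression, and a lightweight SQL engine that loads the JSON table representation into an in-memory SQLite database and executes the generated script. Table and column naming, and the dispatch function itself, are illustrative assumptions rather than the paper's exact schema.

```python
import ast
import json
import operator
import sqlite3

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv, ast.USub: operator.neg}

def calculator(expression: str):
    """Evaluate a well-formed arithmetic expression such as '0.74-2.06'."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def sql_engine(table_json: str, script: str):
    """Load a {"header": [...], "rows": [...]} table into in-memory SQLite and run the script.
    Assumes distinct header names; empty headers get placeholder column names."""
    table = json.loads(table_json)
    cols = [f'"{c}"' if c else f'"col{i}"' for i, c in enumerate(table["header"])]
    conn = sqlite3.connect(":memory:")
    conn.execute(f'CREATE TABLE data ({", ".join(c + " TEXT" for c in cols)})')
    conn.executemany(f'INSERT INTO data VALUES ({", ".join("?" for _ in cols)})',
                     table["rows"])
    return conn.execute(script).fetchall()

def answer(template: str, model_output: str, table_json: str = None):
    """Route the task solver's output: tools for 'arithmetic' and 'script', otherwise as-is."""
    if template == "arithmetic":
        return calculator(model_output)
    if template == "script":
        return sql_engine(table_json, model_output)
    return model_output          # 'classification' / 'information extraction'
```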
Approximately 46% of the observations of the TAT-QA dataset are annotated with an intermediate arithmetic derivation that RAVEN evaluates using a calculator at inference time. We perform a comparative analysis to explore whether our model performs better on this portion of the data in the analysis section ( §3.2). In OTT-QA, the majority of questions require multi-hop inference involving both tabular data and unstructured text, with the information needed to answer the questions dispersed across these two input types. This dataset does not have annotated intermediate steps to get to the answer and therefore all models are expected to infer the answer without relying on tools. Despite SFT achieving an increase in accuracy compared to the base model, the relatively low score underscores the importance of intermediate reasoning steps and tools We observed the BACKOFF mechanism to bring slight improvement on TAT-QA (51.35% → 52.27%) and WIKI-SQL (84.25% → 85.52%). Is it better to have a separate model for each task? We developed a model specifically using the TAT-QA dataset, achieving an evaluation score of 54.70%. This dedicated model outperforms RAVEN by 2.4%. We contend that this modest per-formance gain does not warrant the added complexity of maintaining separate models and switching between them during inference. Why tool augmentation is necessary? Approximately half of the questions within the TAT-QA dataset are annotated with an arithmetic equation. The presence of the equation implies that the language model needs to perform multiple actions to output the correct answer. This process involves the correct extraction of, at a minimum, two numerical values from the context, followed by the execution of an arithmetic operation, such as addition or division. This particular scenario is ideal to understand the effect of SFT and tool augmentation by comparing the performance of different models on the two categories of data from the same dataset. As shown in Figure The utility of augmenting language models with external tools is substantiated further through a comparative analysis of experimental outcomes on two similar datasets. Addressing questions on WIKI-SQL and OTT-QA requires multi-hop reasoning across diverse forms of data, spanning both structured and unstructured formats. The primary difference lies in the annotation method: the WIKI-SQL dataset is annotated with a data extraction script which, when executed on the structured data, yields the answer. In contrast, the OTT-QA dataset lacks this intermediate derivation step. By delegating the script execution to an external tool, RAVEN achieves an exact match accuracy of 85.52% on WIKI-SQL and 16.03% on OTT-QA, underscoring the effectiveness of fit-for-purpose external tools in this scenario. What is the impact of question complexity? On the TAT-QA dataset we can use the number of arithmetic operators in the gold arithmetic equation as a proxy for question complexity. One arithmetic operator implies the extraction of two numerical values from the context, two operators, three numerical values, and so on. As shown in Figure In this paper we have demonstrated the feasibility of equipping a LLAMA 2 13B CHAT model with tool use capabilities via fine-tuning a mere 0.2% of its parameters on a relatively small and diverse dataset. The augmentation with tools remarkably elevated the performance of the base model by an average of 35.2% across 4 datasets, surpassing even a significantly larger GPT-3.5 model by 9.2%. 
Additionally, through a comparative analysis of question-answering datasets we demonstrate the effectiveness of augmenting language models with external tools, showing significant improvements in accuracy when addressing multi-hop questions with tools.

Infrastructure Bottleneck. Our experiments were constrained by fitting our model on available commodity hardware. We hypothesise that it would be possible to obtain better performance using the larger LLAMA 2 70 billion-parameter model and a longer context length. Experiments by

Language model evaluation. Free-form natural language generation (NLG) poses significant evaluation challenges that remain under-studied to this day. Conversely, using exact-match criteria might unjustly penalise NLG models, given that identical numerical values can be expressed in varying forms, such as "$4 million" and "$4,000,000", or "0.24" and "24%". In some cases, numerical values can be integrated within a passage of text, rendering the evaluation of such content very challenging. In our evaluation we have normalised different formatting (such as converting values to percentages where appropriate); however, a universal normalising algorithm in this space is outside the scope of our research.

GPT-3.5 evaluation. Evaluating our benchmark with GPT-3.5 poses significant challenges, especially when using ZERO-SHOT (COT)

Our work is built on top of existing pre-trained language models. Our goal was not to alleviate the well-documented issues (e.g., privacy, undesired biases, etc.) that such models embody. For this reason, we share the potential risks and concerns posed by these models. Additionally, our SFT was conducted on publicly available research benchmarks, and as such the additional SFT step used in RAVEN is unlikely to introduce any new area of risk.

Araci (2019) tackles financial sentiment analysis by further pre-training BERT

Training details. We use the pre-trained weights of LLAMA 2 13B CHAT

Training hardware. We train the models on commodity hardware equipped with a 13th Gen Intel(R) Core(TM) i7-13700KF CPU at 3.40 GHz, 64 GB of installed RAM and an NVIDIA GeForce RTX 4090 GPU with 24 GB of onboard RAM. The final model consumed 100 GPU hours during training and 10 GPU hours for evaluation.

Carbon footprint. Given that we train two models with an average consumption of 400 Wh, we estimate the total power consumption to be 88 kWh, with carbon dioxide equivalent (CO2e) emissions of 0.081 tonnes.

We compare our results with GPT-3.5 using few-shot in-context learning. We use the following system prompt to steer the model into producing a short answer: "You are a data expert that can reason over structured and unstructured data."

Example 1 - The response is an equation

(5) Earnings Per Share Basic earnings per share is computed by dividing Net earnings attributable to Black Knight by the weighted-average number of shares of common stock outstanding during the period. For the periods presented, potentially dilutive securities include unvested restricted stock awards and the shares of BKFS Class B common stock prior to the Distribution. For the year ended December 31, 2017, the numerator in the diluted net earnings per share calculation is adjusted to reflect our income tax expense at an expected effective tax rate assuming the conversion of the shares of BKFS Class B common stock into shares of BKFS Class A common stock on a one-for-one basis prior to the Distribution.
The effective tax rate for the year ended December 31, 2017 was (16.7)%, including the effect of the benefit related to the revaluation of our net deferred income tax liability and certain other discrete items recorded during 2017. For the year ended December 31, 2017, the denominator includes approximately 63.1 million shares of BKFS Class B common stock outstanding prior to the Distribution. The denominator also includes the dilutive effect of approximately 0.9 million, 0.6 million and 0.6 million shares of unvested restricted shares of common stock for the years ended December 31, 2019, 2018 and 2017, respectively. The shares of BKFS Class B common stock did not share in the earnings or losses of Black Knight and were, therefore, not participating securities. Accordingly, basic and diluted net earnings per share of BKFS Class B common stock have not been presented. The computation of basic and diluted earnings per share is as follows (in millions, except per share amounts): ### Data: {"header": ["", "", "Year ended December 31,", ""], "rows": [["", "2019", "2018", "2017"], ["Basic:", "", "", ""], ["Net earnings attributable to Black Knight", "$108.8", "$168.5", "$182.3"], ["Shares used for basic net earnings per share:", "", "", ""], ["Weighted average shares of common stock outstanding", "147.7", "147.6", "88.7"], ["Basic net earnings per share", "$0.74", "$1.14", "$2.06"], ["Diluted:", "", "", ""], ["Earnings before income taxes and equity in losses of unconsolidated affiliates", "", "", "$192.4"], ["Income tax benefit excluding the effect of noncontrolling interests", "", "", "(32.2)"], ["Net earnings", "", "", "$224.6"], ["Net earnings attributable to Black Knight", "$108.8", "$168.5", ""], ["Shares used for diluted net earnings per share:", "", "", ""], ["Weighted average shares of common stock outstanding", "147.7", "147.6", "88.7"], ["Dilutive effect of unvested restricted shares of common", "", "", ""], ["stock", "0.9", "0.6", "0.6"], ["Weighted average shares of BKFS Class B common stock outstanding", "", "", "63.1"], ["Weighted average shares of common stock, diluted", "148.6", "148.2", "152.4"], ["Diluted net earnings per share", "$0.73", "$1.14", "$1.47"]]} ### Equation: 0.74-2.06 Example 2 -The response is determined from the text or table Here is a instruction detailing a task, accompanied by input and data providing additional context. Provide a suitable reply that effectively fulfills the inquiry. What was the Additions based on tax positions related to current year in 2019 and 2018 respectively? ### Input: A reconciliation of the beginning and ending amount of unrecognized tax benefits is as follows: Interest and penalty charges, if any, related to uncertain tax positions are classified as income tax expense in the accompanying consolidated statements of operations. As of March 31, 2019 and 2018, the Company had immaterial accrued interest or penalties related to uncertain tax positions. The Company is subject to taxation in the United Kingdom and several foreign jurisdictions. As of March 31, 2019, the Company is no longer subject to examination by taxing authorities in the United Kingdom for years prior to March 31, 2017. The significant foreign jurisdictions in which the Company operates are no longer subject to examination by taxing authorities for years prior to March 31, 2016. In addition, net operating loss carryforwards in certain jurisdictions may be subject to adjustments by taxing authorities in future years when they are utilized. 
The Company had approximately $24.9 million of unremitted foreign earnings as of March 31, 2019. Income taxes have been provided on approximately $10.0 million of the unremitted foreign earnings. Income taxes have not been provided on approximately $14.9 million of unremitted foreign earnings because they are considered to be indefinitely reinvested. The tax payable on the earnings that are indefinitely reinvested would be immaterial. ### Data: {"header": ["", "Year ended March 31,", ""], "rows": [["", "2019", "2018"], ["Beginning balance", "$6,164", "$4,931"], ["Additions based on tax positions related to current year", "164", "142"], ["Additions for tax positions of prior years", "231", "1,444"], ["Reductions due to change in foreign exchange rate ", "(301)", "(353)"], ["Expiration of statutes of limitation", "(165)", ""], ["Reductions due to settlements with tax authorities", "(77)", ""], ["Ending balance", "$6,016", "$6,164"]]} ### Response: 164, 142 Example 3 -The response is an equation Below is an instruction that describes a task, coupled with input and data providing additional context. Formulate an arithmetic equation to generate the answer. ### Instruction: What is the average value per share that Robert Andersen acquired on vesting? ### Input: Option Exercises and Stock Vested The table below sets forth information concerning the number of shares acquired on exercise of option awards and vesting of stock awards in 2019 and the value realized upon vesting by such officers. (1) Amounts realized from the vesting of stock awards are calculated by multiplying the number of shares that vested by the fair market value of a share of our common stock on the vesting date. ### Data: {"header": ["", "Option Awards", "", "Stock Awards", ""], "rows": [["Name", "Number of Shares Acquired on Exercise (#)", "Value Realized on Exercise ($)", "Number of Shares Acquired on Vesting (#)", "Value Realized on Vesting ($)"], ["Jon Kirchner", "", "", "153,090", "3,428,285"], ["Robert Andersen", "", "", "24,500", "578,806"], ["Paul Davis", "", "", "20,500", "482,680"], ["Murali Dharan", "", "", "15,000", "330,120"], ["Geir Skaaden", "", "", "21,100", "500,804"]]} ### Equation: 578,806/24,500 Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Determine the sentiment of the following. The plant will be fired with a combination of spruce bark, chipped logging residues or milled peat.
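The Limitations discussion above notes that identical numerical answers can appear in different surface forms (e.g., "$4 million" vs. "$4,000,000", or "0.24" vs. "24%"), which complicates exact-match evaluation. The snippet below is a minimal, illustrative normalisation step of the kind one might apply before exact match; it is an assumption for illustration, not the paper's evaluation code.

```python
# Hypothetical numeric-answer normalisation before exact-match comparison.
import re

def normalise_numeric(answer: str):
    """Map strings like '$4 million', '$4,000,000', '24%' or '0.24' to a canonical float."""
    text = answer.lower().replace(",", "").strip()
    match = re.search(r"-?\d+\.?\d*", text)
    if not match:
        return text                        # non-numeric answers compared as lowercased text
    value = float(match.group())
    if "million" in text:
        value *= 1_000_000
    if "%" in text or "percent" in text:
        value /= 100                       # treat percentages as fractions
    return round(value, 6)

assert normalise_numeric("$4 million") == normalise_numeric("$4,000,000")
assert normalise_numeric("24%") == normalise_numeric("0.24")
```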
Accelerating Neural Transformer via an Average Attention Network
With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.
The past few years have witnessed the rapid development of neural machine translation (NMT), which translates a source sentence into the target language with an encoder-attention-decoder framework.

Most interestingly, the neural Transformer is capable of being fully parallelized at the training phase and of modeling intra-/inter-dependencies of source and target sentences within a short path. The parallelization property enables training NMT very quickly, while the dependency modeling property endows the Transformer with a strong ability to induce sentence semantics as well as translation correspondences. However, the decoding of the Transformer cannot enjoy the speed advantage of parallelization due to the auto-regressive generation schema in the decoder. The self-attention network in the decoder slows it down even further. We explain this using Figure.

In this paper, we propose an average attention network (AAN) to handle this challenge. We show the architecture of AAN in Figure. We use AAN to replace the self-attention part of the neural Transformer's decoder. Considering the characteristic of the cumulative-average operation, we develop a masking method to enable parallel computation just like the original self-attention network during training. In this way, the whole AAN model can be trained fully in parallel so that the training efficiency is ensured. As for decoding, we can substantially accelerate it by feeding only the previous hidden state to the Transformer decoder, just as an RNN does. This is achieved with a dynamic programming method.

In spite of its simplicity, our model is capable of modeling complex dependencies. This is because AAN regards each previous word as an equal contributor to the current word representation. Therefore, no matter how long the input is, our model can always build up connection signals with previous inputs, which we argue is crucial for inducing long-range dependencies for machine translation. We examine our model on WMT17 translation tasks. On 6 different language pairs, our model achieves a speed-up of over 4 times with almost no loss in both translation quality and training speed. In-depth analyses further demonstrate the convergence behavior of the proposed AAN and its advantages in translating long sentences.
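As a rough illustration of the masking idea mentioned above, the NumPy sketch below computes all cumulative averages of an input sequence in one matrix product with a lower-triangular, row-normalised mask (which is how the averaging can be parallelised during training), and checks the result against the sequential definition. This is an assumption-laden toy, not the paper's code.

```python
# Parallel cumulative averaging with a masked matrix product (toy sketch).
import numpy as np

def average_mask(m: int) -> np.ndarray:
    """Row j (1-indexed) contains 1/j for positions <= j and 0 elsewhere."""
    mask = np.tril(np.ones((m, m)))
    return mask / mask.sum(axis=1, keepdims=True)

m, d = 5, 4
y = np.random.randn(m, d)               # input embeddings y_1 ... y_m
g_parallel = average_mask(m) @ y        # all cumulative averages at once

# Sanity check against the sequential definition g_j = (1/j) * sum_{k<=j} y_k
g_sequential = np.stack([y[: j + 1].mean(axis=0) for j in range(m)])
assert np.allclose(g_parallel, g_sequential)
```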
GRU The attention mechanism was originally proposed to induce translation-relevant source information for predicting the next target word in NMT. It contributes a lot to making NMT outperform SMT. Recently, a variety of efforts have been made to further improve its accuracy and capability. With respect to speeding up the decoding of the neural Transformer,

Given an input layer y = {y_1, y_2, . . . , y_m}, AAN first employs a cumulative-average operation to generate a context-sensitive representation for each input embedding as follows (Figure):

g_j = FFN( (1/j) · Σ_{k=1}^{j} y_k ),    (1)

where FFN(·) denotes the position-wise feed-forward network proposed by

We treat g_j as a contextual representation for the j-th input, and apply a feed-forward gating layer upon it as well as y_j to enrich the non-linear expressiveness of AAN, where [·;·] denotes the concatenation operation and ⊙ indicates element-wise multiplication. i_j and f_j are the input and forget gate, respectively. Via this gating layer, AAN can control how much past information is preserved from the previous context g_j and how much new information is captured from the current input y_j. This helps our model detect correlations inside the input embeddings. Following the architecture design in the neural Transformer

We refer to the whole procedure formulated in Eq. (1∼3) as the original AAN(·) in the following sections. A computation bottleneck of the original AAN described above is that the cumulative-average operation in Eq. (1)

In this section, we provide a thorough analysis of AAN in comparison to the original self-attention model used by

Q, K, V = f(Y),   Attention(Q, K, V) = softmax( Q K^T / √d ) V,

where Y ∈ R^{n×d} is the input matrix, f(·) is a mapping function and Q, K, V ∈ R^{n×d} are the corresponding queries, keys and values. Following

Our AAN has a maximum path length of O(1), because it can directly capture dependencies between any two input embeddings. For the original AAN, the nature of its sequential computation enlarges its minimum number of sequential operations to O(n). However, due to its lack of position-wise masked projection, it only consumes a computational complexity of O(n·d^2). By contrast, both self-attention and masked AAN have a computational complexity of O(n^2·d + n·d^2), and require only O(1) sequential operations. Theoretically, our masked AAN performs very similarly to the self-attention according to Table.

Differing noticeably from the self-attention in the Transformer, our AAN can be accelerated in the decoding phase via dynamic programming thanks to the simple average calculation. Particularly, we can decompose the cumulative average in Eq. (1) into the following recurrence:

g̃_j = ((j−1)/j) · g̃_{j−1} + (1/j) · y_j,    (5)

where g̃_0 = 0. In doing so, our model can compute the j-th input representation based on only one previous state g̃_{j−1}, instead of relying on all previous states as the self-attention does. In this way, our model can be substantially accelerated during the decoding phase.

The neural Transformer models translation through an encoder-decoder framework, with each layer involving an attention network followed by a feed-forward network, where the superscript l indicates the layer depth and MHAtt denotes the multi-head attention mechanism proposed by

Based on the encoded source representation h^N, the Transformer relies on its decoder to generate the corresponding target translation y = {y_1, y_2, . . . , y_m}. Similar to the encoder, the decoder also consists of a stack of N = 6 identical layers.
For each layer in our architecture, the first sub-layer is our proposed average attention network, aiming at capturing target-side dependencies over previously predicted words. Carrying these dependencies, the decoder stacks another two sub-layers to seek translation-relevant source semantics that bridge the gap between the source and target language. We use the subscript c to denote the source-informed target representation. On top of this decoder, translation is performed by applying a linear transformation and a softmax activation to compute the probability of the next token based on s^N.

To memorize position information, the Transformer augments its input layers h^0 = x and s^0 = y with frequency-based positional encodings. The whole model is a large, single neural network, and can be trained on a large-scale bilingual corpus with a maximum likelihood objective. We refer readers to

We examine various aspects of our AAN on this translation task. The training data consist of 4.5M sentence pairs, involving about 116M English words and 110M German words. We used newstest2013 as the development set for model selection, and newstest2014 as the test set. We evaluated translation quality via the case-sensitive BLEU metric. We applied the byte pair encoding algorithm. Table

We also show an ablation study in terms of the FFN(·) network in Eq. (1).

Different neural architectures might require different numbers of training steps to converge. In this section, we test whether our AAN reveals different characteristics with respect to convergence. We show the loss curves of both the Transformer and our model in Figure. Surprisingly, both models show a highly similar tendency, and successfully converge in the end. To train a high-quality translation system, our model consumes almost the same number of training steps as the Transformer. This strongly suggests

In Section 3, we demonstrate in theory that our AAN is as efficient as the self-attention during training, but can be substantially accelerated during decoding. In this section, we provide quantitative evidence to examine this point. We show the training and decoding speed of both the Transformer and our model in Table.

(Figure: Translation statistics on the WMT14 English-German test set (newstest2014) with respect to the length of source sentences. The top panel shows the tokenized BLEU score, and the bottom one shows the average length of translations, both vis-à-vis sentence length.)

When it comes to the decoding procedure, the time our model requires to translate one sentence is only a quarter of that of the Transformer, with beam sizes ranging from 4 to 20. Another noticeable feature is that as the beam size increases, the ratio of required decoding time between the Transformer and our model consistently increases. This demonstrates empirically that our model, enhanced with the dynamic decoding acceleration algorithm (Section 3.3), can significantly improve the decoding speed of the Transformer.

A serious common challenge for NMT is to translate long source sentences, as handling long-distance dependencies and under-translation issues becomes more difficult for longer sentences. Our proposed AAN uses simple cumulative-average operations to deal with long-range dependencies. We want to examine the effectiveness of these operations on long sentence translation.
For this, we provide the translation results with respect to sentence length in Figure. We find that both the Transformer and our model generate very similar translations in terms of BLEU score and translation length, and obtain rather promising performance on long source sentences. More specifically, our model yields relatively shorter translation lengths on the longest source sentences but significantly better translation quality. This suggests that in spite of the simplicity of the cumulative-average operations, our AAN can indeed capture the long-range dependencies desired for translating long source sentences.

Generally, the decoder takes more time to translate longer sentences. When it comes to the Transformer, this time issue of translating long sentences becomes notably severe, as all previously predicted words must be included for estimating both the self-attention weights and the word prediction. We show the average time required for translating a source sentence with respect to its sentence length in Figure.

We further demonstrate the effectiveness of our model on six WMT17 translation tasks in both directions (12 translation directions in total). These tasks contain the following language pairs:

• En-De: The English-German language pair. This training corpus consists of 5.85M sentence pairs, with 141M English words and 135M German words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and newstest2017 as the test set.

• En-Fi: The English-Finnish language pair. This training corpus consists of 2.63M sentence pairs, with 63M English words and 45M Finnish words. We used the concatenation of newstest2015, newsdev2015, newstest2016 and newstestB2016 as the development set, and newstest2017 as the test set.

• En-Lv: The English-Latvian language pair. This training corpus consists of 4.46M sentence pairs, with 63M English words and 52M Latvian words. We used newsdev2017 as the development set, and newstest2017 as the test set.

• En-Ru: The English-Russian language pair. This training corpus consists of 25M sentence pairs, with 601M English words and 567M Russian words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and newstest2017 as the test set.

• En-Tr: The English-Turkish language pair. This training corpus consists of 0.21M sentence pairs, with 5.2M English words and 4.6M Turkish words. We used the concatenation of newsdev2016 and newstest2016 as the development set, and newstest2017 as the test set.

• En-Cs: The English-Czech language pair. This training corpus consists of 52M sentence pairs, with 674M English words and 571M Czech words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and newstest2017 as the test set.

Interestingly, these translation tasks involve training corpora of different scales (ranging from 0.21M to 52M sentence pairs). This helps us thoroughly examine the ability of our model on different sizes of training data. All these preprocessed datasets are publicly available, and can be downloaded from the WMT17 official website. Table

Although different languages have different linguistic and syntactic structures, our model consistently yields rather competitive results against the Transformer on all language pairs in both directions. Particularly, on the De→En translation task, our model achieves a slight improvement of 0.10/0.07 case-sensitive/case-insensitive BLEU points over the Transformer.
The largest performance gap between our model and the Transformer occurs on the En→Tr translation task, where our model falls below the Transformer by 0.52/0.53 case-sensitive/case-insensitive BLEU points. We conjecture that this difference may be due to the small training corpus of the En-Tr task. In all, these results suggest that our AAN is able to perform comparably to the Transformer on different language pairs with different scales of training data. We also show the decoding speed of both the Transformer and our model in Table.

In this paper, we have described the average attention network that considerably alleviates the decoding bottleneck of the neural Transformer. Our model employs a cumulative-average operation to capture important contextual clues from previous target words, and a feed-forward gating layer to enrich the expressiveness of the learned hidden representations. The model is further enhanced with a masking trick and a dynamic programming method to accelerate the Transformer's decoder. Extensive experiments on one WMT14 and six WMT17 language pairs demonstrate that the proposed average attention network is able to speed up the Transformer's decoder by over 4 times. In the future, we plan to apply our model to other sequence-to-sequence learning tasks. We will also attempt to improve our model to enhance its modeling ability so as to consistently outperform the original neural Transformer.
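To complement the masked-average sketch given after the introduction, the toy NumPy snippet below checks that the dynamic-programming recurrence reconstructed in Eq. (5) reproduces the full cumulative average while keeping only a single previous state, which is the property that makes AAN decoding cheap. Variable names are illustrative.

```python
# Incremental (decoding-time) cumulative average vs. the full average (toy check).
import numpy as np

d = 4
np.random.seed(0)
inputs = [np.random.randn(d) for _ in range(6)]    # target-side embeddings y_1 ... y_6

g = np.zeros(d)                                    # g~_0 = 0
for j, y_j in enumerate(inputs, start=1):
    g = (j - 1) / j * g + y_j / j                  # g~_j = ((j-1)/j)*g~_{j-1} + (1/j)*y_j
    full_average = np.mean(inputs[:j], axis=0)     # what the training-time mask computes
    assert np.allclose(g, full_average)
```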
Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding
Generating long and informative review text is a challenging natural language generation task. Previous work focuses on word-level generation, neglecting the importance of topical and syntactic characteristics of natural languages. In this paper, we propose a novel review generation model by characterizing an elaborately designed aspect-aware coarse-to-fine generation process. First, we model the aspect transitions to capture the overall content flow. Then, to generate a sentence, an aspect-aware sketch is predicted using an aspect-aware decoder. Finally, another decoder fills in the semantic slots by generating the corresponding words. Our approach is able to jointly utilize aspect semantics, syntactic sketch, and context information. Extensive experimental results have demonstrated the effectiveness of the proposed model.
In the past decades, online review services (e.g.,

In the literature, various methods have been developed for review generation

As found in the literature of linguistics

Based on such a generation process, in this paper, we propose a novel aspect-aware coarse-to-fine decoder for generating product reviews. We first utilize unsupervised topic models to extract aspects and tag review sentences with aspect labels. We develop an attention-based RNN decoder to generate the aspect sequence conditioned on the context, including users, items and ratings. By modeling the transitions of aspect semantics among sentences, we are able to capture the content flow of the whole review. Then, we generate a semantic template called a sketch using an aspect-aware decoder, which represents the sentence skeleton. Finally, we generate the word content according to an informed decoder that considers aspect labels, sketch symbols and previously decoded words. Extensive experiments on three real-world review datasets have demonstrated the effectiveness of the proposed model. To our knowledge, it is the first review generation model that is able to jointly utilize aspect semantics, syntactic sketch, and context information.

We decompose the entire generation process into three stages. In this way, the generation of long review text becomes more controllable, since we consider a simpler sequence generation task at each stage. Furthermore, we incorporate language characteristics (e.g., Part-of-Speech tags and n-grams) into the aspect-aware decoder to guide the generation of well-structured text.
In recent years, researchers have made great progress in natural language generation (NLG)

It has been found that RNN models tend to generate short, repetitive, and dull texts

Our work is inspired by the work of using sketches as intermediate representations

A review is a natural language text written by a user u on a product (or item) i with a rating score of r. Let V denote the vocabulary and y_{1:m} = {y_1, . . . , y_m} denote a review text consisting of m sentences, where y_{j,t} ∈ V denotes the t-th word of the j-th review sentence and n_j is the length of the j-th sentence.

We assume that the review generation process is decomposed into three different stages. First, a user generates an aspect sequence representing the major content flow for a review. To generate a sentence, we predict an aspect-aware sketch conditioned on an aspect label. Finally, based on the aspect label and the sketch, we generate the word content for the sentence. The process is repeated until all the sentences are generated.

Let A denote a set of A aspects in our collection. Following, we associate each review with an aspect sequence a_{1:m} = {a_1, . . . , a_m}, where a_j ∈ A is the aspect label (or ID) of the j-th sentence. For each sentence, we assume that it is written according to some semantic sketch, which is also denoted by a symbol sequence. Let s_j = {s_{j,1}, . . . , s_{j,n_j}}, where n_j is the length of the j-th sketch, and s_{j,t} is the t-th token of the j-th sketch, denoting a word, a Part-of-Speech tag, a bi-gram, etc.

Based on the above notations, we are ready to define our task. Given user u, item i and the rating score r (together denoted as the context c), we aim to automatically generate a review that maximizes the joint probability of the aspects, sketches and words:

Pr(y_{1:m}, s_{1:m}, a_{1:m} | c) = Pr(a_{1:m} | c) Pr(s_{1:m} | a_{1:m}, c) Pr(y_{1:m} | a_{1:m}, s_{1:m}, c)    (1)
= ∏_j Pr(a_j | a_{<j}, c) · ∏_j ∏_t Pr(s_{j,t} | s_{j,<t}, a_j, c) · ∏_j ∏_t Pr(y_{j,t} | y_{j,<t}, s_j, a_j, c).

Unlike previous works generating the review in a single stage, we decompose the generation process into three stages, namely aspect sequence generation, aspect-aware sketch generation and sketch-based sentence generation. We present an overview illustration of the proposed model in Fig.

To learn the model for generating aspect sequences, we need to derive the aspect sequence for training, and then decode the aspect sequence based on the context encoder.

Aspect Extraction. Aspects provide an informative summary of the feature or attribute information of a product or an item. For example, aspects of a restaurant may include food, staff and price, etc. It is time-consuming and laborious to manually discover the aspects from texts. Here, we use an automatic unsupervised topic modeling approach to learn the aspects from the review content. Based on the Twitter-LDA model

Context Encoder. Our aspect generation module adopts an encoder-decoder architecture. We first develop the context encoder based on the information of user u, item i and rating score r. We first use a look-up layer to transform the three kinds of information into low-dimensional vectors. Let v_u, v_i and v_r denote the embeddings for u, i and r, respectively. Then, we feed the concatenated vector into a Multi-Layer Perceptron (MLP) and produce a single vectorized context representation:

v_c = MLP([v_u; v_i; v_r]).    (2)

The embedding v_c summarizes the necessary information from the three kinds of context data. It is flexible to incorporate more kinds of useful information using a similar approach.

Aspect Decoder. The decoder is built upon a GRU-based RNN network. Let h^A_j ∈ R^{d_{H_A}} denote the d_{H_A}-dimensional hidden vector at the j-th time step, which is computed via a GRU transition over the previous hidden state and the previous aspect embedding, where v_{a_{j-1}} ∈ R^{d_A} is the embedding of the previous aspect label a_{j-1}.
The hidden vector of the first time step is initialized by the encoding vector, h^A_0 = v_c in Eq. 2. Then, the RNN recurrently computes hidden vectors and predicts the next aspect label (or ID) a_j. Additionally, we use an attention mechanism over the context, where W_1 is a parameter matrix to learn, and the attention vector c_t is obtained accordingly (Eq. 4-5). Finally, we compute the probability of the j-th aspect label Pr(a_j | a_{<j}, c) via Eq. 6, where W_2, W_3, W_4 and b_1 are learnable parameter matrices or vectors.

A sketch is a symbol sequence describing the skeleton of a sentence, where each symbol denotes a semantic unit such as a POS tag or a bi-gram. Similar to the aspect decoder, we also use a GRU-based RNN to implement the sketch decoder. As shown in Fig., its hidden state is computed from an input x^S_{j,t}, which is further defined in terms of v^s_{j,t-1} ∈ R^{d_S}, the embedding of the previous sketch symbol s_{j,t-1}, and v_{a_j}, the embedding of the current aspect, where "⊙" denotes the element-wise product. In this way, the aspect information can be utilized at each time step for generating an entire sketch. We set the initial hidden vector for the j-th sketch to the last embedding of the previous sketch; specifically, we have h^S_{1,0} = v_c for initialization. Similar to Eq. 4 and 5, we can further use an attention mechanism for incorporating context information, and produce a context-enhanced sketch representation h̃^S_{j,t} for time step t. Finally, we compute Pr(s_{j,t} | s_{j,<t}, a_j, c) via Eq. 10, where we incorporate the embedding v_{a_j} of the aspect a_j to enhance the aspect semantics.

When the aspect sequence and the sketches are learned, we can generate the word content of a review. Here, we focus on the generation process of a single sentence.

Sketch Encoder. To encode the sketch information, we employ a bi-directional GRU encoder.

Sentence Decoder. Consider the word generation at time step t. Let v^y_{j,t-1} ∈ R^{d_Y} denote the embedding of the previous word y_{j,t-1}. As input, we concatenate the current sketch representation and the embedding of the previous word, where "⊕" denotes vector concatenation. Then, we compute the hidden vector h^Y_{j,t} ∈ R^{d_{H_Y}} for the j-th sentence. Similar to Eq. 4 and 5, we further leverage the context to obtain an enhanced state representation, denoted by h̃^Y_{j,t}, using the attention mechanism. Then we transform it into an intermediate vector z^y_{j,t} with the dimensionality of the vocabulary size, where v^s_{j,t} is the embedding of the sketch symbol s_{j,t}. By incorporating aspect-specific word distributions, we can apply the softmax function to derive the generative probability of the t-th word:

Pr(y_{j,t} | y_{j,<t}, s_{j,1:n_j}, a_j, c) = softmax(z^y_{j,t} + θ^{a_j}),    (14)

where θ^{a_j}_{y_{j,t}} is the probability of y_{j,t} under the word distribution for aspect a_j. Here, we boost the importance of the words which have large probabilities in the corresponding topic models.

In this process, the generation of words is required to match the generation of sketch symbols slot by slot. Here, we align words and sketch symbols by using the same indices for each slot for ease of understanding. However, the length of the sketch is not necessarily equal to that of the generated sentence, since a sketch symbol can correspond to a multi-term phrase. When the sketch token is a term or a phrase (e.g., a bi-gram), we directly copy the original terms or phrases to the output slot(s). Integrating Eq. 6, 10 and 14 into Eq. 1, we derive the joint model for review generation. We take the log likelihood of Eq. 1 over all training reviews as the objective function.
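To make the context encoder (Eq. 2) and the aspect decoder concrete, here is a compact, hypothetical PyTorch sketch: user, item and rating embeddings are concatenated and passed through an MLP to form v_c, which initialises a GRU that emits one aspect label per step. The attention mechanism (Eq. 4-5) and the sketch/sentence decoders are omitted, dimensions are arbitrary, and all names are illustrative rather than the authors' code.

```python
# Toy sketch of the context encoder + aspect decoder (assumed dimensions and names).
import torch
import torch.nn as nn

class AspectDecoder(nn.Module):
    def __init__(self, n_users, n_items, n_ratings, n_aspects, d=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)
        self.rate_emb = nn.Embedding(n_ratings, d)
        self.context_mlp = nn.Sequential(nn.Linear(3 * d, d), nn.Tanh())  # v_c (Eq. 2)
        self.aspect_emb = nn.Embedding(n_aspects, d)
        self.gru = nn.GRUCell(d, d)
        self.out = nn.Linear(d, n_aspects)

    def forward(self, user, item, rating, max_len=5):
        # h^A_0 = v_c; the paper caps the aspect sequence length at 5.
        v_c = self.context_mlp(torch.cat([self.user_emb(user),
                                          self.item_emb(item),
                                          self.rate_emb(rating)], dim=-1))
        h = v_c
        prev = torch.zeros_like(v_c)          # stand-in for a START aspect embedding
        aspects = []
        for _ in range(max_len):
            h = self.gru(prev, h)
            a = self.out(h).argmax(dim=-1)    # greedy here; the paper uses beam search
            aspects.append(a)
            prev = self.aspect_emb(a)
        return torch.stack(aspects, dim=1)

model = AspectDecoder(n_users=100, n_items=50, n_ratings=5, n_aspects=10)
print(model(torch.tensor([3]), torch.tensor([7]), torch.tensor([4])))
```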
The joint objective function is difficult to optimize directly. Hence, we incrementally train the three parts, and fine-tune the shared or dependent parameters in different modules with the joint objective. For training, we directly use the real aspects and sketches to learn the model parameters. For inference, we apply our model in a pipeline fashion: we first infer the aspect, then predict the sketches, and finally generate the words using the inferred aspects and sketches. During inference, for sequence generation, we apply the beam search method with beam size 4. In the three sequence generation modules of our model, we incorporate two special symbols to indicate the start and end of a sequence, namely START and END. Once we generate the END symbol, the generation process is stopped. Besides, we set the maximum generation lengths for the aspect sequence and the sketch sequence to 5 and 50, respectively. In the training procedure, we adopt the Adam optimizer (Kingma and Ba, 2014). In order to avoid overfitting, we adopt the dropout strategy with a rate of 0.2. More implementation details can be found in Section 5.1 (see Table).

In this section, we first set up the experiments, and then report the results and analysis.

Datasets. We evaluate our model on three real-world review datasets, including the AMAZON Electronic dataset

Baseline Models. We compare our model against a number of baseline models:

• gC2S
• Attr2Seq
• TransNets
• ExpansionNet
• SeqGAN
• LeakGAN

Among these baselines, gC2S, Attr2Seq and TransNets are context-aware generation models with different implementation approaches, ExpansionNet introduces external information such as aspect words, and SeqGAN and LeakGAN are GAN-based text generation models. The original SeqGAN and LeakGAN are designed for general sequence generation without considering context information (e.g., user, item, rating). The learned aspect keywords are provided as input for both ExpansionNet and our model. All the methods have several parameters to tune. We employ the validation set to optimize the parameters of each method. To reproduce the results of our model, we report the parameter settings used throughout the experiments in Table.

Evaluation Metrics. To evaluate the performance of different methods on automatic review generation, we adopt six evaluation metrics, including Perplexity, BLEU-1/BLEU-4, and ROUGE-1/ROUGE-2/ROUGE-L. Perplexity

BLEU

In this subsection, we conduct a series of experiments on the effectiveness of the proposed model for the review generation task.

Main Results. Table

Ablation Analysis. The major novelty of our model is that it incorporates two specific modules to generate aspects and sketches, respectively. To examine the contribution of the two modules, we compare our model with its two variants obtained by removing either of the two modules. We present the BLEU-1 and ROUGE-1 results of our model and its two variants in Table. The results indicate that the sketch generation module is able to utilize syntactic templates to improve the generation fluency, especially for long sentences. The current experiments evaluate the usefulness of the two modules based on the overall generation quality. Next, we verify their functions using two specific experiments, namely aspect coverage and fluency evaluation.

Aspect Coverage Evaluation. A generated review is informative if it can effectively capture the semantic information of the real review. Following

In Table

Fluency Evaluation.
We continue to evaluate the usefulness of the sketch generation module in improving the fluency of the generated text. Following

In this part, we perform a qualitative analysis of the quality of the generated reviews. We present three sample reviews generated by our model in Table; aspect labels are shown in brackets after each sentence, and sketch symbols include POS tags and bi-grams:

Example 1
Gold standard: the shipping was quick and easy [service] | very good product at a reasonable price [price] | 5mm male to 2 rca stereo audio cable [sound] | highly recommend this product to anyone [overall]
Generated sketch: this cable worked_perfectly for my NNS [sound] | the price was very JJ and i would_purchase NN from this NN [price] | it VBD on_time and in good NN [service] | i would_recommend it [overall]
Generated review: this cable worked perfectly for my needs [sound] | the price was very reasonable and i would purchase another from this vendor [price] | it arrived on time and in good condition [service] | i would recommend it [overall]

Example 2
Gold standard: oxtail was good other than the flavors were very bland [food] | place is small so if the tables are full be prepared to wait [place] | pay too much for what you get [price] | i will not be back to this location [overall]
Generated sketch: i had the NN NN and it was very JJ [food] | the staff was JJ but service was a little JJ [service] | i had a bad_experience at this NN [place] | i VBP not JJ if i will be back RB [overall]
Generated review: i had the falafel wrap and it was very bland [food] | the staff was friendly but service was a little slow [service] | i had a bad_experience at this place [place] | i am not sure if i will be back again [overall]

Example 3
Gold standard: the aroma is insanely sour from bad hops [aroma] | dark clear ruby red beat sugar flavor and strong alcohol in aftertaste [flavor] | golden body with a small white head [body] | dont waste your money on this [overall]
Generated sketch: VBZ an amber_body with a JJ NN head [body] | the flavor is very JJ with notes of NN [flavor] | this beer has the JJS aroma of canned_corn i have ever VBN [aroma]
Generated review: pours an amber body with a white finger head [body] | the flavor is very horrible with notes of alcohol [flavor] | this beer has the worst aroma of canned corn i have ever smelled [aroma]

With the aspect and sketch generation modules, our model is able to produce informative reviews consisting of multiple well-structured sentences. Another interesting observation is that the polarities of the generated text also correspond to their real rating scores, since the rating score has been modeled in the context encoder.

This paper presented a novel review generation model using an aspect-aware coarse-to-fine generation process. Unlike previous methods, our model decomposed the generation process into three stages focusing on different goals. We conducted extensive experiments on three real-world review datasets. The results have demonstrated the effectiveness of our model in terms of overall generation quality, aspect coverage, and fluency. As future work, we will consider integrating more kinds of syntactic features from linguistic analysis, such as dependency parsing.
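Since inference in all three stages relies on beam search (beam size 4, as noted in the implementation details above), the following generic Python sketch illustrates the procedure over a toy scoring function; it is an illustration of the standard algorithm, not the authors' implementation.

```python
# Generic beam search over a step-scoring function (toy illustration).
import math

def beam_search(step_log_probs, start, end, beam_size=4, max_len=10):
    """step_log_probs(prefix) -> dict mapping next token to its log-probability."""
    beams = [([start], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == end:
                finished.append((prefix, score))
                continue
            for tok, lp in step_log_probs(prefix).items():
                candidates.append((prefix + [tok], score + lp))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]

# Toy scorer: prefer "good" after the start symbol, then the end symbol.
def toy_step(prefix):
    table = {"<s>": {"good": math.log(0.7), "bad": math.log(0.3)},
             "good": {"</s>": math.log(0.9), "good": math.log(0.1)},
             "bad": {"</s>": math.log(1.0)}}
    return table[prefix[-1]]

print(beam_search(toy_step, "<s>", "</s>"))   # ['<s>', 'good', '</s>']
```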
Does Your Model Classify Entities Reasonably? Diagnosing and Mitigating Spurious Correlations in Entity Typing
Entity typing aims at predicting one or more words that describe the type(s) of a specific mention in a sentence. Due to shortcuts from surface patterns to annotated entity labels and biased training, existing entity typing models are subject to the problem of spurious correlations. To comprehensively investigate the faithfulness and reliability of entity typing methods, we first systematically define distinct kinds of model biases that are reflected mainly in spurious correlations. Particularly, we identify six types of existing model biases, including mention-context bias, lexical overlapping bias, named entity bias, pronoun bias, dependency bias, and overgeneralization bias. To mitigate model biases, we then introduce a counterfactual data augmentation method. By augmenting the original training set with their debiased counterparts, models are forced to fully comprehend sentences and discover the fundamental cues for entity typing, rather than relying on spurious correlations for shortcuts. Experimental results on the UFET dataset show our counterfactual data augmentation approach helps improve generalization of different entity typing models with consistently better performance on both the original and debiased test sets.
Given a sentence with an entity mention, the entity typing task aims at predicting one or more words or phrases that describe the type(s) of that specific mention.

To tackle the task, the literature has developed various predictive methods to capture the association between the contextualized entity mention representation and the type label. For instance, a number of prior studies approach the problem as multi-class classification based on distinct ways of representing the entity-mentioning sentences.

To comprehensively investigate the faithfulness and reliability of entity typing methods, the first contribution of this paper is to systematically define distinct kinds of model biases that are reflected mainly in spurious correlations. Particularly, we identify the following six types of existing model biases, for which examples are illustrated in Fig.

We introduce a counterfactual data augmentation method to mitigate these model biases.
In this section, we start with the problem definition (§2.1) and then categorize and diagnose the spurious correlations causing shortcut predictions by the typing model (§2.2). Lastly, we propose a counterfactual data augmentation approach to mitigate the identified spurious correlations, as well as several alternative techniques (§2.3).

Given a sentence s with an entity mention e ∈ s, the entity typing task aims at predicting one or more words or phrases T from the label space L that describe the type(s) of e. By nature, the inference of the type T should be context-dependent. Take the first sample demonstrated in Fig.

We systematically define six types of typical model biases caused by spurious correlations in entity typing models. For each bias, we qualitatively inspect its existence and the corresponding spurious correlations used by a SOTA entity typing model on sampled instances with bias features. Following large

1) Mention-Context Bias: Semantically rich entity mentions may encourage the model to overly associate the mention surface with the type without considering the key information stated in the context. An example is accordingly shown in T1 of Tab. 1, where MLMET predicts types that correspond to the case where "fire" is regarded as burning instead of gun shooting. Evidently, this is due to not effectively capturing clues in the context such as "shooting" and "gunman". This is further illustrated by the counterfactual example T2, where the model predicts almost the same labels when seeing "fire" without a context.

To identify potential instances with the mention-context bias, we query the PLM to infer the entity types based only on the mention with the template shown in Prompt I (Tab. 1). Therefore, samples where the PLM can accurately predict without the context information are regarded as biased. Entity typing models can easily achieve good performance on those biased samples by leveraging spurious correlations between their mention surface and types, as shown in S2 from Tab. 1.

2) Lexical Overlapping Bias: Type labels that have lexical overlaps with the entity mention can also become prediction shortcuts. As shown in T3 from Tab. 1, labeling the mention "next day" with the type day and additional relevant types leads to an F1 of up to 0.749. We observe a considerable number of similar examples, e.g., typing the mention "eye shields" as shield, "the Doha negotiations" as negotiation, etc. The highly overlapped mention words and type labels make it difficult to evaluate whether the model makes predictions based on content comprehension or simply on lexical similarities.

In Tab. 2, we show one dependency bias instance where the model fails to locate the target entity in the mention (T9) and two overgeneralization bias instances: T11, annotated with coarse types, and T12, annotated with ultra-fine types. To quantify the overgeneralization bias (§2.2), we query the typing model with an empty sentence in T13. To mitigate spurious correlations (§2.3), we perform dependency parsing to distinguish headwords from dependents in S6 and truncate the mention with only the headword preserved as T10 to help address the dependency bias.

We substitute the overlapping mention words with semantically similar words and ask the PLM to infer the entity types on such perturbed instances (details introduced in §2.3) by prompting with the template Prompt II (Tab. 1).
We consider instances to have the lexical overlapping bias when the PLM performs poorly after the overlapped mention words are substituted, as shown in S3 of Tab. 1.

3) Named Entity Bias: In cases where mentions refer to entities that are frequently reported in corpora, models may be trained to ignore the context and directly predict labels that co-occur frequently with those entities. We show a concrete instance of typing a person named entity in T5 of Tab. 1. The mention Benjamin Netanyahu, known as the former Israeli prime minister, is normally annotated with politician, leader and authority. After observing popular named entities and their common annotations during training, models are able to predict their common types, making it hard to evaluate models' capabilities to infer context-sensitive labels.

As illustrated in Prompt III (Tab. 1), we prompt the PLM to type the named entity when only the name and its general attribute are given, e.g., the geopolitical area India or the organization Apple. We regard instances as having the named entity bias when the PLM accurately infers the mention types relying on prior knowledge of named entities. In Tab. 1, we show one instance with a mention containing Benjamin Netanyahu in S4, and the Thai pop music singer Jintara Poonlarp in S5.

4) Pronoun Bias: Compared with diverse person names, pronouns show up much more frequently to help make sentences smoother and clearer. Therefore, models are subject to biased training to type pronouns well, but lose the ability to type based on diverse real names. To type the pronoun her in T7 of Tab. 1, the entity typing model can successfully infer the general types woman and female as well as the context-sensitive type actress. To obtain high generalization, we expect models to infer types correctly for both pronouns and their referred names.

We substitute the gender pronoun with a random person name of the same gender (details introduced in §2.3) and ask the PLM to infer the types with Prompt IV (Tab. 1). We consider samples to have the pronoun bias when the PLM fails to capture the majority of types after the name substitution, as shown in S6 of Tab. 1.

5) Dependency Bias: It is observed that the mention's headwords explicitly match the mention to its types. Since knowledge about mention structures is beneficial for typing complex multi-word mentions, we mitigate the bias by data augmentation to improve model learning (details introduced in §2.3), rather than identifying whether the bias exists or not.

6) Overgeneralization Bias: When training with disproportionately distributed labels, frequent labels are more likely to be predicted compared with rare ones. Entity typing datasets are naturally imbalanced. As shown in T13 of Tab. 2, we craft a special instance, an empty sentence, for which a uniform distribution over all types is expected from models free of overgeneralization bias. We then compute the disparity between this uniform distribution and the model's actual probability distribution: the higher/lower the probability predicted for popular/rare types, the more biased the model is toward the label distribution.

Discussion. The six biases defined above are not mutually exclusive. We discuss some possible mixtures of concurrent biases as follows.

Mention-Context and Lexical Overlapping Bias: the model falsely types the mention "Treasure Island" as island, without understanding that the context talks about holiday accommodation.
Another possible reason that the mention far outweighs the context might be the high word similarity between the mention word "Island" and the type word "island".

Dependency and Lexical Overlapping Bias: MLMET incorrectly makes the prediction car for the mention "most car spoilers" without distinguishing the important headword from less important dependent words. Another reasonable explanation for emphasizing the dependent rather than the headword is its perfect lexical match with the type set, where "car" is a relatively popular label while no type has high word similarity with "spoilers".

To diagnose and mitigate all spurious correlations the entity typing model may take advantage of, we disentangle the multiple biases on a single instance by analyzing each bias individually without considering their mutual interactions.

In Tab. 6, we evaluate the robustness of entity typing models after adopting the proposed counterfactual data augmentation or alternative debiasing techniques, and present results on the original (biased) UFET test set and our counterfactually debiased test set. Overall, our counterfactual data augmentation is the only approach that consistently improves the generalization of the studied models across both test sets. Particularly, we achieve the best performance on UFET and the debiased test set with MLMET. Besides, models trained with our approach improve the performance of BiLSTM and MLMET relatively by 71.15% and 11.81% on the debiased test set, respectively, implying the least reliance on spurious correlations to infer correct entity types. When evaluating other debiasing approaches, we find that 1) none of the resampling or reweighting techniques is able to maintain the performance of both models on the UFET test set, which could be attributed to the large-scale label space and the existence of diverse causes of model biases; 2) contrastive learning with either a cross-entropy loss or a cosine similarity loss helps improve performance on debiased samples, but leads to an accuracy drop for MLMET on UFET; 3) without updating model parameters given bias features, counterfactual inference fails to improve the performance of MLMET on debiased samples.

In this section, we start by describing the experimental setups (§3.1). Next, we diagnose entity typing models to measure their reliance on spurious correlations (§3.2). We then compare our counterfactual data augmentation with other debiasing techniques for spurious correlation mitigation (§3.3).

We leverage the ultra-fine entity typing (UFET) dataset

We diagnose the prediction biases and the effectiveness of distinct debiasing models based on the following approaches: 1) BiLSTM

In Tab. 3, we report the performance of entity typing models trained on UFET. The models are tested on the original biased samples and their perturbed new instances to reflect exploited spurious correlations. We conduct similar analyses on unbiased samples.

1) Mention-Context Bias: When perturbing the biased samples by feeding only their mentions to the typing models, the performance of MLMET remains unchanged while the performance of BiLSTM even improves by 3.8%. This contradicts the goal of the entity typing task, where the types of mentions should also depend on contexts, and we suggest that samples with mention-context biases are insufficient for a faithful evaluation of a reliable typing system.
2) Lexical Overlapping Bias: After substituting label-overlapped mention words with semantically similar words, the performance of both models drops drastically, especially on the biased samples identified by the PLM. Compared with MLMET, BiLSTM has less parameter capacity and is more inclined to leverage the lexical overlap between mentions and type labels as a shortcut for typing. Compared with the original biased instances, the perturbed instances with label-overlapped mention words replaced might look less natural or fluent. In Tab. 4, we therefore substitute words from different parts of the instance, and show that the performance degradation is caused by the removed lexical overlapping bias rather than by unnatural or dysfluent input.

3) Named Entity Bias: After replacing named entities to reduce the impact of biased prior knowledge, the performance of both studied models in Tab. 3 decreases considerably when encountering named entities with which the models struggle to capture spurious correlations with mention types. Interestingly, perturbing unbiased samples by utilizing named entities with bias provides shortcuts for prediction, leading to improved performance of both models.

4) Pronoun Bias: With pronouns replaced by their referred entities in contexts, or random masculine/feminine names otherwise, we observe serious performance degradation for both models, which demonstrates their common weakness in typing more diverse and less frequent real names.

5) Dependency Bias: With headwords directly exposed to entity typing models by dropping all other less important dependents, the performance of BiLSTM on around 30% of all testing samples with dependency structures improves dramatically, while MLMET also predicts more precisely on 23% of samples. Hereby, we confirm that existing entity typing models still struggle to extract the core components of given mentions for entity typing, and we call for more research efforts to address this problem.

6) Overgeneralization Bias: Models are subject to making biased predictions towards popular types observed during training, which leads to contrasting performance on instances purely annotated with coarse and ultra-fine types, as shown in Tab. 3. This problem is exemplified in a case study in Tab. 5, where the typing models are queried with an empty sentence. Compared with the uniform probability distribution expected from models free from overgeneralization bias, existing models are inclined to give much higher probabilities to coarse types such as person and title.

Entity Typing. Earlier studies on entity typing

The spurious correlation problem in information extraction tasks is still an under-explored area. Despite most recent studies on NER

To comprehensively investigate the faithfulness and reliability of entity typing methods, we systematically define six kinds of model biases that are reflected mainly in spurious correlations. In addition to diagnosing the biases on representative models using benchmark data, we also present a counterfactual data augmentation approach that helps improve the generalization of different entity typing models, with consistently better performance on both the original and debiased test sets.

There are two important caveats to this work. First, for instances identified with a particular bias by the PLM, we do not guarantee that all typing models would exploit spurious correlations on them. To the best of our knowledge, entity typing models with spurious correlations ablated and mitigated do not yet exist.
Although we observe significant performance differences between the original biased instances and the crafted debiased counterparts for existing entity typing models, we hope future work will pay attention to spurious correlations and develop models with improved robustness and generalization performance. Second, although the biases defined in this work comprehensively cover six aspects, they may not exhaust all kinds of biased predictions in entity typing. In our study we made our best effort to study the most noteworthy and typical biases with which models may inflate performance by leveraging the corresponding spurious correlations. At the same time, we call for more research efforts to complete our understanding by investigating more biases. In addition, the studied model biases are representative of the widely practiced classification-based typing paradigm. There are efforts in the most recent NLI-based or bi-encoder-based methods

We acknowledge the importance of ethical considerations in language technologies and would like to point the reader to the following concern. Gender is a spectrum and we respect all gender identities, e.g., nonbinary, genderfluid, polygender, omnigender, etc. To craft instances free from pronoun bias, we substitute the gender pronouns with their referred names in contexts if they exist, or random masculine/feminine given names otherwise. This is due to the lack of entity typing datasets going beyond binarism for pronoun mentions such as they/them/theirs, ze/hir/hir, etc. Nevertheless, we support the rise of alternative neutral pronoun expressions and look forward to the development of non-binary inclusive datasets and technologies. In the meantime, although our techniques do not introduce or exaggerate possible gender bias in the original experimental data, in cases where such biases pre-exist in those data, additional gender neutralization techniques would be needed in order for such biases to be mitigated.

Lexical Overlapping Bias. We consider the following sentence as an instance: "Deutsche Bank would neither confirm nor deny the discharge of the two executives, and it also would not specify who was the target of the alleged spying", annotated with the types dismissal, discharge, leave, termination. Since "discharge" shows up both in the mention and in the true labels, we perform word substitutions with synonym candidates from the 20 synsets found in WordNet. We show a few synsets with popular senses as follows:

Synset I: (the termination of someone's employment) dismissal, dismission, discharge, firing, liberation, release, sack, sacking
Synset II: (a substance that is emitted or released) discharge, emission
Synset III: (a formal written statement of relinquishment) release, waiver, discharge

Synonyms that share high word similarities with the true labels are removed to avoid creating new lexical overlapping bias features, e.g., dismissal and discharge from Synset I, and discharge from Synset II and Synset III. To guarantee the semantic consistency of the new sentence and the fidelity of the true labels for typing the new mention, we leverage available word sense disambiguation models to preserve synonyms from the synset that is most consistent with the sense used in the original sentence: dismission, firing, liberation, release, sack and sacking from Synset I are finally selected to substitute "discharge". As shown in T4 of Tab. 1, without training on the debiased set, MLMET no longer predicts the overlapped type "day", but some surface word "period" instead.
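The synonym-candidate step described above can be sketched with NLTK's WordNet interface: gather candidate substitutes for an overlapping mention word and filter out candidates that would re-introduce lexical overlap with the gold type labels. The word-sense-disambiguation step mentioned in the text is not shown, and the function name is an assumption rather than the paper's code.

```python
# Hedged sketch of WordNet synonym candidates with an overlap filter.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def synonym_candidates(word, gold_types):
    """Return synonym lemmas of `word` that do not overlap with any gold type label."""
    candidates = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ").lower()
            if name != word and not any(t in name or name in t for t in gold_types):
                candidates.add(name)
    return sorted(candidates)

print(synonym_candidates("discharge", {"dismissal", "discharge", "leave", "termination"}))
```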
Named Entity Bias. Compared with the politician Benjamin Netanyahu, the PLM can hardly infer how the public perceives the singer Jintara Poonlarp. In particular, only general types describing person named entities are predicted in S5: person, human, woman. We therefore treat Benjamin Netanyahu as a biased named entity that carries rich prior knowledge, and Jintara Poonlarp as an unbiased named entity that reveals little type-relevant information. After substituting Benjamin Netanyahu with Jintara Poonlarp in T6, MLMET can hardly infer the political role of the new mention by analyzing its connection with the politician in context (Amin al-Husseini, a Palestinian Arab nationalist and Muslim leader in Mandatory Palestine).

Pronoun Bias. As shown in the original instance T7 of Tab. 1, the actual person's name that the pronoun mention "Her" refers to is not provided in the current sentence. As a result, a random feminine name, "Judith", is assumed to be the referred entity and substitutes the pronoun mention, giving the new sentence in S6. Considering the ridiculously wrong types predicted by RoBERTa, such as bird and cat, we include this new instance in the debiased set and expect entity typing models trained on such instances to infer types for person names as accurately as for pronouns. Beforehand, we test on the newly crafted instance without counterfactual augmented training and observe a huge performance drop after pronoun concretization: types related to the name's gender attribute, such as woman and female, are missing, let alone types that require full context understanding, such as actress.

Dependency Bias. For instance T9 in Tab. 2, we show its mention-word dependency analysis in S6 and the predictions on the perturbed instance in T10. Without distractions from other dependent words in the new mention, MLMET spares no effort to infer the types of the target entity "whale", producing the correct prediction subject. Motivated by the improved performance when the mention headword is explicitly provided, we believe entity typing models can actively learn to locate the target entity among the mention words when both the original sentences and their debiased counterparts are given during training. Under such an augmented training regime, the entity typing model is expected to achieve robust performance on new sentences bearing distractions from dependent words in mentions.

We adopt the released checkpoint of RoBERTa-large as the PLM. To diagnose entity typing models, for those with released checkpoints (BiLSTM, Box4Types, LRN) we directly evaluate on the original (un)biased and crafted debiased instances. We train LabelGCN and MLMET ourselves, following the hyperparameters and training strategies introduced in their papers. To evaluate various debiasing approaches, we train entity typing models using the checkpoints trained on the original dataset as a warm start, with the same hyperparameter sets. We run experiments on a commodity server with a GeForce RTX 2080 GPU. It takes about 4 hours to train one entity typing model on average and 2 minutes for inference on the UFET test set.

We also diagnose entity typing models and the effectiveness of the proposed counterfactual augmentation approach on OntoNotes. In Tab. 8, we report the performance of two representative entity typing models on the original biased samples where they are likely to exploit spurious correlations, on the perturbed counterparts, as well as on unbiased samples.
We have the following observations: 1) entity typing models can achieve satisfactory performance when only the mention is provided, without context; 2) considering lexical overlapping bias, performance on both the biased and the unbiased samples identified by the PLM drops considerably after substituting overlapped mention words with semantically similar words; 3) the performance variation after named entity substitution is evident; 4) models obtain much better performance on some instances when the headwords are explicitly given, without distractions from other words in the mentions; 5) performance on instances annotated purely with coarse and fine labels is good in general, with around a 15% difference in F1 score. Similar to UFET, models trained on OntoNotes may achieve good performance without reasoning over the context, may rely on lexical overlap between mention words and types to make precise predictions, and may obtain below-average results on some instances due to a lack of syntactic structure understanding.

To mitigate spurious correlations, we evaluate the proposed counterfactual augmentation approach in Tab. 9. With additional debiased instances for model training, both BiLSTM and MLMET maintain good performance on the original OntoNotes test set and achieve much higher accuracy on the corresponding debiased test set, leading to improved generalization.
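One of the perturbations behind the debiased sets evaluated above, pronoun concretization, can be sketched in a few lines. This is a toy illustration under our own assumptions, not the authors' released code: the name lists are made up, the antecedent is assumed to be supplied by the annotator or an upstream coreference step, and possessive pronouns would need extra handling as noted in the comments.

```python
# Minimal sketch of the pronoun-concretization perturbation used to build the
# debiased instances. Name lists and the helper name are illustrative only.
import random

FEMININE = ["Judith", "Maria", "Aisha"]          # illustrative given names
MASCULINE = ["Daniel", "Omar", "Viktor"]

PRONOUNS = {
    "she": "feminine", "her": "feminine", "hers": "feminine",
    "he": "masculine", "him": "masculine", "his": "masculine",
}


def concretize_pronoun(sentence, mention, antecedent=None, seed=0):
    """Replace a pronoun mention with its antecedent or a random gendered name.

    Possessive mentions ("Her", "His") would additionally need an 's appended
    to the substituted name; that case is omitted for brevity.
    """
    gender = PRONOUNS.get(mention.lower())
    if gender is None:
        return sentence, mention                  # not a gendered pronoun; keep as is
    if antecedent is None:
        rng = random.Random(seed)
        antecedent = rng.choice(FEMININE if gender == "feminine" else MASCULINE)
    # replace only the mention span (first occurrence, for simplicity)
    return sentence.replace(mention, antecedent, 1), antecedent


if __name__ == "__main__":
    new_sent, new_mention = concretize_pronoun(
        "She was elected to the city council.", "She")
    print(new_sent, "| new mention:", new_mention)
```

Pairing each original instance with such a debiased counterpart and warm-starting the typing model on the union of the two sets is the counterfactual augmentation regime whose results are reported in Tab. 9.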
Reasoning with Latent Structure Refinement for Document-Level Relation Extraction
Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers relational reasoning across sentences by automatically inducing a latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA datasets. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations.
Relation extraction aims to detect relations among entities in text and plays a significant role in a variety of natural language processing applications. Early research efforts focus on predicting relations between entities within a single sentence. A more challenging, yet practical, extension is document-level relation extraction, where a system needs to comprehend multiple sentences to infer the relations among entities by synthesizing relevant information from the entire document.

Prior efforts show that interactions between mentions of entities facilitate the reasoning process in document-level relation extraction. Unlike previous methods, where a document-level structure is constructed from co-references and rules, our proposed model treats the graph structure as a latent variable and induces it in an end-to-end fashion. Our model is built on structured attention. Experiments show that our model significantly outperforms the existing approaches on DocRED, a large-scale document-level relation extraction dataset with a large number of entities and relations, and also yields new state-of-the-art results on two popular document-level relation extraction datasets in the biomedical domain. The code and pretrained model are available at

Our contributions are summarized as follows:
• We construct a document-level graph for inference in an end-to-end fashion without relying on co-references or rules, which may not always yield optimal structures. With the iterative refinement strategy, our model is able to dynamically construct a latent structure for improved information aggregation in the entire document.
• We perform quantitative and qualitative analyses to compare with the state-of-the-art models.
The node constructor encodes the sentences in a document into contextual representations and constructs representations of mention nodes, entity nodes and meta dependency path (MDP) nodes, as shown in the figure. Given a document d, each sentence d_i in it is fed to the context encoder, which outputs the contextualized representation of each word in d_i. The context encoder can be a bidirectional LSTM (BiLSTM): the forward hidden state is computed as →h_ij = LSTM_f(→h_i,j-1, γ_ij) and the backward hidden state as ←h_ij = LSTM_b(←h_i,j+1, γ_ij), where →h_ij, →h_i,j-1 and ←h_i,j+1 are the hidden representations of the j-th, (j-1)-th and (j+1)-th tokens in sentence d_i for the two directions, and γ_ij denotes the word embedding of the j-th token. The contextual representation of each token is then h_ij = [→h_ij ; ←h_ij], obtained by concatenating the hidden states of the two directions, where h_ij ∈ R^d and d is the dimension. We construct three types of nodes for the document-level graph: mention nodes, entity nodes and MDP nodes.

The dynamic reasoner has two modules, structure induction and multi-hop reasoning. The multi-hop reasoning module is used to perform inference on the induced latent structure, where the representation of each node is updated based on the information aggregation scheme. We stack N blocks in order to iteratively refine the latent document-level graph for better reasoning. Unlike existing models that use co-reference links for reasoning, our model treats the graph as a latent variable and induces it in an end-to-end fashion.

The structure induction module is built on structured attention. Let u_i ∈ R^d denote the contextual representation of the i-th node. We first calculate the pair-wise unnormalized attention score s_ij between the i-th and the j-th node from the node representations u_i and u_j. The score s_ij is calculated by two feed-forward neural networks and a bilinear transformation:

s_ij = (tanh(W_p u_i))^T W_b (tanh(W_c u_j)),

where W_p ∈ R^{d×d} and W_c ∈ R^{d×d} are the weights of the two feed-forward neural networks, d is the dimension of the node representations, tanh is applied as the activation function, and W_b ∈ R^{d×d} are the weights of the bilinear transformation. Next we compute the root score s_i^r, which represents the unnormalized probability of the i-th node being selected as the root node of the structure:

s_i^r = W_r u_i,

where W_r ∈ R^{1×d} is the weight of the linear transformation. Following Koo et al., we take P_ij = exp(s_ij) for i ≠ j and P_ij = 0 for i = j as the weight of the edge between the i-th and the j-th node. We then define the Laplacian matrix L ∈ R^{n×n} of G, with L_ij = Σ_{i'} P_{i'j} if i = j and L_ij = -P_ij otherwise, together with its variant L̂ that absorbs the root scores, i.e., L̂_{1j} = exp(s_j^r) and L̂_{ij} = L_ij for i > 1. We use A_ij to denote the marginal probability of the dependency edge between the i-th and the j-th node. Then, by the matrix-tree theorem, A_ij can be derived as

A_ij = (1 - δ_{1,j}) P_ij [L̂^{-1}]_{jj} - (1 - δ_{i,1}) P_ij [L̂^{-1}]_{ji}.

Here, A ∈ R^{n×n} can be interpreted as a weighted adjacency matrix of the document-level entity graph. Finally, we can feed A into the multi-hop reasoning module to update the representations of the nodes in the latent structure.

Graph neural networks have been widely used in different tasks to perform multi-hop reasoning. Formally, given a graph G with n nodes, represented by the n × n adjacency matrix A induced by the structure induction module, the convolution computation for node i at the l-th layer, which takes the representation u_i^{l-1} from the previous layer as input and outputs the updated representation u_i^l, is defined as

u_i^l = σ( Σ_{j=1}^{n} A_ij W^l u_j^{l-1} + b^l ),

where W^l and b^l are the weight matrix and bias vector of the l-th layer, respectively, σ is the ReLU (Nair and Hinton, 2010) activation function, and u_i^0 ∈ R^d is the initial contextual representation of the i-th node constructed by the node constructor.
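To make the computation above concrete, here is a minimal PyTorch sketch of one structure-induction pass followed by one GCN reasoning step. It assumes the standard matrix-tree formulation of structured attention sketched above (Koo et al.); batching, masking of different node types, numerical-stability tricks, and the iterative refinement over N blocks are all omitted, and the class and variable names are ours, not the released LSR code.

```python
# Sketch of latent structure induction (matrix-tree marginals) plus one GCN
# reasoning step. Simplified and unbatched; an illustration, not the full model.
import torch
import torch.nn as nn


class StructureInduction(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.ffn_p = nn.Linear(d, d, bias=False)          # "parent" feed-forward (W_p)
        self.ffn_c = nn.Linear(d, d, bias=False)          # "child" feed-forward (W_c)
        self.W_b = nn.Parameter(torch.empty(d, d))        # bilinear weights
        self.root = nn.Linear(d, 1, bias=False)           # root scorer (W_r)
        nn.init.xavier_uniform_(self.W_b)

    def forward(self, u):                                  # u: (n, d) node representations
        n = u.size(0)
        p = torch.tanh(self.ffn_p(u))                      # (n, d)
        c = torch.tanh(self.ffn_c(u))                      # (n, d)
        s = p @ self.W_b @ c.t()                           # pairwise scores s_ij, (n, n)
        s_root = self.root(u).squeeze(-1)                  # root scores s_i^r, (n,)

        P = torch.exp(s) * (1.0 - torch.eye(n))            # edge weights, zero diagonal
        L = torch.diag(P.sum(dim=0)) - P                   # Laplacian: L_jj = sum_i P_ij
        L_hat = L.clone()
        L_hat[0] = torch.exp(s_root)                       # first row carries root scores
        L_inv = torch.inverse(L_hat)

        # Marginal probability of the edge i -> j via the matrix-tree theorem.
        not_first_col = torch.ones(n); not_first_col[0] = 0.0   # (1 - delta_{1,j})
        not_first_row = torch.ones(n); not_first_row[0] = 0.0   # (1 - delta_{i,1})
        A = not_first_col.unsqueeze(0) * P * L_inv.diagonal().unsqueeze(0) \
            - not_first_row.unsqueeze(1) * P * L_inv.t()
        return A                                           # (n, n) latent adjacency matrix


class GCNLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.linear = nn.Linear(d, d)                      # W^l and b^l

    def forward(self, A, u):                               # u^l = ReLU(A u^{l-1} W^l + b^l)
        return torch.relu(self.linear(A @ u))


if __name__ == "__main__":
    d, n = 64, 7                                           # 7 nodes (mentions/entities/MDP)
    u = torch.randn(n, d)
    A = StructureInduction(d)(u)
    u = GCNLayer(d)(A, u)                                  # one reasoning step
    print(A.shape, u.shape)
```

Re-running the induction on the updated node representations and reasoning again, N times in total, corresponds to the iterative refinement of the latent graph described above.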
Though a single pass of structure induction yields a latent structure, one round of information aggregation may not be sufficient for reasoning over an entire document; as shown in the figure, we therefore stack the dynamic reasoner into N blocks. After N times of refinement, we obtain the representations of all the nodes. Following previous work, we predict the relation between an entity pair (e_i, e_j) with a bilinear classifier, P(r | e_i, e_j) = σ(e_i^T W_e e_j + b_e)_r, where W_e ∈ R^{d×k×d} and b_e ∈ R^k are trainable weights and bias, with k being the number of relation categories, σ is the sigmoid function, and the subscript r on the right side of the equation refers to the relation type.

Experiments. We evaluate our model on DocRED and on two document-level relation extraction datasets in the biomedical domain, CDR and GDA. We use spaCy to obtain the meta dependency paths for the MDP nodes. We compare our proposed LSR with the following three types of competitive models on the DocRED dataset and show the main results in the table.

• Sequence-based Models. These models leverage different neural architectures to encode the sentences in the document, including convolutional neural networks (CNN).

Under the same setting, our model consistently outperforms graph-based models built on static graphs or attention mechanisms. Compared with EoG, our LSR model achieves 3.0 and 2.4 higher F1 on the development and test set, respectively. We have similar observations for the GCNN model, which shows that a static document-level graph may not be able to capture the complex interactions in a document; the dynamic latent structure induced by LSR captures richer non-local dependencies. Moreover, LSR also outperforms GAT and AGGCN, which empirically shows that the induced latent structure is more effective than models that use local attention and self-attention. In addition, LSR with GloVe obtains better results than the two BERT-based models. This empirically shows that our model is able to capture long-range dependencies even without using powerful context encoders.

In this subsection, we analyze intra- and inter-sentence performance on the development set. An entity pair requires inter-sentence reasoning if the two entities from the same document have no mentions in the same sentence. In DocRED's development set, about 45% of entity pairs require information aggregation over multiple sentences. Under the same setting, our LSR model outperforms all other models in both the intra- and inter-sentence settings. The differences in F1 scores between LSR and the other models in the inter-sentence setting tend to be larger than the differences in the intra-sentence setting. These results demonstrate that the majority of LSR's superiority comes from the inter-sentence relational facts, suggesting that the latent structure induced by our model is indeed capable of synthesizing information across multiple sentences of a document. Furthermore, LSR with GloVe also proves better in the inter-sentence setting compared with the two BERT-based models.

In this subsection, we use the development set of DocRED to demonstrate the effectiveness of the latent structure and refinements. We investigate the extent to which the latent structures that are induced and iteratively refined by the proposed dynamic reasoner help to improve the overall performance. We experiment with the different structures defined below. For fair comparisons, we use the same GCN model to perform multi-hop reasoning for all these structures. Rule-based Structure: we use the rule-based structure in EoG, adapting the rules from De Cao et al. Attention-based Structure: this structure is induced by AGGCN. The most significant difference is visible in the structure induction module: removing the structure induction part leads to a 3.26 drop in F1 score.
This result indicates that the latent structure plays a key role in the overall performance. We also depict the predicted relations of ContextAware, AGGCN and LSR on the graph shown on the right side of the figure.

Document-level relation extraction. Early efforts focus on predicting relations between entities within a single sentence by modeling interactions in the input sequence. Structure-based relational reasoning. Structural information has been widely used for relational reasoning in various NLP applications, including question answering.

We introduce a novel latent structure refinement (LSR) model for better reasoning in the document-level relation extraction task. Unlike previous approaches that rely on syntactic trees, co-references or heuristics, LSR dynamically learns a document-level structure and makes predictions in an end-to-end fashion. There are multiple avenues for future work. One possible direction is to extend the scope of structure induction to the construction of nodes, without relying on an external parser.
Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages
Human languages are full of metaphorical expressions. Metaphors help people understand the world by connecting new concepts and domains to more familiar ones. Large pre-trained language models (PLMs) are therefore assumed to encode metaphorical knowledge useful for NLP systems. In this paper, we investigate this hypothesis for PLMs, by probing metaphoricity information in their encodings, and by measuring the cross-lingual and cross-dataset generalization of this information. We present studies on multiple metaphor detection datasets and in four languages (i.e., English, Spanish, Russian, and Farsi). Our extensive experiments suggest that contextual representations in PLMs do encode metaphorical knowledge, and mostly in their middle layers. The knowledge is transferable between languages and datasets, especially when the annotation is consistent across training and testing sets. Our findings give helpful insights for both cognitive and NLP scientists.
Pre-trained language models (PLMs) have become the backbone of most NLP systems. Metaphors are important aspects of human languages. In conceptual metaphor theory (CMT), a metaphorical expression connects a target concept to a more familiar source domain. So far, there has been no comprehensive analysis of whether and how PLMs represent metaphorical information. We intuitively assume that PLMs must encode some information about metaphors due to their great performance in metaphor detection and other language processing tasks. Confirming that experimentally is the question that we address here. Specifically, we aim to know whether generalizable metaphorical knowledge is encoded in PLM representations or not. The outline of our work is presented in the overview figure.

We first run probing experiments to answer questions such as: (i) with what accuracy and extractability do different PLMs encode metaphorical knowledge? (ii) how deep is the metaphorical knowledge encoded in PLM multi-layer representations? We take two probing methods, edge probing and minimum description length (MDL) probing. To better estimate the generalization of metaphorical knowledge in PLMs, we design two setups in which the test data come from a different distribution than the training data: cross-lingual and cross-dataset metaphor detection. Each setup can reveal important information on whether or not the metaphorical knowledge is encoded consistently in PLMs. Four languages (English, Farsi, Russian and Spanish) and four datasets are covered in our experiments.

In summary, this paper makes the following contributions:
• For the first time, and through careful probing analysis, we confirm that PLMs do encode metaphorical knowledge.
• We show that metaphorical knowledge is encoded better in the middle layers of PLMs.
• We evaluate the generalization of metaphorical knowledge in PLMs across four languages and four dataset sources, and find that there is considerable transferability for pairs with consistent data annotation, even if they are in different languages.
Metaphor detection using PLMs. The metaphor detection task has recently been addressed with fine-tuned PLMs, which achieve strong performance.

Probing methods in NLP. Probing is an analytical tool used for assessing linguistic knowledge in language representations. In probing, the information richness of the representations is inspected via the quality of a supervised model that predicts linguistic properties based only on the representations. A popular probing method introduced by Tenney et al. (2019b) is edge probing. A concern with purely accuracy-based probes is that a powerful classifier may learn the task itself rather than reveal what the representation encodes; an information-theoretic view can solve this issue.

Probing multilingual PLMs. The application of probing methods in NLP has been extended to multilingual PLMs as well.

Out-of-distribution generalization. There has been no earlier work on studying or evaluating out-of-distribution generalization in metaphor detection. This generalization refers to scenarios where the test and training sets come from different distributions.

Metaphors are used frequently in our everyday language to convey our thoughts more clearly. There are related theories in linguistics and cognitive science. Following linguistic theories, metaphoricity is mostly annotated using the metaphor identification procedure (MIP). MIP identifies a word in a given context as a metaphor if it has a basic or literal meaning that contrasts with its context words. Based on conceptual metaphor theory (CMT), metaphoricity can also be annotated in terms of the source and target conceptual domains involved. Here, we use metaphor detection datasets annotated based on these theories and analyze the PLM representations to see if they encode metaphorical knowledge and if the encoding is generalizable. To do so, we first probe PLMs for their metaphorical information, in general and also across layers. This gives us intuition on how well metaphoricity is encoded and how local or contextual it is. Then, we test if the knowledge of metaphor detection can be transferred across languages and if multilingual PLMs capture that. Finally, the generalization of metaphorical knowledge across datasets is examined to see if the theories and annotations followed by different datasets are consistent, and if PLMs learn generalizable knowledge rather than dataset artifacts.

Here, we aim to answer general questions about metaphors in PLMs: do PLMs encode metaphorical information and, if so, how is it distributed across their layers? We do not attempt to achieve the best metaphor detection results but to analyze the layers of PLMs to test if they contain the information necessary to perform this task. In trying to answer this question, we apply probing methods, discussed below, that focus on the representation itself by freezing the PLM parameters and training classifiers on top. We hypothesize that metaphorical information does exist in PLM layers, and more so in the middle layers. As we discussed earlier, metaphor detection depends on predicting a contrast between the source and target domains of a token. We assume that this prediction is made mainly based on the initial layers of PLM representations, of either the token itself or its context or both. In the higher layers of PLMs, the representations are dominated by contextual information, making it hard to retrieve the source domain, and so reasoning about the contrast between the source and target domains becomes difficult.

Methods. We employ edge probing and MDL probing. Edge probing trains a lightweight classifier on top of frozen PLM representations to predict the property of a given span, here the metaphoricity of a token. The Minimum Description Length (MDL) probing is based on information theory and combines the quality of the classifier with the amount of effort needed to achieve this quality.
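As a rough illustration of what such a probe looks like in practice, the sketch below extracts layer-wise representations of a target word from a frozen PLM and attaches a linear classifier. It assumes the HuggingFace transformers API; the model name, the single-sub-token simplification of span pooling, and the plain linear probe are illustrative choices, not the exact edge/MDL probe architecture used in the experiments.

```python
# Minimal sketch of a layer-wise probe over frozen PLM representations,
# assuming the HuggingFace transformers API. The real edge/MDL probes use a
# span-pooling classifier and an online-coding estimate of description length;
# a single first-sub-token vector and a linear probe stand in for both here.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"           # illustrative choice of PLM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
encoder.eval()                        # PLM parameters stay frozen


@torch.no_grad()
def layer_features(sentence, target_word_idx):
    """Return one feature vector per layer for the target (metaphor candidate) word."""
    enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
    hidden = encoder(**enc).hidden_states                  # embeddings + every layer
    sub_idx = enc.word_ids(0).index(target_word_idx)       # first sub-token of the word
    return [h[0, sub_idx] for h in hidden]                 # list of (hidden_size,) tensors


# A linear probe would be trained per layer on these frozen features
# (the training loop over a labeled dataset is omitted).
probe = torch.nn.Linear(encoder.config.hidden_size, 2)     # metaphorical vs. literal

feats = layer_features("He attacked every weak point in my argument", target_word_idx=1)
logits_per_layer = [probe(f) for f in feats]
print(len(feats), logits_per_layer[0].shape)
```

Training one such probe per layer and measuring either its accuracy (edge probing) or the code length of the labels under online coding (MDL probing) yields the layer-wise results discussed below.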
To see if PLMs encode generalizable metaphorical knowledge, we evaluate them in settings where the test and training data come from different distributions. We explore transferability across languages and across datasets as the two sources of distribution shift, and we explain each in the following sections.

The four LCC datasets corresponding to the four languages are used here. We subsample from the datasets to have the same number of examples in the training sets, i.e., 12,238, which is the size of the Russian training set. The results are shown in the table. We observe that XLM-R significantly outperforms the random baseline, confirming that metaphorical knowledge learned during pre-training is transferable across languages. This considerable transferability can be attributed to the ability of XLM-R to build language-universal representations useful for metaphoricity transfer. Moreover, the innate similarities of metaphors in distinct languages can contribute to higher transferability, despite the lexicalization differences; e.g., analogizing a concept to a tool (en) occurs the same way in other languages, as in instrumento (es), (fa) and инструмент (ru). Finally, the constraints of the dataset producers, for instance keeping the languages in relatively similar target and source domains, could be influential.

An interesting observation is that training on Russian shows the best out-of-distribution results when testing on the other languages. We analyze this further. First, we observe that LCC(ru) has almost the closest target domain distribution to all other languages. Second, the reported results can also be influenced by the amount of data from each of these languages in the pre-training data of XLM-R; Russian has the second largest size after English.

Similar to the cross-lingual evaluations, here we have four datasets used as sources and targets. We set the training size of each to the minimum of all, i.e., 3,838. For each pair, we run two experiments: one with a randomized and one with a pre-trained BERT as our PLM. Results are shown in the table. The PLM is much better than random in all out-of-distribution cases, suggesting the presence of generalizable metaphorical information. As expected, VUA Verbs and VUA POS achieve the best results when mutually tested because, apart from the POS coverage, they have the same distribution. The VUA datasets and LCC(en) show good transferability, but the gap with the in-distribution results is still considerable (>13% absolute). VUA Verbs is the best source for TroFi, likely because of the POS match between them. Overall, apart from the two VUA datasets, the gap between in- and out-of-distribution performance is large. The random-PLM accuracies range from about 54%-64% and 50%-56% for the in- and out-of-distribution cases, respectively. We hypothesize that this drop in the out-of-distribution setting is related to annotation biases, which a randomly initialized classifier can leverage better when the test and training sets come from the same distribution; when the sets have different distributions, the biases do not transfer well. As an additional transferability analysis, we compare cross-lingual and cross-dataset results by using XLM-R and evaluating different training sources on the LCC(en) test set. We make the size of each training set the same.

Datasets. We use four metaphor detection datasets in our study. LCC contains annotations in four languages (English, Russian, Spanish, and Farsi) and provides source and target conceptual domain labels in addition to metaphoricity. TroFi is a dataset of metaphoric and literal usages of 51 English verbs from the WSJ. VUA provides the VUA Verbs and VUA POS subsets. The other three datasets, TroFi, VUA Verbs and VUA POS, are in English only.
We have label-balanced all the datasets to get a more straightforward interpretation of the results (the accuracy of a fair-coin random baseline is 50% in all cases) and have split the datasets into train/dev/test sets with ratios of 0.7/0.1/0.2. The statistics of the datasets are shown in Table 2, and example sentences with the corresponding annotations are also provided.

Setup. In implementing the edge probe, we use a batch size of 32 and a learning rate of 5e-5, and train for five epochs in all experiments. For the MDL probe, the same structure as the edge probe is employed. We apply a base-two logarithm instead of the natural logarithm in the cross-entropy loss so that all the obtained code lengths are in bits (see the appendix for extra details).

Here, BERT and the other PLMs are probed layer by layer; the MDL probing compression across layers is shown in the corresponding figure. In §3.1, we elaborated the hypothesis that the process of detecting metaphors is not very deep, since what it mainly needs to do is predict a contrast between source and target domains, and the deep layers do not represent the source domain well. Our reported probing results confirm that metaphor detection is not deep in the PLM layers. To further evaluate this reasoning, we probe the domain knowledge in PLM representations across layers. We employ LCC's annotation of source and target domains and run a similar MDL probing on different PLMs, but for domain prediction. The obtained results, shown in Figure A.1 in the appendix, demonstrate that the source domain information is represented in the initial layers (2-6), confirming that the source domain is dominated by other information in the higher layers. On the other hand, target domain information generally increases across layers. Therefore, the middle layers can be the best place for contrasting source and target domains.

As our PLMs, we use XLM-R for the cross-lingual experiments and BERT for the cross-dataset experiments. We run two experiments for each case of a source distribution S and a target distribution T: one with the PLM and one with a randomized version of the PLM whose weights are set to random values. Randomly initialized Transformers with the same architecture as PLMs are common baselines in the community. The difference between the two gives evidence about the helpfulness of the encoded knowledge gained during pre-training for doing the task. When S = T, this effect is measured for in-distribution generalization, and when S ≠ T, for out-of-distribution generalization. Comparing the results of in-distribution (e.g., training and testing on English data) and out-of-distribution (e.g., training on Spanish and testing on English) setups demonstrates how generalizable the metaphorical knowledge in PLMs is and how consistent the annotations are.
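The S-to-T evaluation grid just described can be organized roughly as in the following sketch. It assumes the HuggingFace transformers API for loading a pre-trained encoder and a randomly initialized twin with the same architecture; the dataset-specific loading, probe training, and evaluation routines are passed in as callables because they are not reproduced here, and the model name is an illustrative choice rather than the exact configuration used in the experiments.

```python
# Sketch of the source/target transfer grid with a pre-trained vs. randomly
# initialized PLM, assuming the HuggingFace transformers API. Data loading,
# probe training, and evaluation are hypothetical callables supplied by the user.
from transformers import AutoConfig, AutoModel

MODEL_NAME = "xlm-roberta-base"        # illustrative; the cross-lingual PLM above is XLM-R


def build_encoder(pretrained: bool):
    """Same architecture either way; weights are set to random values when pretrained=False."""
    if pretrained:
        return AutoModel.from_pretrained(MODEL_NAME)
    return AutoModel.from_config(AutoConfig.from_pretrained(MODEL_NAME))


def transfer_grid(sources, targets, load_split, train_probe, evaluate):
    """Train a probe on source S, test on target T, for the PLM and its random twin."""
    results = {}
    for S in sources:                                  # e.g., LCC(en), LCC(es), LCC(ru), LCC(fa)
        for T in targets:
            for pretrained in (True, False):
                encoder = build_encoder(pretrained)
                probe = train_probe(encoder, load_split(S, "train"))
                results[(S, T, pretrained)] = evaluate(probe, load_split(T, "test"))
    return results   # S == T entries are in-distribution, S != T out-of-distribution
```

Comparing the pretrained and randomized entries of the returned grid is what separates knowledge gained during pre-training from what the probe (or dataset biases) alone can achieve.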
Clearly, there is a substantial gap between the cross-lingual and cross-dataset accuracies. The annotation guideline is consistent across the LCC language datasets, while for the cross-dataset settings we have datasets that differ in many aspects, including the annotation procedure and definitions, the covered parts of speech (e.g., TroFi and VUA Verbs vs. LCC and VUA POS) and the average sentence lengths (LCC: 25.9, VUA: 19.4, TroFi: 28.3).

Metaphors are important in human cognition, and if we seek to build cognitively inspired or plausible language understanding systems, we need to work more on how best to integrate them in the future. Therefore, any work in this regard is impactful. Our probing experiments showed that PLMs do in fact represent the information necessary to do the task of metaphor detection. We assume this information is related to metaphorical knowledge learned during pre-training. Further, the layer-wise analysis confirmed our hypothesis that the middle layers are more informative.

Even though our probing experiments did show that metaphorical knowledge is present in PLMs, it was still unclear whether this knowledge generalizes beyond the training data. So, to probe the probe and evaluate generalization, we ran cross-lingual and cross-dataset experiments. Our results showed that transferability across languages works quite well for the four languages with LCC annotation. However, when the definitions and annotations were inconsistent across datasets, the cross-dataset results were not satisfactory. Overall, we conclude that metaphorical knowledge does exist in PLM representations, mainly in the middle layers, and it is transferable if the annotation is consistent across the training and testing data. We will further explore the cross-lingual transfer of metaphors and the impact of cross-cultural similarities in the future. The application of metaphorical knowledge to text generation is another important direction that we will address.