Improving Long Distance Slot Carryover in Spoken Dialogue Systems
Tracking the state of the conversation is a central component in task-oriented spoken dialogue systems. One such approach for tracking the dialogue state is slot carryover, where a model makes a binary decision about whether a slot from the context is relevant to the current turn. Previous work on the slot carryover task used models that made independent decisions for each slot. A close analysis of the results shows that this approach results in poor performance over longer context dialogues. In this paper, we propose to jointly model the slots. We propose two neural network architectures, one based on pointer networks that incorporate slot ordering information, and the other based on transformer networks that use a self-attention mechanism to model the slot interdependencies. Our experiments on an internal dialogue benchmark dataset and on the public DSTC2 dataset demonstrate that our proposed models resolve longer-distance slot references and achieve competitive performance.
In task-oriented spoken dialogue systems, the user and the system are engaged in interactions that can span multiple turns. A key challenge here is that the user can reference entities introduced in previous dialogue turns. For example, if a user request for what's the weather in arlington is followed by how about tomorrow, the dialogue system has to keep track of the entity arlington being referenced. In slot-based spoken dialogue systems, tracking the entities in context can be cast as slot carryover task -only the relevant slots from the dialogue context are carried over to the current turn. Recent work by To validate our approach, we conduct thorough evaluations on both the publicly available DSTC2 task To summarize we make the following contributions in this work: 1. We improve upon the slot carryover model architecture in
A dialogue H is formulated as a sequence of utterances, alternately uttered by a user (U) and the system agent (A), where each element h is an utterance. A subscript d denotes the utterance distance, which measures the offset from the most recent user utterance (h^U_0). The i-th token of an utterance with distance d is denoted as h_d[i]. A slot x = (d, k, l, r) in a dialogue is defined as a key-value pair that contains entity information, where d is the utterance distance, k is the slot key, and h_d[l:r] is the span of the slot value. Given a dialogue history H and a set of candidate slots X, the context carryover task is addressed by deciding which slots should be carried over. Previous work treats this as an independent decision per slot, F_binary(x, H), where F_binary(x, H) denotes a binary classification model.

3 Models

Candidate Generation. We follow the approach in prior work to generate the candidate slots X from the dialogue context.

Slot Encoder. Given a candidate slot (a slot key, a span in the history, and a distance), the encoder produces a fixed-length vector representation of the slot: x = F_S(x, H) ∈ R^{D_S}, where x is the slot and H is the full history. We serialize the utterances in the dialogue and use a BiLSTM to encode the context as a fixed-length vector c. The intent I of the most recent utterance, determined by an NLU module, is also encoded as a fixed-length vector i ∈ R^{D_I} by averaging the word embeddings of the tokens associated with the intent.

Decoder. Given the encoded vector representations {x_1, ..., x_n} of the slots, the context vector c, and the intent vector i, the decoder produces a subset of the slot ids. The overall architecture of the model is shown in the figure.

In this section, we describe the different encoding methods that we use to encode slots. We average the word embeddings of the tokens in the slot key as the slot key encoding x_key, where v(w) is the embedding vector of token w. For the slot value (the tokens h_d[l:r]), we propose the following encoding approaches. The first (CTX_avg) is to average the token embeddings of the tokens in the slot value. The second (CTX_LSTM) aims at an improved contextualized representation of the slot value in the dialogue: we use neural network models to encode slots and experimented with a bidirectional LSTM over the serialized dialogue. Additionally, distance may contain important signals. This integer, being odd or even, indicates whether the utterance was uttered by the user or the system. The smaller it is, the closer a slot is to the current utterance, and hence the slot is implicitly more likely to be carried over. Building on these intuitions, we encode the distance as a small vector (x_dist, 4 dimensions) and append it to the overall slot encoding: x = [x_key; x_val; x_dist]. In the example shown in the figure, the pointer network selects x_4 and x_1 successively and stops after selecting EOS.

Pointer network decoder. We adopt the pointer network architecture to select a subset of the slots from the input slot set. The input slot encodings are ordered as a sequence and then fed into a bidirectional LSTM encoder to yield a sequence of encoded hidden states. We experiment with different slot orderings as described in section 4. A special sentinel token EOS is appended to the beginning of the input to the pointer network; when decoding, once the output pointer points to this EOS token, the decoding process stops. Given the hidden states e_{0:n}, the decoding process at every time step i is computed and updated as shown in Algorithm 1.
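Algorithm 1 itself is not reproduced in this extracted text. Anticipating the details given below (a query built from the decoder state, the context vector, and the intent vector; Luong-style attention over the encoded slot sequence; a dynamic mask to keep outputs distinct; termination at the EOS sentinel), a minimal PyTorch sketch of such a decoding loop might look as follows. All dimensions, module names, and the bilinear scoring function are illustrative assumptions, not the authors' exact implementation.

# Hypothetical sketch of a masked pointer-network decoding loop (not the
# authors' code). Slot encodings are assumed to already contain key, value,
# and distance features; e[0] is the EOS sentinel state.
import torch
import torch.nn as nn

class PointerDecoder(nn.Module):
    def __init__(self, enc_dim, ctx_dim, intent_dim, hid_dim):
        super().__init__()
        self.cell = nn.LSTMCell(enc_dim, hid_dim)
        # General (Luong-style) bilinear score between query and encoder state.
        self.score = nn.Bilinear(hid_dim + ctx_dim + intent_dim, enc_dim, 1)

    def forward(self, e, c, i, max_steps=10):
        """e: (n+1, enc_dim) encoder states with e[0] = EOS sentinel;
        c: (ctx_dim,) context vector; i: (intent_dim,) intent vector."""
        n_plus_1 = e.size(0)
        mask = torch.ones(n_plus_1, dtype=torch.bool)   # dynamic mask m_j
        h = torch.zeros(self.cell.hidden_size)
        cell_state = torch.zeros(self.cell.hidden_size)
        inp = e[0]                                      # start from the sentinel
        selected = []
        for _ in range(max_steps):
            h, cell_state = self.cell(inp.unsqueeze(0),
                                      (h.unsqueeze(0), cell_state.unsqueeze(0)))
            h, cell_state = h.squeeze(0), cell_state.squeeze(0)
            q = torch.cat([h, c, i])                    # query = [d_i; c; i]
            scores = self.score(q.expand(n_plus_1, -1), e).squeeze(-1)
            scores = scores.masked_fill(~mask, float("-inf"))
            y = int(torch.argmax(scores))
            if y == 0:                                  # pointer hit EOS: stop
                break
            selected.append(y)
            mask[y] = False                             # cross out this slot
            inp = e[y]                                  # feed the selection back in
        return selected

dec = PointerDecoder(enc_dim=300, ctx_dim=128, intent_dim=64, hid_dim=256)
e, c, i = torch.randn(5, 300), torch.randn(128), torch.randn(64)
print(dec(e, c, i))

A training criterion would supervise each step's selection with the gold slot index; that part is omitted here.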
Contrary to normal attention-based models, which directly use the decoder state (d_i) as the query, we incorporate the context vector (c) and the intent vector (i) into the attention query: the query vector q_i is a concatenation of the three components, q_i = [d_i; c; i]. At each step, attention scores a_{i,j} ← F_A(q_i, e_j) are computed over the encoder states, decoding continues until the pointer selects the EOS token (whose index is 0), and the generated sequence ŷ_{1:i-1} is returned (Algorithm 1). We use the general Luong attention to compute the scores. As a subset output is desired, the output ŷ_i should be distinct at each step i. To this end, we utilize a dynamic mask in the decoding process: for every input slot encoding x_j, a Boolean mask variable m_j is initially set to TRUE. Once a specific slot is generated, it is crossed out: its corresponding mask is set to FALSE, and further pointers will never attend to this slot again. Hence distinctness of the output sequence is ensured.

Self-attention decoder. The pointer network as introduced previously yields a succession of pointers that select slots based on attention scores, which allows the model to look back and forth over the entire slot sequence for slot dependency modeling. Similar to the pointer network, the self-attention mechanism is also capable of modeling relationships between all slots in the dialogue, regardless of their respective positions. To compute the representation of any given slot, the self-attention model compares it to every other slot in the dialogue. The result of these comparisons is a set of attention scores which determine how much each of the other slots should contribute to the representation of the given slot. In this section, we therefore also propose to use the self-attention mechanism with neural transformer networks. One major component in the transformer is the multi-head self-attention unit. Rather than computing the attention only once, the multi-head mechanism runs the scaled dot-product attention multiple times and allows the model to jointly attend to information from different perspectives at different positions, which has been shown empirically to be more powerful than a single attention head. Given the input slot encodings x_{1:n}, we compute the self-attention per head, where the superscript 0 ≤ z < Z is the head number. We model the query construction (Equation 12) and the attention score (Equation 14) in the same way as their pointer-network counterparts. We derive the final decision over whether to carry over a slot with a 2-layer feedforward neural network atop the features x_i, x̃_i, the context vector (c), and the intent vector (i). This creates a highway network connection.

For all the models, we initialize the word embeddings using fastText embeddings. We compare our models against the baseline model, an encoder-decoder with word attention architecture described in previous work.

Impact of slot ordering. Using the pointer network model, we experiment with the following slot orderings to measure the impact of the order on carryover performance: (1) no order, where slots are ordered completely randomly; (2) turn-only order, where slots are ordered based on their slot distance, but the slots with the same distance (i.e., candidates generated from the same contextual turn) are ordered randomly; and (3) temporal order, where slots are ordered based on the order in which they occur in the dialogue. Partially ordering slots across turns, i.e., turn-only order, significantly improves the carryover performance as compared to using no order. Further, enforcing within-distance order using temporal order improves the overall performance slightly, but we see a drop in F1 of 7 points for slots at distance ≥ 3,
indicating that a strict ordering might hurt model accuracy.

Impact of slot encoding. Here, we compare slot value representations obtained by averaging pretrained embeddings (CTX_avg) with contextualized slot value representations obtained from a BiLSTM over the complete dialogue (CTX_LSTM). The results are reported in the corresponding table.

Compared to the baseline model, both the pointer network model and the transformer model are able to carry over longer dialogue context because they can model the slot interdependence. With the transformer network, we completely forgo ordering information: though the slot embedding includes the distance feature x_dist, the actual order in which the slots are arranged does not matter. We see improvements in carryover performance for slots at all distances. While the pointer network seems to deal with longer context better, the transformer architecture still gives us the best overall performance. For completeness, full results are given in the corresponding table. To gain deeper insight into the ability of the models to learn and utilize slot co-occurrence patterns, we measure the models' performance on buckets obtained by slicing the data using S_FINAL, the total number of slots after resolution (i.e., after context carryover), and S_CARRY, the total number of slots carried from context. For example, in a dialogue, if the current turn utterance has 2 slots and we carry 3 slots from context after reference resolution, the values of S_FINAL and S_CARRY would be 5 and 3, respectively. The per-bucket results are shown in the corresponding figures.

Dialogue state tracking. Dialogue state tracking (DST) focuses on tracking conversational states as well. Traditional DST models rely on handcrafted semantic delexicalization to achieve generalization.

Coreference resolution. Our problem is closely related to coreference resolution, where mentions in the current utterance are to be detected and linked to previously mentioned entities. Previous work on coreference resolution has relied on clustering. However, (1) most traditional methods for coreference resolution follow a pipeline approach, with rich linguistic features, making the system cumbersome and prone to cascading errors; and (2) zero pronouns, intent references, and other phenomena in spoken dialogue are hard to capture with this approach.

In this work, we proposed an improvement to the slot carryover task as defined in prior work. For future work, we plan to improve these models by encoding the actual dialogue timing information into the contextualized slot embeddings as additional signals. We also plan on exploring the impact of pre-trained representations.
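Returning to the self-attention decoder described in the modeling section, the sketch below shows one way a transformer encoder over the slot encodings, followed by a two-layer feedforward decision over [x_i; x̃_i; c; i], could be wired up in PyTorch. Layer sizes, head counts, and module names are assumptions for illustration only, not the authors' configuration.

# Illustrative sketch of a self-attention carryover decoder: slot encodings
# attend to each other, then a 2-layer feedforward network scores each slot
# from the original encoding, the attended encoding, context, and intent.
import torch
import torch.nn as nn

class SelfAttentionCarryover(nn.Module):
    def __init__(self, slot_dim, ctx_dim, intent_dim, heads=4, layers=2, hid=256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=slot_dim, nhead=heads,
                                           dim_feedforward=hid, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.ffn = nn.Sequential(
            nn.Linear(2 * slot_dim + ctx_dim + intent_dim, hid),
            nn.ReLU(),
            nn.Linear(hid, 1),
        )

    def forward(self, x, c, i):
        """x: (batch, n_slots, slot_dim); c: (batch, ctx_dim); i: (batch, intent_dim).
        Returns carryover logits of shape (batch, n_slots)."""
        x_tilde = self.encoder(x)                      # slot-to-slot self-attention
        n = x.size(1)
        c_rep = c.unsqueeze(1).expand(-1, n, -1)
        i_rep = i.unsqueeze(1).expand(-1, n, -1)
        # Highway-style shortcut: both x_i and the attended x̃_i feed the decision.
        feats = torch.cat([x, x_tilde, c_rep, i_rep], dim=-1)
        return self.ffn(feats).squeeze(-1)

model = SelfAttentionCarryover(slot_dim=256, ctx_dim=128, intent_dim=64)
logits = model(torch.randn(2, 6, 256), torch.randn(2, 128), torch.randn(2, 64))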
CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment
CLIP has shown a remarkable zero-shot capability on a wide range of vision tasks. Previously, CLIP was only regarded as a powerful visual encoder. However, after being pretrained by language supervision from a large amount of image-caption pairs, CLIP itself should also have acquired some few-shot abilities for vision-language tasks. In this work, we empirically show that CLIP can be a strong vision-language few-shot learner by leveraging the power of language. We first evaluate CLIP's zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task. Then we propose a parameter-efficient fine-tuning strategy to boost the few-shot performance on the VQA task. We achieve competitive zero/few-shot results on the visual question answering and visual entailment tasks without introducing any additional pre-training procedure.
Vision-language understanding (VLU) tasks, such as visual question answering Recently, CLIP To answer the above question, in this work, we empirically study how to transfer CLIP's zero-shot ability into VLU tasks and further turn CLIP into a few-shot learner. We carried out experiments on two VLU tasks: 1) visual question answering, where the model needs to give an answer according to the details of an image and a natural sentence question, and 2) visual entailment, where the model needs to determine the entailment relation between an image and a natural sentence. Figure For the zero-shot visual question answering task, the key to a successful zero-shot capability transfer is to mitigate the gap between the pre-training task of CLIP and the task form of question answering. Inspired by the recent advancements of few-shot learning in NLP We explore a zero-shot cross-modality (language and vision) transfer capability through the visual entailment task. Specifically, we replace the image with its captions during training and only update a small classification layer. Then at inference, as usual, we still use image-text pairs for testing. This allows us to investigate how well the language and vision representations are aligned in CLIP models. We further leverage few-shot learning to improve CLIP's visual question answering performance based on the zero-shot transferring methods. We find that optimizing only bias and normalization (BiNor) parameters would make better use of limited examples and yield better results than the latest few-shot model Frozen Our contributions are summarized as follows: • To the best of our knowledge, this is the first work that studies how to transfer CLIP's zeroshot capabilities into VLU tasks and confirms CLIP models can be good few-shot learners. • A zero-shot cross-modality transfer capability in CLIP is demonstrated. • A parameter-efficient fine-tuning strategy, Bi-Nor, is proposed to boost CLIP's few-shot visual question answering performance.
2.1 CLIP CLIP, short for Contrastive Language-Image Pretraining T(text) • V(image), which is used as an alignment score between the input image and text. It is pretrained to distinguish aligned image-text pairs from randomly combined ones by a contrastive loss. Instead of training on vision benchmarks, CLIP leverages abundant language supervisions from 400 million web-crawled image-text pairs and can conduct a variety of image classification tasks without specific optimizing. However, directly applying CLIP as a vision-language understanding model is still difficult Visual question answering. The task of VQA requires the model to answer questions about the details of input images. Following previous work, we experiment on the VQAv2 3 Zero-shot VQA Previous works [answer text]" prompt template After rethinking the essence of prompt engineering in CLIP, we can find that the key to a successful zero-shot capability transfer for the VQA task is to mitigate the gap between natural language description and the form of question answering. Motivated by the above observations, we propose a two-step automatic prompt generation method to enable the zero-shot VQA capabilities in CLIP models, with the assistant of a pre-trained generative T5 model Step I: Automatic Template Generation This step is designed to convert the question into a template, which is a statement with a mask token. To tackle the conversion challenge, we explore two ways, including an in-context demonstration method and a dependency parsing based method. Demonstration to T5. The idea of this conversion method is relatively simple: by demonstrating question-to-template We present a concatenation of examples, question, and the <extra_id_0> token to T5 for conditional generation to restore it, and the generated span is our masked template, named as T demo . Dependency parsing. Although the T5 conversion method works well in most situations, it still faces some out-of-coverage problems. To compensate for this shortcoming, we turn to a traditional dependency parsing based way. This method converts a question to a statement by its part-of-speech tagging and parsing results, where the wh-word, root word, auxiliary, or copula, as well as prepositions and particles that are dependents of the whword or the root, are identified, and transformations are performed according to grammar rules. We use the Stanza Step II: Answer Filtering As common sense, "the specie of a flower" can never be a vase. Therefore, leveraging pre-trained language models, which have well learned such concepts during pre-training, to filter out less likely answers would have a positive influence on the final question answering performance. Given a masked template T , a language model L, and the answer vocabulary V, we get the filtered answers V F as: where the [mask] is the answer span in template T , and P L is the output distribution of the language model. Here we also apply the T5 to infill answers because it makes no assumption about the length and position of the span. Once we get the template T and the filtered answers V F , we replace the [mask] token in template T with every selected answer in V F to get the prompts P. The proposed method follows a Template-Answer-Prompt then CLIP discrimination pipeline, and thus we name it as TAP-C. To make better use of template T parsing and T demo , we use an ensemble of both templates by simply setting a threshold for the T5's generation confidence. 
We prefer to use T demo but use T parsing if the generation confidence is low. Finally, given an image i and the generated prompts P, the TAP-C method can get a zero-shot VQA prediction by: where V and T are the visual and text encoders in CLIP models. The p v is a prompt generated by the TAP-C method, where the masked template is infilled with answer v from the filtered answer vocabulary V F . We report the zero-shot cross-modality transfer results in Table Here we briefly define the terminology used in our few-shot visual question answering settings: • Number of ways. Originally, it is defined as the distinct classes in a task. However, rather than defining a 3,129-way task according to the answer vocabulary, we define the number of ways as question type times answer type ( § 2.2), i.e., 65×3=195 ways, to ensure the model's generalization ability where it can answer a type of questions. • Number of shots. The number of distinct examples in each way. Here a shot is an image along with the question and the answer. • Support set and query set. Under the few-shot setting, our goal is to make the CLIP models learn from N-way K-shot examples and improve the zero-shot VQA performance. Specifically, we identify only a very small set of parameters in CLIP models (about 0.3 million out of over 100 million, details in appendix B.3), including the bias term and normalization term, to be optimized. For either the BatchNorm in ResNet or the LayerNorm in Transformer, the normalization could be uniformly denoted as: where x and y are the mini-batched input and output, and the γ and β are learned parameters. And for all the linear layers and projection layers in CLIP models, they could be denoted as: where h and o are the input and output vectors. We define the learnable parameter set as: We optimize the Bias and Normalization (BiNor) parameters on the few-shot examples with a standard cross-entropy loss over the dot products from each image-prompt pair (Eq.2). Besides, when there are a few examples available, we could also leverage an in-context demonstration manner to improve the performance of the answer filtering process in TAP-C ( § 3.1) by: Top-k where the D denotes the demonstrations. D is similar to template T but has been infilled with the answers, and it is sampled from the same type of question in the available few-shot examples. The resulting filtered vocabulary is noted as V demo . We report the few-shot training procedure in appendix C. Datasets. For visual question answering and visual entailment, we carry out experiments on the VQAv2 CLIP models. According to the types of visual encoders, e.g. ResNet or ViT, CLIP models have different variants, resulting in a significant difference in the number of learnable bias and normalization parameters. We report the number of learnable parameters of CLIP variants in appendix B.3. We select two best performing (and publicly available) variants from two kinds of visual encoders, including the CLIP Res50x16 and the CLIP ViT-B/16, to empirically study their zero-shot and few-shot vision-language understanding performances by applying our transferring methods ( § § 3-5). As previous VL models heavily rely on object detection sub-modules, it is not feasible to directly apply them under the zero-shot setting. Here we setup zero-shot VL baselines from two latest works: • Frozen. Frozen It is trained on aligned image-caption data and is also the first model that shows promising zero-shot and few-shot VQA performances. • Question irrelevant prompt. 
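To make the two ideas above concrete (scoring answer-infilled prompts with CLIP's image-text dot product, and fine-tuning only bias and normalization, BiNor, parameters), here is a rough sketch using the open-source clip package and PyTorch. The prompt handling, parameter selection logic, and hyperparameters are assumptions for illustration, not the paper's exact setup.

# Sketch only: zero-shot TAP-C-style answer scoring with CLIP, and collecting
# bias/normalization ("BiNor") parameters for few-shot fine-tuning.
# Assumes the open-source `clip` package (github.com/openai/CLIP).
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def zero_shot_vqa(image, template, filtered_answers):
    """Fill the masked template with each candidate answer and pick the
    answer whose prompt best matches the image under CLIP."""
    prompts = [template.replace("[mask]", a) for a in filtered_answers]
    with torch.no_grad():
        img = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        txt = model.encode_text(clip.tokenize(prompts).to(device))
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        scores = (img @ txt.T).squeeze(0)              # T(text) . V(image)
    return filtered_answers[int(scores.argmax())]

def binor_parameters(clip_model):
    """Unfreeze only normalization weights and bias terms; everything else
    stays frozen. Module-type matching here is a heuristic assumption."""
    params = []
    norm_types = (torch.nn.LayerNorm, torch.nn.BatchNorm2d)
    for p in clip_model.parameters():
        p.requires_grad_(False)
    for module in clip_model.modules():
        if isinstance(module, norm_types):
            for p in module.parameters():
                p.requires_grad_(True)
                params.append(p)
    for name, p in clip_model.named_parameters():
        if name.endswith(".bias") and not p.requires_grad:
            p.requires_grad_(True)
            params.append(p)
    return params

optimizer = torch.optim.Adam(binor_parameters(model), lr=1e-5)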
We report the zero-shot VQA results in Table We report the few-shot VQA results in Table We take the Frozen model and the image blacked out Frozen blind as baselines. Under different k, our methods could always learn from limited training examples and improve over the zero-shot results, which confirms that CLIP models could be VL fewshot learners. With the increase of the number of shots, significant performance gains are observed in other category, which concurs with our intuition: as we sample examples from each question type, most answers in other category are not revealed to the model. As a result, the model could always learn to improve. Similarly, presenting examples to the T5 could also improve the answer filtering process, leading to significant performance gains over the other category. In contrast, the score of number category improves significantly when the model just begins to see some training examples while slowing down as k continues to increase. The effects of template generation methods. Our TAP-C method uses an ensemble of depen- dency parsing template T parsing and T5 demonstration template T demo . Here we investigate whether it is necessary to use such an ensemble. We report the ablation results of two templates in Table The effects of two steps in TAP-C. The TAP-C method generates prompts through template generation (t.gen.) and answer filtering (a.filt.). Here we quantify how much each step contributes to the final zero/few-shot VQA performances. We report the ablation results in Table Limitations of TAP-C. The proposed TAP-C method explores CLIP models' potential to conduct zero/few-shot VQA tasks. However, we also found several limitations that hinder further improving the few-shot performance, which could be rooted in the CLIP models. First, CLIP models struggle with counting the number of fine-grained objects in an image, especially counting from a small area of the image. This shortcoming can hardly be improved by any kind of language knowledge. Besides, the CLIP models perform poorly in distinguishing subtle semantic differences. For example, when asked "what is the man in the background doing?", all the experimented CLIP models give predictions of the man "in the foreground". Under such cases, even if the TAP-C method perfectly converts the question into a prompt, the final results would still be wrong. Nevertheless, We believe this issue could be well addressed by enhancing CLIP models with a stronger text encoder, and we will make explorations in future work. Vision-language few-shot learning. Leveraging aligned caption data, vision-language models pre-trained by an image-text discriminative loss have recently enabled strong zero-shot generalization on image classification and cross-modality retrieval tasks Language model prompting. This work is also inspired by the line of research in language model prompting In this work, we empirically studied how to transfer CLIP models into vision-language understanding tasks. We first explored the CLIP models' zero-shot VQA capability by leveraging language prompts and further proposed a parameter-efficient finetuning method to boost the few-shot performance. We also demonstrate a zero-shot cross-modality transfer capability of CLIP models on the visual entailment task. Experiments and analyses on VQAv2 and SNLI-VE confirm that the CLIP models can be good VL few-shot learners. In this section, we showcase several template generation examples to illustrate how the proposed method works. 
Since we have introduced how to convert a question into a masked template by demonstrating examples to the T5 ( § 3.1), here we directly present several examples in Table
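Complementing those template examples, the answer-filtering step of TAP-C (§ 3.1) can be sketched as ranking each vocabulary answer by how plausibly a pre-trained T5 would infill it into the masked template. The checkpoint size, sentinel handling, and top-k cut-off below are illustrative assumptions rather than the authors' released code.

# Sketch of TAP-C-style answer filtering via T5 span infilling (assumed details).
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
t5 = T5ForConditionalGeneration.from_pretrained("t5-large").eval()

def filter_answers(template, answer_vocab, top_k=10):
    """template: a statement with a [mask] span, e.g. 'the flower is a [mask]'.
    Returns the top_k answers that T5 finds most plausible for the span."""
    source = template.replace("[mask]", "<extra_id_0>")
    enc = tokenizer(source, return_tensors="pt")
    scored = []
    with torch.no_grad():
        for answer in answer_vocab:
            target = tokenizer(f"<extra_id_0> {answer} <extra_id_1>",
                               return_tensors="pt").input_ids
            loss = t5(input_ids=enc.input_ids,
                      attention_mask=enc.attention_mask,
                      labels=target).loss          # mean token NLL of the span
            scored.append((loss.item(), answer))
    scored.sort()                                  # lower loss = more plausible
    return [answer for _, answer in scored[:top_k]]

# e.g. filter_answers("the species of the flower is [mask]",
#                     ["rose", "vase", "daisy"], top_k=2)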
Learning Architectures from an Extended Search Space for Language Modeling
Neural architecture search (NAS) has advanced significantly in recent years but most NAS systems restrict search to learning architectures of a recurrent or convolutional cell. In this paper, we extend the search space of NAS. In particular, we present a general approach to learn both intra-cell and inter-cell architectures (call it ESS). For a better search result, we design a joint learning method to perform intra-cell and inter-cell NAS simultaneously. We implement our model in a differentiable architecture search system. For recurrent neural language modeling, it outperforms a strong baseline significantly on the PTB and Wiki-Text data, with a new state-of-the-art on PTB. Moreover, the learned architectures show good transferability to other systems. E.g., they improve state-of-the-art systems on the CoNLL and WNUT named entity recognition (NER) tasks and CoNLL chunking task, indicating a promising line of research on large-scale prelearned architectures.
Neural models have shown remarkable performance improvements in a wide range of natural language processing (NLP) tasks. Systems of this kind can broadly be characterized as following a neural network design: we model the problem via a pre-defined neural architecture, and the resulting network is treated as a black-box family of functions for which we find parameters that can generalize well on test data. This paradigm leads to many successful NLP systems based on well-designed architectures. The earliest of these makes use of recurrent neural networks (RNNs) for representation learning In designing such models, careful engineering of the architecture plays a key role for the state-ofthe-art though it is in general extremely difficult to find a good network structure. The next obvious step is toward automatic architecture design. A popular method to do this is neural architecture search (NAS). In NAS, the common practice is that we first define a search space of neural networks, and then find the most promising candidate in the space by some criteria. Previous efforts to make NAS more accurate have focused on improving search and network evaluation algorithms. But the search space is still restricted to a particular scope of neural networks. For example, most NAS methods are applied to learn the topology in a recurrent or convolutional cell, but the connections between cells are still made in a heuristic manner as usual Note that the organization of these sub-networks remains important as to the nature of architecture design. For example, the first-order connectivity of cells is essential to capture the recurrent dynamics in RNNs. More recently, it has been found that additional connections of RNN cells improve LSTM models by accessing longer history on language modeling tasks In this paper, we address this issue by enlarging the scope of NAS and learning connections among sub-networks that are designed in either a handcrafted or automatic way (Figure Our ESS method is simple for implementation. We experiment with it in an RNN-based system for language modeling. On the PTB and WikiText data, it outperforms a strong baseline significantly by 4.5 and 2.4 perplexity scores. Moreover, we test the transferability of the learned architecture on other tasks. Again, it shows promising improvements on both NER and chunking benchmarks, and yields new state-of-the-art results on NER tasks. This indicates a promising line of research on largescale pre-learned architectures. More interestingly, it is observed that the inter-cell NAS is helpful in modeling rare words. For example, it yields a bigger improvement on the rare entity recognition task (WNUT) than that on the standard NER task (CoNLL).
NAS is a promising method toward AutoML Despite of great success, previous studies restricted themselves to a small search space of neural networks. For example, most NAS systems were designed to find an architecture of recurrent or convolutional cell, but the remaining parts of the network are handcrafted In this work we use RNNs for description. We choose RNNs because of their effectiveness at preserving past inputs for sequential data processing tasks. Note that although we will restrict ourselves to RNNs for our experiments, the method and discussion here can be applied to other types of models. For a sequence of input vectors {x 1 , ..., x T }, an RNN makes a cell on top of every input vector. The RNN cell receives information from previous cells and input vectors. The output at time step t is defined to be: where π(•) is the function of the cell. ĥt-1 is the representation vector of previous cells, and xt is the representation vector of the inputs up to time step t. More formally, we define ĥt-1 and xt as functions of cell states and model inputs, like this where h [0,t-1] = {h 0 , ..., h t-1 } and x [1,t-1] = {x 1 , ..., x t-1 }. f (•) models the way that we pass information from previous cells to the next. Likewise, g(•) models the case of input vectors. These functions offer a general method to model connections between cells. For example, one can obtain a vanilla recurrent model by setting ĥt-1 = h t-1 and xt = x t , while more intra-cell connections can be considered if sophisticated functions are adopted for f (•) and g(•). While previous work focuses on searching for the desirable architecture design of π(•), we take f (•) and g(•) into account and describe a more general case here. We separate two sub-problems out from NAS for conceptually cleaner description: • Intra-Cell NAS. It learns the architecture of a cell (i.e., π(•)). • Inter-Cell NAS. It learns the way of connecting the current cell with previous cells and input vectors (i.e., f (•) and g(•)). In the following, we describe the design and implementation of our inter-cell and intra-cell NAS methods. For search algorithms, we follow the method of differentiable architecture search (DARTS). It is gradient-based and runs orders of magnitude faster than earlier methods where W j is the parameter matrix of the linear transformation, and θ i,j k is the weight indicating the importance of o i,j k (•). Here the subscript k means the operation index. θ i,j k is obtained by softmax normalization over edges between nodes i and j: θ i,j k = exp(w i,j k )/ k exp(w i,j k ). In this way, the induction of discrete networks is reduced to learning continuous variables {θ i,j k } at the end of the search process. This enables the use of efficient gradient descent methods. Such a model encodes an exponentially large number of networks in a graph, and the optimal architecture is generated by selecting the edges with the largest weights. The common approach to DARTS constraints the output of the generated network to be the last node that averages the outputs of all preceding nodes. Let s n be the last node of the network. We have Given the input vectors, the network found by DARTS generates the result at the final node s n . Inter-cell ... Here we present a method to fit this model into intra and inter-cell NAS. We re-formalize the function for which we find good architectures as F (α; β). α and β are two groups of the input vectors. We create DAGs on them individually. This gives us two DAGs with s α and s β as the last nodes. 
Then, we make the final output by a Hadamard product of s α and s β , like this, See Figure Another note on F (α; β). The grouping reduces a big problem into two cheap tasks. It is particularly important for building affordable NAS systems because computational cost increases exponentially as more input nodes are involved. Our method instead has a linear time complexity if we adopt a reasonable constraint on group size, leading to a Table The search of intra-cell architectures is trivial. Since β = 1 and s β = 1 (see Table (2019a)'s work and force the input of networks to be a single layer network of ĥt-1 and xt . This can be described as where W (h) and W (x) are parameters of the transformation, and tanh is the non-linear transformation. e 1 is the input node of the graph. See Figure To learn ĥt-1 and xt , we can run the DARTS system as described above. However, Eqs. (2-3) define a model with a varying number of parameters for different time steps, in which our architecture search method is not straightforwardly applicable. Apart from this, a long sequence of RNN cells makes the search intractable. Function JOINTLEARN (rounds, w, W ) 1: for i in range(1, rounds) do 2: while intra-cell model not converged do 3: Update intra-cell w (intra) and W 4: while inter-cell model not converged do 5: Update inter-cell w (inter) and W 6: Derive architecture based on w 7: return architecture For a simplified model, we re-define f (•) and g(•) as: where m is a hyper-parameter that determines how much history is considered. Eq. ( Learning f (•) and g (•) fits our method well due to the fixed number of input vectors. Note that f (•) has m input vectors x [t-m,t-1] for learning the gate network. Unlike what we do in intra-cell NAS, we do not concatenate them into a single input vector. Instead, we create a node for every input vector, that is, the input vector e i = x t-i links with node s i . We restrict s i to only receive inputs from e i for better processing of each input. This can be seen as a pruned network for the model described in Eq. (4). See Figure Our model is flexible. For architecture search, we can run intra-cell NAS, or inter-cell NAS, or both of them as needed. However, we found that simply joining intra-cell and inter-cell architectures might not be desirable because both methods were restricted to a particular region of the search space, and the simple combination of them could not guarantee the global optimum. This necessitates the inclusion of interactions between intra-cell and inter-cell architectures into the search process. Generally, the optimal inter-cell architecture depends on the intra-cell architecture used in search, and vice versa. A simple method that considers this issue is to learn two models in a joint manner. Here, we design a joint search method to make use of the interaction between intra-cell NAS and inter-cell NAS. Figure Obviously, a single run of intra-cell (or inter-cell) NAS is a special case of our joint search method. For example, one can turn off the inter-cell NAS part (lines 4-5 in Figure We experimented with our ESS method on Penn Treebank and WikiText language modeling tasks and applied the learned architecture to NER and chunking tasks to test its transferability. For language modeling task, the monolingual and evaluation data came from two sources. • Penn Treebank (PTB). We followed the standard preprocessed version of PTB • WikiText-103 (WT-103). 
We also used WikiText-103 Our ESS method consisted of two components, including recurrent neural architecture search and architecture evaluation. During the search process, we ran our ESS method to search for the intra-cell and inter-cell architectures jointly. In the second stage, the learned architecture was trained and evaluated on the test dataset. For architecture search on language modeling tasks, we applied 5 activation functions as the candidate operations, including drop, identity, sigmoid, tanh and relu. On the PTB modeling task, 8 nodes were equipped in the recurrent cell. For the intercell architecture, it received 3 input vectors from the previous cells and consisted of the same number of the intermediate nodes. By default, we trained our ESS models for 50 rounds. We set batch = 256 and used 300 hidden units for the intra-cell model. The learning rate was set as 3 × 10 -3 for the intracell architecture and 1 × 10 -3 for the inter-cell architecture. The BPTT (Werbos, 1990) length was 35. For the search process on WikiText-103, we developed a more complex model to encode the representation. There were 12 nodes in each cell and 5 nodes in the inter-cell networks. The batch size was 128 and the number of hidden units was 300 which was the same with that on the PTB task. We set the intra-cell and inter-cell learning rate to 1 × 10 -3 and 1 × 10 -4 . A larger window size (= 70) for BPTT was applied for the WikiText-103. All experiments were run on a single NVIDIA 1080Ti. After the search process, we trained the learned architectures on the same data. To make it comparable with previous work, we copied the setup in Here we report the perplexity scores, number of parameters and search cost on the PTB and WikiText-103 datasets (Table Also, we find that searching for the appropriate connections among cells plays a more important role in improving the model performance. We observe that the intra-cell NAS (DARTS) system underperforms the inter-cell counterpart with the same number of parameters. It is because the welldesigned intra-cell architectures (e.g., Mogrifier-LSTM) are actually competitive with the NAS structures. However, the fragile connections among different cells greatly restrict the representation space. The additional inter-cell connections are able to encode much richer context. Nevertheless, our ESS method does not defeat the manual designed Transformer-XL on the WikiText-103 dataset, even though ESS works better than other RNN-based NAS methods. This is partially due to the better ability of Transformer-XL to capture the language representation. Note that RNNs are not good at modeling the long-distance dependence even if more history states are considered. It is a good try to apply ESS to Transformer but this is out of the scope of this work. To modulate the complexity of the intra and intercell, we study the system behaviors under different numbers of intermediate nodes (Figure In order to figure out the advantage of inter-cell connections, we detail the model contribution on each word on the validation data. Specifically, we compute the difference in word loss function (i.e., Additionally, we visualize the learned intracell architecture in Figure After architecture search, we test the transferability of the learned architecture. In order to apply the model to other tasks, we directly use the architecture searched on WikiText-103 and train the param-Models F1 Cross-BiLSTM-CNN We have proposed the Extended Search Space (ESS) method of NAS. 
It learns intra-cell and inter-cell architectures simultaneously. Moreover, we present a general model of differentiable architecture search to handle the arbitrary search space. Meanwhile, the high-level and low-level sub-networks can be learned in a joint fashion. Experiments on two language modeling tasks show that ESS yields improvements of 4.5 and 2.4 perplexity scores over a strong RNN-based baseline. More interestingly, it is observed that transferring the pre-learned architectures to other tasks also obtains a promising performance improvement.
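To make the search-space relaxation described in the method section more concrete, the sketch below shows a DARTS-style mixed operation over the five candidate activations used in the experiments (drop, identity, sigmoid, tanh, relu) and how a discrete choice is derived after search. The class layout, dimensions, and initialization are assumptions for illustration; the alternating intra-cell and inter-cell rounds of JOINTLEARN would simply update the architecture weights of the corresponding edge sets in turn.

# Illustrative DARTS-style relaxation for one edge of the intra- or inter-cell
# DAG (a sketch under assumptions, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_OPS = {
    "drop":     lambda x: torch.zeros_like(x),
    "identity": lambda x: x,
    "sigmoid":  torch.sigmoid,
    "tanh":     torch.tanh,
    "relu":     F.relu,
}

class MixedOp(nn.Module):
    """One edge (i, j): a softmax-weighted mixture of candidate operations
    applied to a linear transformation of the source node."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)                                   # W^{i,j}
        self.arch_weights = nn.Parameter(torch.zeros(len(CANDIDATE_OPS)))   # w_k^{i,j}

    def forward(self, x):
        theta = F.softmax(self.arch_weights, dim=-1)                        # θ_k^{i,j}
        h = self.linear(x)
        return sum(t * op(h) for t, op in zip(theta, CANDIDATE_OPS.values()))

    def derive(self):
        """After search, keep only the operation with the largest weight."""
        return list(CANDIDATE_OPS)[int(self.arch_weights.argmax())]

# Usage: edges of both the intra-cell and inter-cell DAGs are MixedOp instances;
# JOINTLEARN alternates between updating arch_weights of the intra-cell edges
# and those of the inter-cell edges (together with the model weights).
edge = MixedOp(dim=300)
out = edge(torch.randn(8, 300))
print(edge.derive())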
LOCALRQA: From Generating Data to Locally Training, Testing, and Deploying Retrieval-Augmented QA Systems
Retrieval-augmented question-answering systems combine retrieval techniques with large language models to provide answers that are more accurate and informative. Many existing toolkits allow users to quickly build such systems using off-the-shelf models, but they fall short in supporting researchers and developers to customize the model training, testing, and deployment process. We propose LOCALRQA 1 , an open-source toolkit that features a wide selection of model training algorithms, evaluation methods, and deployment tools curated from the latest research. As a showcase, we build QA systems using online documentation obtained from Databricks and Faire's websites. We find 7B-models trained and deployed using LOCAL-RQA reach a similar performance compared to using OpenAI's text-ada-002 and GPT-4-turbo.
Retrieval-augmented question-answering (RQA) systems enhance large language models (LLMs) by enabling them to search through a large collection of documents before answering a user's query. These systems have shown improved performance in providing more accurate, informative, and factually grounded answers compared to using LLMs alone We introduce LOCALRQA, an open-source toolkit that enables researchers and developers to easily train, test, and deploy RQA systems using techniques from recent research. Given a collection of documents, users can use pre-built pipelines in our framework to quickly assemble an RQA system using the best off-the-shelf models. Alternatively, users can create their own training data, train open-source models using algorithms from latest research, and deploy a local RQA system that achieves similar performance compared to using paid services such as OpenAI's models. To our knowledge, LOCALRQA is the first toolkit that provides a wide range of training algorithms and automatic evaluation metrics curated from the latest research (see Table
Haystack Table approaches RQA systems combine retrievers with powerful LLMs to provide answers that are more accurate and informative. Given a user query, a retriever first selects k most relevant passages from a collection of documents. Then, a generative model produces an answer conditioned on the user's query, selected passages, and a chat history. Popular methods to achieve this include concatenating all inputs into a single string and generating with decoder-only models We introduce LOCALRQA, a Python-based toolkit designed to help users flexibly train, test, and deploy RQA systems. As shown in Figure A prerequisite for training and evaluating RQA systems is a dataset of (question, answer, passage) pairs, denoted as ⟨q, a, p⟩. However, full ⟨q, a, p⟩ pairs may not always be available in practice. To cater to various scenarios, our toolkit provides: 1) scripts to generate ⟨q, a, p⟩ pairs from a collection of documents, and 2) scripts to convert existing QA datasets into ⟨q, a, p⟩ pairs. These scripts can be useful for researchers to create RQA datasets for new domains, or for developers to prepare training/testing data for specific applications. Generate RQA Data Given a collection of documents, our scripts first use a sampling algorithm to select a set of gold (and hard negative) documents, and then use LLMs to generate questions and answers from each gold document (see Appendix C for more details). These scripts can be used to create ⟨q, a, p⟩ pairs not only from a collection of documents, but also from a collection of ⟨q, p⟩ pairs (e.g., from information retrieval datasets). Convert from Existing Datasets Many existing QA datasets include supporting passages for each gold question-answer pair. We provide scripts to download and reformat these datasets into ⟨q, a, p⟩ pairs compatible with the rest of our toolkit. This includes popular datasets such as Natural Questions Given a dataset of ⟨q, a, p⟩ pairs, users can train a retriever to select the most relevant passages for a given query. Prior work shows that using better retrievers often leads to more performant RQA systems Supported Models For lexical-based methods, we support BM25 Trainers We implement trainers for encoders that distill from a down-stream LM, and trainers that perform contrastive learning using a dataset of ⟨q, p⟩ pairs (and optionally hard negative examples). This includes trainers that: (1) distill from cross-attention scores of an encoder-decoder model Besides improving retrievers, using better generative models can more effectively incorporate retrieved passages. To this end, our toolkit provides: 1) direct support for many open-source generative models, and 2) various training algorithms to finetune these models to improve their task-specific performance. We support all huggingface Trainers We implement supervised fine-tuning trainers that concatenate input queries with groundtruth or retrieved passages, and fusion-in-decoder trainers that process retrieved passages in parallel. This includes trainers that: (1) supervised finetune a decoder using ground-truth ⟨q, a, p⟩ pairs We provide easy-to-use Python scripts for each trainer, where all training hyperparameters can be specified in a single command line. Given a retriever and a generative model, users can now assemble an end-to-end RQA system. Similar to frameworks such as LlamaIndex, LOCALRQA uses a modular design to support arbitrary combinations of retrievers, generative models, as well as user-defined modules (see ?? 
for more details), such as safety filters and decision planners In general, users can easily add new modules to an existing pipeline by: 1) implementing a class that inherits from Component, which requires defining a run method and run_input_keys, and 2) append the module to the components field. Alteratively, researchers can create a fully customized pipeline by inheriting from the RQAPipeline class. For more documentation and examples, please refer to our GitHub pages. Given an RQA system, LOCALRQA implements many automatic evaluation metrics to help users measure their system's performance. This can be used by researchers to compare their system's performance against prior work, or by developers to find the most cost-effective models/training methods suitable for their applications. We provide scripts to automatically evaluate the perfor-Listing 1 Assembling an RQA system. 1 from local_rqa import ... ) mance of any RQA system that inherits from the RQAPipeline class. These scripts will also save the evaluation results in a JSONL file, which can be used to further obtain human evaluation using our serving methods (see Section 3.6). We describe the supported automatic metrics below. Retrieval To test the performance of a retriever, we provide an evaluation script that measures: (1) Recall@k and nDCG@k score, and (2) runtime. Recall and nDCG scores are often used in information retrieval benchmarks such as BEIR To test the end-to-end performance of an RQA system, we provide an automatic evaluation script that measures: (1) retrieval performance such as Recall@k; (2) generation performance such as BLEU Finally, researchers and developers may want to showcase their RQA systems to the public, or to collect human feedback to further improve their systems using techniques such as RLHF Acceleration Frameworks To speed up document retrieval, we support FAISS Interactive UIs We provide (1) a static evaluation webpage where users directly evaluate the quality of pre-generated responses (e.g., computed from a test set); and (2) an interactive chat webpage where users can chat with a system and rate the correctness and helpfulness of each response. Both web interfaces can be easily launched with our toolkits, which not only support a variety of models (see Section 3.4) but also integrate with acceleration frameworks mentioned in the previous paragraph. See Figure To showcase our toolkit, we built two RQA systems using data scraped from Databricks and Faire's online documentations (under consent). Databricks provides the world's first data intelligence platform powered by generative AI, providing products that facilitate building, sharing, and maintaining data at scale. Faire is an online wholesale marketplace that connects independent retailers and brands around the world. Since the documents we obtained include many company/product-specific details, we believe this is an ideal use case for RQA systems. First, we describe the documentation datasets we collected in Section 4.1. Then, we describe our model training, baselines, and evaluation procedures in Section 4.2, Section 4.3, and Section 4.4. Finally, we present our main results in Section 4.5. Databricks We use data provided by Databricks' technical team, which includes documentations such as API references and technical tutorials from docs.databricks.com and kb.databricks.com. After applying our data processing scripts, we obtain a dataset of 11,136 passages with a maximum length of 400 tokens. 
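As a concrete illustration of the pipeline customization described above (a custom module subclasses Component, defines a run method and run_input_keys, and is appended to the pipeline's components field), here is a hypothetical sketch. The key names mirror those that appear in the toolkit's example listing, but the import path, data types flowing between modules, and the insertion point are assumptions; consult the GitHub documentation for the actual interfaces.

# Hypothetical sketch of adding a custom safety-filter module to a LOCALRQA
# pipeline, reconstructed from the description above (assumed API details).
from local_rqa import Component  # assumed import location

BANNED_TERMS = ("password", "credit card")

class SafetyFilter(Component):
    """Drops retrieved passages containing banned terms before generation."""
    run_input_keys = ["batch_questions", "batch_source_documents"]

    def run(self, batch_questions, batch_source_documents):
        filtered = [
            [doc for doc in docs
             if not any(term in str(doc).lower() for term in BANNED_TERMS)]
            for docs in batch_source_documents
        ]
        # Downstream modules read the (now filtered) documents from this key.
        return {"batch_source_documents": filtered}

# Assuming `rqa` is an assembled RQAPipeline (cf. Listing 1), the filter could
# be inserted before the generator module:
# rqa.components.insert(-1, SafetyFilter())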
See Appendix E for examples of preprocessed documents. Databricks Faire Retrieval Generation Retrieval Generation Recall@1 Recall@4 ROUGE-L GPT4-Acc Recall@1 Recall@4 ROUGE-L GPT4 LOCALRQA supports a large variety of models and training algorithms. To demonstrate the flexibility of our toolkit, we experiment with all available trainers and the most capable open-source models. Retrievers We consider the best open-source encoder models according to the MTEB benchmark Generators We consider the best generator models according to the Chatbot Arena leaderboard Since LOCALRQA features developing new RQA systems locally, we compare against the most powerful models accessible remotely. This include using text-ada-002 (OpenAI, 2022a) as the retriever, and prompting GPT-3.5-turbo (ChatGPT) and GPT-4-turbo as the generative models. We present a subset of automatic evaluation metrics from LOCALRQA, and also include human evaluations on the best-performing models using UIs from Section 3.6. To measure retrievers' performance, we report Recall@1 and Recall@4 which are commonly used in information retrieval Table Lastly, we use the best models from Table We present LOCALRQA, a Python-based toolkit designed to help users develop novel retrievalaugemented QA systems. Different from existing frameworks such as LlamaIndex and LangChain, our toolkit features a wide collection of training algorithms, evaluation metrics, and deploy-ment methods to help users quickly develop costeffective RQA systems. Strong results using models and training algorithms from recent research pave the way for future work to explore RQA methods in both practical and academic settings. Model Size We performed all of our experiments using a single A100 80G GPU, and investigated a large combination of model choices and training methods. Therefore, we considered the bestperforming models up to 7B parameters due to time and resource concerns. We believe experimenting with larger, more capable models could further improve the systems' performance, and we leave this for future work. More Training Algorithms Besides providing tools to help users easily build an RQA system, LOCALRQA features a collection of training algorithms and evaluation methods curated from latest research. However, this collection is nonexhaustive Compute Requirement LOCALRQA features methods to help users develop novel RQA systems locally. Compared with using paid services such as OpenAI's text-ada-002 and GPT-4, this approach is less expensive but requires access to compute resources (e.g., GPUs). To make our toolkit more accessible, we not only support open-source models from huggingface of various sizes, but also support using "remote" models such as OpenAI's ChatGPT and GPT-4. Our work describes a toolkit that can be used to help researchers develop new RQA systems. LO-CALRQA offers a suite of tools, starting from data generation to locally training, testing, and serving an RQA system. While most toolkits are not designed for unethical usage, there is often potential for abuse in their applications. In our demo (Section 4), we apply our toolkit to train RQA systems based on documentations obtained from two companies' website, Databricks and Faire. 
However, since our toolkit can be used with any kind of data, it is possible to use it for unethical tasks, such as scamming and generating harmful responses Many existing toolkits, such as Haystack, LangChain, and LLamaIndex help users quickly build an RQA system LOCALRQA support data coming from many different sources, by providing integration with frameworks such as LangChain LOCALRQA provides data generation scripts that can be used to create questions q from a set of documents p, and answers from a set of ⟨q, p⟩ pairs. These scripts can also be easily modified to use: 1) custom prompts to generate a question or answer, and 2) custom filtering functions to use a subset of the documents for question/answer generation. Question Generation Given a set of documents, LOCALRQA first creates a set of gold passages by sampling. Since contrastive learning (Section 3.2) benefits from using hard negative passages (related passages but does not contain the answer), we also sample nearby passages as hard negatives. This is achieved by first organizing all passages according to their source s i (e.g., URL or title): {p s 0 0 , p s 0 1 , ..., p s 0 n , p s 1 0 , p s 1 1 ...} and then sample from {p s j } j̸ =i as hard negatives for p s i . Next, an LLM of choice (e.g., ChatGPT) is prompted to generate k questions given a sampled gold passage. To filter duplicate questions, LOCAL-RQA uses ROUGE-L score Answer Generation Given a set of ⟨q, p⟩ pairs, LOCALRQA prompts an LLM of choice (e.g., GPT-4) to generate answers conditioned on the question q and the gold passage p. See Appendix E for examples on how to customize the data generation scripts and Appendices E and F for examples commands. LOCALRQA offers two serving methods: 1) an interactive chat page where users can chat with an RQA system while also providing ratings for each generated response, and 2) a static evaluation page where users directly evaluate the quality (e.g., accuracy, helpfulness, harmlesness) of the pre-generated response. The front-end UIs are created using Gradio Collected Documents We use documents provided by Databrick's technical team, which are already cleaned and parsed into markdown format. We present an example in Table We generate questions and answers using the data generation scripts in LOCAL-RQA. We first customize the prompts and filtering functions in order to obtain high-quality questions based mostly on technical tutorials rather than version release notes # Catalog Import for Wholesale This is ... metadata "source": " Model Training To show the flexibility of LO-CALRQA training, we present at least one run of using trainer in our main experiments Table Collected Documents We contacted Faire's Sales team and crawled documents from faire. com/support according to their suggestions. We only kept raw texts by removing all hyperlinks for images and other websites. We present an example in Table QA Generation Since document data from Faire include simpler guides and QAs compared to Databricks, we find using the default generation script in LOCALRQA sufficient to obtain highquality questions and answers. Therefore, we simply ran scripts/data/doc_to_q.py to generate questions, and scripts/data/docq_to_a.py to generate answers. Similar to Databricks, we used ChatGPT (OpenAI, 2022b) and GPT-4-turbo (Ope-nAI, 2023) to generate questions and answers, respectively. Model Training Similar to the training process implemented in Databricks, we conduct experiments across various trainers and model choices. 
See the corresponding appendix table for the full set of trainer and model combinations. An example preprocessed Faire document reads: "To have your feature image approved, you should crop it to a square shape that fills a 1:1 ratio."
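To illustrate the question-generation procedure described above (organizing passages by source, sampling hard negatives from other sources, and filtering near-duplicate questions with ROUGE-L), here is a small self-contained sketch. The sampling counts, similarity threshold, and the simple LCS-based ROUGE-L implementation are assumptions rather than the toolkit's exact scripts.

# Sketch of gold/hard-negative sampling and ROUGE-L de-duplication,
# following the description in Appendix C (assumed details).
import random
from collections import defaultdict

def sample_with_hard_negatives(passages, num_gold=100, num_neg=2, seed=0):
    """passages: list of dicts with 'source' and 'text'. Hard negatives for a
    gold passage are sampled from passages whose source differs from its own."""
    rng = random.Random(seed)
    by_source = defaultdict(list)
    for p in passages:
        by_source[p["source"]].append(p)
    gold = rng.sample(passages, min(num_gold, len(passages)))
    examples = []
    for g in gold:
        others = [p for s, ps in by_source.items() if s != g["source"] for p in ps]
        negs = rng.sample(others, min(num_neg, len(others)))
        examples.append({"gold": g, "hard_negatives": negs})
    return examples

def rouge_l_f(a, b):
    """Simple LCS-based ROUGE-L F1 over whitespace tokens."""
    x, y = a.split(), b.split()
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xt in enumerate(x):
        for j, yt in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if xt == yt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(x)][len(y)]
    if lcs == 0:
        return 0.0
    p, r = lcs / len(y), lcs / len(x)
    return 2 * p * r / (p + r)

def deduplicate_questions(questions, threshold=0.7):
    """Keep a question only if it is not too similar to one already kept."""
    kept = []
    for q in questions:
        if all(rouge_l_f(q, k) < threshold for k in kept):
            kept.append(q)
    return kept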
A* shortest string decoding for non-idempotent semirings
The single shortest path algorithm is undefined for weighted finite-state automata over nonidempotent semirings because such semirings do not guarantee the existence of a shortest path. However, in non-idempotent semirings admitting an order satisfying a monotonicity condition (such as the plus-times or log semirings), the shortest string is well-defined. We describe an algorithm which finds the shortest string for a weighted non-deterministic automaton over such semirings using the backwards shortest distance of an equivalent deterministic automaton (DFA) as a heuristic for A* search performed over a companion idempotent semiring, This algorithm is proven to return the shortest string. There may be exponentially more states in the equivalent DFA, but the proposed algorithm needs to visit only a small fraction of them if determinization is performed "on the fly".
Weighted finite-state automata provide a compact representation of hypotheses in various speech recognition and text processing applications (e.g., The shortest path-and the algorithms that compute it-are well-defined when the weights of a lattice are idempotent and exhibit the path property. These properties are formalized below, but informally they hold that the distance between any two states corresponds to a single path between those states, so that the shortest-path algorithm-having identified this path-does not need to consider the weights of competing paths between those states. However, when the weights of a lattice lack these two properties, there is no guarantee that a shortest path between any two states exists. This situation arises in many speech and language technologies. For instance, generative models for speech recognition and machine translation-and in many unsupervised settings-often use expectation maximization (EM;
Before we introduce the proposed decoding algorithm we provide definitions of key notions. Weighted automata algorithms operate with respect to an algebraic system known as a semiring, characterized by the combination of two monoids. Definition 2.1. A monoid is a pair (K, •) where K is a set and • is a binary operator over K with the following properties: 3. identity: there exists an identity element e ∈ K such that ∀a ∈ Definition 2.3. A semiring is a five-tuple (K, ⊕, ⊗, 0, 1) where: 1. (K, ⊕) is a commutative monoid with identity element 0. ) is a monoid with identity element 1. Definition 2.6. A semiring has the path property if ∀a, b ∈ K : a ⊕ b ∈ {a, b}. Remark 2.1. If a semiring has the path property it is also idempotent. Definition 2.7. The natural order of an idempotent semiring is a boolean operator Remark 2.2. In a semiring with the path property, the natural order is a total order. That is, ∀a, Definition 2.9. A semiring is negative if 1 ⪯ 0. Remark 2.3. In a monotonic negative semiring, ∀a, b ∈ K : a ⪯ 0 and a ⊕ b ⪯ b. Some examples of monotonic negative semirings are given in Table Definition 2.10. The companion semiring of a monotonic negative semiring (K, ⊕, ⊗, 0, 1) with total order ⪯ is the semiring (K, ⊕, ⊗, 0, 1) where ⊕ is the minimum binary operator for ⪯: Remark 2.4. The max-times and tropical semirings are companion semirings to the plus-times and log semirings, respectively. Remark 2.5. By construction a companion semiring has the path property and natural order ⪯. Without loss of generality, we consider singlesource ϵ-free weighted finite-state acceptors. Definition 2.11. A weighted finite-state acceptor (WFSA) is defined by a five-tuple (Q, s, Σ, ω, δ) and a semiring (K, ⊕, ⊗, 0, 1) where: 1. Q is a finite set of states. 2. s ∈ Q is the initial state. 3. Σ is the alphabet. Definition 2.12. An WFSA is acyclic if there exists a topological ordering, an ordering of the states such that if there is a transition from state q to r where q, r ∈ Q, then q is ordered before r. Otherwise, the WFSA is cyclic. Definition 2.14. Let F = {q | ω(q) ̸ = 0} denote the set of final states. Definition 2.15. A path through an acceptor p is a triple consisting of: Table 1. a state sequence q that is, each transition from q i to q i+1 must have label z i and weight k i . Definition 2.16. Let P q→r be the set of all paths from q to r where q, r ∈ Q. Definition 2.17. The forward shortest distance α ⊆ Q × K is a partial function from a state q ∈ Q that gives the ⊕-sum of the ⊗-product of the weights of all paths from the initial state s to q: Definition 2.18. The backwards shortest distance β ⊆ Q × K is a partial function from a state q ∈ Q that gives the ⊕-sum of the ⊗-product of the weights of all paths from q to a final state, including the final weight of that final state: Definition 2.19. A state is accessible if there exists a path to it from the initial state s. Definition 2.20. A state is coaccessible if there exists a path from it to a final state f ∈ F . Remark 2.6. For a state q, α(q) and β(q) are defined if and only if q is accessible and coaccessible, respectively. Definition 2.21. The total shortest distance of an automaton is β(s). That is, a complete path must also begin with an arc from the initial state s to q 1 with label z 1 and weight k 1 , and halt in a final state. Definition 2.23. The weight of a complete path is given by the ⊗-product of its weight sequence and its final weight: Definition 2.24. 
A shortest path through an automaton is a complete path whose weight is equal to the total shortest distance β(s). Remark 2.7. Automata over non-idempotent semirings may lack a shortest path Remark 2.8. It is not possible in general to efficiently find the shortest path over non-monotonic semirings. See Definition 2.25. A WFSA is deterministic if, for each state q ∈ Q, there is at most one transition with a given label z ∈ Σ from that state, and nondeterministic otherwise. Definition 2.26. A zero-sum-free semiring is Definition 2.27. A weakly divisible semiring is cancellative if c is unique and can thus be denoted by c = (a ⊕ b) -1 a Remark 2.9. All semirings in Table Remark 2.10. For every non-deterministic, acyclic WFSA (or NFA) over a zero-sum-free, weakly divisible and cancellative semiring, there exists an equivalent deterministic WFSA (or DFA). However, a DFA may be exponentially larger than an equivalent NFA We now provide a brief presentation of the determinization algorithm for WFSAs. Proofs can be found in where Q d is a finite set whose elements are subsets of Q × K, recursively defined as follows: ) defines the set of states as the closure of the next-state transition function. The transition relation is then defined as The intuition underlying this construction is that a state q ∈ Q d encodes a set of states in Q that can be reached from s by some common strings. More precisely, let p ′ be the unique path in P s d →q labeled by some z ′ ∈ Σ * , then for any (q i , k i ) ∈ q: Termination is guaranteed for acyclic WFSAs Figure Remark 2.11. Given a NFA A with backwards shortest distance β, the backwards shortest distance β d over the equivalent DFA A d can be computed from β: for any q ∈ Q d Since A is assumed to be acyclic, β can be computed in O(|Q|) time Definition 2.28. Let P z be the set of paths with string z ∈ Σ * , and let the weight of P z be Lemma 2.1. In an idempotent semiring, a shortest path's string is also a shortest string. Proof. Let p be a shortest path. By definition, thus z[p] is the shortest string. Lemma 2.2. In a DFA over a monotonic semiring, a shortest string is the string of a shortest path in that DFA viewed as an WFSA over the corresponding companion semiring. Proof. Determinism implies that for all complete path p ′ , k[p ′ ] = σ(z[p ′ ]). Let z be the shortest string in the DFA and p the unique path admitting the string z. Then for any complete path p ′ . Hence Thus p is a shortest path in the DFA viewed over the companion semiring. A* search In Dijkstra's algorithm, at every iteration the algorithm explores the state q which minimizes α(q), the shortest distance from the initial state s to q, until all states have been visited. In A* search, search priority is determined by a some function of 𭟋 ⊆ Q×K, known as the heuristic, which gives an estimate of the weight of paths from some state to a final state. At every iteration, A* instead explores the state q which minimizes α(q) ⊗ 𭟋(q). 3 Definition 2.30. An A* heuristic is admissible if it never overestimates the shortest distance to a state 3 One can thus view Dijkstra's algorithm as a special case of A* search with the uninformative heuristic 𭟋 = 1. Remark 2.12. If 𭟋 is admissible and consistent, A* search is guaranteed to find a shortest path (if one exists) after visiting all states such that 𭟋[q] ⪯ β[s] Consider an acyclic, ϵ-free WFSA over a monotonic negative semiring (K, ⊕, ⊗, 0, 1) with total order ⪯ for which we wish to find the shortest string. 
The same WFSA can also be viewed as a WFSA over the corresponding companion semiring (K, ⊕, ⊗, 0, 1), and we denote by β the backward shortest-distance over this companion semiring. We prove two theorems, and then introduce an algorithm for search. Theorem 3.1. The backwards shortest distance of an WFSA over a monotonic negative semiring is an admissible heuristic for the A* search over its companion semiring. Proof. In a monotonic negative semiring, the ⊕sum of any n terms is upper-bounded by each of the n terms and hence by the ⊕-sum of these n terms. It follows that and this shows that 𭟋 = β is an admissible heuristic for β. Theorem 3.2. The backwards shortest distance of an WFSA over a monotonic negative semiring is a consistent heuristic for the A* search over its companion semiring. Proof. Let (q, z, k, r) be a transition in δ. Leveraging again the property that an ⊕-sum of any n terms is upper-bounded by any of these terms, we show that Having established that this is an admissible and consistent heuristic for A* search over the companion semiring, a naïve algorithm then suggests itself, following Lemma 2.2 and Remark 2.12. Given a non-deterministic WFSA over the monotonic negative semiring (K, ⊕, ⊗, 0, 1), apply determinization to obtain an equivalent DFA, compute β d , the backwards shortest distance over the resulting DFA over (K, ⊕, ⊗, 0, 1) and then perform A* search over the companion semiring using β d as the heuristic. However, as mentioned in Remark 2.10, determinization has an exponential worse-case complexity in time and space and is often prohibitive in practice. Yet determinization-and the computation of elements of β d -only need to be performed for states actually visited by A* search. Let β denote the backwards shortest distance over a nondeterministic WFSA over the monotonic negative semiring (K, ⊕, ⊗, 0, 1). Then, the algorithm is as follows: 1. Compute β over (K, ⊕, ⊗, 0, 1). 2. Lazily determinize the WFSA, lazily computing β d from β over (K, ⊕, ⊗, 0, 1). 3. Perform A* search for the shortest string over (K, ⊕, ⊗, 0, 1) with β d as the heuristic. We evaluate the proposed algorithm using nonidempotent speech recognition lattices. We search for the shortest string in a sample of 700 word lattices derived from Google Voice Search traffic. This data set was previously used by The above algorithm is implemented as part of OpenGrm-BaumWelch, an open-source C++17 library released under the Apache-2.0 license. We compare the proposed algorithm to the naïve algorithm mentioned in ( §3). The naïve algorithm first exhaustively constructs the equivalent DFA by applying weighted determinization-as implemented by OpenFst's fstdeterminize command-line tool-then performs A* search on the DFA over the companion semiring. Its complexity is bounded by the number of states in the full DFA. In contrast, the complexity of the proposed algorithm is bounded by the number of DFA states dynamically constructed-i.e., when they are visited-during search. As an additional measure, we also compare the number of states visited by the proposed algorithm to the number of states in the original NFA lattice. Figure Several prior studies use A* search for decoding speech lattices over idempotent semirings. For example, We propose an algorithm which allows for efficient shortest string decoding of weighted automata over non-idempotent semirings using A* search and onthe-fly determinization. 
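As an illustration of the three-step procedure above, here is a simplified Python sketch that assumes the equivalent DFA has already been materialized and β_d precomputed (i.e., no on-the-fly determinization), with weights in the negative-log domain where ⊗ is + and the companion ⊕ is min. The data layout and names are ours, not those of the OpenGrm-BaumWelch implementation.

```python
import heapq

SUPERFINAL = object()  # virtual goal state reached from every final state

def astar_shortest_string(start, arcs, finals, beta_d):
    """A* search for the shortest string over the companion (tropical) semiring.

    arcs[q]   -> iterable of (label, weight, next_state); weights are -log values
    finals[q] -> final weight of each final state q
    beta_d[q] -> backwards shortest distance from q, used as the heuristic
    """
    counter = 0                               # tie-breaker so tuples never compare states
    queue = [(beta_d[start], 0.0, counter, start, ())]
    closed = set()
    while queue:
        _, g, _, state, string = heapq.heappop(queue)
        if state is SUPERFINAL:
            return list(string)               # first goal popped is a shortest string
        if state in closed:
            continue
        closed.add(state)
        if state in finals:                   # final states connect to the virtual goal
            counter += 1
            cost = g + finals[state]          # include the final weight
            heapq.heappush(queue, (cost, cost, counter, SUPERFINAL, string))
        for label, weight, nxt in arcs.get(state, ()):
            if nxt not in closed:
                counter += 1
                g2 = g + weight               # ⊗ is + in the -log domain
                heapq.heappush(queue, (g2 + beta_d[nxt], g2, counter, nxt, string + (label,)))
    return None                               # no complete path exists
```

Replacing the explicit arcs and finals tables with lazy expansion of DFA states, and computing β_d on demand from the NFA's β as in Remark 2.11, yields the on-the-fly variant evaluated below.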
We find that A* search results in a substantial reduction in the number of states visited during decoding, which in turn minimizes the amount of determinization required to find the shortest string. We envision several possible applications for the proposed algorithm. It could be used to exactly decode noisy channel "decipherment" models (e.g., where λ h is estimated using ordinary EM. While the evaluation ( §4) finds the proposed algorithm to be substantially more efficient than the naïve algorithm on real-world data, it has the same exponential worst-case complexity as exhaustive determinization of acyclic WFSAs. This worst case dominates the linear-time operations used to compute β n and β d , and to solve for the single shortest path. However, we conjecture that the worst case is unlikely to arise for topologies encountered in actual speech and language processing applications. We are aware of no ethical issues raised by the proposed algorithm beyond issues of dual use, bias, etc., which are inherent to all known speech and language technologies.
875
949
875
Polar Ducks and Where to Find Them: Enhancing Entity Linking with Duck Typing and Polar Box Embeddings
Entity linking methods based on dense retrieval are widely adopted in large-scale applications for their efficiency, but they can fall short of generative models, as they are sensitive to the structure of the embedding space. To address this issue, this paper introduces DUCK, an approach to infusing structural information in the space of entity representations, using prior knowledge of entity types. Inspired by duck typing in programming languages, we define the type of an entity based on its relations with other entities in a knowledge graph. Then, porting the concept of box embeddings to spherical polar coordinates, we represent relations as boxes on the hypersphere. We optimize the model to place entities inside the boxes corresponding to their relations, thereby clustering together entities of similar type. Our experiments show that our method sets new state-of-the-art results on standard entity-disambiguation benchmarks. It improves the performance of the model by up to 7.9 F 1 points, outperforms other type-aware approaches, and matches the results of generative models with 18 times more parameters.
State-of-the-art approaches to entity linking, namely the task of linking mentions of entities in a text to the corresponding entries in a knowledge base (KB) In this paper, we aim to close the gap with generative approaches by infusing structural information in the latent space of retrieval-based methods. Recent work (Figure 1 example — mention: "The first time a World Cup final was settled in a penalty shootout was in 1994, when Italy lost to Brazil."; candidate entity descriptions: "The Italy national football team has represented Italy in football since 1910." and "Italy, officially the Italian Republic or the Republic of Italy, is a country in Europe.")
We achieve this goal by drawing inspiration from the concept of duck typing in programming languages, which relies on the idea of defining the type of an object based on its properties. Extending this idea to the realm of KGs, we define the type of an entity based on the relations that it has with other entities in the graph. Figure Motivated by this intuition, we propose DUCK (Disambiguating Using Categories extracted from Knowledge), an approach to infusing prior type information in the latent space of methods based on dense retrieval. Building on recent work on region-based representations We use our approach to train a bi-encoder model with the same architecture as We start by formalizing the entity-disambiguation problem, then we outline the main intuitions underlying methods based on dense retrieval. Problem statement. The goal of entity disambiguation (ED) is to link entity mentions in a piece of text to the entity they refer to in a reference KB. For each entity e, we assume we have an entity description expressed as a sequence of tokens s e = (s (1) e , . . . , s ). Similarly, each mention m is associated with a sequence of tokens , representing the mention itself and its context. We denote the entity a mention m refers to as e ⋆ m . Further, we assume that the reference KB is a knowledge graph G = (E, R), where E is a set of entities and R is a set of relations, namely boolean functions r : E × E → {0, 1} denoting whether a relation exists between two entities. Then, given a set of entity-mention pairs D = {(m 1 , e ⋆ m 1 ), . . . , (m |D| , e ⋆ m |D| )}, we aim to learn a model f : M → E, such that the entity predicted by the model for a given mention êm = f (m) is the correct entity e ⋆ m . Dense-retrieval methods. Methods based on dense retrieval where s is a similarity function between entities and mentions. This objective encourages the representation of mention m to be close to the representation of the correct entity e ⋆ m and far from other entities e j , according to the similarity s. This similarity function s(m, e) is usually chosen to be the dot product between learned representations m m m, e e e ∈ R d of the mention and entity respectively. At inference time, a mention is encoded in the dense space of entity embeddings and the entity with the highest similarity is returned. Our approach builds on dense-retrieval methods and aims to enhance their performance using fine-grained type information. Duck typing on knowledge graphs. Duck typing is a well-known concept in dynamically typed programming languages and is based on the overall idea of weakly defining the type of an object based on its properties. Extending this concept to KGs, without any need for type labels, we can describe the type of an entity e ∈ E in terms of the set of relations labeling the edges originating from e in the KG. With slight abuse of notation, we will denote this set as R(e) = {r ∈ R | ∃e ′ ∈ E : r(e, e ′ ) = 1}. An example of how the set of relations of an entity can be used to determine its type is shown in Figure 2. For a qualitative analysis showing how duck typing works in real-world knowledge graphs, we refer the reader to Appendix A. Relations as polar box embeddings.
Inspired by region-based representations , where ϕ ϕ ϕ - r , ϕ ϕ ϕ + r ∈ R d-1 are vector of angles denoting respectively the bottom-left and top-right corners of the box in spherical coordinates. For an entity e ∈ E, we say that e ∈ Box(r), if the expression in polar coordinates ϕ ϕ ϕ e of the entity representation e e e is between ϕ ϕ ϕ - r and ϕ ϕ ϕ + r across all dimensions. Then, our goal is to structure the latent space in such a way that e ∈ Box(r + ) for every r + ∈ R(e) and e / ∈ Box(r -) for every r -∈ R \ R(e). In order to achieve the goal mentioned above, we need to turn the intuition of Section 3.1 into an optimization problem. To this end, it helps to define a distance function between an entity and a box. Entity-box distance. Following where φ ϕ ϕ r = (ϕ ϕ ϕ - r + ϕ ϕ ϕ + r )/2 is the center of the box corresponding to relation r, δ δ δ r = ϕ ϕ ϕ + r -ϕ ϕ ϕ - r is a vector containing the width of the box along each dimension, • is the Hadamard product, / is elementwise division, and κ κ κ is a vector of width-dependent scaling coefficients defined as: Intuitively, this function heavily penalizes entities outside the box, with higher distance values and gradients, whereas it mildly pushes entities lying already inside the box towards the center. We refer the reader to Appendix B for more details. Loss function for typing. To encourage an entity e ∈ E to lie inside all boxes representing the relations R(e) and outside the other boxes, we use a negative-sampling loss similar to the one of Above, γ ∈ R is a margin parameter, σ is the sigmoid function, r + is a relation of entity e, drawn uniformly from the set of relations R(e), whereas r -is a relation drawn from the set of relations R \ R(e) according to the probability distribution: where α ∈ [0, 1] is a temperature parameter. The lower α, the closer the distribution is to a uniform distribution, whereas higher values of α result in more weight given to boxes that are close to the entity. Notice that this objective forces the distance between an entity e and relations r + ∈ R(e) to be small, while keeping the entity far from boxes corresponding to the negative relations r -. Hence, optimizing the objective L Duck will result in clustering together entities that share many relations. Overall optimization objective. We train the model to optimize jointly the entity-disambiguation loss of Section 2 and the duck-typing loss L Duck . Although we defined the loss L Duck for entities, we calculate it for mentions as well, defining the set of relations of a mention based on the ground-truth entity R(m) = R(e ⋆ m ). In order to prevent boxes from growing too large during training, we further introduce an L2 regularization term l 2 on the size of the boxes: Then, our final optimization objective is: where λ Duck , λ l2 ∈ [0, 1] are hyperparameters defining the weight of each component of the loss. Building on prior work, we used the method described in Section 3 to train a bi-encoder model with the same architecture of Bi-encoders, introduced in this context by Entity encoder. Given a textual description of an entity e ∈ E, expressed as a sequence of tokens s e = (s (1) e , . . . , s ), we learn an entity representation e e e ∈ R d as: e e e = f entity (s e ). Concretely, following prior work Mention encoder. We model a mention as a sequence of tokens s m = (s ) denoting both the mention itself and the context surrounding it, up to a maximum mention length n m . 
Following where f mention is a mention encoder based on a pre-trained RoBERTa model and the final mention representation m m m is obtained using the encoding of the [CLS] token. Overall, our bi-encoder is the same as the one used by Relation modeling. We model a relation r ∈ R as a sequence of tokens s r = (s (1) r , . . . , s ). These sequences are extracted from Wikidata where f relation is a relation encoder similar to f entity and f mention , which computes the relation representation r r r as the embedding of the [CLS] token produced by a pre-trained RoBERTa model. Learning boxes in polar coordinates. Given a relation representation r r r calculated as described above, we parametrize a box as a pair of vectors , where: Above, FFN -and FFN + are 2-layer feed-forward networks, σ is the sigmoid function, and δ min is a margin parameter denoting the minimum width of a box across any dimension. Calculating the corners of a box in this manner allows us to achieve two main objectives: (i) all components of ϕ ϕ ϕ - r and ϕ ϕ ϕ + r range from 0 to π, hence they assume valid values in the spherical coordinate system, and (ii) ϕ ϕ ϕ + r is greater than ϕ ϕ ϕ - r across all dimensions, so that boxes are never empty and the model does not have to learn how to produce non-degenerate regions. Notice that, in a spherical coordinate system, only one of the coordinates is allowed to range from 0 to 2π, while all remaining coordinates will range from 0 to π. For simplicity, we constrain all coordinates in the interval [0, π], thereby reducing all representations to half of the hypersphere. Training. We train DUCK by optimizing the overall objective defined in Section 3. In order to compute the loss L Duck , we calculate the representations ϕ ϕ ϕ e , ϕ ϕ ϕ m ∈ R d-1 by converting to spherical coordinates the entity and mention representations e e e and m m m produced by the entity and mention encoders respectively. To make training more efficient, the relation representations r r r are pre-computed and kept fixed at training time. We use the dot product between entity and mention representations to evaluate the entity disambiguation loss L ED : s(e, m) = e e e ⊤ m m m. The expectations in the loss L Duck are estimated across all relations r + ∈ R(e) and by sampling k relations r -∈ R \ R(e) according to p(r -| e). The L2 regularization on the width of the boxes is performed across all relations in a batch. Inference. At inference time, our approach is not different from the method of where E m ⊆ E is a set of candidate entities for mention m. In practice, we can precompute all entity embeddings, so that inference only requires one forward pass through the mention encoder and selecting the entity with the highest similarity. This section provides a thorough evaluation of our approach. First, we show that DUCK achieves new state-of-the-art results on popular datasets for entity disambiguation, closing the gap between retrievalbased methods and more expensive generative models. Then, we discuss several ablation studies, showing that incorporating type information using box embeddings in polar coordinates improves the performance of the model. Finally, we dig into qualitative analyses, showing that our model is able to place entities in the correct boxes despite the incompleteness of the information in the KG. 
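Stepping back to the box parametrization described earlier: since the exact corner equations are elided in this excerpt, the following is a hedged PyTorch sketch of one plausible parametrization (not necessarily the paper's) that satisfies the two stated constraints — every coordinate lies in (0, π) and ϕ+ exceeds ϕ- by at least δ_min — using two 2-layer feed-forward networks and a sigmoid.

```python
import math
import torch
import torch.nn as nn

class PolarBoxHead(nn.Module):
    """Map a relation embedding r to box corners (phi_minus, phi_plus) in
    spherical polar coordinates, with valid angles and non-empty boxes."""

    def __init__(self, dim, box_dim, delta_min=0.01):
        super().__init__()
        self.ffn_minus = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, box_dim))
        self.ffn_plus = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, box_dim))
        self.delta_min = delta_min

    def forward(self, r):
        # Lower corner in (0, pi - delta_min).
        phi_minus = (math.pi - self.delta_min) * torch.sigmoid(self.ffn_minus(r))
        # Upper corner in (phi_minus + delta_min, pi): the box is never empty
        # and never leaves the valid angular range.
        room = math.pi - self.delta_min - phi_minus
        phi_plus = phi_minus + self.delta_min + room * torch.sigmoid(self.ffn_plus(r))
        return phi_minus, phi_plus
```

Squashing the lower corner first and then allocating only the remaining angular room to the width keeps the boxes non-degenerate without any clipping during training.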
We reproduce the same experimental setup of prior work We compared DUCK against three main categories of approaches: (a) methods based on dense retrieval, (b) generative models, and (c) type-aware models, namely other approaches to adding type information to retrieval-based methods (DUCK pertains to this category). We report the results both for the model trained only on the BLINK data and for the model fine-tuned on AIDA, referring to the former as "DUCK (Wikipedia)" and to the latter as "DUCK (fine-tuned)". Main results. In order to provide more insights into the performance of the model, we performed several ablation studies. First, we performed an ablation where we removed the contribution of the L Duck terms and the L2 regularization l 2 from the loss function (DUCK w/o types). In this case, we only train the model using the entity-disambiguation loss L ED , without infusing any type information. In addition, we assessed the benefit of using box embeddings in spherical polar coordinates by experimenting with a version of the model where boxes are expressed in cartesian coordinates (DUCK cartesian coord). In this case, we parametrize a box as a pair of vectors Box(r) = (r r r -, r r r + ), where r r r -= FFN -(r r r), r r r + = r r r -+ ReLU(FFN + (r r r)) + δ ′ min . As before, δ ′ min is a margin parameter that defines the minimum width of a box, FFN -and FFN + are feed-forward networks, and ReLU(x) = max(0, x) is the ReLU activation function. Finally, we report the results obtained by DUCK when no candidate set is provided (DUCK w/o candidate set). In this case, we score each mention against the whole set of entities (which amounts to almost 6M entities). Table This section complements the quantitative results discussed so far with some qualitative analyses. Analysis of the boxes. Table Examples. Figure Our work builds on top of the bi-encoder architecture of This paper introduced DUCK, a method to improve the performance of entity disambiguation models using prior type knowledge. The overall idea underlying our method was inspired by the concept of duck typing, as we defined types in a fuzzy manner, without any need for type labels. We introduced box embeddings in spherical polar coordinates and we demonstrated that using this form of representation allows effectively clustering entities of the same type. Crucially, we showed that infusing structural information in the latent space is sufficient to close the gap between efficient methods based on dense retrieval and generative models. As a future line of research, it might be interesting to explore methods to infuse prior knowledge of entity types in generative models as well. Our method assumes that we have access to both entity descriptions in natural language and a knowledge graph providing relations between pairs of entities. Methods based on dense retrieval (without type information) usually rely only on the first assumption. In our experiments, entity descriptions are obtained from Wikipedia (more precisely, from the KILT dump of Compared to other type-aware methods, DUCK has the disadvantage that we cannot predict the type of a mention in the form of a label. This is a design choice that allows modeling type information in a more fine-grained manner. As shown in the paper, this choice results in better overall entity-disambiguation performance compared to other type-aware methods. 
In applications where it would be interesting to obtain the type of a mention in the form of a label, we believe that a simple heuristic correlating the relations of an entity to its type in Wikidata would be very effective. We refer the reader to Appendix A for insights on the type information carried by relations in a KG. Additionally, we emphasize that the choice of spherical polar coordinates for modeling relational information is dependent on the use of the dot product or the cosine similarity as the function for ranking the closest entities to a given mention. In case a different function is used (e.g., the L 2 distance), then box embeddings in cartesian coordinates might be better suited. We used the dot product because it is the most popular choice, allowing us to build on the model of One more caveat is that our method is sensitive to the margin parameter γ. In case DUCK is trained on different domains, it might be beneficial to tune this parameter carefully. We tried using probabilistic box embeddings (similar to Finally, our loss function does not optimize only for entity disambiguation. Hence, DUCK might occasionally loose contextual information, in favor of placing the mention in the correct boxes. This is shown in the rightmost examples of Figure Entity disambiguation is a well-known task in natural language processing, with several real-world applications in different domains, including content understanding, recommendation systems, and many others. As such, it is of utmost importance to consider ethical implications and evaluate the potential bias that ED models could exhibit. DUCK is trained on Wikipedia and Wikidata To get more insights into our definition of duck typing on knowledge graphs, we performed a qualitative analysis of entities that share a large number of relations in Wikidata as a measure of the distance between the types of two entities e 1 and e 2 . Notice that the distance defined above can be expressed as the Hamming distance between binary encodings of the sets of relations, hence we can efficiently retrieve the neighbors of a given entity on GPU, following the method of This section provides more details on the entitybox distance function defined in Section 3.2. A plot of the distance function in the uni-dimensional case, for a scalar entity representation e and several boxes centered at π/2 with different scalar widths δ r is shown in Figure In order to train DUCK, we need to convert the entity representations e e e into spherical polar coordinates (the same applies to the mention representations m m m). This can be done as follows: Looking at the equations above, we notice that in a spherical coordinate systems, all angles range from 0 to π, with the only exception of the last coordinate ϕ e,d-1 , which ranges from 0 to π if e d is positive and from π to 2π otherwise. In order to make the definition of the boxes and of the entity-box distance simpler, we decided to constrain the last coordinate in the range [0, π] as well. We achieved this objective by constraining the last coordinate of the entity and mention representations to be positive, applying an absolute value to the last dimension of the output of the entity and mention encoders. This essentially restricts all representations and boxes to be on half of the hypersphere, more precisely on the portion where e d > 0. We apply this transformation before computing the overall optimization objective of Section 3.2. 
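A compact sketch of the cartesian-to-hyperspherical conversion described here, with the absolute value applied to the last coordinate so that all resulting angles fall in [0, π]; the vectorized suffix-norm computation is our own implementation detail.

```python
import torch

def to_hyperspherical_angles(e, eps=1e-12):
    """Convert cartesian representations e of shape (batch, d) to d-1 polar angles.

    The last cartesian coordinate is forced positive, so every angle lies in
    [0, pi] and all representations live on half of the hypersphere.
    """
    e = torch.cat([e[..., :-1], e[..., -1:].abs()], dim=-1)
    # suffix_norm[..., k] = sqrt(e_k^2 + ... + e_d^2) for every position k.
    suffix_norm = torch.sqrt(torch.cumsum(e.flip(-1) ** 2, dim=-1)).flip(-1)
    cos = e[..., :-1] / suffix_norm[..., :-1].clamp_min(eps)
    return torch.acos(cos.clamp(-1.0, 1.0))   # (batch, d-1) angles in [0, pi]
```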
In order to train DUCK, we need to select negative entities e j for the entity-disambiguation loss L ED of Section 3. Having high-quality negative entities is crucial to achieve high performance, thus we trained DUCK in several stages. First, we trained the model using, as negative entities for each mention, all entities in the same batch. In order to provide more meaningful information, we further added entities that maximize a prior probability p(e|m), extracted from count statistics derived from large text corpora. In details, we used the prior probabilities of We trained the model for 1 epoch on 8 GPUs, validating on the BLINK validation set every 5000 gradient steps. Then, we used the model that maximizes the validation performance to produce a representation for every entity, and we mined the closest representations for each entity in Wikipedia. This step is usually referred to as hard-negative mining. We used these entities as negative examples for the L ED loss and trained the model again, starting from the same checkpoint employed for the negative-mining stage. We used a batch size of 16, with 3 negative examples for each mention and up to 32 entities in a batch. We increased the sampling temperature for the boxes to α = 0.5, keeping a threshold of at least 5 relations for each entity. We trained the model for one more epoch, validating every 5000 gradient steps as before. Finally, we repeated the hard-negative mining process and kept training the model for 10 000 additional gradient steps, using a batch size of 4, 5 hard negatives for each mention and up to 3 entities that maximize the prior probability p(e|m) (if distinct from the negatives). As we increased the number of negative entities, the batch size is significantly smaller than before. Therefore, we did gradient accumulation for 4 steps. Furthermore, we increased the maximum length of a mention from 128 tokens to 512, and set the sampling temperature to α = 1.0. In this final stage, we assumed the model had already learned to place entities in their target boxes, hence we used all entities in the dataset, regardless of the number of relations they have in Wikidata. Table Following the qualitative analyses of Section 5.4, in this section we provide additional results and further examples. Analysis of the boxes. Table Examples. Figure We trained DUCK using the AdamW optimizer
1,122
426
1,122
Learning to Control the Fine-grained Sentiment for Story Ending Generation
Automatic story ending generation is an interesting and challenging task in natural language generation. Previous studies are mainly limited to generating coherent, reasonable and diversified story endings, and few works focus on controlling the sentiment of story endings. This paper focuses on generating a story ending which meets the given fine-grained sentiment intensity. There are two major challenges to this task. First is the lack of a story corpus with fine-grained sentiment labels. Second is the difficulty of explicitly controlling sentiment intensity when generating endings. Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges. The sentiment analyzer adopts a series of methods to acquire sentiment intensities of the story dataset. The sentimental generator introduces the sentiment intensity into the decoder via a Gaussian Kernel Layer to control the sentiment of the output. To the best of our knowledge, this is the first endeavor to control the fine-grained sentiment for story ending generation without manually annotating sentiment labels. Experiments show that our proposed framework can generate story endings which are not only more coherent and fluent but also better able to meet the given sentiment intensity.
Story ending generation aims at completing the plot and concluding a story given a story context. Previous works mainly study how to generate a coherent, reasonable and diversified story ending
and few works focus on controlling the sentiment for story ending generation. (Figure 1 example — story context: "Sally really loves to play soccer. She joined a team with her friends and she plays everyday. Her coach and her teammates are all really fun. Sally practiced extra hard for her first match."; candidate endings of increasing sentiment intensity: "She almost won the game, but eventually lost." / "The game ended with a draw." / "She eventually won the game." / "She won the game and was very proud of her team.") Different from previous work, we propose the task of controlling the sentiment for story ending generation at a fine-grained level, without any human annotation of the story dataset. Experiments show the effectiveness and generality of the proposed framework, since it can generate story endings which are not only coherent and fluent but also able to better meet the given sentiment intensity. Here we formulate the task of fine-grained sentiment controllable story ending generation. Given the story context x = (x 1 , • • • , x m ) which consists of m sentences, and the target sentiment intensity s, the goal of this task is to generate a story ending y that is coherent to story context x and expresses the target sentiment intensity s. Note that the sentiment intensity s ∈ [0, 1]. Although existing datasets for story ending generation can provide paired data (x, y), the true sentiment s of y is not observable. To remedy this, the sentiment analyzer S employs several methods to acquire the sentiment intensity s of y. Then the sentimental generator G takes the story context x and the sentiment of the story ending s as input to generate the story ending y. The overview of our proposed framework is presented in Figure The sentiment analyzer S aims at predicting the sentiment intensity s of the gold story ending y to construct paired data (x, s; y). As the first attempt to solve the proposed task, we explore three kinds of sentiment analyzers as follows. Rule-based (RB): VADER (Hutto and Gilbert, We first train a linear regression model R on the Stanford Sentiment Treebank (SST) In the absence of sentiment annotations for the story dataset, domain adaptation can provide an effective solution since there exist some labeled datasets of a similar task but from a different domain. We use adversarial learning The sentimental generator G aims to generate story endings that match the target sentiment intensities s. It consists of an encoder and a decoder equipped with a Gaussian Kernel Layer. The encoder maps the input story context x into a compact vector that can capture its essential context features. Specifically, we use a normal bi-directional LSTM as the encoder. All context words x i are represented by their semantic embeddings E as the input and we use the concatenation of the final forward and backward hidden states as the initial hidden state of the decoder. The decoder aims to generate a story ending which accords with the target sentiment intensity s. As shown in Figure where P R (y t ) denotes the semantic generation probability, P S (y t ) denotes the sentiment generation probability, α and β are trainable coefficients. Specifically, P R (y t ) is defined as follows: where w is a one-hot indicator vector of word w, W R and b R are trainable parameters, h t is the t-th hidden state of the LSTM decoder with attention mechanism where σ² is the variance, S maps the sentiment embedding into a real value, the target sentiment intensity s is the mean of the Gaussian distribution, W U and b U are trainable parameters.
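To make the Gaussian Kernel Layer concrete, here is a hedged PyTorch sketch of one plausible reading of the description above: each vocabulary word receives a scalar sentiment value from its sentiment embedding (the role played by W_U and b_U), and the sentiment generation probability P_S is a normalized Gaussian kernel centered at the target intensity s. The final mixture with P_R and all names here are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GaussianKernelLayer(nn.Module):
    """Score every vocabulary word by how close its learned sentiment value
    is to the target intensity s, using a Gaussian kernel with variance sigma^2."""

    def __init__(self, vocab_size, senti_dim, sigma=1.0):
        super().__init__()
        self.senti_embedding = nn.Embedding(vocab_size, senti_dim)
        self.to_scalar = nn.Linear(senti_dim, 1)   # plays the role of W_U, b_U
        self.sigma = sigma

    def forward(self, target_intensity):
        # target_intensity: (batch,) values in [0, 1], the mean of the Gaussian.
        word_intensity = self.to_scalar(self.senti_embedding.weight).squeeze(-1)   # (V,)
        diff = word_intensity.unsqueeze(0) - target_intensity.unsqueeze(1)          # (batch, V)
        kernel = torch.exp(-diff ** 2 / (2 * self.sigma ** 2))
        return kernel / kernel.sum(dim=-1, keepdim=True)   # sentiment probability P_S

# The final output distribution mixes the two terms, e.g.
# P(y_t) = alpha * P_R(y_t) + beta * P_S(y_t) with trainable alpha, beta.
```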
We choose the widely-used ROCStories corpus Since there is no direct related work of this task, we design an intuitive pipeline (generate-andmodify) as baseline. It first generates a story ending using a general sequence-to-sequence model with attention We tune hyper-parameters on the validation set. For the RM and DA sentiment analyzer, we implement the encoder as a 3-layer bidirectional LSTM with a hidden size of 512. We implement the regression module as a MLP with 1 hidden layer of size 32. For domain adaption, we implement a domain discriminator as a MLP with 1 hidden layer of size 32. A Gradient Reversal Layer is added into the domain discriminator. For the sentimental generator, both the semantic and sentiment embeddings are 256 dimensions and randomly initialized. We implement both encoder and decoder as 1-layer bidirectional LSTM with a hidden size of 512. The variance 2 of Gaussian Kernel Layer is set as 1. The batch size is 32 and the dropout (Srivastava et al., 2014) is 0.5. We use the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.0003. For the proposed task, there are no existing accepted metrics. We propose both automatic evaluation and human evaluation for this task. Sentiment Consistency: We propose the pairwise sentiment consistency (SentiCons) to evaluate the consistency of two lists of sentiment intensities. For two lists A and B with the same length, SentiCons(A, B) is calculated by where n is the length of the list and I is the indicator function. BLEU: For each story in the test set, we take the context x and the human-annotated sentiment intensity s of the gold story ending y as input. The corresponding output is ŷ. Then we calculate the BLEU We hire two evaluators who are skilled in English to evaluate the generated story endings. For each story in the test set, we distribute the story context, five target sentiment intensities and corresponding generated story endings to the evaluators. Evaluators are required to score the generated endings from 1 to 5 in terms of three criteria: Coherency, Fluency and Sentiment. Coherency measures whether the endings are coherent with the context. Fluency measures whether the endings are fluent. Sentiment measures how much the endings express the target sentiment intensities. Table The automatic and human evaluation results of four generation models are shown in Table From a comprehensive perspective, our framework can better control the sentiment while guaranteeing the coherency and fluency. We provide an example of story ending generation with five different target sentiment intensities in Table Madison really wanted to buy a new car. She applied to work at different restaurants around town. One day a local restaurant hired her to be their new waitress! Molly worked very hard as a waitress and earned a lot of tips. Seq2Seq + SentiMod cies of generated story endings, e.g. "in trouble" ! "embarrassed" ! "able to" ! "excited" ! "happy" and "new car". Story generation Automatic story generation has attracted interest over the past few years. Recently, many approaches are proposed to generate a better story in terms of coherence Sentimental Text Generation Generating sentimental and emotional texts is a key step towards building intelligent and controllable natural language generation systems. To date several works of dialogue generation In this paper, we make the first endeavor to control the fine-grained sentiment for story ending generation. 
The proposed framework is generic and novel, and does not need any human annotation of the story dataset. Experiments show the effectiveness of the proposed framework in controlling the sentiment intensity under both automatic and human evaluation. Future work can combine the analyzer and generator via joint training, hopefully achieving better results.
1,371
196
1,371
Shapley Head Pruning: Identifying and Removing Interference in Multilingual Transformers
Multilingual transformer-based models demonstrate remarkable zero and few-shot transfer across languages by learning and reusing language-agnostic features. However, as a fixed-size model acquires more languages, its performance across all languages degrades. Those who attribute this interference phenomenon to limited model capacity address the problem by adding additional parameters, despite evidence that transformer-based models are overparameterized. In this work, we show that it is possible to reduce interference by instead identifying and pruning language-specific attention heads. First, we use Shapley Values, a credit allocation metric from coalitional game theory, to identify attention heads that introduce interference. Then, we show that pruning such heads from a fixed model improves performance for a target language on both sentence classification and structural prediction. Finally, we provide insights on language-agnostic and language-specific attention heads using attention visualization. 1
Cross-lingual transfer learning aims to utilize a natural language processing system trained on a source language to improve results for the same task in a different target language. The core goal is to maintain relevant learned patterns from the source while disregarding those which are inapplicable to the target. Multilingual pretraining of transformer language models has recently become a widespread method for cross-lingual transfer; demonstrating remarkable zero and few shot performance across languages when finetuned on monolingual data However, adding languages beyond a threshold begins to harm cross-lingual transfer in a fixedsize model as shown in prior work We offer an alternate hypothesis that interference is caused by components that are specialized to language-specific patterns and introduce noise when applied to other languages. To test this hypothesis, we introduce a methodology that selectively removes noisy components to improve language-specific performance without updating or adding additional language-specific parameters. Our work builds on prior research studying monolingual models that shows they can be pruned aggressively We leverage Shapley Values, the mean marginal contribution of a player to a collaborative reward, to identify attention heads that cause interference. Unlike prior methods, Shapley Values map each head to positive and negative values in a way that abides by all axioms of fair attribution 1. Attention Head Language Affinity: Even when computed from aligned sentences, Attention Head Shapley Values vary based on the language of input. This highlights that a subset of attention heads has language-specific importance, while others are language-agnostic as shown in Figure
In a qualitative study, we find that the most language-agnostic heads identified have a visible language-agnostic function, while language differences can be measured meaningfully for language-specific heads. 2 Related Work A large amount of work has studied both the theoretical underpinnings of learning common structures for language and their applications to crosslingual transfer. Early works exploited commonality through the use of pivot representations, created either by translation As NLP has increasingly used representation learning, dense embedding spaces replaced explicit pivots. This led to methods that identified the commonalities of embedding spaces and ways to align them With language-specific data, further work has studied how to reduce interference by adding a small number of language-specific parameters. These works adapt a model for the target language by training only Adapters Model pruning has largely been focused on reducing the onerous memory and computation requirements of large models. These techniques are broken into two approaches: structured and unstructured pruning. Unstructured pruning aims to remove individual parameters, which allows for more fine-grained removal. This process often has minimal effects even at extremely high degrees of sparsity. To efficiently prune a large number of parameters, many techniques propose using gradients or parameter magnitude Structured pruning, or removing entire structural components, is motivated by computational benefits from hardware optimizations. In the case of Transformers, most of this pruning work targets removal of attention heads, either through static ranking Our work studies pruning without updating model parameters, which aligns with To identify and remove interference, we need a metric that can separate harmful, unimportant, and beneficial attention heads. Prior work Shapley Values We apply Shapley Values to the task of structural pruning. In order to compute Shapley Values for each head, we first formalize the forward pass of a Transformer as a coalitional game between attention heads. Then, we describe a methodology to efficiently approximate Shapley Values using Monte Carlo simulation combined with truncation and multi-armed bandit search. Finally, we propose a pruning algorithm using the resulting values to evaluate the practical utility of this theoretically grounded importance metric. We formalize a Transformer performing a task as a coalitional game. Our set of players A are attention heads of the model. In order to remove self-attention heads from the game without retraining, we follow With G h = 0, that attention head does not contribute to the output of the transformer and is therefore considered removed from the active coalition. Our characteristic function V (A) is the task evaluation metric M v (A) over a set of validation data within a target language, adjusted by the evaluation metric with all heads removed to abide by the V (∅) = 0 property of coalitional games: (2) With these established, the Shapley Value φ h for an attention head Att h is the mean performance improvement from switching gate G h from 0 to 1 across all P permutations of other gates: The exact computation of Shapley Values for N attention heads requires 2 N evaluations of our valida-tion metric, which is intractable for the number of heads used in most architectures. 
The computation becomes more tractable with Monte Carlo simulation as an approximation Computing low-variance Shapley Value estimates with Monte Carlo simulation alone is computationally expensive and provides no clear metric for convergence. Therefore, we follow Truncation Heuristics Truncation stops sampling the marginal contributions from the rest of a permutation of features once a stopping criterion is reached for that permutation of the Monte Carlo simulation. Prior work selects stopping criterion based on either total performance Multi-Armed Bandit Sampling The multiarmed bandit optimization stops sampling the marginal contributions of a particular player once a stopping criterion has been reached according to the variance of that player. Our stopping criterion is based on Empirical Bernstein Bounds We stop sampling for a particular head once this bound is less than |µ -0|, meaning that we have identified the Shapley Value as positive or negative with probability 1δ. This saves us significant computation while confidently separating heads into helpful and harmful buckets. For all experiments, we use R = 1 since the model's worst-case performance is zero and δ = 0.1 to give a 95% confidence lower and upper bound. Our pruning procedure works with any signed importance metric. Specifically, we test the utility of the Shapley Values metric for removing interference and helping multilingual models generalize to unseen test data. Our hypothesis is that attention heads with negative Shapley Values introduce interference. Our pruning method reflects this by using the sign of our approximation directly. We remove all attention heads whose Shapley Value is negative with probability 1δ by the Empirical Bernstein inequality from Equation Alternatively, once Shapley Values are computed the model could be pruned to any sparsity level. Unlike prior pruning approaches besides We evaluate our methodology on the Cross Lingual Natural Language Inference (XNLI) and Universal Dependencies Part-Of-Speech (UDPOS) tasks. These allow us to analyze the applicability of Attention Head Shapley Values to both sequence classification and structured prediction. We provide a description of dataset sizes in Table Cross-Lingual Natural Language Inference (XNLI) We use the Cross Lingual Natural Language Inference (XNLI) Benchmark Table whether the hypothesis is entailed by the premise, contradicted by the premise, or neither. Universal Dependencies Part-of-Speech (UD-POS) For structured prediction, we evaluate on the Part-of-Speech (POS) tags from the Universal Dependencies (UD) v2 corpus For direct comparison with our experiments on XNLI, we only retain the 13 languages from UD-POS which have a development and test split, which also exist in XNLI. Unlike XNLI, each language in UDPOS hasa different number of examples which are not aligned across languages. As the basis for our experiments, we finetune XLM-R Base First, we analyze the Attention Head Shapley values for XNLI. We focus only on the role of the source language by using an aligned sample from XNLI to control our results for differences independent from language variation. In Figure Despite this consistency, we find some attention heads demonstrate high language-specificity. Most notably, the fifth attention head in layer six is positive for Swahili but strongly negative for all other 14 languages. This indicates that this head serves a function specific to Swahili within the model. 
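As a concrete illustration of the estimation and pruning procedure of Section 3, the following is a schematic Python sketch of permutation-based Monte Carlo Shapley estimation for head gates with the Empirical Bernstein stopping rule. The evaluate interface, the constants, and the omission of the truncation heuristic are simplifications rather than the exact implementation.

```python
import math
import random
from collections import defaultdict

def estimate_head_shapley(evaluate, num_heads, delta=0.1, value_range=1.0,
                          max_permutations=1000, seed=0):
    """Monte Carlo Shapley Values for attention-head gates.

    `evaluate(gates)` is a user-supplied function returning the validation
    metric (shifted so the all-heads-off coalition scores 0) for a binary gate
    vector. A head stops being sampled once its Bernstein confidence interval
    no longer crosses zero, i.e., its sign is decided with probability 1 - delta.
    """
    rng = random.Random(seed)
    samples = defaultdict(list)          # head -> list of marginal contributions
    active = set(range(num_heads))       # heads still being sampled

    def bernstein_bound(values):
        n = len(values)
        if n < 2:
            return float("inf")
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        return math.sqrt(2 * var * math.log(2 / delta) / n) + \
            7 * value_range * math.log(2 / delta) / (3 * (n - 1))

    for _ in range(max_permutations):
        if not active:
            break
        order = list(range(num_heads))
        rng.shuffle(order)
        gates = [0] * num_heads
        prev = evaluate(gates)           # empty coalition (value 0 by construction)
        for h in order:
            gates[h] = 1
            value = evaluate(gates)
            if h in active:
                samples[h].append(value - prev)
                mean = sum(samples[h]) / len(samples[h])
                if bernstein_bound(samples[h]) < abs(mean):
                    active.discard(h)    # sign of phi_h decided; stop sampling it
            prev = value                 # coalition value needed for the next head

    return {h: sum(v) / len(v) for h, v in samples.items() if v}

# Pruning then keeps only heads whose estimated Shapley Value is positive:
# keep = [h for h, phi in shapley.items() if phi > 0]
```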
We investigate the behavior of language-specific and language-agnostic heads further in Section 5. It is worth noting that the outlier, Swahili, is the language with the fewest number of examples in the data used in the pretraining of XLM-R. Whether the large variation between Swahili and all other languages is induced by linguistic features or the training dynamics of low-resource languages within multilingual models is unclear. We leave this to be explored further in future work. To understand the practical applicability of the resulting Shapley Values, we evaluate models before and after pruning all attention heads with negative Shapley Values as described in Section 3.3. Each resulting language-specific model can be represented with only the 144 mask parameters which indicate whether each attention head is removed or kept. Therefore, this pruning can be seen alternatively as a parameter-efficient learning method, using 1 • 10 -6 % of the parameters it would require to finetune the model for each language 2 . XNLI In Table Comparison to Baselines Randomly pruning is ineffectual or harms performance in both tasks, indicating that pruning alone is not the source of our improvement. Pruning according to the gradientbased metric proposed by Given the high rank correlation between many of the languages, we evaluate transferability by using the Shapley Values for English to prune the model for all languages. We report results in Table for English. Two languages (Urdu and German) achieve better results in the zero-shot pruning than they did in the targeted pruning, five achieve worse results, and the remaining eight are equivalent. It is likely that the strength of zero-shot transfer is largely due to the removal of the fifth head of layer six, which is one of the top 2 most negative heads for all languages barring Swahili. Interestingly, the Attention Head Shapley Values for Swahili also have the lowest rank correlation with English of any language. UDPOS However, UDPOS highlights the major shortcoming of zero-shot pruning: all attention heads receive a positive Shapley Value for English for UDPOS. This means that no zero-shot pruning is performed despite targeted pruning finding benefits for languages shown in Table Finally, we evaluate the effectiveness of Shapley Values as a ranking methodology for the iterative pruning evaluation performed by Averaged across all levels of sparsity, our method outperforms the Random baseline (+5.8), Monte Carlo Shapley Values (+1.6), and the Gradient baseline (+0.6). At different stages. Depending on the target sparsity of interest however, Shapley Values and Gradient-based pruning have different levels of sparsity. Our method is the only method that identifies strongly harmful heads, with performance improving compared to the unpruned model for the first 6 heads removed. Our method achieves the largest performance gap at 44% of model capacity outperforming the Gradient baseline, Monte Carlo Shapley Values, and the Random Baseline by +12.2, +15.1, and +20.9 respectively. However, the gradient baseline outperforms our method when more than 80% of heads are pruned, although neither method performs well above chance at this sparsity. In order to provide intuition into the function of attention heads, prior work has turned to attention visualization as the basis for qualitative analysis of the inner workings of transformer models. 
We visualize the attention patterns of outlier attention heads using BertViz We define the set of language-agnostic heads as the intersection of the the top 20 attention heads for each language. In Figure The synonym-matching pattern clearly applies to NLI, where synonyms critically participate in commonalities and contradictions between the premise and hypothesis. Synonym linking is possible via token semantics and the separator tokens, so this pattern does not require any knowledge of languagespecific syntax or morphology. The visualization reveals a meaningful languageagnostic pattern which may explain why the positive Shapley Value across all languages. This usage highlights that while we utilize Shapley Values to remove harmful learned patterns, they also can direct mechanistic interpretability work to understand the effectiveness of transformers for a particular task As highlighted in Section 4.3, the fifth head of layer six has a positive Shapley Value only for Swahili. In Figure The frequency of separator attention combined with the minimal negative performance impact from removing this head for Swahili in Section 4.5 supports the idea that this head supports a rare pattern, perhaps stemming from poor tokenization. However, the relatively low rate of separator attention indicates that this head does impact other languages often, introducing noise. In this work, we developed a simple yet effective approach to measure the impact of individual attention heads on task performance by leveraging Shapley Values. We used this to identify languagespecific and language-agnostic structural components of multilingual transformer language models. We demonstrated that the resulting values exhibit language affinity, varying across languages. We then applied these Attention Head Shapley Values to improve cross-lingual performance through pruning for both sequence classification and structured prediction. Finally, we performed provided insights on language-agnostic and language-specific attention heads using attention visualization. We believe that attention head Shapley Values have strong potential to systematically inform future studies of multilingual models and transformers broadly. Future work should explore the relationship between linguistic features, training data volume, and the language-specificity of attention heads. Additionally, the benefits of removing heads motivates work that reduces cross-lingual interference introduced by language-specific components during pre-training, such as pruning during pretraining or utilizing sparsely activated networks. Even with our optimizations, using Shapley Values as an importance metric requires a significant computational cost compared to gradient-based methods: gradient-based methods take approximately 3.33e14 FLOPs and our optimized Shapley Values take approximately 3.27e16 FLOPs to converge. While the computation is parallelizable, it took several days on a single GPU to compute accurate estimates. This expense is reasonable for understanding the behavior of base models more deeply but limits the use of this method as a rapid iteration tool. For those looking to reduce this computational cost further, we recommend first using gradient-based methods to identify a set of heads to which the output is sensitive and then using Shapley Values to interpret the direction of the effect. While this may miss some harmful heads, it is likely to find the most harmful heads for a reduced cost. 
Additionally, we rely on analysis of attention patterns to help ground our findings. However, there is debate as to whether analysis of attention patterns is a sound analytical tool.
1,016
1,734
1,016
R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling
Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined. However, existing deep models with stacked layers do not explicitly model any sort of hierarchical process. This paper proposes a recursive Transformer model based on differentiable CKY style binary trees to emulate the composition process. We extend the bidirectional language model pre-training objective to this architecture, attempting to predict each word given its left and right abstraction nodes. To scale up our approach, we also introduce an efficient pruned tree induction algorithm to enable encoding in just a linear number of composition steps. Experimental results on language modeling and unsupervised parsing show the effectiveness of our approach. 1 * Equal contribution.
The idea of devising a structural model of language capable of learning both representations and meaningful syntactic structure without any humanannotated trees has been a long-standing but challenging goal. Across a diverse range of linguistic theories, human language is assumed to possess a recursive hierarchical structure Pretrained language models such as BERT Inspired by In this paper, we revisit these ideas, and propose a model applying recursive Transformers along differentiable trees (R2D2). To obtain differentiability, we adopt Gumbel-Softmax estimation We make the following contributions: • Our novel CKY-based recursive Transformer on differentiable trees model is able to learn both representations and tree structure (Section 2.1). • We propose an efficient optimization algorithm to scale up our approach to a linear number of composition steps (Section 2.2). • We design an effective pre-training objective, which predicts each word given its left and right syntactic nodes (Section 2.3). For simplicity and efficiency reasons, in this paper we conduct experiments only on the tasks of language modeling and unsupervised tree induction. The experimental results on language modeling show that our model significantly outperforms baseline models with same parameter size even in fewer training epochs. At unsupervised parsing, our model as well obtains competitive results. Differentiable Tree. We follow
Here, k is a split point from i to j -1, f (•) is a composition function that we shall further define later on, p k i,j and p k i,j denote the single step combination probability and the subtree probability, respectively, at split point k, p i,j and p i,j are the concatenation of all p k i,j or p k i,j values, and GUMBEL is the Straight-Through Gumbel-Softmax operation of where W p ∈ R 1×d , b p ∈ R, and σ refers to sigmoid activation. Then, c k i,j is computed as where W w ∈ R 2×d with w k i,j ∈ R 2 capturing the respective weights of the left and right hidden states h i,k and h k+1,j , and the final c k i,j is a weighted sum of h i,k and h k+1,j . Tree Recovery. As the Straight-Through Gumbel-Softmax picks the optimal splitting point k at each cell in practice, it is straightforward to recover the complete derivation tree, Tree(T 1,n ), from the root node T 1,n in a top-down manner recursively. T Create a new 2-d array 5: for i ∈ 1 to n -1 do 6: for j ∈ i to n -1 do 7: i ← i ≥ u + 1 ? i + 1 : i 8: j ← j ≥ u ? j + 1 : j 9: T i,j ← T i ,j Skip dark gray cells in Fig. return T 11: function TREEINDUCTION(T , m) 12: T ← T 13: for t ∈ 1 to T .len -1 do 14: if t ≥ m then 15: T ← PRUNING (T ,m) 16: l ← min(t + 1, m) Clamp the span length 17: for i ∈ 1 to T .len -l + 1 do 18: if T i,j is empty then 20: Compute cell T i,j with Equation 1 21: return T As the core computation comes from the composition function f (•), our pruned tree induction algorithm aims to reduce the number of composition calls from O(n 3 ) in the original CKY algorithm to linear. Our intuition is based on the conjecture that locally optimal compositions are likely to be retained and participate in higher-level feature combination. Specifically, taking T 2 in Figure Algorithm 2 Find the best merge point Create an array 4: Collect cells on the 2nd row 6: τ ← ∅ 7: If index out of boundary then set to 0 16: Figure In terms of the time complexity, when t ≥ m, there are at most m cells to update, so the complexity of each step is less than O(m 2 ). When t ≤ m, the complexity is O(t 3 ) ≤ O(m 2 t). Thus, the overall times to call the composition function is O(m 2 n), which is linear considering m is a constant. Different from the masked language model training of BERT, we directly minimize the sum of all negative log probabilities of all words or word-pieces As shown in Figure In cases where e 1,i-1 or e i+1,n is missing due to the pruning algorithm in Section 2.2, we simply use the left or right longest adjacent non-empty cell. For example, T x,i-1 means the longest nonempty cell assuming we cannot find any non-empty T x ,i-1 for all x < x. Analogously, T i+1,y is defined as the longest non-empty right cell. Note that although the final table is sparse, the sentence representation e 1,n is always established. As our approach (R2D2) is able to learn both representations and intermediate structure, we evaluate its representation learning ability on bidirectional language modeling and evaluate the intermediate structures on unsupervised parsing. Baselines and Data. As our approach is a wordpiece level pretrained model, to enable a fair comparison, we train all models on word-pieces and learn models with the same settings as in the original papers. Evaluation at the word-piece level reveals the model's ability to learn structure from a smaller granularity. In this section, we keep the word-level gold trees unchanged and invoke Stanford CoreNLP Evaluation. 
Our metric is based on the notion of quantifying the compatibility of a tree by counting how many spans comply with dependency relations in the gold dependency tree. Specifically, as illustrated in Figure For binary tree spans for word-piece level input, if z breaks word-piece spans, then I(z) = 0. Otherwise, word-pieces are merged to words and the word-level logic is followed. Specifically, to make the results at the word and word-piece levels comparable, I(z) is forced to be zero if z only covers a single word. The final compatibility for Z is z∈Z I(z) |S(D)| -1 . Table Our model performs better than other systems at the word-piece level on both English and Chinese and even outperforms the baselines in many cases at the word level. It is worth noting that the result is evaluated on the same binary predicted trees as we use for unsupervised constituency parsing, yet our model outperforms baselines that perform better in Table To further understand the strengths and weaknesses of each baseline, we analyzed the compatibility of different sentence length ranges. Interestingly, we find that our approach performs better on long sentences compared with C-PCFG at the word-piece level. This shows that a bidirectional language modeling objective can learn to induce accurate structures even on very long sentences, on which custom-tailored methods may not work as well. We next assess to what extent the trees that naturally arise in our model bear similarities with human-specified parse trees. We compared examples of trees inferred by our model with the corresponding ground truth constituency trees (see Appendix), encountering reasonable structures that are different from the constituent structure posited by the manually defined gold trees. Experimental results of previous work Pre-trained models. Pre-trained models have achieved significant success across numerous tasks. ELMo Representation with structures. In the line of work on learning a sentence representation with structures, This makes it possible to train with backpropagation. However, their model runs in O(n 3 ) and they use Tree-LSTMs. In this paper, we have proposed an efficient CKYbased recursive Transformer to directly model hierarchical structure in linguistic utterances. We have ascertained the effectiveness of our approach on language modeling and unsupervised parsing. With the help of our efficient linear pruned tree induction algorithm, our model quickly learns interpretable tree structures without any syntactic supervision, which yet prove highly compatible with human-annotated trees. As future work, we are investigating pre-training our model on billion word corpora as done for BERT, and fine-tuning our model on downstream tasks.
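To make the core composition step of Section 2.1 concrete, here is a minimal sketch, under assumed tensor shapes, of how a chart cell can select its split point with the Straight-Through Gumbel-Softmax so that the choice is discrete in the forward pass but differentiable in the backward pass; it illustrates the mechanism only and is not the released R2D2 implementation.

```python
# Straight-Through Gumbel-Softmax selection of a split point for one chart
# cell, under assumed shapes; illustrative only.
import torch
import torch.nn.functional as F

def select_split(split_scores: torch.Tensor,   # (num_splits,)  score for each split point k
                 candidates: torch.Tensor,     # (num_splits, d) composed representation c_k per split
                 tau: float = 1.0):
    # hard=True returns a one-hot vector in the forward pass while gradients
    # flow through the soft relaxation (straight-through estimator).
    weights = F.gumbel_softmax(split_scores, tau=tau, hard=True)
    cell_state = weights @ candidates          # picks the chosen c_k, shape (d,)
    best_k = int(weights.argmax())             # kept so the tree can be recovered later
    return cell_state, best_k
```

Because each cell commits to a single split point in the forward pass, the derivation tree can later be recovered by following the chosen split points top-down, as described above.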
874
1,425
874
Dependency resolution at the syntax-semantics interface: psycholinguistic and computational insights on control dependencies
Using psycholinguistic and computational experiments we compare the ability of humans and several pre-trained masked language models to correctly identify control dependencies in Spanish sentences such as 'José le prometió/ordenó a María ser ordenado/a' ('Joseph promised/ordered Mary to be tidy'). These structures underlie complex anaphoric and agreement relations at the interface of syntax and semantics, allowing us to study lexically-guided antecedent retrieval processes. Our results show that while humans correctly identify the (un)acceptability of the strings, language models often fail to identify the correct antecedent in non-adjacent dependencies, showing their reliance on linearity. Additional experiments on Galician reinforce these conclusions. Our findings are equally valuable for the evaluation of language models' ability to capture linguistic generalizations, as well as for psycholinguistic theories of anaphor resolution.
Treating pre-trained language models (LMs) as psycholinguistic subjects via the behavioral evaluation of their probability distributions has proven to be a very useful strategy to study to which extent they are able to generalize grammatical information from raw text (1) a. María i f le prometió a José j m ser ordenada i f . María promised José to be tidy. b. José i m le ordenó a María j f ser ordenada j f . José ordered María to be tidy. At the infinitive verb ser in (1), it is crucial to interpret its implicit subject. In other words, who is tidy? The term control reflects the idea that the interpretation of the implicit subject is controlled by, or is determined by, another referent In this work, we take advantage of the rich agreement properties of two Romance languages (Spanish and Galician) in order to examine humans' and language models' ability to correctly identify control dependencies. To do so, we have carefully created an experimental design via the manipulation of the gender of the NPs (feminine/masculine), the type of control verb (subject/object control), and the gender of the embedded adjective. This design will allow us to test whether humans and LMs identify or produce agreement violations at the adjective, which is used as a proxy for the accuracy of antecedent retrieval processes. Furthermore, this design will allow us to test for the presence of interference effects of non-controlling NPs (referred to as distractors) when they match or mismatch in gender with the embedded adjective. We created several datasets that have been used for a human acceptability judgement task (Experiment 1), a LM acceptability task (Experiment 2), and a LM prediction task (Experiment 3). For Experiments 2 and 3, we tested the most prominent monolingual and multilingual masked LMs based on transformers for Spanish, and provide additional translated datasets and results from the same computational experiments carried out with Galician LMs in order to confirm the cross-linguistic robustness of our findings. Our results show that while humans correctly identify the acceptability of the strings regardless of the configuration of the NPs, language models often fail to correctly identify the relevant antecedent in subject control dependencies, showing their reliance on linear relations rather than linguistic information, something which is observed in their below-chance accuracy for discontinuous dependencies. The main contributions of our paper are: (i) the release of wide-covering and highly controlled datasets to evaluate control structures in Spanish and Galician, (ii) a psycholinguistic evaluation of humans' performance, a computational evaluation of monolingual and multilingual LMs' performance, and a careful comparison between humans and LMs; (iii) a demonstration of the limitations of LMs to capture grammatical information thanks to the adversarial example of control constructions.
Targeted evaluation of LMs: Targeted evaluations of LMs focusing on different syntactic phenomena have found evidence suggesting that these models may generalize syntactic information from raw text Despite the fact that most of the work evaluating the linguistic capabilities of LMs has been carried out in English, there exist some experiments that have focused on Spanish and Galician LMs showing that the LMs tested in this work perform very well in the context of different linguistic dependencies, including simple and complex agreement dependencies with distractors. Recent studies in both Spanish and Galician show that models' performance for these dependencies (which rely on morphosyntactic information) are similar to those in English (with expected variations across models). For instance, Concerning control constructions, studies exploring LMs' abilities to solve these complex relations are very scarce. In a recent paper, Even though control constructions have been at the center of linguistic theorizing over the past decades, their theoretical interest has not translated into an equivalent amount of experimental research in the psycholinguistics literature. The key question, though, is whether (and how) control information is used in parsing. Some early works have argued that control information was not used during initial parsing stages due to its lexico-semantic nature (e.g., The present work takes control dependencies as an adversarial case to test LMs' ability to generalize grammatical information at the syntax-semantics interface (Experiments 2 and 3). Given the complexity of these constructions, and the lack of psycholinguistic evidence, we go one step further and start by evaluating humans' grammaticality perception (Experiment 1), not only to obtain a grammatical verification of the acceptability status of such innovative experimental materials and to be able to directly compare humans' and LMs' performance, but also to contribute to the scarce psycholinguistic evidence on the processing of control. The datasets, code, and results from all the experiments are freely available. The experimental materials used for Experiment 3 are an adaptation of the dataset described in section 3.1 (including its variants with personal pronouns and Galician translations) so that they could be used in the masked prediction task. This allows us to evaluate our dataset in the two possible gender configurations, expanding it such that each sentence has two possible outcomes: a grammatical and an ungrammatical one. Therefore, the manipulation is a 2x2 factorial design (control x distractor), as shown in Table 2 Dist. match María f le ordenó a Carmen f ser más ordenada f con los apuntes. Dist. mismatch José m le ordenó a Carmen f ser más ordenada f con los apuntes. Dist. match María f le ordenó a Manuel m ser más ordenada f con los apuntes. Dist. mismatch José m le ordenó a Manuel m ser más ordenada f con los apuntes. ject and object control. While subject control constructions engage in a discontinuous dependency where the object NP (the distractor) is intervening, object control dependencies engage in an adjacent dependency, where the subject NP (the distractor) precedes the dependency. Those conditions in which the two NPs (controller and distractor) have the same gender are respectively taken as grammatical and ungrammatical baselines for both subject and object control sentences. Hence, the critical conditions are those in which only one of the NPs agrees in gender with the adjective (i.e. 
grammatical sentences with a matching distractor and ungrammatical sentences with a mismatching distractor). Humans' and LMs' behavior in these conditions will be essential to ascertain whether they can accurately implement control-determined antecedent retrieval processes and whether they are fallible to interference effects from gender matching but structurally irrelevant antecedents, in a similar vein as the attraction effects observed in agreement dependencies (e.g. While there are very few gender-ambiguous names in Spanish, in order to maximize gender transparency, the nouns used to create the materials were carefully selected according to the most frequent female-only and male-only names on the official Spanish census. In addition, we created an adaptation of the main dataset substituting proper nouns with personal pronouns (e.g. 'She promised him to be tidier'), to avoid potential bias, ambiguities or misrepresentations of proper nouns We evaluate the following pre-trained models using HuggingFace's transformers library Multilingual: mBERT (12 layers) Galician: Bertinho small and base (6 and 12 layers) The primary goal of this acceptability task is to determine whether native speakers of Spanish are able to detect agreement violations that do not conform with the control properties of main predicates. This is, to our knowledge, the first experimental investigation on control of its kind, and we believe it is essential to corroborate native speakers' offline sensitivity to the different control manipulations that will be then put to the test with artificial LMs. It will be of particular importance to elucidate whether comprehenders are able to correctly distinguish the acceptability of the strings regardless of the type of control (subject or object) and the presence of a gender matching or mismatching distractor. 40 native speakers of Spanish recruited at the Universidade de Santiago de Compostela participated in this experiment. Their participation was voluntary and all of them provided informed consent. Participants were presented with the entire sentence in the middle of the screen along with a rating scale, and they could only move to the next one once they had emitted a rating. They were instructed to rate the sentences in terms of whether they came across as well-formed Spanish: 7 meaning totally acceptable and 1 totally unacceptable. Experimental sentences were intermixed with 96 filler sentences of similar structure and complexity. The task was completed by all participants in less than 30 minutes. Table RoBERTa-large and XLM-RoBERTa-base. The same pattern of results is observed using pronouns instead of names (see Figure The results from Experiment 3 reinforce and complement the findings from Experiment 2 in several respects. First, reliance on linear proximity is, if anything, even clearer, as subject control sentences with a mismatching distractor display clear interference effects, which are materialized in a dramatically below-chance accuracy. These are the cases in which the distractor is the sentence object, which is also the closer NP. In these cases, LMs' predict a target adjective that agrees in gender with the object, rather than the subject (i.e. the correct antecedent) and hence, demonstrating that antecedent retrieval processes unfold disregarding the lexico-semantic information on control. 
Importantly, these effects are almost absent in object control sentences, where only two models show evidence for interference effects, these being much less pronounced (only a few accuracy points). Even though the results from this experiment cannot be directly compared with those of humans (Experiment 1), it should be noted that human accuracy was above 80% for all conditions. 7 General discussion and conclusions The empirical evidence gathered in this work provides a very straightforward picture: whereas humans' can coordinate lexico-semantic and syntactic information in order to determine the (un)acceptability of control structures, LMs resort to a heuristic based on linear proximity, disregarding control information. These findings are robust, as they replicate across tasks (acceptability and masked prediction), models (monolingual and multilingual), languages (Spanish and Galician LMs), and type of antecedent (names and pronouns). Furthermore, they go in line with evidence advanced in Control verbs and control structures have a high frequency in natural language and, ideally, state-ofthe-art LMs should be able to capture their meaning differences and the consequences they have for phrase-structure relations (ultimately, who does what to whom?). Some authors have suggested that their performance on similar structures could be improved in one-shot learning scenarios, or by adding more control constructions in the training data One of the biggest challenges of working with control constructions is the elaboration of appropriate experimental materials. This is why the carefully curated Spanish and Galician datasets used in this work, which are freely available, represent a key contribution, as we hope they are valuable for further computational and psycholinguistic research beyond English, the dominant language in these fields. This experiment aims at observing whether the probabilities of the language models are similar 207 to those of humans. That is, whether LMs assign lower surprisal to grammatical than to ungrammatical sentences regardless of the presence of a matching or mismatching distractor. For this purpose, we use the exact same dataset as in Experiment 1. We rely on the standard approach for targeted syntactic evaluation to obtain the accuracy of the models on the minimal pairs This experiment aims at further exploring the behavior of LMs using the masked prediction task. In contrast with Experiment 2, where we compute the surprisal for the same adjective in a given (grammatical or ungrammatical) sentence, our objective here is to test whether LMs predict grammatically compatible adjectives in subject and object control sentences regardless of the presence of a matching or mismatching distractor. In Experiment 2 the adjective's gender was kept constant across experimental conditions, and hence, we could not assess LMs' preferences for the masculine or feminine version. By contrast, here we test if LMs predict grammatically compatible adjectives in subject and object control sentences by directly comparing the probabilities of a given adjective in its masculine or feminine form, something which provides us with more comprehensive information in this respect. Furthermore, evaluating model accuracy rather than surprisal values will also allow us to assess and compare the performance across models. 
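The masked-prediction comparison of Experiment 3 reduces to reading the masked-LM logits for the masculine and feminine adjective forms and checking which one the model prefers. The sketch below illustrates this with a publicly available Spanish masked LM (PlanTL-GOB-ES/roberta-base-bne, used here only as an example and not necessarily one of the evaluated checkpoints) and assumes each adjective form is a single vocabulary token.

```python
# Sketch of the masked-prediction comparison for one critical item.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "PlanTL-GOB-ES/roberta-base-bne"  # example Spanish masked LM
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def single_token_id(word: str) -> int:
    # Leading space for BPE-style vocabularies; assumes the form is one token.
    ids = tok(" " + word, add_special_tokens=False).input_ids
    assert len(ids) == 1, "this sketch assumes single-token adjective forms"
    return ids[0]

# Subject-control item with a gender-mismatching distractor: the controller of
# 'prometer' is the subject 'María', so the grammatical form is the feminine one.
sentence = f"María le prometió a Manuel ser más {tok.mask_token} con los apuntes."
inputs = tok(sentence, return_tensors="pt")
mask_pos = int((inputs.input_ids[0] == tok.mask_token_id).nonzero())

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

fem, masc = single_token_id("ordenada"), single_token_id("ordenado")
prefers_feminine = bool(logits[fem] > logits[masc])
print("correct (feminine)" if prefers_feminine else "attraction error (masculine)")
```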
Given that the training data for most pre-trained models has not been released, further investigation of the frequency effects of control verbs in the corpora, or for that matter, of any other critical word in the sentence (names, adjectives, etc.) is not feasible. This is a shortcoming of our work, because word frequency during training is known to be an important factor for model performance. Besides, detailed comparisons between models have been left out for reasons of space and scope, since the objective of the research was not to compare model performance, although it is a relevant and interesting issue in itself (for instance, the fact that the LMs based on the RoBERTa architecture performed better across tasks, or that the high performance of XLM-RoBERTa contrasts with that of mBERT). In relation to this, the comparison of models with different architectures and training objectives (e.g. generative models) was also left for further research. Finally, it is worth noting that the two languages evaluated in this study (Spanish and Galician) are very similar, so it could be interesting to expand the research to non-Romance languages. Experiment 1 complied with the standards of research involving human subjects. Participation was voluntary, all participants were informed of the nature of the task, and they provided informed consent before starting the experiment. With respect to CO2 consumption for the computational experiments (Experiments 2 and 3), it should be noted that we used pre-trained models, and hence the impact of the calculations is expected to be minimal. The experiments were run on an NVIDIA A100 GPU, and the results were obtained in a few minutes. Since this work is circumscribed within basic research on artificial language modelling, no applications or tools are to be directly derived from it, and hence we do not foresee any potential harms or bias derived from our work.
947
2,933
947
CLAD-ST: Contrastive Learning with Adversarial Data for Robust Speech Translation
The cascaded approach continues to be the most popular choice for speech translation (ST). This approach consists of an automatic speech recognition (ASR) model and a machine translation (MT) model that are used in a pipeline to translate speech in one language to text in another language. MT models are often trained on well-formed text and therefore lack robustness while translating noisy ASR outputs in the cascaded approach, degrading the overall translation quality significantly. We address this robustness problem in downstream MT models by forcing the MT encoder to bring the representations of a noisy input closer to its clean version in the semantic space. This is achieved by introducing a contrastive learning method that leverages adversarial examples in the form of ASR outputs paired with their corresponding human transcripts to optimize the network parameters. In addition, a curriculum learning strategy is then used to stabilize the training by alternating the standard MT log-likelihood loss and the contrastive losses. Our approach achieves significant gains of up to 3 BLEU scores in English-German and English-French speech translation without hurting the translation quality on clean text.
Neural machine translation (NMT) has made significant advancements over the past several years with claims of achieving 'human parity' Robustness is especially important in cascaded speech translation (ST) systems, where an NMT model works on the output of the upstream automatic speech recognition (ASR) system. In this scenario, significant MT performance degradation has Human Transcript: I'm not sure that's wise, given the importance of the problem, but there's now the geoengineering discussion about: Should that be in the back pocket in case things happen faster, or this innovation goes a lot slower than we expect? ASR Output: I'm not sure that's why it's given the importance of the problem. But now that the geoengineering discussion about should that be in the back pocket in case things happen faster or this innovation goes a lot slower than we expect. been measured due to i) error propagation from the ASR and ii) the mismatch between training-testing condition as the NMT model is typically trained on well-formed text making it weak in dealing with noisy inputs. For these reasons, there has been significant effort towards building end-to-end ST models Prior research has tried to tackle the robustness problem in NMT models independently by (1) synthetic noise injection To address the robustness problem, particularly in the context of cascaded ST, we propose to combine the best of the two approaches. This is obtained by training the NMT model with adversarial examples generated from the ASR outputs and encouraging the encoder representations of both the ASR outputs and their corresponding human transcripts to be closer to each other. This is done via contrastive learning
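The contrastive objective just described pulls the encoder representation of each ASR hypothesis toward that of its clean human transcript, with the other sentences in the batch acting as negatives. The following is a generic in-batch contrastive loss written in that spirit (an InfoNCE-style stand-in with an assumed temperature, not the authors' exact symmetric cosine-distance formulation).

```python
# Generic in-batch contrastive loss over [CLS] sentence representations.
import torch
import torch.nn.functional as F

def contrastive_loss(clean_cls: torch.Tensor,   # (batch, d) encodings of human transcripts
                     noisy_cls: torch.Tensor,   # (batch, d) encodings of the paired ASR outputs
                     temperature: float = 0.1):
    clean = F.normalize(clean_cls, dim=-1)
    noisy = F.normalize(noisy_cls, dim=-1)
    sim = clean @ noisy.t() / temperature       # cosine similarities, (batch, batch)
    targets = torch.arange(sim.size(0), device=sim.device)
    # Symmetric: clean->noisy and noisy->clean; matching pairs lie on the diagonal,
    # every other sentence in the batch serves as a negative.
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))
```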
Our NMT model is a Transformer model To improve the robustness of the NMT model on noisy ASR outputs for cascaded speech translation, we use a contrastive learning method To get the encoder sentence representations efficiently for contrastive learning, a [CLS] token is prepended to the input sentences similar to the BERT model Contrastive learning uses speech transcription corpora (i.e., speech paired with human transcripts). The speech input is passed through the upstream ASR model to obtain the ASR outputs. Given the noisy ASR output x paired with its corresponding clean human transcript x, we minimize the contrastive objective L CTL , which is an average of two symmetric sentence-level contrastive loss functions. Given where D(u, v) denotes the cosine distance between two vectors u and v. ŝx ′ represents a negative example constructed for every other sentence x ′ in the batch. Given x ′ ∈ (X ∪ X) \ {x, x} and its sentence embedding where . The above linear interpolation in Eq 2 with exponentially decaying λ x is implemented following To stabilize the NMT parameters before applying contrastive learning, we also use curriculum learning We experiment with English-German (En-De) and English-French (En-Fr) language directions. Training Data: For parallel text translation data, we use WMT'16 En-De (2) CSA-NMT Other training details and hyperparameters are in Appendix A.1. points (see Table In Figure We study the effect of the curriculum learning strategy (Section 2) and reports the results in Table In Table Model En-De Wbase Wlarge (WER = 14.3) (WER = 7.6) Transformer-base 23.5 24.4 CONF-ASR Table To investigate if our results scale when we increase the size of the network, we use Transformer-large instead of Transformer-base for our baseline models as well as CLAD-ST. The results are reported in Table We also experiment with adding in-domain training data from MuST-C for training the NMT models in We improve the robustness of MT to ASR outputs using contrastive learning to bring the representations of clean and noisy examples closer in the semantic space. Our approach does not require any speech translation corpus. We significantly improve the translation accuracy on noisy ASR outputs without degrading translation accuracy on clean text. We also show that the approach is scalable to better-quality ASR models in the cascade other than the one used during training. The proposed approach is generic and is applicable beyond the context of speech translation alone, such as translating user-generated chat text or non-native text if paired noisy-clean data is available. The limitations of our paper are: • As in any ST cascade architecture, the performance of our MT system depends on the quality of the ASR model outputs. Since we use examples generated by the ASR to train the MT model, the quality of ASR outputs directly affects the model performance. In our work we tested two ASR systems having different quality (base: 14.3 WER and large: 7.6 WER). This is particularly relevant when English is not the source language side. • Evaluation is done using only the BLEU score. We did not use human evaluation or COMET. • The evaluation is limited to two language pairs having English as the source. This selection of the source language is quite important because having a no-English language as a source will expose our MT model to a) ASR models of probably lower quality and b) different and more varied linguistic challenges that might affect the work of the adversarial method.
1,216
1,700
1,216
Automatic Construction of Machine Translation Knowledge Using Translation Literalness
When machine translation (MT) knowledge is automatically constructed from bilingual corpora, redundant rules are acquired due to translation variety. These rules increase ambiguity or cause incorrect MT results. To overcome this problem, we constrain the sentences used for knowledge extraction to "the appropriate bilingual sentences for the MT." In this paper, we propose a method using translation literalness to select appropriate sentences or phrases. The translation correspondence rate (TCR) is defined as the literalness measure.
Along with the efforts made to accumulate bilingual corpora for many language pairs, quite a few machine translation (MT) systems that automatically construct their knowledge from corpora have been proposed Such rules increase ambiguity and may cause inappropriate MT results. Translation variety increases with corpus size. For instance, large corpora usually contain multiple translations of the same source sentences. Moreover, peculiar translations that depend on context or situation proliferate in large corpora. Our targets are corpora that contain over one hundred thousand sentences. To reduce the influence of translation variety, we attempt to control the bilingual sentences that are appropriate for machine translation (here called "controlled translation"). Among the measures that can be used for controlled translation, we focus on translation literalness in this paper. By restricting bilingual sentences during MT knowledge construction, the MT quality will be improved. The remainder of this paper is organized as follows. Section 2 describes the problems caused by translation varieties. Section 3 discusses the kinds of translations that are appropriate for MTs. Section 4 introduces the concept of translation literalness and how to measure it. Section 5 describes construction methods using literalness, and Section 6 evaluates the construction methods.
First, we describe the problems inherent in bilingual corpora when we automatically construct MT knowledge. Some bilingual sentences in corpora depend on the context or situation, and these are not always correct in different contexts. For instance, the English determiner 'the' is not generally translated into Japanese. However, when a human translator cannot semantically identify the following noun, a determinant modifier such as `watashi-no (my)' or 'son° (its)' is supplied. As an example of a situation-dependent translation, the Japanese sentence "Shashin wo tot-te itadake masu ka? (Could you take our photograph?)" is sometimes translated into an English sentence as "Could you press this shutter button?" This translation is correct from the viewpoint of meaning, but it can only be applied when we want a photograph to be taken. Such examples show that most context/situation-dependent translations are non-literal. MT knowledge constructed from context/situation-dependent translations cause incorrect target sentences, which may contain omissions or redundant words, when it is applied to an inappropriate context or situation. Generally speaking, a single source expression can be translated into multiple target expressions. Therefore, a corpus contains multiple translations even though they are translated from the same source sentence. For example, the Japanese sentence "Kono toraberaazu chekku wo genkin ni shite kudasai" can be translated into English any of the following sentences. • I'd like to cash these traveler's checks. • Could you change these traveler's checks into cash? • Please cash these traveler's checks. These translations are all correct. Actually, the corpus of Controlled language A similar idea can be applied to bilingual corpora. Namely, the expressions in bilingual corpora should be restricted, and "translations that are appropriate for the MT" should be used in knowledge construction. This approach assumes that context/situation-dependent translations should be removed before construction so that ambiguities in MT can be decreased. Restricted bilingual sentences are called controlled translations in this paper. The following measures are assumed to be available for controlled translation. First three measures are for each of the bilingual sentences in the corpus and the fourth measure is for the whole corpus: • Literalness: Few omissions or redundant words appear between the source and target sentences. In other words, most words in the source sentence correspond to some words in the target sentence. • Context-freeness: Source word sequences correspond to the target word sequences independent of the contextual information. With this measure, partial translation can be reused in other sentences. • Word-order Agreement: The word order of a source sentence agrees substantially with that of a target sentence. This measure ensures that the cost of word order adjustment is small. A source word is better translated into the same target word through the corpus. For example, the Japanese adjectival verb `hitstiyoo-da' can be translated into the En-glish adjective 'necessary,' the verb 'need,' or the verb 'require.' It is better for an MT system to always translate this word into 'necessary,' if possible. Effective measures of controlled translation depend on MT methods. 
For example, word-level statistical We use Hierarchical Phrase Alignment-based Translator (HPAT) The procedure of HPAT is briefly described as follows (Figure Transfer-based MT We compared 6,304 bilingual sentences rewritten for an English-to-Japanese version of TDMT and the original translations in the corpus 1 . The statistics in Table Literalness Focusing on the number of linked target words, the value of the rewritten translations is considerably higher than that of the original translations. This result shows that the words of source sentences are translated into target words more directly in the case of the rewritten translations. Thus, the rewritten translations are more literal. Word Translation Stability Focusing on the number of different words in the target language and the mean number of translation words, both values of the rewritten translations are lower than those of the original translations. This is because the rule writers rewrote translations to make target words as simple as possible, and thus the variety of target words was decreased. In other words, the rewritten translations are more stable from the viewpoint of word translation. Context-freeness Mean context-freeness in Table 1 denotes the mean number of word-link combinations in which word sequences of the source and the target contain word links only between their constituents (cross-links are allowed). If a bilingual sentence can be divided into many translation parts, this value become high. This value depends on the number of word links When it is calculated only from the sentences that contain four word links, the value of the rewritten translations is higher than that of the original translations. We particularly focus on the literalness among the controlled translation measures in order to reduce the incorrect rules that result from context/situation-dependent translations. Word translation stability and context freeness must serve as countermeasures for multiple translations, since they ensure that word translations and structures are steady throughout the corpus. However, the reduction of incorrect translations is done prior to the reduction of ambiguities. A literal translation means that source words are translated one by one to target words. Therefore, a bilingual sentence that has many word correspondences is literal. The word correspondences can be acquired by referring to translation dictionaries or using statistical word aligners (e.g., However, not all source words always have an exact corresponding target word. For example, in the case of English and Japanese, some prepositions are not translated into Japanese. On the contrary, the preposition 'after' may be translated into Japanese as the noun `ato.' These examples show that some functional words have to be translated while others do not. Thus, literalness is not determined only by counting word correspondences but also by estimating how many words in the source and target sentences have to be translated. Based on the above discussion, the translation literalness of a bilingual sentence is measured by the following procedure. Note that a translation dictionary is utilized in this procedure. The dictionary is automatically constructed by gathering the results of word alignment at this time, though hand-made dictionaries may also be utilized. In this process, we assume that one source word corresponds to one target word. 1. Look up words in the translation dictionary by the source word. 
Ts denotes the number of source words found in the dictionary entries. 2. Look up words in the dictionary by target words. Tt denotes the number of target words found in the definition parts of the dictionary. 3. If there is an entry that includes both the source and target word, the word pair is regarded as a word link. L denotes the number of word links. 4. Calculate the literalness with the following equation, which we call the Translation Correspondence Rate (TCR) in this paper: TCR = 2L / (Ts + Tt). The TCR denotes the portion of the directly translated words among the words that should be translated. This definition is bi-directional, so omission and redundancy can be measured equally. Moreover, the influence of the dictionary size is low because the words that do not appear in the dictionary are ignored. For example, suppose that a Japanese source sentence (Source) and its English translations (Targets 1 and 2) are given as shown in Figure (lines between sentences denote word links), where five source words are found in the dictionary (Ts = 5). In the case of Target 2, four target words are found in the dictionary (Tt = 4), and there are three word links. Thus, the TCR is 2 × 3 / (5 + 4) ≈ 0.67, and Target 1 is judged as more literal than Target 2. The literalness based on the TCR is judged from a tagged result and a translation dictionary. In other words, 'deep analyses' such as parsing are not necessary. In this section, two approaches for constructing translation knowledge are introduced. One is bilingual corpus filtering, which selects highly literal bilingual sentences from the corpus. Filtering is done as preprocessing before rule acquisition. The other is split construction, which divides a bilingual sentence into literal and non-literal parts and applies different generalization strategies to these parts. We consider two approaches to corpus filtering. Filtering Based on Threshold A partial corpus is created by selecting bilingual sentences with TCR values higher than a threshold, and MT knowledge is constructed from the extracted corpus. By making the threshold higher, the coverage of MT knowledge will decrease because the size of the extracted corpus becomes smaller. Filtering Based on Group Maximum First, sentences that have the identical source sentence are grouped together, and a partial corpus is created by selecting the bilingual sentences that have the maximal TCR from each group. As opposed to filtering based on a threshold, all source sentences are used for knowledge construction, so the coverage of MT knowledge can be maintained. However, some context/situation-dependent translations remain in the extracted corpus when only one non-literal translation is in the corpus. The TCR can be calculated not only for sentences but also for phrases. In the case of filtering, the coverage of the MT knowledge is decreased by limiting translation to highly literal sentences. However, even though they are non-literal, such sentences may contain literal translations at the phrase level. Thus, the coverage can be maintained if we extract literal phrases from non-literal sentences and construct knowledge from them. A problem with this approach is that non-literal bilingual sentences sometimes contain idiomatic translations that should not be translated literally. For example, the Japanese greeting "Hajime mashi te" should be translated into "How do you do," not into its literal translation, "For the first time." Such idioms are usually represented by a long word sequence.
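The TCR computation itself is lightweight; a sketch under the one-to-one word-link assumption stated above (with a simple greedy matching standing in for whatever alignment procedure is actually used) might look as follows.

```python
# Sketch of the Translation Correspondence Rate (TCR = 2L / (Ts + Tt)).
# `dictionary` is a set of (source_word, target_word) pairs, e.g. harvested
# from word alignment; the link search below is a simple greedy one-to-one
# matching used only for illustration.
def tcr(source_words, target_words, dictionary):
    src_entries = {s for s, _ in dictionary}
    tgt_entries = {t for _, t in dictionary}
    ts = sum(1 for w in source_words if w in src_entries)  # Ts: source words in the dictionary
    tt = sum(1 for w in target_words if w in tgt_entries)  # Tt: target words in the dictionary
    links, used = 0, set()
    for s in source_words:
        for i, t in enumerate(target_words):
            if i not in used and (s, t) in dictionary:
                links += 1                                  # L: one-to-one word links
                used.add(i)
                break
    return 2 * links / (ts + tt) if ts + tt else 0.0
```

With Ts = 5, Tt = 4, and three links, this returns 2 × 3 / (5 + 4) ≈ 0.67, matching the Target 2 example above.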
To cope with literal and idiomatic translations, a sentence is divided into literal and non-literal parts, and a different construction is applied to each. Short rules, which are more generalized and easier to reuse, are generated from the literal parts. Long rules, which are more strict in their use in MT, are generated from the non-literal parts. The procedure is described as follows. 1. Phrasal correspondences are acquired by Hierarchical Phrase Alignment. 2. The hierarchy is traced from top to bottom, and the literalness of each correspondence is measured. If the TCR is equal to or higher than the threshold, the phrase is judged as a literal phrase and the tracing stops before reaching the bottom. 3. If the phrase is literal, transfer rules that include its lower hierarchy are generalized. 4. If the top structure (i.e., the entire sentence) is not literal, a rule is generated in which only the literal parts are generalized. For example, suppose that different target sentences from the same source are given as shown in Figure Thus, by using the split construction, rules like templates are generated from non-literal translations and primary rules for transfer-based MT are generated only from literal phrases. Rules generated from non-literal translations are used only when the input word sequence exactly matches the sequence in the rule. In other words, they are hardly used in different contexts. In order to evaluate the effect of literalness in MT knowledge construction, we constructed knowledge by using the methods described in Section 5 and evaluated the MT quality of the resulting English-to-Japanese translation. Bilingual Corpus We used as the training set 149,882 bilingual sentences from the Basic Travel Expression Corpus Translation Dictionary: Extraction of Word Correspondence For word correspondences that occur more than nine times in the corpus, statistical word alignment was carried out by a similar method to Evaluation for MT Quality We used the following two methods to evaluate MT quality. We used BLEU. From the above-mentioned test set, 510 sentences were evaluated by paired comparison. In detail, the source sentences were translated using the base rule set created from the entire corpus, and the same sources were translated using the rules constructed with literalness. One by one, a Japanese native speaker judged which MT result was better or whether they were of the same quality. Subjective quality is represented by the following equation, where I denotes the number of improved sentences and D denotes the number of degraded sentences: Subj. Quality = (I - D) / (# of test sentences) (2) 6.2 MT Quality vs. Construction Methods The level of MT quality achieved by each of the construction methods is compared in Table First, focusing on the filtering, the subjective qualities or the BLEU scores are better than the base in both methods. Comparing the threshold with the group maximum, the BLEU score is increased by the group maximum. The coverage of the exact rules is higher even if the corpus size decreases. Filtering based on the group maximum improves the quality while maintaining the coverage of the knowledge. Although we used a high-density corpus where many English sentences have multiple Japanese translations, the quality improved by only about 1%. It is difficult to significantly improve the quality by bilingual corpus filtering because it is difficult to both remove insufficiently literal translations and maintain coverage of MT knowledge.
On the other hand, the BLEU score and the subjective quality both improved in the case of split construction, even though the coverage of the exact rules decreased. In particular, the subjective quality improved by about 4.9%. Incorrect translations were suppressed because the rules generated from non-literals are restricted when the MT system applies them. In summary, all construction methods helped to improve the BLEU scores or the subjective qualities; therefore, construction with translation literalness is an effective way to improve MT quality. In this paper, we proposed restricting the translation variety in bilingual corpora by controlled translation, which limits bilingual sentences to the appropriate translations for MT. We focused on literalness from among the various measures for controlled translation and defined a Translation Correspondence Rate for calculating literalness. Less literal translations could be removed by fil- The TCR is capable of measuring literalness not only for bilingual sentences but also for phrases. In other words, a bilingual sentence can be divided into literal phrases and other phrases. Using this feature, sentences were divided into literal parts and non-literal parts, and transfer rules that could be applied with strong conditions were generated from the non-literal parts. As a result, MT quality as judged by subjective evaluation improved in about 4.9% of the sentences. Word translation stability and context-freeness were also effective measures. MT quality is expected to be further improved by using these measures because they reduce multiple translations.
537
1,376
537
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies.
Recurrent neural networks (RNNs) are remarkably effective models of sequential data. Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs Here we revisit the question asked by Contrary to the findings of Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias? We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English. As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner. In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set. As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.
We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by While the pretrained large-scale language model of Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent. We empirically test this conjecture by running a strong character-based LSTM language model of A priori, we expect that number agreement is harder for character LSTMs for two reasons. First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors. tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens. Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model. On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'. As demonstrated on the last row of Table Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures? We focus on recurrent neural network grammars Our choice of RNNGs is motivated by the findings of RNNGs We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity. 
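The number-agreement diagnostic used in these experiments reduces to checking whether the model assigns a higher probability to the correctly inflected verb than to its incorrectly inflected counterpart, given the sentence prefix. A minimal sketch, where `logprob` is an assumed callback onto whichever language model (word-level LSTM, character LSTM, or RNNG) is being scored:

```python
# Number-agreement diagnostic: the model is credited when it assigns a higher
# (log-)probability to the correctly inflected verb than to the wrong form,
# given the prefix. `logprob(prefix, word)` is an assumed callback onto the
# language model being evaluated.
def agreement_accuracy(examples, logprob):
    correct = 0
    for prefix, good_verb, bad_verb in examples:
        if logprob(prefix, good_verb) > logprob(prefix, bad_verb):
            correct += 1
    return correct / len(examples)

# Toy item with a single attractor ('cabinet') between subject and verb.
examples = [(["the", "keys", "to", "the", "cabinet"], "are", "is")]
```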
Table (avg.(±sdev)/min/max for n=2, n=3, n=4): LM 5.8(±0.2)/5.5/6.0, 9.6(±0.7)/8.8/10.1, 14.1(±1.2)/13.0/15.3; TD 5.5(±0.4)/4.9/5.8, 7.8(±0.6)/7.4/8.0, 8.9(±1.1)/7.9/9.8; LC 5.4(±0.3)/5.2/5.5, 8.2(±0.4)/7.9/8.7, 9.9(±1.3)/8.8/11.5; BU 5.7(±0.3)/5.5/5.8, 8.5(±0.7)/8.0/9.3, 9.7(±1.1)/9.0/11.3. The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory. Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit. In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement. Perplexity. To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric? We answer this question by using an importance sampling marginalization procedure. Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of 4 Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different construction order than the top-down, left-to-right order used above. These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g. Hale, 2014, chapter 3). They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminiscent of Markov Grammars. These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure. In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.
In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig. In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed. Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n). In step 5 of Fig. Left-corner traversals combine some aspects of top-down and bottom-up processing. As illustrated in Fig. The NT_SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack. This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed. In step 1 of Fig. Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement with multiple attractors. Despite this strong performance, we discover that explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation. Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies. We explore the possibility that how the structure is built affects number agreement performance. Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors.
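To make the three construction orders described above concrete, the sketch below enumerates the action sequences each strategy would emit for a toy tree; it simplifies the actual stack machinery (for example, the stack-swapping effect of NT_SW), and the function names and example tree are invented for illustration.

```python
# Rough sketch of the three RNNG construction orders on a toy tree.
# Leaves are strings; internal nodes are (label, children) tuples.

def top_down(node):
    if isinstance(node, str):
        return [f"GEN({node})"]
    label, children = node
    acts = [f"NT({label})"]
    for c in children:
        acts += top_down(c)
    return acts + ["REDUCE"]

def bottom_up(node):
    if isinstance(node, str):
        return [f"GEN({node})"]
    label, children = node
    acts = []
    for c in children:
        acts += bottom_up(c)
    # REDUCE(X, n) also decides how many stack elements become daughters.
    return acts + [f"REDUCE({label},{len(children)})"]

def left_corner(node):
    if isinstance(node, str):
        return [f"GEN({node})"]
    label, children = node
    # The left-most child is built first, then NT_SW announces the parent.
    acts = left_corner(children[0]) + [f"NT_SW({label})"]
    for c in children[1:]:
        acts += left_corner(c)
    return acts + ["REDUCE"]

tree = ("S", [("NP", ["the", "flowers"]), ("VP", ["bloom"])])
for strategy in (top_down, bottom_up, left_corner):
    print(strategy.__name__, strategy(tree))
```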
962
1,663
962
Text and Causal Inference: A Review of Using Text to Remove Confounding from Causal Estimates
Many applications of computational social science aim to infer causal conclusions from nonexperimental data. Such observational data often contains confounders, variables that influence both potential causes and potential effects. Unmeasured or latent confounders can bias causal estimates, and this has motivated interest in measuring potential confounders from observed text. For example, an individual's entire history of social media posts or the content of a news article could provide a rich measurement of multiple confounders. Yet, methods and applications for this problem are scattered across different communities and evaluation practices are inconsistent. This review is the first to gather and categorize these examples and provide a guide to data-processing and evaluation decisions. Despite increased attention on adjusting for confounding using text, there are still many open problems, which we highlight in this paper.
In contrast to descriptive or predictive tasks, causal inference aims to understand how intervening on one variable affects another variable strongly biased estimates and thus invalid causal conclusions. To eliminate confounding bias, one approach is to perform randomized controlled trials (RCTs) in which researchers randomly assign treatment. Yet, in many research areas such as healthcare, education, or economics, randomly assigning treatment is either infeasible or unethical. For instance, in our running example, one cannot ethically randomly assign participants to smoke since this could expose them to major health risks. In such cases, researchers instead use observational data and adjust for the confounding bias statistically with methods such as matching, propensity score weighting, or regression adjustment ( §5). In causal research about human behavior and society, there are potentially many latent confounding variables that can be measured from unstructured text data. Text data could either (a) serve as a surrogate for potential confounders; or (b) the language of text itself could be a confounder. Our running example is an instance of text as a surrogate: a researcher may not have a record of an individual's occupation but could attempt to measure this variable from the individual's entire history of social media posts (see Fig. A challenging aspect of this research design is the high-dimensional nature of text. Other work has explored general methods for adjusting for highdimensional confounders We narrow the scope of this paper to review methods and applications with text data as a causal confounder. In the broader area of text and causal inference, work has examined text as a mediator Outside of this prior work, there has been relatively little interaction between natural language processing (NLP) research and causal inference. NLP has a rich history of applied modeling and diagnostic pipelines that causal inference could draw upon. Because applications and methods for text 1 For instance, there have been four workshops on representation learning at major NLP conferences in the last four years • For applied practitioners, we collect and categorize applications with text as a causal confounder (Table • For causal inference researchers working with text data, we highlight recent work in representation learning in NLP ( §4) and caution that this is still an open research area with questions of the sensitivity of effects to choices in representation. We also outline existing interpretable evaluation methods for adjustments of text as a causal confounder ( §6). • For NLP researchers working with causal inference, we summarize some of the most-used causal estimators that condition on confounders: matching, propensity score weighting, regression adjustment, doubly-robust methods, and causally-driven representation learning ( §5). We also discuss evaluation of methods with constructed observational studies and semi-synthetic data ( §7).
In Table Text as a surrogate for confounders. Traditionally, causal research that uses human subjects as the unit of analysis would infer demographics via surveys. However, with the proliferation of the web and social media, social research now includes large-scale observational data that would be challenging to obtain using surveys Open problems: NLP systems have been shown to be inaccurate for low-resource languages There is growing interest in measuring language itself (e.g. the sentiment or topical content of text) as causal confounders. (Figure: Step 3 is to choose a method that adjusts for confounding in causal estimates ( §5); evaluation should include (A) sensitivity analysis ( §4), (B) human evaluation of adjustments when appropriate ( §6), and (C) evaluation of recovering the true causal effects ( §7).) Other domains that analyze language as a confounder include news Two predominant causal inference frameworks are structural causal models (SCM) In the ideal causal experiment, for each unit of analysis, i (e.g., a person), one would like to measure the outcome, y i (e.g., an individual's life expectancy), in both a world in which the unit received treatment, t i = 1 (e.g., the person smoked), as well as in the counterfactual world in which the same unit did not receive treatment, t i = 0 (e.g., the same person did not smoke). A fundamental challenge of causal inference is that one cannot simultaneously observe treatment and non-treatment for the same unit. (In this work, we only address binary treatments, but multivalue treatments are also possible.) The most common population-level estimand of interest is the average treatment effect (ATE), where n 1 is the number of units that have received treatment and n 0 is the number of units that have not received treatment. However, this equation will be biased if there are confounders, z i , that influence both treatment and outcome. Structural causal models (SCMs) use a graphical formalism that depicts nodes as random variables and directed edges as the direct causal dependence between these variables. The typical estimand of choice for SCMs is the probability distribution of an outcome variable Y given an intervention on a treatment variable T : in which the do-notation represents intervening to set variable T to the value t and thereby removing all incoming arrows to the variable T . Identification. In most cases, Equation 2 is not equal to the ordinary conditional distribution P (Y | T = t) since the latter is simply filtering to the sub-population and the former is changing the underlying data distribution via intervention. Thus, for observational studies that lack intervention, one needs an identification strategy in order to represent P (Y | do(T = t)) in terms of distributions of observed variables. One such identification strategy (assumed by the applications throughout this review) is the backdoor criterion which applies to a set of variables, S, if they (i) block every backdoor path between treatment and outcome, and (ii) no node in S is a descendant of treatment. Without positive identification, the causal effects cannot be estimated and measuring variables from text is a secondary concern. Drawing the causal graph. Causal graphs help clarify which variables should and should not be conditioned on. The causal graphs in Figure After drawing the causal graph, the next step is to use available text data to recover latent confounders.
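To make the ATE and backdoor adjustment discussion above concrete, here is a minimal synthetic sketch (not from the review; variable names and functional forms are invented) showing how stratifying on a confounder recovers the true effect where the naive difference in means does not.

```python
import numpy as np
# Synthetic illustration: z confounds both treatment t and outcome y.

rng = np.random.default_rng(0)
n = 10_000
z = rng.binomial(1, 0.5, n)                    # confounder
t = rng.binomial(1, 0.2 + 0.6 * z)             # treatment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(0, 1, n)    # true effect of t is 2.0

# Naive ATE: simple difference in means, biased by confounding.
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment: average within-stratum effects, weighted by P(z).
adjusted = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)
print(f"naive={naive:.2f}  adjusted={adjusted:.2f}  truth=2.00")
```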
Some approaches pre-specify the confounders of interest and measure them from text, P (z | x). Others learn confounders inductively and use a low-dimensional representation of text as the confounding variable z in subsequent causal adjustments. Pre-specified confounders. When a practitioner can specify confounders they want to measure from text (e.g., extracting "occupation" from text in our smoking example), they can use either (1) lexicons or (2) trained supervised classifiers as the instrument of measurement. Lexicons are word lists that can either be hand-crafted by researchers or taken off-the-shelf. Open problems: Since NLP methods are still far from perfectly accurate, how can one mitigate error that arises from approximating confounding variables? Closely related to this question is effect restoration which addresses error from using proxy variables (e.g., a father's occupation) in place of true confounders (e.g, socioeconomic status) Inductively derived confounders. Other researchers inductively learn confounders in order to condition on all aspects of text, known and unknown. For example, some applications condition on the entirety of news Open problems: Estimates of causal effects are contingent on the "garden of forking paths" of data analysis, meaning any "paths" an analyst did not take could have resulted in different conclusions We highlight that these decisions have been shown to alter results in predictive tasks. For instance, studies have shown that pre-processing decisions dramatically change topic models Given a set of variables Z that satisfy the backdoor criterion ( §3.2), one can use the backdoor adjustment to estimate the causal quantity of interest, Conditioning on all confounders is often impractical in high-dimensional settings such as those found in natural language. We provide an overview of methods used by applications in this review that approximate such conditioning, leading to unbiased estimates of treatment effect; however, we acknowledge this is not an exhaustive list of methods and direct readers to more extensive guides Open problems: Causal studies typically make an assumption of overlap, also known as common support or positivity, meaning that any individual has a non-zero probability of assignment to each treatment condition for all possible values of the covariates: ∀z, A propensity score estimates the conditional probability of treatment given a set of possible confounders (4) Inverse Probability of Treatment Weighting (IPTW) assigns a weight to each unit based on the propensity score (5) thus emphasizing, for example, treated units that were originally unlikely to be treated (t i = 1, low π i ). The ATE is calculated with weighted averages between the treatment and control groups, 7 w j y j (6) Matching aims to create treatment and control groups with similar confounder assignments; for example, grouping units by observed variables (e.g., age, gender, occupation), then estimating effect size within each stratum Once the matching algorithm is implemented, counterfactuals (estimated potential outcomes) are obtained from the matches M i for each unit i: 7 Lunceford and Davidian (2004) note there are two versions of IPTW, where both the weighted sum and the raw count have been used for the n0 and n1 denominators. which is plugged into the matching estimator, Open problems: Ho et al. 
( Regression adjustment fits a supervised model from observed data about the expected conditional outcomes Then the learned conditional outcome, q, is used to predict counterfactual outcomes for each observation under treatment and control regimes, Unlike methods that model only treatment (IPTW) or only outcome (regression adjustment), doubly robust methods model both treatment and outcome, and have the desirable property that if either the treatment or outcome models are unbiased then the effect estimate will be unbiased as well. These methods often perform very well in practice Several research efforts design representations of text specifically for causal inference goals. These approaches still initialize their models with representations of text described in Section 4, but then the representations are updated with machine learning architectures that incorporate the observed treatment assignment and other causal information. Open problems: These methods have yet to be compared to one another on the same benchmark evaluation datasets. Also, when are the causal effects sensitive to hyperparameter and network architecture choices and what should researchers do in these settings? Text data has the advantage of being interpretablematched pairs and some low-dimensional representations of text can be read by humans to evaluate their quality. When possible, we suggest practitioners use (1) interpretable balance metrics and/or (2) human judgements of treatment propensity to evaluate intermediate steps of the causal estimation pipeline. For matching and propensity score methods, the confounder balance should be assessed, since ideally P (Z | T = 1) = P (Z | T = 0) in a matched sample where z ij is a single confounder j for a single unit i and σ t=1 j is the standard deviation of z ij for all i such that t i = 1. SDM can also be used to evaluate the propensity score, in which case there would only be a single j For causal text applications, Roberts et al. ( Open problems: For embeddings and causallydriven representations, each dimension in the confounder vector z is not necessarily meaningful. How can balance metrics be used in this setting? When possible, one can also improve validation by evaluating matched items (posts, sentences, documents etc.) to humans for evaluation. Humans can either (a) use a scale (e.g., a 1-5 Likert scale) to rate items individually on their propensity for treatment, or (b) assess similarity of paired items after matching. A simple first step is for analysts to do "inhouse" evaluation on a small sample (e.g., Open problems: How can these human judgement experiments be improved and standardized? Future work could draw from a rich history in NLP of evaluating representations of topic models and embeddings Because the true causal effects in real-world causal inference are typically unknown, causal evaluation is a difficult and open research question. As algorithmic complexity grows, the expected performance of causal methods can be difficult to estimate theoretically Constructed observational studies collect data from both randomized and non-randomized experiments with similar participants and settings. 
Evaluations of this kind include job training programs in economics Open problems: To extend constructed observational studies to text data, one could build upon Semi-synthetic datasets use real covariates and synthetically generate treatment and outcome, as in the 2016 Atlantic Causal Inference Competition Open problems: Semi-synthetic datasets that use real covariates of text seem to be a better evaluation strategy than purely synthetic datasets. However, with semi-synthetic datasets, researchers could be inadvertently biased to choose metadata that they know their method will recover. A promising future direction is a competition-style evaluation like Computational social science is an exciting, rapidly expanding discipline. With greater availability of text data, alongside improved natural language processing models, there is enormous opportunity to conduct new and more accurate causal observational studies by controlling for latent confounders in text. While text data ought to be as useful for measurement and inference as "traditional" lowdimensional social-scientific variables, combining NLP with causal inference methods requires tackling major open research questions. Unlike predictive applications, causal applications have no ground truth and so it is difficult distinguish modeling errors and forking paths from the true causal effects. In particular, we caution against using all available text in causal adjustment methods without any human validation or supervision, since one cannot diagnose any potential errors. Solving these open problems, along with the others presented in this paper, would be a major advance for NLP as a social science methodology.
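A rough sketch of the semi-synthetic recipe discussed above is given below: real text covariates (here, stand-in document embeddings) are kept, while treatment and outcome are simulated from them so that the true effect is known by construction and estimators can be scored against it. The functional forms and names are invented for illustration.

```python
import numpy as np
# Semi-synthetic data: real covariates X, simulated treatment and outcome.

def make_semi_synthetic(X, true_effect=1.0, seed=0):
    rng = np.random.default_rng(seed)
    confounder = X[:, 0]                        # pretend dimension 0 drives confounding
    p_treat = 1.0 / (1.0 + np.exp(-confounder)) # treatment probability depends on it
    t = rng.binomial(1, p_treat)
    y = true_effect * t + 2.0 * confounder + rng.normal(0.0, 1.0, len(X))
    return t, y

# Stand-in for real document embeddings; in practice X would come from text.
X = np.random.default_rng(1).normal(size=(1000, 16))
t, y = make_semi_synthetic(X)
naive_ate = y[t == 1].mean() - y[t == 0].mean()  # compare against true_effect = 1.0
```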
935
2,993
935
Copyright Violations and Large Language Models
Language models may memorize more than just facts, including entire chunks of texts seen during training. Fair use exemptions to copyright laws typically allow for limited use of copyrighted material without permission from the copyright holder, but typically for extraction of information from copyrighted materials, rather than verbatim reproduction. This work explores the issue of copyright violations and large language models through the lens of verbatim memorization, focusing on possible redistribution of copyrighted text. We present experiments with a range of language models over a collection of popular books and coding problems, providing a conservative characterization of the extent to which language models can redistribute these materials. Overall, this research highlights the need for further examination and the potential impact on future developments in natural language processing to ensure adherence to copyright regulations. Code is at
If you remember what Pride and Prejudice is about, you have not necessarily memorized it. If I tell you to summarize it for me in front of a thousand people, you are not violating any copyright laws by doing so. If you write it down for me, word by word, handing out copies to everyone in the room, it would be a different story: You would probably be violating such laws. But what then, with language models? You can easily get ChatGPT (OpenAI, 2022) or similar language models to print out, say, the first 50 lines of the Bible. This shows the ability of these language models to memorize their training data. Memorization in large language models has been studied elsewhere, mostly focusing on possible safeguards to avoid memorizing personal information in the training data There has been one attempt that we are aware of, to probe language models' memorization of copyrighted books Copyright laws exist to protect the rights of creators and ensure they receive recognition and compensation for their original works. Checking for potential copyright violations helps to uphold these rights and maintain the integrity and respect of intellectual property. Do language models memorize and reproduce copyrighted text? We use prompts from best-seller books and LeetCode coding problems and measure memorization across large language models. If the models show verbatim memorization, they can be used to redistribute copyrighted materials. See Figure • We discuss potential copyright violations with verbatim memorization exhibited by six distinct language model families, leveraging two kinds of data, and employing two probing strategies along with two metrics. • Our findings confirm that larger language models memorize at least a substantial repository of copyrighted text fragments, as well as complete LeetCode problem descriptions. • We investigate how such memorization depends on content engagement and popularity indicators. • We obviously do not draw any legal conclusions, but simply suggest methods that would be relevant for extracting the empirical data that would be the basis for such a discussion.
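As a rough illustration of the kind of probing used for open-source models, the sketch below feeds the first tokens of a passage to a causal language model and scores how much of the true continuation is reproduced with a longest-common-subsequence ratio; the model name, prefix length, decoding settings, and metric are illustrative choices, not the paper's exact protocol.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Prefix probing sketch: prompt with the opening tokens of a text and compare
# the model's greedy continuation against the true continuation.

def lcs_len(a, b):
    # Longest common subsequence length between two token-id lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def probe(model_name, text, prefix_tokens=50, continuation_tokens=50):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_tokens]
    reference = ids[prefix_tokens:prefix_tokens + continuation_tokens]
    out = model.generate(prefix.unsqueeze(0), max_new_tokens=continuation_tokens, do_sample=False)
    generated = out[0][prefix_tokens:]  # generate() returns prompt + continuation
    return lcs_len(generated.tolist(), reference.tolist()) / max(len(reference), 1)

# e.g. probe("facebook/opt-1.3b", opening_page_of_some_book)
```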
The trade-off between memorization and generalization Based on how memorization is distributed, and what is predictive thereof, Copyright laws and conventions grant the creators of a work exclusive rights to use and distribute their creations, with certain exceptions In a European context, quotation is listed as one of the so-called exceptions and limitations to copyright under §Article 5(3)(d) of the copyright and related rights in the information society directive 2001/29/EC. The legislation states that member states may provide exceptions to copyright laws to allow for 'quotations for purposes such as criticism or review, provided that they relate to a work or other subject-matter which has already been lawfully made available to the public, that, unless this turns out to be impossible, the source, including the author's name, is indicated, and that their use is in accordance with fair practice, and to the extent required by the specific purpose' Language models generating full citations could be a good practice to avoid copyright violations. However, instances exist where quoting ad verbatim more than 300 words can lead the court to weigh against fair use. Therefore, even in the case where language models distribute smaller chunks of text as mere quotations and even if they provide citations, language models still may violate copyright laws. Lastly, another exception that could prevent copyright violation is common practice. Here, there is some variation. For book-length material, some say a quotation limit of 300 words is common practice, but others have argued for anything from 25 words We experiment with a variety of large language models and probing methods, evaluating verbatim memorization across bestsellers and LeetCode problems. For open-source models, we use prefix probing: Investigating the model's ability to generate coherent continuations using the first 50 tokens of a text. A similar setting is followed by Datasets. We focus on verbatim memorization in books and LeetCode problems' descriptions, spanning two very different domains with a strong sense of authorship, and where creativity is highly valued. Copyright violations, such as unauthorized Language models. We select open-source families of models that progressively increase in size: OPT Do larger language models memorize more? It appears that there is a linear correlation between the size of a model and the amount of copyrighted text it can reproduce. Results for books are summarized in Figure What works are memorized the most? See the right part of Figure Popularity indicators. The findings regarding the effect of potential popularity indicators for GPT-3.5 are presented in Figure 6 Other examples were more interesting. When prompted to print out the first line of Pinocchio, for example, GPT-3.5 said: The first line of "The Adventures of Pinocchio" is: "Once upon a time there was a piece of wood." In reality, the first line reads: 'Centuries ago there lived -' However, this sentence is, in a sense, not part of the story. The first paragraph reads: Centuries ago there lived-"A king!" my little readers will say immediately. No, children, you are mistaken. Once upon a time, there was a piece of wood. It was not an expensive piece of wood. Far from it. Just a common block of firewood, one of those thick, solid logs that are put on the fire in winter to make cold rooms cozy and warm.
This suggests that language models may have memorized books starting from the fourth or fifth sentence, when the first sentences are, in a sense, not really part of the story. Confabulations. Language models are known to confabulate. They also confabulate, at times, when asked to print out literary works. GPT-3.5, when asked to print out The Girl with the Dragon Tattoo by Stieg Larsson, printed out the following lines: "It happened every year, was almost a ritual. The ritual was based on the same premise each time: Henrik Vanger began receiving a pressed flower in a frame . . . In reality, the book begins by: It happened every year, was almost a ritual. And this was his eighty-second birthday. When, as usual, the flower was delivered, . . . Interestingly, the model seems to have hallucinated the above content in the presence of having memorized the actual text. When prompting more specifically for the second line of the novel, GPT-3.5 delivers, saying: I apologize for the confusion in my previous response. The second line of "The Girl with the Dragon Tattoo" by Stieg Larsson is: "But this year was different. This year he had turned eighty-two." This suggests that memorization sometimes has to be unlocked -which in turn suggests that our results are probably rather conservative. Given previous results that models often first learn to memorize and then suppress memorization to facilitate generalization Overall, this paper serves as a first exploration of verbatim memorization of literary works and educational material in large language models. It raises important questions around large language models and copyright laws. No legal conclusions should be drawn from our experiments, but we think we have provided methods and preliminary results that can help provide the empirical data to ground such discussions. The analysis conducted in this study focuses on a specific range of best-selling books and educational materials, which may of course not fully represent the broader landscape of copyrighted materials. Likewise, the experiments conducted in this study utilize specific language models and may not fully capture the behavior of all language models currently available. Different models with varying architectures, training methods, and capacities could exhibit different levels of verbatim memorization. Moreover, we did not include cloze probing (i.e. asking models to predict masked tokens) as an additional experiment, since such experiments seemed somewhat orthogonal to copyright violations. Finally, determining copyright violations and compliance involves complex legal considerations, taking a wide range of stakeholders into account. Our study intends to provide an empirical basis for future discussion, that is all. What is fair use in language models is also an ethical question. Our study aims to shed light on the extent of verbatim memorization in large language models. Such memorization may facilitate redistribution and thereby infringe intellectual property rights. Is that really fair? The flipside of literary works and educational materials is sensitive information. Here, new risks arise. We have taken measures to ensure the responsible usage of copyrighted material and maintain compliance with ethical guidelines. Key considerations include respect for intellectual property rights, adherence to legal regulations, transparency and accountability in model capabilities and limitations, ethical data usage and permissions. 
Plots for all models, covering all 19 books and 1,826 LeetCode problems, are provided in the accompanying figures.
960
2,138
960
Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation
Contrastive learning has achieved impressive success in generation tasks to mitigate the "exposure bias" problem and discriminatively exploit the different quality of references. Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word, while keywords are the gist of the text and dominate the constrained mapping relationships. Hence, in this work, we propose a hierarchical contrastive learning mechanism, which can unify hybrid granularities of semantic meaning in the input text. Concretely, we first propose a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations. Then, we construct intra-contrasts within instance-level and keyword-level, where we assume words are sampled nodes from a sentence distribution. Finally, to bridge the gap between independent contrast levels and tackle the common contrast vanishing problem, we propose an inter-contrast mechanism that measures the discrepancy between contrastive keyword nodes respectively to the instance distribution. Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
Generation tasks such as storytelling, paraphrasing, and dialogue generation aim at learning a certain correlation between text pairs that maps an arbitrary-length input to another arbitrary-length output. Traditional methods are mostly trained with "teacher forcing" and lead to an "exposure bias" problem Based on the above motivation, in this paper, we propose a hierarchical contrastive learning method built on top of the classic CVAE structure. We choose CVAE due to its ability to model global properties such as syntactic, semantic, and discourse coherence To unify individual intra-contrasts and tackle the "contrast vanishing" problem in independent contrastive granularities, we leverage an inter-contrast, the Mahalanobis contrast, to investigate the contrastive enhancement based on the Mahalanobis distance We empirically show that our model outperforms CVAE and other baselines significantly on three generation tasks: paraphrasing, dialogue generation, and storytelling. Our contributions can be summarized as follows: • To the best of our knowledge, we are the first to propose an inter-level contrastive learning method, which unifies instance-level and keyword-level contrasts in the CVAE framework. • We propose three contrastive learning measurements: KL divergence for semantic distribution, cosine distance for points, and Mahalanobis distance for points with distribution. • We introduce a global keyword graph to obtain polished keyword representations and construct imposter keywords for contrastive learning.
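For reference, one plausible way to write down the three measurements named in the second contribution is sketched below; this is a hedged reconstruction from the descriptions in this paper (with q_phi and p_theta as defined in the CVAE background that follows), not the authors' exact formulas.

```latex
% Instance level: KL divergence between posterior and prior distributions
D_{\mathrm{KL}}\big(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\big)
  = \int q_\phi(z \mid x, y)\,\log\frac{q_\phi(z \mid x, y)}{p_\theta(z \mid x)}\,dz
% Keyword level: cosine distance between keyword vectors u and v
d_{\cos}(u, v) = 1 - \frac{u^\top v}{\lVert u\rVert\,\lVert v\rVert}
% Inter level: Mahalanobis distance between a keyword u and N(mu, Sigma)
d_{\mathrm{M}}\big(u, \mathcal{N}(\mu, \Sigma)\big)
  = \sqrt{(u - \mu)^\top \Sigma^{-1} (u - \mu)}
```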
Contrastive learning is used to learn representations by teaching the model which data points are similar or not. Due to the excellent performance on self-supervised and semi-supervised learning, it has been widely used in natural language processing (NLP). Firstly, The Mahalanobis distance is a measure of the distance between a point and a distribution (De Variational autoencoder (VAE) was proposed by 3.1 Background VAE: Variational auto-encoder (VAE) is a typical encoder-decoder structural model with certain types of latent variables. Given an input x, VAE models the latent variable z through the prior distribution p θ (z) , and the observed data x is reconstructed by the generative distribution p θ (x|z) which is the likelihood function that generates x conditioned on z. Since z is unknown, it should be estimated according to the given data x as p θ (z|x). While the posterior density p θ (z|x) = p θ (x|z)p θ (z)/p θ (x) is intractable, VAE introduces a recognition posterior distribution q ϕ (z|x) approximates to the true posterior p θ (z|x). Thus, VAE is trained by optimizing the lower bound on the marginal likelihood of data x as: where D KL is the Kullback-Leibler divergence. The conditional variational auto-encoder (CVAE) is the supervised version of VAE with an additional output variable. Giving a dataset {x i , y i } N i=1 consisting of N samples, CVAE is trained to maximize the conditional log-likelihood, and the variational lower bound of the model is written as follows: (2) Assuming the type of latent variable obeys Gaussian distribution, the first right-hand side term can be approximated by drawing samples {z i } N i=1 from the recognition posterior distribution q ϕ (z|x, y), where z ∼ N (µ, σ 2 I), and then objective of the CVAE with Gaussian distribution can be written as: where The distribution q ϕ (z|x, y) is reparameterized with a differentiable function g ϕ , which enables the model trainable via stochastic gradient descent. Inspired by In this section, we introduce our hierarchical contrastive learning method, which is comprised of three parts: instance-level contrast based on KL divergence (sec.3.2.1), keyword-level contrast based on keyword graph (sec.3.2.2), and inter-contrast: Mahalanobis contrast (sec.3.2.3). To tackle the "exposure bias" problem and discriminatively exploit the different quality of references, instance-level contrastive learning is introduced to learn discrepancies of targets. Specifically, in addition to the observed input data x and positive output y + , a negative output y -is added to construct a contrastive pair {(x, y + ), (x, y -)}. In this case, the prior distribution p θ (z|x) is learned from a prior network, which is denoted as f θ (x). The approximate posteriors q ϕ (z|x, y + ) and q ϕ (z|x, y -) are learned from a posterior network and represented as f ϕ (x, y + ) and f ϕ (x, y -), respectively. The objective here is to make the distance between a prior distribution and positive posterior distribution closer than with the negative posterior distribution. Thus, the instance-level contrastive loss function can be written as: where the y * ∈ Y can be positive sample y + or negative sample y -, and the τ is a temperature parameter to control push and pull force. 
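A simplified PyTorch-style reading of the instance-level contrast just described is sketched below, assuming diagonal Gaussian prior and posteriors and treating the (negated, temperature-scaled) distances as logits of an InfoNCE-style objective; this is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def instance_contrast(prior, pos_post, neg_post, tau=0.5):
    # prior / posteriors are (mu, logvar) pairs from the prior and posterior
    # networks; a smaller KL means the posterior sits closer to the prior.
    d_pos = gaussian_kl(*pos_post, *prior)
    d_neg = gaussian_kl(*neg_post, *prior)
    logits = torch.stack([-d_pos / tau, -d_neg / tau], dim=-1)
    # Pull the positive posterior toward the prior, push the negative away.
    return F.cross_entropy(logits, torch.zeros(logits.size(0), dtype=torch.long))
```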
The function h(•) denotes the distance between elements, which is set as Kullback-Leibler divergence Since the instance-level contrast focuses on learning high-level information and fails to discriminate the contribution of each word, we incorporate it with a keyword-level contrast to pay more attention to the specific keyword. Keyword Graph: Given an input-output text pair (x, y), keywords k x , k y can be extracted from x and y, respectively. For an input text x i with keyword k x,i , input texts that contain the same keyword are gathered into a cluster C i = {x j } n j=1 , k x,j ∈ x j , where n is the number of texts in C i . Each text x j ∈ C i has a positive-negative output text pair {(y + j , y - j )} containing a positive output keyword k + y,j and a negative one k - y,j , respectively. Thus, spreading to the entire cluster C i , for the output text y i , there exists positive relations r + i,j between its keyword k y,i and each of the surrounded positive keywords {k + y,j } n j=1 . Likewise, negative relations r - i,j correlates the output keyword k y,i and the surrounded negative ones {k - y,j } n j=1 . Based on these keywords as nodes and their relations as edges where h t * can be h t i or h t j . Then, based on the obtained edge representation r t+1 ij , we update the node representations considering both the related nodes and relation edges by the graph attention layer, GAT(h t i , h t j , r t ij ), which is designed as: where W q , W k , W r and W v are all learnable parameters, and the α t ij is the attention weight between h t i and h t j . Besides, to avoid gradient vanishing after several iterations, a residual connection is added to the output u t i and the updated node representations h t+1 i is obtained. In this way, the new representation of each keyword node consists of the relation dependency information from neighbor nodes N i . We take the node representations from the last iteration as the final keyword representations, denoted as u for brevity. The keyword-level contrastive learning arises from input keywords against positive output keywords and negative impostor keywords. The input keyword u in is extracted from the input text as an anchor, and the output keyword u out is extracted from ground-truth output text. While the impostor keyword is calculated from the negative neighbours of the output keyword u out , written as u imp = i W i u i , where u i is the representation of keyword node which is obtained by the keyword graph learning procedure described above. In this way, with the help of neighbour nodes in the graph, we can obtain a more indistinguishable and difficult negative sample. The loss of keyword level contrastive learning thus can be written as: where u * ∈ U denotes the positive output keyword u out or imposter keyword u imp . In keyword-level contrast, h(•) utilizes cosine similarity to calculate the distance between points. Note that there exists a space gap between the instance-level contrast and the keyword-level contrast, which disturbs the completeness of this hierarchical contrastive architecture. Besides, the contrastive values vanish when the distance metric is hard to measure the actual discrepancy between positive and negative merely in instance distributions or in keyword representations. 
To mitigate such problems, we design a Mahalanobis contrastive mechanism to correlate the instance distribution and keyword representation, where the objective is to minimize the margin between the output keyword u out and the posterior semantic distribution q ϕ (z|x, y) ≜ f ϕ (x, y) and maximize the margin between the imposter keyword u imp and the posterior distribution f ϕ (x, y): u * ∈U e h(f ϕ (x,y),u * )/τ )], (11) where u * ∈ U can be the positive output keyword u out or negative imposter keyword u imp . In Mahalanobis contrast, h(•) utilizes Mahalanobis distance Finally, we equip the CVAE model with the proposed hierarchical contrastive learning framework to unify hybrid granularities by adding L ins , L keyword and L ma to the reconstructed loss of Equation 3. We conduct experiments on three public datasets QQP, Douban, RocStories for paraphrasing, dialogue generation, and storytelling task, respectively. The details of the datasets are as follows: Dialogue (Douban) Douban Paraphrasing (QQP) QQP Storytelling (RocStories) RocStories consists of 98,163 high-quality hand-crafted stories, which capture causal and temporal commonsense relations of daily events For the above three datasets, in order to construct different levels of contrastive learning, we performed the same preprocessing of extracting keywords. We utilize the TextRank model Our experiments are implemented in Tensorflow We compare our method against several traditional generation models, pretrained-based generation models, and contrastive learning models. Traditional generation models: (1) CVAE Contrastive learning methods: (8) Groupwise To evaluate the performance of our model against baselines, we adopt the following metrics widely used in existing studies. BLEU We utilize BLEU score Embedding To evaluate our model more comprehensively, we also capture the semantic matching degrees between the bag-of-words (BOW) embeddings of generated text and reference Human Evaluation We also assessed system performance by eliciting human judgments on 100 randomly selected test instances on QQP dataset. Three annotators are asked to rate paraphrasing questions generated by T5-CLAPS, DialoGPT, Seq2Seq-DU, and our model according to Fluency (Flu), Meaningfulness (Mean), and Differential (Diff). The rating score ranges from 1 to 3, with 3 being the best. To study the hierarchical contrastive learning, we visualize the vectors of keyword, input text, positive and negative output text on randomly sampled cases from QQP dataset, as shown in Figure Which is the best site to learn German ? We finally investigate the influence of sampling different keywords. As shown in Table We also compare our model with several baselines in Table In this paper, we propose a hierarchical contrastive learning mechanism, which consists of intra-contrasts within instance-level and keywordlevel and inter-contrast with Mahalanobis contrast. The experimental results yield significant out-performance over baselines when applied in the CVAE framework. In the future, we aim to extend the contrastive learning mechanism to different basic models, and will explore contrastive learning methods based on external knowledge.
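As a minimal sketch of the Mahalanobis contrast described above, the snippet below assumes a diagonal-covariance posterior, so the Mahalanobis distance reduces to a variance-scaled Euclidean distance; it is an illustrative reading rather than the released code.

```python
import torch
import torch.nn.functional as F

def mahalanobis(u, mu, logvar):
    # Distance of keyword vector u to the diagonal Gaussian N(mu, exp(logvar)).
    return torch.sqrt((((u - mu) ** 2) / logvar.exp()).sum(-1) + 1e-8)

def mahalanobis_contrast(mu, logvar, u_out, u_imp, tau=0.5):
    # Pull the ground-truth output keyword toward the posterior distribution
    # and push the imposter keyword away from it.
    d_pos = mahalanobis(u_out, mu, logvar)
    d_neg = mahalanobis(u_imp, mu, logvar)
    logits = torch.stack([-d_pos / tau, -d_neg / tau], dim=-1)
    return F.cross_entropy(logits, torch.zeros(logits.size(0), dtype=torch.long))
```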
1,242
1,532
1,242
Using Neural Machine Translation Methods for Sign Language Translation
We examine methods and techniques, proven to be helpful for the text-to-text translation of spoken languages in the context of gloss-to-text translation systems, where the glosses are the written representation of the signs. We present one of the first works that include experiments on both parallel corpora of the German Sign Language (PHOENIX14T and the Public DGS Corpus). We experiment with two NMT architectures with optimization of their hyperparameters, several tokenization methods and two data augmentation techniques (back-translation and paraphrasing). Through our investigation we achieve a substantial improvement of 5.0 and 2.2 BLEU scores for the models trained on the two corpora respectively. Our RNN models outperform our Transformer models, and the segmentation method we achieve best results with is BPE, whereas back-translation and paraphrasing lead to minor but not significant improvements.
Sign languages (SL), the main medium of exchanging information for the deaf and the hard of hearing, are visual-spatial natural languages with their own linguistic rules. In contrast to the spoken ones, they lack a written form, on one hand, and use face, hands and body to convey meaning, on the other. However, in our society, spoken languages are used by and large, leading to social exclusion in the everyday life of the deaf and hard of hearing. Therefore, recent research is making the most out of the technical advances in the fields of Natural Language Processing (NLP), Deep Neural Networks (DNN), and Machine Translation (MT), with the aim to develop systems that are able to translate between signed and spoken languages in order to fill the gap of communication between the SL speaking communities and the people using vocal language. Most recent approaches tackle the problem by dividing it into two sub-tasks: Sign Language Recognition (SLR), also called video-to-gloss, and Sign Language Translation (SLT), also known as gloss-to-text translation. The latter uses glosses as an intermediate representation, described in Section 3.1 and Section 4.2.1. Isolating gloss-to-text translation serves as a building block of a bigger project, which considers SL as a whole and is done in direct co-operation with members of the SL community. For the rest of this work, we focus on the gloss-to-text sub-task and treat it as a low-resource text-to-text machine translation problem. We explore different known techniques for MT of written languages on the glosses, and report our findings during our experiments with: • two neural architectures (RNN and Transformer) • several tokenization and sub-word segmentation methods (BPE, unigram and custom tokenization of the gloss annotations) • two data augmentation techniques (back-translation and paraphrasing) Preprocessing scripts and data are publicly available.
Sign language translation is a relatively new research field with recent findings made possible thanks to the continuous advances in neural machine translation (NMT). Several experiments with SL gloss-to-text translation have taken place in the previous decade using statistical phrase-based machine translation To the best of our knowledge, currently Glosses are the most commonly used written form for annotating SL, where each sign has a written gloss transcription. However, a limitation of using them is the fact that they do not sufficiently capture all the information, expressed through body posture, movement of the head and mimics, which also occur in parallel. As a result, there is a loss of information on a semantic level In contrast to the classical text-to-text translation task, where the pairs consist of pre-aligned sentences -one in the source language and one in the target language, for our gloss-to-text translation models we work with matching pairs of gloss sentences on the source side, and German sentences on the target side (see Table In our work we investigate two model architectures implementing different types of attention mechanisms -RNN and Transformer. RNN is an encoder-decoder architecture with attention suggested by The Transformer is another encoder-decoder architecture The decoder adds one additional sub-layer, which uses multi-head decoder-encoder attention over the encoder output, helping the decoder to focus on the relevant parts of the input sequence. Byte Pair Encoding (BPE) is a simple data compression technique that has been successfully applied to NMT Unigram sub-word segmentation Back-translation is a semi-supervised method for improving the quality of translation relying on monolingual data Paraphrasing is the task of using an alternative formulation to express the same semantic content For our experiments we utilize the following corpora of the German SL, which due to the different gloss annotations are used only separately for our experiments. Statistics of the two corpora can be seen in Table Introduced by The data was extracted via the ELAN The gloss annotations of the DGS corpus are far more complex and comprehensive than the ones of the PHOENIX14T corpus. The glosses are written in capitalized letters -a common convention used for annotating SL. An essential part of the annotations are the gloss suffixes. For instance, they are used to represent lexical variants or to indicate different meanings of a word, as can be seen in the example with the German word "zu" (e.g. ZUˆ3 "to squeeze, squeezed", ZU7 "closed", ZU9 "towards") Focusing in depth on all of the linguistic rules used to create the different gloss annotations is out of scope for this work. Therefore, here we mention briefly some of the main sign categories. The lexical signs are approximately equivalent to the commonsense notion of the words, and also form the corpus dictionary. The productive signs in combination with other signs illustrate intended meaning, but they do not convey meaning of their own. The pointing signs indicate orientation or movement. There are also fingerspelling signs for annotating when the signers sketch the form of letters in the air. The number type forms a special system for easily representing different kinds of numbers.
The annotation of the sign language videos is structured in parallel channels, the tiers, supporting multi-level and multi-participant annotations (Appendix, Figure The ordering of glosses to a gloss sentence was achieved by considering the starting and the ending time of the corresponding German sentence and of the individual glosses. One particular obstacle we encountered during the formation of the parallel data set were the overlapping timestamps of some glosses done with both hands. Such is the case of the fingerspelling signs. Because signers have a "dominant" and a "non-dominant" hand, the dominant one is usually used for one-handed signs and for fingerspellings We separate our experiments into three main groups. In the first one, described in Section 5.1, we initially train two baseline models for both corpora and consecutively make changes to them with the goal to investigate how different model architectures and known configurations of neural MT systems influence them. Therefore, we use the best performing models from the first group to further continue our experiments in the second one, described in Section 5.2, where we apply three different tokenization schemes -BPE, unigram and custom tokenization, on the gloss and on the German sides of the corpora. Ultimately, we utilize the models, which produce the best translations up to this point, in the third group of experiments in Section 5.3, where we separately look into two data augmentation techniques -back-translation and paraphrasing. All models are trained using MarianNMT Our initial motivation to approach the gloss-to-text translation task as a classical low-resource MT problem were the findings by We continue the first set of experiments using techniques for improving the MT quality in a low-resource setting During the tokenization experiments, using the best performing models up to this point, we investigate if and to what extent existing tokenization methods -BPE, unigram and custom tokenization -proven to be effective for NMT of written natural languages, could be beneficial in the gloss-to-text setting. The tokenization of BPE and unigram was done using SentencePiece On the PHOENIX14T corpus we train RNN systems using the same parameters as the ones from the previous group of experiments. The only difference is the way the input and output sentences are tokenized. We conduct additional experiments where we reduce the vocabulary size of the BPE models and compare the translation scores. The DGS corpus has groups of glosses that are more complicated and rich in annotations, which we describe in Section 4. A comparison can be seen in Figure Stripping the gloss parameters In a different, more naive, experiment on the DGS corpus we decide to strip the gloss parameters -such as signs or numbers, as shown in Figure Custom tokenization for the glosses For our custom tokenization experiment on the DGS corpus, we choose to add the token "@@" to separate prefix, suffix and compound glosses without losing this information, in contrast to the above case of leaving only the stem. The chosen custom token is not a part of the gloss parameters. For the last group of experiments we make the assumption that, according to We start with the PHOENIX14T corpus. As a first step, we train a model in the opposite direction, German sentences on the source side and gloss sentences on the target side.
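A rough sketch of the two gloss preprocessing variants described above, stripping the parameters entirely versus separating stem and parameters with the custom "@@" token so that the information is preserved, is given below; the pattern and example glosses are simplifications of the real DGS annotation conventions.

```python
import re
# Split a gloss into an alphabetic stem and its trailing parameter string.
GLOSS = re.compile(r"^([A-ZÄÖÜ]+)(.*)$")

def strip_parameters(gloss):
    stem, _params = GLOSS.match(gloss).groups()
    return stem                                  # naive variant: drop the parameters

def with_custom_token(gloss):
    stem, params = GLOSS.match(gloss).groups()
    return f"{stem} @@{params}" if params else stem  # keep parameters behind "@@"

for g in ["ZU3", "ZU7", "HAUS1A", "WETTER"]:
    print(g, "->", strip_parameters(g), "|", with_custom_token(g))
```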
Based on the suggestions on back-translation in previous work In-domain back-translation A major challenge for the purpose of using back-translation is to find a big monolingual corpus of the target languages, given the very specific domain of the PHOENIX14T corpus, because it contains strictly weather-related sentences. Our first idea is to try and find a weather-related corpus, but unfortunately, popular crawled monolingual corpora do not contain such specific sentences. We collect data manually by selecting sentences from online German weather-related articles or German weather websites. We pay attention to not only choose recent articles, but also to search sentences from some available archive sources. Additionally, we manually process the sentences, which includes splitting them into shorter ones, removing some words we know are out-of-vocabulary for our models, and rewriting complex verb forms. Needless to say, this process is slow and not scalable. Hence, we stop at 1,202 sentences and add their back-translated variants to our training data. In the first of the two following experiments we observe the effect of adding filtered out-of-domain back-translated sentences to our training data, and in the second one we combine in-domain and out-of-domain sets. We use crawled data from the German part of the News Crawl corpus Considering our low scores on the DGS corpus and the conclusions of For this purpose we filter the first 10,000 sentences from the news-crawl without taking into account their domains, because the DGS Corpus also does not have a specific domain. For the last experiment we add 3,612 translated sentences from our original training set, using DeepL Translate In this section, we report the results from the three groups of experiments we have conducted. We evaluate all our models using SacreBLEU The results from our first group of experiments, described in Section 5.1, where we compare two types of model architecture, combined with adjustment of hyperparameters for improving the translation quality in a low-resource setting, are shown in Table After conducting the first tokenization experiments, described in Section 5.2, we observe the results, shown in Table The BLEU score we achieve on the DGS corpus after stripping the parameters from the glosses is only 2.8 which, we assume, is due to the fact that each gloss annotation consists of important parameters, both contributing to the meaning and communicating nuances. Removing this information makes it impossible for our model to learn meaningful and correct representations, as the stems of many glosses may be the same, but with added parameters the annotations may have very different meanings. Custom tokenization By adding a custom token to split the parameters from the stem of the glosses we achieve 3.3 BLEU score on the test set, which is the second best score we manage to obtain. Unfortunately, the translation performance remains low. Before conducting the back-translation experiments based on previous work Results from the comparison of models with synthetic sentences, using a tag and not, can also be seen in Table Using back-translation on the DGS corpus we achieve only a small improvement of +0.1 on the test set (results are also shown in Table In this work we investigated the effect of several methods used in NMT on the gloss-to-text translation task for a sign language. We present one of the first works that does extensive experiments on both existing corpora for the German Sign Language -PHOENIX14T and the DGS Corpus.
Further, we ran three successive groups of experiments: Neural MT architectures, contrasting RNN and Transformer, with extensive search of hyperparameters and techniques, proven to be effective in a low-resource setup. Contrary to previous research, we found that the RNN performs better than the Transformer. Tokenization schemes, where our findings were in favor of the BPE tokenization for both corpora. This improved our PHOENIX14T model by 0.3 BLEU on the test set (reaching 22.5 BLEU), and our DGS model by 1 BLEU on the test set (reaching 3.7 BLEU). Data augmentation techniques, i.e. back-translation and paraphrasing via bilingual pivoting, with the intention to create variance in the data. Back-translation gave small improvements: +0.2 on the PHOENIX14T corpus and +0.1 on the DGS corpus. Further investigation on the reasons for the limited contribution of the above augmentation techniques may be directed to the extremely low-resource scenario, the amount and domain of the data, or the particular nature of the sign language glosses. All above methods allowed an improvement of 5 BLEU points on the test set (22.7 BLEU) for the PHOENIX14T model, and 2.2 BLEU points on the test set (3.8 BLEU) for the DGS one. In conclusion, in line with previous research
915
1,916
915
Can Pretrained Language Models (Yet) Reason Deductively?
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, showing promising performance in many knowledge-intensive tasks. Their good performance has led the community to believe that the models do possess a modicum of reasoning competence rather than merely memorising the knowledge. In this paper, we conduct a comprehensive evaluation of the learnable deductive (also known as explicit) reasoning capability of PLMs. Through a series of controlled experiments, we posit two main findings. (i) PLMs inadequately generalise learned logic rules and perform inconsistently against simple adversarial surface form edits. (ii) While the deductive reasoning fine-tuning of PLMs does improve their performance on reasoning over unseen knowledge facts, it results in catastrophically forgetting the previously learnt knowledge. Our main results suggest that PLMs cannot yet perform reliable deductive reasoning, demonstrating the importance of controlled examinations and probing of PLMs' deductive reasoning abilities; we reach beyond (misleading) task performance, revealing that PLMs are still far from robust reasoning capabilities, even for simple deductive tasks.
Pretrained Language Models (PLMs) such as BERT Automatic reasoning, a systematic process of deriving previously unknown conclusions from given formal representations of knowledge In particular, deductive reasoning (also often referred to as explicit reasoning in the literature) is one of the most promising directions Despite promising applications of PLMs, some recent studies have pointed out that they could only perform a shallow level of reasoning on textual data In particular, we test various reasoning training approaches on two knowledge reasoning datasets. Our experimental results indicate that such deductive reasoning training of the PLMs (e.g., BERT and RoBERTa) yields strong results on the standard benchmarks, but it inadequately generalises learned logic rules to unseen cases. That is, the models perform inconsistently against simple surface form perturbations (e.g., simple synonym substitution, paraphrasing or negation insertion), advocating a careful rethinking of the details behind the seemingly flawless empirical performance of deductive reasoning using the PLMs. We hope our work will inspire further research on probing and improving the deductive reasoning capabilities of the PLMs. Our code and data are available online at
Knowledge Probing, Infusing, and Editing with PLMs. PLMs appear to memorise (world) knowledge facts during pretraining, and such captured knowledge is useful for knowledge-intensive tasks. Knowledge Reasoning with PLMs. In recent years, PLMs have also achieved impressive progress in knowledge reasoning. Although some research has demonstrated that PLMs can learn to effectively perform inference involving taxonomic and world knowledge, chaining, and counting, it remains unclear how robust such abilities are. What is Deductive Reasoning? Psychologists define reasoning as a process of thought that yields a conclusion from precepts, thoughts, or assertions. We investigate deductive reasoning in the context of NLP and neural PLMs. In particular, the goal of this deductive reasoning task is to train a PLM (e.g. BERT) over some reasoning examples (each with a set of premises and a conclusion) to become a potential reasoner (e.g. R-BERT, as illustrated in the corresponding figure with the example premises "A bird can fly. A raven is a bird." and the conclusion "A raven can fly.") through a reasoning training process. In this paper, we only focus on encoder-based PLMs (e.g. BERT and RoBERTa) as they can be evaluated under more controllable conditions and scrutinised via automatic evaluation. In particular, we investigate two task formulations of deductive reasoning training: 1) classification-based and 2) prompt-based reasoning, as follows. The classification-based approach formulates the deductive reasoning task as a sequence classification task. Let D = {D^(1), D^(2), ..., D^(n)} be a reasoning dataset, where n is the number of examples. Each example D^(i) ∈ D contains a set of premises P^(i) = {p^(i)_j}_j, a hypothesis h^(i), and a binary label l^(i) ∈ {0, 1}. A classification-based reasoner takes the input of P^(i) and h^(i), then outputs a binary label l^(i) indicating the faithfulness of h^(i), given that P^(i) is hypothetically factual. The goal of classification-based reasoning training is to build a statistical model parameterised by θ to characterise P_θ(l^(i) | h^(i), P^(i)). PLMs built on the Transformer encoder architecture, such as BERT, can be fine-tuned directly for this task. To do so, the contextualised representation of the '[CLS]' token is projected down to two logits and passed through a softmax layer to form a Bernoulli distribution indicating whether a hypothesis is true or false. Deductive reasoning can also be approached as a cloze-completion task by formulating a valid conclusion as a cloze test. Specifically, given a reasoning example, i.e., D^(i) with its premises P^(i), and a cloze prompt c^(i) (e.g. "A [MASK] can fly"), instead of predicting a binary label, this cloze-completion task is to predict the masked token a^(i) (e.g. raven) for the cloze question c^(i). BERT-based models have been widely used in prompt-based reasoning tasks, and recent PLMs have shown surprisingly near-perfect performance in deductive reasoning. Two datasets are used to examine the PLM-based reasoners, namely the Leap of Thought (LoT) dataset and the WD dataset described below. LoT was originally proposed for conducting classification-based reasoning experiments for deductive reasoning. For the prompt-based reasoning task, we can reformulate the LoT dataset to fit our cloze-completion task. Instead of having a set of premises P, a hypothesis h, and a binary label l, we rewrite the hypothesis in LoT into a cloze c and the answer a (e.g. "A raven can fly." → "A [MASK] can fly."). Note that we only generate those cloze questions on the positive examples. Consequently, the results across these two tasks are not directly comparable.
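To make the classification-based formulation concrete, the following is a minimal sketch of scoring a hypothesis against its premises with an encoder PLM; it is an illustration with an off-the-shelf model name and toy example, not the authors' released code.

```python
# Minimal sketch of the classification-based reasoner described above
# (an illustration, not the authors' implementation).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Two logits -> softmax gives a Bernoulli distribution over {false, true}.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

premises = "A bird can fly. A raven is a bird."   # P^(i), concatenated
hypothesis = "A raven can fly."                   # h^(i)

# Encode premises and hypothesis as a sentence pair; the [CLS] representation
# is pooled internally and projected down to the two logits.
inputs = tokenizer(premises, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits               # shape: (1, 2)
prob_true = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(l=1 | h, P) = {prob_true:.3f}")         # untrained head: close to chance

# Training would minimise cross-entropy between these logits and the gold
# labels l^(i) over the reasoning dataset, as in standard sequence classification.
```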
The WD dataset is an auxiliary reasoning dataset which we generated and extracted from Wiki-data5m Previous work demonstrates that PLMs can achieve near-perfect empirically results in reasoning tasks. For example, RoBERTa-based models record a near-perfect accuracy of 99.7% in the deductive reasoning task on LoT As both LoT and WD datasets are prompted from knowledge graphs, the lexical and syntactical variance of the dataset is minimal, with imaginable artefacts. To examine if the PLM-based reasoner could consistently perform reasoning against linguistic diversity and variability (in terms of both the token-level and the syntactic-level diversity), we employ two types of surface form perturbations to the data items from the original datasets: • Synonym Substitution: In order to investigate to what extent the PLM-based reasoners would be sensitive to the token-level semantic diversity in terms of deriving their conclusions, we employ synonym substitution • Paraphrasing: To further investigate the PLM-based reasoners' robustness on sentencelevel semantic variability, we paraphrase the premises P with two paraphrasing systems: (i) PEGASUS, an end-to-end model fine-tuned for paraphrasing Anchors of Retained Knowledge. Prior work has shown that PLMs are prone to forgetting previously learnt knowledge when fine-tuning with new knowledge data We create such a set of anchors for both LoT and WD, and investigate the behaviour of the reasoning models over these anchors based on the promptbased reasoning task. In particular, these anchors should be real-world textual statements that contain the target word (to meet criterion (i) above), but their newly composed sentences (by the reasoning replacement) are unlikely true statements (to meet criterion (ii) above). To this end, we use the BM25 algorithm (Sparck Following previous work In the following, we report our findings and numerical results based on the BERT-based reasoners (in particular bert-base-uncased), but we note that other PLMs (such as RoBERTa) of various sizes observe the same performance trends and result in the same findings and conclusions. Appendix B provides results for other PLMs. Table We evaluate the impact of reasoning training on the PLMs and investigate their robustness against three well-known issues of PLMs: utilising artefacts from data, incapability of modelling negation, and catastrophic forgetting. We further conduct qualitative analysis to understand the inference errors introduced by deductive reasoning training. Finding 1 All the deductive reasoning training approaches significantly improve PLMs' reasoning capabilities, achieving near-perfect deductive reasoning performance on both the reasoning test sets. Table Finding 2 Surface form perturbations drastically decrease PLMs' reasoning performance. A natural follow-up question to ask is to what extent the aforementioned near-perfect numbers really reflect the model's reasoning abilities. We thus perform surface form perturbations to add lexical and syntactic variance to the test datasets and probe the model against such variations. Table Finding 3 All reasoners cannot distinguish between negated and non-negated examples. Figure A quick error analysis, provided in Table Finding 4 Previously learnt knowledge is not fully retained after reasoning training, and the trained reasoners (catastrophically) forget it. Figure Furthermore, Table Several strategies might help mitigate catastrophic forgetting. 
A promising direction is encapsulating lightweight adapter modules Despite the thorough experiments on standard and popular PLMs of various sizes, this study explores only encoder-based models. Some generationbased models under other Transformer architec-tures, such as encoder-decoder (T5) or decoderonly (GPT-3), were also deployed in the reasoning tasks In addition, we note that better evaluation resources that could address paraphrases and word senses, especially for mask-filling tasks, are still lacking. This limitation is particularly significant in our setting. For example, in addition to the singletoken answer in the evaluation datasets we used, there are some other feasible answers (e.g. synonyms) for the same query, which should also be considered a correct prediction. However, such answers are ignored by the current standard evaluation protocols. As a result, there is a certain level of unavoidable noise in the evaluation process. Finally, introducing a reasoning dataset is highly challenging and appreciated by the community. Leap-of-thought is to our knowledge the only existing dataset that is suitable for our deductive reasoning evaluation. To solidify our conclusions, we further constructed an auxiliary dataset (WD) following a similar procedure to LoT. Although our data construction method is commonly used to extract reasoning examples, such an automatic procedure, unfortunately, inevitably reflects the quality and errors (e.g. nonsensical statements) from our source (WikiData). To reduce such noisy examples, we have conducted multiple rounds of filtering (see Table 7 lists our model hyperparameters. Among these models, MLM-BERT and Cloze-BERT were implemented using the HuggingFace transformers package Table Table We construct the WD dataset following the pipeline shown in Figure We choose the Wikidata5m dataset Prompting. We manually select a set of relations based on their frequency and design their corresponding prompts shown in Table Filtering. We filter those constructed inference instances with the following properties: (i) We only choose examples with answers being a single masked token, and these answers should be included in the BERT vocabulary. (ii) For all in- ference instances, the maximum number of occurrences of a single answer is 50 to balance the dataset and avoid excessive repetition. Final WD Dataset. The final WD dataset contains 4,851 instances, which are randomly split into 4,124/413/314 instances for training/validation/testing while keeping that the answers of the testing set should not appear in the training/validation sets. This is to ensure that trained reasoners need to draw conclusions via conducting deductive reasoning rather than via memorisation.
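To make the filtering rules above concrete (single-token answers contained in the BERT vocabulary, and at most 50 occurrences per answer), here is a minimal sketch; the candidate instances are hypothetical placeholders and this is not the authors' construction pipeline.

```python
# Minimal sketch of the two WD filtering rules described above
# (hypothetical candidate instances; not the authors' pipeline).
from collections import Counter
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Each candidate: (premises, cloze_question, answer) -- hypothetical examples.
candidates = [
    ("Chicago is a city in Illinois.", "Chicago is a city in [MASK].", "Illinois"),
    ("The Nile flows through Egypt.", "The Nile flows through [MASK].", "Egypt"),
]

answer_counts = Counter()
filtered = []
for premises, cloze, answer in candidates:
    tokens = tokenizer.tokenize(answer)
    # (i) keep only answers that map to a single, known token in the vocabulary
    if len(tokens) != 1 or tokens[0] == tokenizer.unk_token:
        continue
    # (ii) cap each answer at 50 occurrences to balance the dataset
    if answer_counts[answer] >= 50:
        continue
    answer_counts[answer] += 1
    filtered.append((premises, cloze, answer))

print(len(filtered), "instances kept")
# A train/validation/test split would additionally ensure that test-set answers
# do not occur among the training or validation answers, as described above.
```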
1,208
1,311
1,208
A Supertag-Context Model for Weakly-Supervised CCG Parser Learning
Combinatory Categorial Grammar (CCG) is a lexicalized grammar formalism in which words are associated with categories that specify the syntactic configurations in which they may occur. We present a novel parsing model with the capacity to capture the associative adjacent-category relationships intrinsic to CCG by parameterizing the relationships between each constituent label and the preterminal categories directly to its left and right, biasing the model toward constituent categories that can combine with their contexts. This builds on the intuitions of Klein and Manning's (2002) "constituentcontext" model, which demonstrated the value of modeling context, but has the advantage of being able to exploit the properties of CCG. Our experiments show that our model outperforms a baseline in which this context information is not captured.
Learning parsers from incomplete or indirect supervision is an important component of moving NLP research toward new domains and languages. But with less information, it becomes necessary to devise ways of making better use of the information that is available. In general, this means constructing inductive biases that take advantage of unannotated data to train probabilistic models. One important example is the constituentcontext model (CCM) of Baldridge observed is that, cross-linguistically, grammars prefer simpler syntactic structures when possible, and that due to the natural correspondence of categories and syntactic structure, biasing toward simpler categories encourages simpler structures. In previous work, we were able to incorporate this preference into a Bayesian parsing model, biasing PCFG productions toward sim-pler categories by encoding a notion of category simplicity into a prior In this paper, we present a novel parsing model that is designed specifically for the capacity to capture both of these universal, intrinsic properties of CCG. We do so by extending our previous, PCFG-based parsing model to include parameters that govern the relationship between constituent categories and the preterminal categories (also known as supertags) to the left and right. The advantage of modeling context within a CCG framework is that while CCM must learn which contexts are likely purely from the data, the CCG categories give us obvious a priori information about whether a context is likely for a given constituent based on whether the categories are combinable. Biasing our model towards both simple categories and connecting contexts encourages learning structures with simpler syntax and that have a better global "fit". The Bayesian framework is well-matched to our problem since our inductive biases -those derived from universal grammar principles, weak supervision, and estimations based on unannotated data -can be encoded as priors, and we can use Markov chain Monte Carlo (MCMC) inference procedures to automatically blend these biases with unannotated text that reflects the way language is actually used "in the wild". Thus, we learn context information based on statistics in the data like CCM, but have the advantage of additional, a priori biases. It is important to note that the Bayesian setup allows us to use these universal biases as soft constraints: they guide the learner toward more appropriate grammars, but may be overridden when there is compelling contradictory evidence in the data. Methodologically, this work serves as an example of how linguistic-theoretical commitments can be used to benefit data-driven methods, not only through the construction of a model family from a grammar, as done in our previous work, but also when exploiting statistical associations about which the theory is silent. While there has been much work in computational modeling of the interaction between universal grammar and observ-able data in the context of studying child language acquisition (e.g., In this paper, we seek to learn from only raw data and an incomplete dictionary mapping some words to sets of potential supertags. In order to estimate the parameters of our model, we develop a blocked sampler based on that of
In the CCG formalism, every constituent, including those at the lexical level, is associated with a structured CCG category that defines that constituent's relationships to the other constituents in the sentence. Categories are defined by a recursive structure, where a category is either atomic (possibly with features), or a function from one category to another, as indicated by a slash operator (e.g., np, s\np, or (s\np)/np). Categories of adjacent constituents can be combined using one of a set of combination rules to form categories of higher-level constituents, as illustrated in the accompanying figure. In this section, we present our novel supertag-context model (SCM), which augments a standard PCFG with parameters governing the supertags to the left and right of each constituent. The CCG formalism is said to be naturally associative since a constituent label is often able to combine on either the left or the right. As a motivating example, consider the sentence "The lazy dog sleeps", as shown in Figure 2: the higher-level category n subsumes the categories of its constituents, so n should have a strong prior on combinability with its adjacent supertags np/n and s\np. Assuming T is the full set of known categories, the generative process for our model first samples parameters and then samples trees, rejecting and resampling until the tree y is valid, where the distribution ⟨ℓ, y, r⟩ | t ∼ SCM(t) over a subtree y with left context supertag ℓ and right context supertag r, given root category t, is defined as follows. The process begins by sampling the parameters from Dirichlet distributions: a distribution θ^ROOT over root categories, a conditional distribution θ^BIN_t over binary branching productions given category t, θ^UN_t for unary rewrite productions, θ^TERM_t for terminal (word) productions, and θ^LCTX_t and θ^RCTX_t for left and right contexts. We also sample parameters λ_t for the probability of t producing a binary branch, unary rewrite, or terminal word. Next we sample a sentence. This begins by sampling first a root category s and then recursively sampling subtrees. For each subtree rooted by a category t, we generate a left context supertag ℓ and a right context supertag r. Then, we sample a production type z corresponding to either a (B) binary, (U) unary, or (T) terminal production. Depending on z, we then sample either a binary production ⟨u, v⟩ and recurse, a unary production u and recurse, or a terminal word w and end that branch. A tree is complete when all branches end in terminal words. The accompanying figure illustrates the generative process starting with non-terminal A_ij, where t_x is the supertag for w_x, the word at position x, and "A → B C" is a valid production in the grammar: non-terminal A_ij generates non-terminals B_ik and C_kj as well as its left context t_{i-1} and right context t_j, and likewise for B_ik and C_kj; a triangle under a non-terminal indicates the complete subtree rooted at that node. Like CCM, this model is deficient since the same supertags are generated multiple times, and parses with conflicting supertags are not valid. Since we are not generating from the model, this does not introduce difficulties. One additional complication that must be addressed is that left-frontier non-terminal categories (those whose subtree span includes the first word of the sentence) do not have a left-side supertag to use as context. For these cases, we use the special sentence-start symbol S to serve as context. Similarly, we use the end symbol E for the right-side context of the right-frontier. We next discuss how the prior distributions are constructed to encode desirable biases, using universal CCG properties.
For the root, binary, and unary parameters, we want to choose prior means that encode our bias toward cross-linguistically-plausible categories. To formalize the notion of what it means for a category to be more "plausible", we extend the category generator of our previous work, which we will call P CAT . We can define P CAT using a probabilistic grammar For each sentence s, there will be one S and one E , so we set p se = 1/(25 + 2), since the average sentence length in the corpora is roughly 25. To discourage the model from deleting tokens (only applies during testing), we set p del = 10 -100 . For P C , the distribution over standard categories, we use a recursive definition based on the structure of a CCG category. If p = 1 -p, then: The category grammar captures important aspects of what makes a category more or less likely: (1) simplicity is preferred, with a higher p term meaning a stronger emphasis on simplicity; We can use P CAT to define priors on our production parameters that bias our model toward rules that result in a priori more likely categories: 3 θ ROOT-0 (t) = P CAT (t) For simplicity, we assume the production-type mixture prior to be uniform: λ 0 = 1 3 , 1 3 , 1 3 . We employ the same procedure as our previous work for setting the terminal production prior distributions θ TERM-0 t (w) by estimating word-givencategory relationships from the weak supervision: the tag dictionary and raw corpus In order to encourage our model to choose trees in which the constituent labels "fit" into their supertag contexts, we want to bias our context parameters toward context categories that are combinable with the constituent The right-side context of a non-terminal category -the probability of generating a category to the right of the current constituent's category -corresponds directly to the category transitions used for the HMM supertagger of To encode a notion of combinability, we follow atoms have features associated, then the atoms are allowed to unify if the features match, or if at least one of them does not have a feature. In defining κ, it is also important to ignore possible arguments on the wrong side of the combination since they can be consumed without affecting the connection between the two. To achieve this for κ(t, u), it is assumed that it is possible to consume all preceding arguments of t and all following arguments of u. So κ(np, (s\np)/np) = 1. This helps to ensure the associativity discussed earlier. For "combining" with the start or end of a sentence, we define κ( S , u)=1 when u seeks no left-side arguments (since there are no tags to the left with which to combine) and κ(t, E )=1 when t seeks no right-side arguments. So κ( S , np/n)=1, but κ( S , s\np)=0. Finally, due to the frequent use of the unary rule that allows n to be rewritten as np, the atom np is allowed to unify with n if n is the argument. So κ(n, s\np) = 1, but κ(np/n, np) = 0. The prior mean of producing a right-context supertag r from a constituent category t, P right (r | t), is defined so that combinable pairs are given higher probability than non-combinable pairs. We further experimented with a prior that biases toward both combinability and category likelihood, replacing the uniform treatment of categories with our prior over categories, yielding P right CAT (r | t). 
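To make this combinability bias concrete before the formal definitions that follow, here is a minimal sketch of a κ-style check for simplified CCG categories (plain slash categories without features) and of turning it into a normalised right-context prior; it is an illustration under these stated simplifications, not the authors' implementation.

```python
# Simplified combinability check kappa(t, u) for CCG categories without
# features, plus one way to normalise it into a right-context prior.
# Illustrative only; the paper's definition is richer than this sketch.

def parse_cat(s):
    """Parse 'np', r's\np', r'(s\np)/np' into atoms or (result, slash, argument) tuples."""
    s = s.strip()
    if s.startswith('('):                 # strip outer parentheses if they wrap everything
        depth = 0
        for i, ch in enumerate(s):
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth == 0:
                    if i == len(s) - 1:
                        return parse_cat(s[1:-1])
                    break
    depth, split = 0, None
    for i, ch in enumerate(s):
        if ch == '(': depth += 1
        elif ch == ')': depth -= 1
        elif ch in '/\\' and depth == 0: split = i   # rightmost top-level slash
    if split is None:
        return s                           # atomic category
    return (parse_cat(s[:split]), s[split], parse_cat(s[split + 1:]))

def seeks_left(c):
    while isinstance(c, tuple):
        if c[1] == '\\': return True
        c = c[0]
    return False

def seeks_right(c):
    while isinstance(c, tuple):
        if c[1] == '/': return True
        c = c[0]
    return False

def strip_following(c):                    # drop arguments sought to the right
    while isinstance(c, tuple) and c[1] == '/': c = c[0]
    return c

def strip_preceding(c):                    # drop arguments sought to the left
    while isinstance(c, tuple) and c[1] == '\\': c = c[0]
    return c

def arg_matches(arg, filler):
    # identical categories unify; an np argument may also be filled by n
    return arg == filler or (arg == 'np' and filler == 'n')

def kappa(t, u):
    """1 if t (left) can combine with u (right) in the simplified sense above."""
    if t == '<S>': return 0 if seeks_left(u) else 1
    if u == '<E>': return 0 if seeks_right(t) else 1
    t2, u2 = strip_preceding(t), strip_following(u)
    if isinstance(t2, tuple) and t2[1] == '/' and arg_matches(t2[2], u2): return 1
    if isinstance(u2, tuple) and u2[1] == '\\' and arg_matches(u2[2], t2): return 1
    return 0

cats = {s: parse_cat(s) for s in ['np', 'n', 'np/n', r's\np', r'(s\np)/np']}
assert kappa(cats['np'], cats[r'(s\np)/np']) == 1 and kappa(cats['n'], cats[r's\np']) == 1
assert kappa('<S>', cats['np/n']) == 1 and kappa('<S>', cats[r's\np']) == 0
assert kappa(cats['np/n'], cats['np']) == 0

# One simple instantiation of the right-context prior: combinable supertags
# get a larger (here 10x) share of the probability mass, then normalise.
t = cats['n']
weights = {name: (10.0 if kappa(t, c) else 1.0) for name, c in cats.items()}
z = sum(weights.values())
p_right = {name: w / z for name, w in weights.items()}
print(p_right)
```

The formal definitions of P^right and P^left over the full category set T, which this sketch only approximates, are given next.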
If T is the full set of known CCG categories: Distributions P left ( | t) and P left CAT ( | t) are defined in the same way, but with the combinability direction flipped: κ( , t), since the left context supertag precedes the constituent category. We wish to infer the distribution over CCG parses, given the model we just described and a corpus of sentences. Since there is no way to analytically compute these modes, we resort to Gibbs sampling to find an approximate solution. Our strategy is based on the approach presented by Our inference procedure takes as input the distribution prior means, along with the raw corpus and tag dictionary. During sampling, we restrict the tag choices for a word w to categories allowed by the tag dictionary. Since real-world learning scenarios will always lack complete knowledge of the lexicon, we, too, want to allow for unknown words; for these, we assume the word may take any known supertag. We refer to the sequence of word tokens as w and a non-terminal category covering the span i through j -1 as y ij . While it is technically possible to sample directly from our context-sensitive model, the high number of potential supertags available for each context means that computing the inside chart for this model is intractable for most sentences. In order to overcome this limitation, we employ an accept/reject Metropolis-Hastings (MH) step. The basic idea is that we sample trees according to a simpler proposal distribution Q that approximates the full distribution and for which direct sampling is tractable, and then choose to accept or reject those trees based on the true distribution P . For our model, there is a straightforward and intuitive choice for the proposal distribution: the PCFG model without our context parameters: (θ ROOT , θ BIN , θ UN , θ TERM , λ), which is known to have an efficient sampling method. Our acceptance step is therefore based on the remaining parameters: the context (θ LCTX , θ RCTX ). To sample from our proposal distribution, we use a blocked Gibbs sampler based on the one proposed by We then pass "downward" through the chart, sampling productions until we reach a terminal word on all branches. ∀ y ik , y kj when j > i + 1, where x is either a split point k and pair of categories y ik , y kj resulting from a binary rewrite rule, a single category y ij resulting from a unary rule, or a word w resulting from a terminal rule. The MH procedure requires an acceptance distribution A that is used to accept or reject a tree sampled from the proposal Q. The probability of accepting new tree y given the previous tree y is: Since Q is defined as a subset of P 's parameters, it is the case that: After substituting this for each P in A, all of the Q factors cancel, yielding the acceptance distribution defined purely in terms of context parameters: For completeness, we note that the probability of a tree y given only the context parameters is: Before we begin sampling, we initialize each distribution to its prior mean (θ ROOT =θ ROOT-0 , θ BIN t =θ BIN-0 , etc). Since MH requires an initial set of trees to begin sampling, we parse the raw corpus with probabilistic CKY using these initial parameters (excluding the context parameters) to guess an initial tree for each raw sentence. The sampler alternates sampling parse trees for the entire corpus of sentences using the above procedure with resampling the model parameters. Resampling the parameters requires empirical counts of each production. 
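As a concrete illustration of this accept/reject step, here is a minimal sketch in which the acceptance ratio involves only the context parameters, as derived above; the tree interface and parameter tables are assumed for illustration and are not the authors' code.

```python
# Minimal sketch of the Metropolis-Hastings accept/reject step described
# above, where the acceptance ratio reduces to context parameters only.
# theta_lctx[cat][tag] and theta_rctx[cat][tag] are assumed lookup tables;
# trees are assumed to expose their (category, left_supertag, right_supertag) triples.
import math, random

def log_context_prob(tree, theta_lctx, theta_rctx):
    """log P(tree | context parameters): product over all constituents of the
    probabilities of their left and right context supertags."""
    logp = 0.0
    for cat, left_tag, right_tag in tree.context_triples():   # assumed interface
        logp += math.log(theta_lctx[cat][left_tag])
        logp += math.log(theta_rctx[cat][right_tag])
    return logp

def mh_accept(new_tree, old_tree, theta_lctx, theta_rctx):
    """Accept new_tree with probability min(1, P_ctx(new) / P_ctx(old));
    the PCFG proposal terms cancel because Q uses a subset of P's parameters."""
    log_ratio = (log_context_prob(new_tree, theta_lctx, theta_rctx)
                 - log_context_prob(old_tree, theta_lctx, theta_rctx))
    return math.log(random.random()) < min(0.0, log_ratio)

# In the sampler loop: keep new_tree if mh_accept(...) returns True,
# otherwise retain old_tree (whose counts are then duplicated).
```

The empirical production counts used when resampling the parameters, described next, are gathered from whichever tree is retained at each step.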
These counts are taken from the trees resulting from the previous round of sampling: new trees that have been "accepted" by the MH step, as well as existing trees for sentences in which the newly-sampled tree was rejected. It is important to note that this method of resampling allows the draws to incorporate both the data, in the form of counts, and the prior mean, which includes all of our carefully-constructed biases derived from both the intrinsic, universal CCG properties as well as the information we induced from the raw corpus and tag dictionary. After all sampling iterations have completed, the final model is estimated by pooling the trees resulting from each sampling iteration, including trees accepted by the MH steps as well as the duplicated trees retained due to rejections. We use this pool of trees to compute model parameters using the same procedure as we used directly above to sample parameters, except that instead of drawing a Dirichlet sample based on the vector of counts, we simply normalize those counts. However, since we require a final model that can parse sentences efficiently, we drop the context parameters, making the model a standard PCFG, which allows us to use the probabilistic CKY algorithm. In our evaluation we compared our supertagcontext approach to (our reimplementation of) the best-performing model of our previous work Each corpus was divided into four distinct data sets: a set from which we extract the tag dictionaries, a set of raw (unannotated) sentences, a development set, and a test set. We use the same splits as The English development set was used to tune hyperparameters using grid search, and the same hyperparameters were then used for all three languages. For the category grammar, we used p punc =0.1, p term =0.7, p mod =0.2, p fwd =0.5. For the priors, we use α ROOT =1, α BIN =100, α UN =100, α TERM =10 4 , α λ =3, α LCTX =α RCTX =10 3 . CCG parsers are typically evaluated on the dependencies they produce instead of their CCG derivations directly since there can be many different CCG parse trees that all represent the same dependency relationships (spurious ambiguity), and CCG-to-dependency conversion can collapse those differences. To convert a CCG tree into a dependency tree, we follow When evaluating on test set sentences, if the model is unable to find a parse given the constraints of the tag dictionary, then we would have to take a score of zero for that sentence: every dependency would be "wrong". Thus, it is important that we make a best effort to find a parse. To accomplish this, we implemented a parsing backoff strategy. The parser first tries to find a valid parse that has either s dcl or np at its root. If that fails, then it searches for a parse with any root. If no parse is found yet, then the parser attempts to strategically allow tokens to subsume a neighbor by making it a dependent (first with a restricted root set, then without). This is similar to the "deletion" strategy employed by For each language and level of supervision, we executed four experiments. The no-context baseline used (a reimplementation of) the best model from our previous work The results of our experiments are given in Table 1. We find that the incorporation of supertagcontext parameters into a CCG model improves performance in every scenario we tested; we see gains of 2-5% across the board. 
Adding context parameters never hurts, and in most cases, using priors based on intrinsic, cross-lingual aspects of the CCG formalism to bias those parameters toward connectivity provides further gains. In particular, biasing the model toward trees in which constituent labels are combinable with their adjacent supertags frequently helps the model. However, for English, we found that additionally biasing context priors toward simpler categories using P left CAT /P right CAT degraded performance. This is likely due to the fact that the priors on production parameters (θ BIN , θ UN ) are already biasing the model toward likely categories, and that having the context parameters do the same ends up over-emphasizing the need for simple categories, preventing the model from choosing more complex categories when they are needed. On the other hand, this bias helps in Chinese and Italian. 6 Related Work Other researchers have shown positive results for grammar induction by introducing relatively small amounts of linguistic knowledge. Because of the structured nature of CCG categories and the logical framework in which they must assemble to form valid parse trees, the CCG formalism offers multiple opportunities to bias model learning based on universal, intrinsic properties of the grammar. In this paper we presented a novel parsing model with the capacity to capture the associative adjacent-category relationships intrinsic to CCG by parameterizing supertag contexts, the supertags appearing on either side of each constituent. In our Bayesian formulation, we place priors on those context parameters to bias the model toward trees in which constituent labels are combinable with their contexts, thus preferring trees that "fit" together better. Our experiments demonstrate that, across languages, this additional context helps in weak-supervision scenarios.
845
3,263
845
Information Presentation in Spoken Dialogue Systems
To tackle the problem of presenting a large number of options in spoken dialogue systems, we identify compelling options based on a model of user preferences, and present tradeoffs between alternative options explicitly. Multiple attractive options are structured such that the user can gradually refine her request to find the optimal tradeoff. We show that our approach presents complex tradeoffs understandably, increases overall user satisfaction, and significantly improves the user's overview of the available options. Moreover, our results suggest that presenting users with a brief summary of the irrelevant options increases users' confidence in having heard about all relevant options.
The goal of spoken dialogue systems (SDS) is to offer efficient and natural access to applications and services, such as email and calendars, travel and entertainment booking, and product recommendation. In evaluating nine SDS in the DARPA Communicator domain (flight, hotel, and rental car hire), A return flight is with KLM departing at 10:30 am and arriving at 11:28 am with 1 stopover. Ticket price is £165. Please say "next option", or, "flight details", or, "I'll take it". U: Next option S: For option 2, I have an outbound flight with KLM departing at 3:30 pm and arriving at 5:19 pm nonstop. A return flight is with KLM departing at 8:25 pm and arriving at 12:28 pm with 1 stopover. Ticket price is £165 please say "next option", or, "flight details", or, "I'll take it." U: Next option S: . . . Clearly, alternative strategies to sequential presentation of information in SDS are needed. Recently, two approaches have been proposed. In the user-model (UM) based approach, the system identifies a small number of options that best match the user's preferences In this paper, we present an algorithm that combines the benefits of these two approaches in an approach to information presentation that integrates user modelling with automated clustering. Thus, the system provides detail only about those options that are of some relevance to the user, where relevance is determined by the user model. If there are multiple relevant options, a clusterbased tree structure orders these options to allow for stepwise refinement. The effectiveness of the tree structure, which directs the dialogue flow, is optimized by taking the user's preferences into account. Complex tradeoffs between alternative options are presented explicitly to allow for a better overview and a more informed choice. In addition, we address the issue of giving the user a good overview of the option space, despite selecting only the relevant options, by briefly accounting for the remaining (irrelevant) options. In the remainder of this paper, we describe the prior approaches in more detail, and discuss their limitations (Section 2). In section 3, we describe our approach, which integrates user preferences with automated clustering and summarization in an attempt to overcome the problems of the original approaches. Section 4 presents our clustering and content structuring algorithms and addresses issues in information presentation. In Section 5, we describe an evaluation of our approach and discuss its implications.
Previous work in natural language generation showed how a multi-attribute decision-theoretic model of user preferences could be used to determine the attributes that are most relevant to mention when generating recommendations tailored to a particular user However, there are several limitations to this approach. First, it does not scale up to presenting a large number of options. When there are hundreds of options to consider (e.g., when choosing among consumer products, hotels, or restaurants) there may be many options that are close in score. In addition, users may not be able to provide constraints until they hear more information about the space of options. This brings up a second problem with the UM-based approach, namely that it does not provide the user with an overview of the option space, because options scoring below a specified threshold are not mentioned. This is related to the third problem, which is that users might miss out on options they would have chosen if they had heard about them. These last two problems may reduce user confidence in the system, if users have the perception that the system is not telling them about all of the available options. This may ultimately lead to a decrease in user satisfaction. Polifroni Although the SR approach provides a solution to the problem of presenting information when there are large numbers of options in a way that is suitable for SDS, it has several limitations. First, there may be long paths in the dialogue structure. Because the system does not know about the user's preferences, the option clusters may contain many irrelevant entities which must be filtered out successively with each refinement step. In addition, the difficulty of summarizing options typi-cally increases with their number, because values are more likely to be very diverse, to the point that a summary about them gets uninformative ("I found flights on 9 airlines."). A second problem with the SR approach is that exploration of tradeoffs is difficult when there is no optimal option. If at least one option satisfies all requirements, this option can be found efficiently with the SR strategy. But the system does not point out alternative tradeoffs if no "optimal" option exists. For example, in the flight booking domain, suppose the user wants a flight that is cheap and direct, but there are only expensive direct and cheap indirect flights. In the SR approach, as described by Polifroni, the user has to ask for cheap flights and direct flights separately and thus has to explore different refinement paths. Finally, the attribute that suggests the next user constraint may be suboptimal. The procedure for computing the attribute to use in suggesting the next restriction to the user is based on the considerations for efficient summarization, that is, the attribute that will partition the data set into the smallest number of clusters. If the attribute that is best for summarization is not of interest to this particular user, dialogue duration is unnecessarily increased, and the user may be less satisfied with the system, as the results of our evaluation suggest (see section 5.2). Our work combines techniques from the UM and SR approaches. We exploit information from a user model to reduce dialogue duration by (1) selecting all options that are relevant to the user, and (2) introducing a content structuring algorithm that supports stepwise refinement based on the ranking of attributes in the user model. 
In this way, we keep the benefits of user tailoring, while extending the approach to handle presentation of large numbers of options in an order that reflects user preferences. To address the problem of user confidence, we also briefly summarize options that the user model determines to be irrelevant (see section 4.3). Thus, we give users an overview of the whole option space, and thereby reduce the risk of leaving out options the user may wish to choose in a given situation. The integration of a user model with the clustering and structuring also alleviates the three problems we identified for the SR approach. When a user model is available, it enables the system to determine which options and which attributes of options are likely to be of interest to the particular user. The system can then identify compelling options, and delete irrelevant options from the refinement structure, leading to shorter refinement paths. Furthermore, the user model allows the system to determine the tradeoffs among options. These tradeoffs can then be presented explicitly. The user model also allows the identification of the attribute that is most relevant at each stage in the refinement process. Finally, the problem of summarizing a large number of diverse attribute values can be tackled by adapting the cluster criterion to the user's interest. In our approach, information presentation is driven by the user model, the actual dialogue context and the available data. We allow for an arbitrarily large number of alternative options. These are structured so that the user can narrow in on one of them in successive steps. For this purpose, a static option tree is built. Because the structure of the option tree takes the user model into account, it allows the system to ask the user to make the most relevant decisions first. Moreover, the option tree is pruned using an algorithm that takes advantage of the tree structure, to avoid wasting time by suggesting irrelevant options to the user. The tradeoffs (e.g., cheap but indirect flights vs. direct but expensive flights) are presented to the user explicitly, so that the user won't have to "guess" or try out paths to find out what tradeoffs exist. Our hypothesis was that explicit presentation of tradeoffs would lead to a more informed choice and decrease the risk that the user does not find the optimal option. Our approach was implemented within a spoken dialogue system for flight booking. While the content selection step is a new design, the content presentation part of the system is an adaptation and extension of the work on generating natural sounding tailored descriptions reported in The clustering algorithm in our implementation is based on that reported in Clustering attribute values with the above algorithm allows for database-dependent labelling. A £300 flight gets the label cheap if it is a flight from Edinburgh to Los Angeles (because most other flights in the database are more costly) but expensive if it is from Edinburgh to Stuttgart (for which there are a lot of cheaper flights in the data base). Clustering also allows the construction of user valuation-sensitive clusters for categorial values, such as the attribute airline: They are clustered to a group of preferred airlines, dispreferred airlines and airlines the user does not-care about. The tree building algorithm works on the clusters produced by the clustering algorithm instead of the original values. 
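As an illustration of database-dependent labelling and valuation-sensitive grouping, here is a simplified sketch; it uses quantile binning rather than the clustering algorithm the system actually builds on, and the user-model format is a hypothetical placeholder.

```python
# Simplified illustration of database-relative labelling of attribute values
# and preference-based grouping of categorial values (not the paper's exact
# clustering algorithm, which follows prior work).
import statistics

def label_prices(prices):
    """Label each price relative to the other prices in the current database
    slice, so the same fare can be 'cheap' on one route and 'expensive' on another."""
    lo, hi = statistics.quantiles(prices, n=3)       # rough tercile boundaries
    return {p: ('cheap' if p <= lo else 'expensive' if p > hi else 'average')
            for p in prices}

def group_airlines(airlines, user_model):
    """Group categorial values by the user's valuation: preferred, dispreferred,
    or don't-care (hypothetical user-model dictionary format)."""
    groups = {'preferred': [], 'dispreferred': [], 'dont_care': []}
    for a in airlines:
        if a in user_model.get('preferred_airlines', []):
            groups['preferred'].append(a)
        elif a in user_model.get('dispreferred_airlines', []):
            groups['dispreferred'].append(a)
        else:
            groups['dont_care'].append(a)
    return groups

# Example: the same £300 fare gets different labels on different routes.
print(label_prices([300, 450, 520, 610]))   # 300 -> 'cheap'
print(label_prices([90, 120, 150, 300]))    # 300 -> 'expensive'
print(group_airlines(['KLM', 'BA', 'Ryanair'], {'preferred_airlines': ['KLM']}))
```

The tree-building step described next then operates on such labelled clusters rather than on raw attribute values.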
Options are arranged in a refinement tree structure, where the nodes of an option tree correspond to sets of options. The root of the tree contains all options and its children contain complementary subsets of these options. Each child is homogeneous for a given attribute (e.g., if the parent set includes all direct flights, one child might include all direct cheap flights whereas another child includes all direct expensive flights). Leaf-nodes correspond either to a single option or to a set of options with very similar values for all attributes. This tree structure determines the dialogue flow. To minimize the need to explore several branches of the tree, the user is asked for the most essential criteria first, leaving less relevant criteria for later in the dialogue. Thus, the branching criterion for the first level of the tree is the attribute that has the highest weight according to the user model. For example, Figure A special case occurs when an attribute is homogeneous for all options in an option set. Then a unary node is inserted regardless of its importance. This special case allows for more efficient summarization, e.g., "There are no business class flights on KLM." In the example of Figure The user is not forced to impose a total ordering on the attributes but may specify that two attributes, e.g., arrival-time and number-of-legs, are equally important to her. This partial ordering leads to several attributes having the same ranking. For equally ranked attributes, we follow the approach taken by The tree building algorithm introduces one of the main differences between our structuring and Polifroni's refinement process. Polifroni et al.'s system chooses the attribute that partitions the data into the smallest set of unique groups for summarization, whereas in our system, the algorithm takes the ranking of attributes in the user model into account. To determine the relevance of options, we did not use the notion of compellingness (as was done in Pruning dominated options is crucial to our structuring process. The algorithm uses information from the user model to prune all but the dominant options. Paths from the root to a given option are thereby shortened considerably, leading to a smaller average number of turns in our system compared to Polifroni et al.'s system. An important by-product of the pruning algorithm is the determination of attributes which make an option cluster compelling with respect to alternative clusters (e.g., for a cluster containing direct flights, as opposed to flights that require a connection, the justification would be #-of-legs). We call such an attribute the "justification" for a cluster, as it justifies its existence, i.e., is the reason it is not pruned from the tree. Justifications are used by the generation algorithm to present the tradeoffs between alternative options explicitly. Additionally, the reasons why options have been pruned from the tree are registered and provide information for the summarization of bad options in order to give the user a better overview of the option space (e.g., "All other flights are either indirect or arrive too late."). To keep summaries about irrelevant options short, we back off to a default statement "or are undesirable in some other way." if these options are very heterogeneous. In a spoken dialogue system, it is important not to mention too many facts in one turn in order to keep the memory load on the user manageable. 
Obviously, it is not possible to present all of the options and tradeoffs represented in the tree in a single turn. Therefore, it is necessary to split the tree into several smaller trees that can then be presented over several turns. In the current implementation, a heuristic cut-off point (no deeper than two branching nodes and their children, which corresponds to the nodes shown in Figure The identification of an option set is based on its justification. If an option is justified by several attributes, only one of them is chosen for identification. If one of the justifications is a contextually salient attribute, this one is preferred, leading to constructions like: ". . . you'd have to make a connection in Brussels. If you want to fly direct,. . . "). Otherwise, the cluster is identified by the highest ranked attribute e.g.,"There are four flights with availability in business class.". If an option cluster has no compelling homogeneous attribute, but only a common negative homogeneous attribute, this situation is acknowledged: e.g., "If you're willing to travel economy / arrive later / accept a longer travel time, . . . ". After the identification of a cluster, more information is given about the cluster. All positive homogeneous attributes are mentioned and contrasted against all average or negative attributes. An attribute that was used for identification of an option is not mentioned again in the elaboration. In opposition to a single flight, attributes may have different values for the entities within a set of flights. In that case, these attribute values need to be summarized. There are three main cases to be distinguished: 1. The continuous values for the attributes price, arrival-time etc. need to be summarized, as they may differ in their values even if they are in the same cluster. One way to summarize them is to use an expression that reflects their value range, e.g. "between x and y". Another solution is to mention only the evaluation value, leading to sentences like "The two flights with shortest travel time" or "The cheapest flights." 2. For discrete-valued attributes with a small number of possible values, e.g., number-of-legs and fare-class, summarization is not an issue, because when homogeneous for a cluster, the attribute values of its options are identical. 3. The third group are attributes with categorial values, e.g., "airline". If there are no more than three different values, we summarize using quantifications like "none/all/both of them", as done in If the values are more diverse, the user model comes back into play to produce a tailored summary based on user preferences (e.g., liking KLM). For example, we would generate "None are on KLM.", which takes into account the user's preference and is shorter than mentioning all airlines the flights are on. An issue arising from summarization with negation is that the negated value has to be salient, otherwise the utterance might be irritating. For example, it would be better to say "These flights are not direct." in a neutral context, but "You would not need to connect in London Heathrow." if London Heathrow had already been mentioned. 
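The categorial-value summarization rules just described can be illustrated with a small sketch; the function below mirrors the rule logic (quantify over a few distinct values, otherwise fall back on a user-preference-based negation) but is not the system's actual generation component.

```python
# Minimal sketch of the categorial-value summarization rules described above
# (an illustration of the rule logic, not the system's NLG component).
def summarize_airlines(values, preferred_airline=None):
    """Summarize the 'airline' attribute of an option cluster."""
    distinct = sorted(set(values))
    if len(distinct) == 1:
        return f"All of them are on {distinct[0]}."
    if len(distinct) <= 3:
        # few distinct values: enumerate or quantify over them directly
        return "They are on " + ", ".join(distinct[:-1]) + f" and {distinct[-1]}."
    if preferred_airline is not None and preferred_airline not in distinct:
        # diverse values: fall back on the user model and negate the salient preference
        return f"None are on {preferred_airline}."
    return "They are on several different airlines."

print(summarize_airlines(['KLM', 'KLM']))                     # All of them are on KLM.
print(summarize_airlines(['KLM', 'BA', 'Lufthansa']))         # enumerate the few values
print(summarize_airlines(['BA', 'Lufthansa', 'Iberia', 'SAS', 'Ryanair'],
                         preferred_airline='KLM'))            # None are on KLM.
```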
A sample dialogue produced by our system, when given the business user model, is shown in the corresponding figure. A within-participants laboratory experiment was conducted in order to determine whether user model-based clustering leads to increased overall user satisfaction, a better overview of the available options, quicker accessibility to the optimal option, and higher confidence of having heard all relevant options. The experiment furthermore assessed whether the options were presented in a way that users found understandable, and recorded the time users took to read a dialogue turn. Each of the 38 subjects who completed the experiment was presented with six dialogue pairs, the first of which was used for training and was thus not included in the analysis. Each dialogue pair consisted of one dialogue between a user and our system and one dialogue between the same user and a system designed following Polifroni et al.'s SR approach. After reading each dialogue transcript, participants were asked four questions about the system's responses. They provided their answers using Likert scales. 1. Did the system give the information in a way that was easy to understand? 1: very hard to understand, 7: very easy to understand. 2. Did the system give you a good overview of the available options? 1: very poor overview, 7: very good overview. 3. Do you think there may be flights that are better options for X (where X was instantiated with the name of the example user) that the system did not tell X about? 1: I think that is very possible, 7: I feel the system gave a good overview of all options that are relevant for X. 4. How quickly did the system allow X to find the optimal flight? 1: slowly, 3: quickly. After reading each pair of dialogues, the participants were also asked the forced-choice question "Which of the two systems would you recommend to a friend?" to assess user satisfaction. A significant preference for our system was observed. (In the diagrams, our system, which combines user modelling and stepwise refinement, is called UMSR, whereas the system based on Polifroni's approach is called SR.) There were a total of 190 forced choices in the experiment (38 participants × 5 dialogue pairs). UMSR was preferred 120 times (≈ 63%), whereas SR was preferred only 70 times (≈ 37%). This difference is highly significant (p < 0.001) using a two-tailed binomial test. Thus, the null hypothesis that both systems are preferred equally often can be rejected with high confidence. The evaluation results for the Likert scale questions confirmed our expectations. The SR dialogues received on average slightly higher scores for understandability (question 1), which can be explained by the shorter length of the system turns for that system. However, the difference is not statistically significant (p = 0.97 using a two-tailed paired t-test). The differences in results for the other questions are all highly statistically significant, especially for question 2, assessing the quality of the overview of the options given by the system responses, and question 3, assessing the confidence that all relevant options were mentioned by the system. Both were significant at p < 0.0001. These results confirm our hypothesis that our strategy of presenting tradeoffs explicitly and summarizing irrelevant options improves users' overview of the option space and also increases their confidence in having heard about all relevant options, and thus their confidence in the system. The difference for question 4 (accessibility of the optimal option) is also statistically significant (p < 0.001).
Quite surprisingly, subjects reported that they felt they could access options more quickly even though the dialogues were usually longer. The average scores (based on 190 val- To get a feel for whether the content given by our system is too complex for oral presentation and requires participants to read system turns several times, we recorded reading times and correlated them to the number of characters in a system turn. We found a linear relation, which indicates that participants did not re-read passages and is a promising sign for the use of our strategy in SDS. In this paper, we have shown that information presentation in SDS can be improved by an approach that combines a user model with structuring of options through clustering of attributes and successive refinement. In particular, when presented with dialogues generated by a system that combines user modelling with successive refinement (UMSR) and one that uses refinement without reference to a user model (SR), participants reported that the combined system provided them with a better overview of the available options and that they felt more certain to have been presented with all relevant options. Although the presentation of complex tradeoffs usually requires relatively long system turns, participants were still able to cope with the amount of information presented. For some dialogues, subjects even felt they could access relevant options more quickly despite longer system turn length. In future work, we would like to extend the clustering algorithm to not use a fixed number of target clusters but to depend on the number of natural clusters the data falls into. We would also like to extend it to be more sensitive to the user model when forming clusters (e.g., to be more sensitive at lower price levels for a user for whom price is very important than for a user who does not care about price). The explicit presentation of tradeoffs made by the UMSR system in many cases leads to dialogue turns that are more complex than typical dialogue turns in the SR system. Even though participants did not report that our system was harder to understand, it would be interesting to investigate how well users can understand and remember information from the system when part of their concentration is absorbed by another task, for example when using the system while driving a car.
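Returning to the forced-choice preference result reported above (UMSR preferred 120 times out of 190), the reported significance can be checked with a standard two-tailed binomial test; a quick sketch assuming SciPy is available:

```python
# Quick check of the reported two-tailed binomial test on the forced-choice
# preferences (120 of 190 choices for UMSR), assuming SciPy is available.
from scipy.stats import binomtest

result = binomtest(k=120, n=190, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.5f}")   # well below 0.001, consistent with the reported result
```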
695
2,506
695
Compounding Geometric Operations for Knowledge Graph Completion
Geometric transformations including translation, rotation, and scaling are commonly used operations in image processing. Some of them have also been used successfully in developing effective knowledge graph embeddings (KGE). Inspired by this synergy, we propose a new KGE model that leverages all three operations in this work. Since translation, rotation, and scaling operations are cascaded to form a composite one, the new model is named CompoundE. By casting CompoundE in the framework of group theory, we show that quite a few distance-based KGE models are special cases of CompoundE. CompoundE extends simple distance-based scoring functions to relation-dependent compound operations on head and/or tail entities. To demonstrate the effectiveness of CompoundE, we perform three prevalent KG prediction tasks, including link prediction, path query answering, and entity typing, on a range of datasets. CompoundE outperforms extant models consistently, demonstrating its effectiveness and flexibility.
Knowledge graphs (KGs) such as DBpedia and Wikidata store real-world facts as triples, yet they remain far from complete, which motivates work on KG completion. Geometric operations such as translation and rotation belong to the family of affine transformations. These operations have been used to build effective KGE models such as TransE, RotatE, and PairRE. Previous KGEs often use a single type of operation to model all relation patterns with different properties. This could be problematic since each operator may have modeling limitations. A synergy of different transformations may complement the weaknesses of individual operators. In fact, generic compound operations obtained by cascading affine transformations find numerous applications in image processing. There are four main contributions of this work. They are summarized below. • We present a novel KG embedding model called CompoundE, which combines three fundamental operations in the affine group and offers a wide range of designs. • It is proved mathematically that CompoundE can handle complex relation types in KGs thanks to unique properties of the affine group. • We apply CompoundE to perform three important KG prediction tasks, including link prediction, path query answering, and entity typing, on widely adopted KG benchmarking datasets extracted from Freebase, WordNet, Wikidata, and YAGO. CompoundE consistently outperforms prior work. • On large-scale datasets containing millions of entities, and under a memory constraint, CompoundE outperforms other benchmarking methods by a large margin with fewer parameters. The rest of this paper is organized as follows. Recent KGE models in both the distance-based and entity-transformation-based categories are first reviewed in Section 2. Then, we present CompoundE, show its relationship with previous KG embedding models, and explain why it can model complex relations well in Section 3. Experiment details and performance comparisons are given in Section 4. Finally, concluding remarks are given and possible extensions are suggested in Section 5.
Distance-based scoring functions are a prevailing strategy in optimizing KGE. The main idea is to model a relation as a transformation that places head entity vectors in the proximity of their corresponding tail entity vectors, and vice versa. For a given triple, (h, r, t), the goal is to minimize the distance between the h and t vectors after the transformation introduced by r; TransE is a representative model of this kind. Adding relation-specific transformations to baseline models is another popular line of work, with TransH as an early example. Translation, rotation, and scaling transformations appear frequently in engineering applications. In image processing, a cascade of translation, rotation, and scaling operations offers a set of image manipulation techniques. Such compound operations can be used to develop a new KGE model called CompoundE. We provide an illustration of CompoundE and a comparison with previous KGE models in the accompanying figure. Three forms of the CompoundE scoring function can be written as
• CompoundE-Head: f_r(h, t) = ||T_r · R_r · S_r · h − t||,
• CompoundE-Tail: f_r(h, t) = ||h − T̂_r · R̂_r · Ŝ_r · t||,
• CompoundE-Complete: f_r(h, t) = ||T_r · R_r · S_r · h − T̂_r · R̂_r · Ŝ_r · t||,
where h, t denote head and tail entity embeddings, T_r, R_r, S_r denote the translation, rotation, and scaling operations for the head entity embedding, and T̂_r, R̂_r, Ŝ_r denote the counterparts for the tail entity embedding, respectively. These constituent operators are relation-specific. To generalize, any order or subset of the translation, rotation, and scaling components can be a valid instance of CompoundE. Since matrix multiplications are non-commutative, different orders of cascading the constituent operators result in distinct CompoundE operators. Performance differences between these variations are discussed in Section B of the appendix. Most analysis in previous work was restricted to the special Euclidean group SE(n), whereas CompoundE operates in the larger affine group. Definition 3.1. A Lie group is a continuous group that is also a differentiable manifold. Several Lie group examples are given below. • The real vector space, R^n, with the canonical addition as the group operation. • The real vector space excluding zero, (R\{0}), with element-wise multiplication as the group operation. • The general linear group, GL_n(R), with the canonical matrix multiplication as the group operation. Furthermore, the following three special groups are commonly used. Definition 3.2. The special orthogonal group is defined as SO(n) = {A ∈ GL_n(R) : A^T A = I, det(A) = 1}. Definition 3.3. The special Euclidean group is defined as SE(n) = {M : M = [[A, v], [0, 1]], A ∈ SO(n), v ∈ R^n}. (5) Definition 3.4. The affine group is defined as Aff(n) = {M : M = [[A, v], [0, 1]], A ∈ GL_n(R), v ∈ R^n}. (6) By comparing Eqs. (5) and (6), we see that SE(n) is a subgroup of Aff(n), since SO(n) ⊂ GL_n(R). Without loss of generality, consider n = 2. If M ∈ Aff(2), we have M = [[A, v], [0, 1]] with A ∈ GL_2(R) and v ∈ R^2. The 2D translational matrix can be written as T = [[1, 0, t_x], [0, 1, t_y], [0, 0, 1]], while the 2D rotational matrix can be expressed as R = [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]. It is easy to verify that they both belong to the special Euclidean group (i.e., T ∈ SE(2) and R ∈ SE(2)). On the other hand, the 2D scaling matrix is of the form S = [[s_x, 0, 0], [0, s_y, 0], [0, 0, 1]]. It does not belong to the special Euclidean group but to the affine group with n = 2 (i.e., S ∈ Aff(2)). Compounding translation and rotation operations, we get a transformation that remains in the special Euclidean group, e.g., T · R = [[cos θ, −sin θ, t_x], [sin θ, cos θ, t_y], [0, 0, 1]]. Yet, if we add the scaling operation, the compound belongs to the affine group. One such compound operator can be written as M = T · R · S = [[s_x cos θ, −s_y sin θ, t_x], [s_x sin θ, s_y cos θ, t_y], [0, 0, 1]]. (12) When s_x ≠ 0 and s_y ≠ 0, the compound operator is invertible, with inverse M^{-1} = S^{-1} · R^{-1} · T^{-1} = [[cos θ / s_x, sin θ / s_x, −(t_x cos θ + t_y sin θ) / s_x], [−sin θ / s_y, cos θ / s_y, (t_x sin θ − t_y cos θ) / s_y], [0, 0, 1]]. CompoundE is a general form of quite a few distance-based KGE models. That is, we can derive their scoring functions from that of CompoundE by setting the translation, scaling, and rotation operations to certain forms. Four examples are given below, following a brief numerical illustration of the compound operator.
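As a sanity check on the compound operator and the properties stated above (non-commutativity and invertibility when s_x, s_y ≠ 0), here is a small numerical sketch; it is purely illustrative and independent of any KGE implementation.

```python
# Numerical sanity check of the compound 2D operator M = T·R·S in homogeneous
# coordinates: non-commutative composition and invertibility for s_x, s_y != 0.
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def scaling(sx, sy):
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

T, R, S = translation(2.0, -1.0), rotation(np.pi / 6), scaling(0.5, 2.0)

M = T @ R @ S                       # one compound operator (Eq. (12) above)
M_alt = S @ R @ T                   # a different cascading order
print(np.allclose(M, M_alt))        # False: composition is non-commutative

M_inv = np.linalg.inv(M)            # exists because s_x, s_y != 0
print(np.allclose(M_inv, np.linalg.inv(S) @ np.linalg.inv(R) @ np.linalg.inv(T)))  # True

point = np.array([1.0, 1.0, 1.0])   # a 2D point in homogeneous coordinates
print(M @ point)                    # scaled, then rotated, then translated
```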
Derivation of TransE Derivation of RotatE Derivation of LinearRE Derivation of PairRE With a richer set of operations, CompoundE is more capable of modeling complex relations such as 1to-N, N-to-1, and N-to-N relations in KG datasets. Modeling these relations are important since more than 98% of triples in FB15k-237 and WN18RR datasets involves complex relations. The importance of complex relation modeling is illustrated by two examples below. First, there is a need to distinguish different outcomes of relation compositions when modeling non-commutative relations. That is r 1 • r 2 → r 3 while r 2 • r 1 → r 4 . For instance, r 1 , r 2 , r 3 and r 4 denote isFatherOf, isMotherOf, isGrandfatherOf and isGrandmotherOf, respectively. TransE and RotatE cannot make such Datasets FB15K-237 WN18RR Metrics MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 text-based methods SimKGC where σ is the sigmoid function, ζ 1 is a fixed margin hyperparameter, (h ′ i , r, t ′ i ) is the i-th negative triple, and p(h ′ i , r, t ′ i ) is the probability of drawing negative triple (h ′ i , r, t ′ i ). Given a positive triple, (h i , r, t i ), the negative sampling distribution is where α 1 is the temperature of sampling. Datasets. We conduct experiments on three widely used benchmarking datasets: ogbl-wikikg2, FB15k-237, and WN18RR. ogbl-wikikg2 is one of Open Graph Benchmark dataset We set η = 1.5 as a logical threshold by following the convention. Table In Fig. Path query is important since it is often desired to perform complex queries on knowledge graph. For example, one might ask "where did Michelle Obama's spouse live in?". To obtain the answer, a Predicting Tail Type model first need to correctly predict the fact that (Michelle Obama, spouse, Barack Obama), and then predict (Barack Obama, livedIn, Chicago). CompoundE has the property to perform well on this task since it is capable of modeling the noncommutative relation compositions. In Path Query Answering (PQA), a tuple (s, P, t) is given, where s and t denote the source and target entities and P = {r 1 , . . . , r k } denotes the relation path consisting of a sequence of relations that links s → r 1 → r 2 • • • → r k → t. PQA tests that after traversing through the relation path from a given source entity, whether the model is able to predict the correct target entity. During testing, the ground truth t is hidden and we compute the score for all candidate target entities and evaluate the quantile of ground truth, which is the fraction of irrelevant candidates that's ranked lower than the ground truth. Mean quantile of all test paths are reported. In particular, type match paths are excluded since those are trivial for prediction. Specifically, we use both the KG triples and sampled paths with length |P | ∈ {2, 3, 4, 5} to train the embedding, which is also referred to as the "comp" setting Freebase KG Entity typing predicts class labels for nodes in knowledge graph. Entity type provides semantic signals for information extraction tasks such as relation extraction We perform entity typing using CompoundE embedding on the FB15k-ET and YAGO43k-ET dataset prepared by We compare the computational complexity of Com-poundE and several popular KGE models in Table CompoundE cuts the number of parameters at least by half while achieving much better performance. In the table, n, m, and d denote the entity number, the relation number, and their embedding dimension, respectively. 
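The training objective referenced above (sigmoid, fixed margin, temperature-weighted negative triples) follows the self-adversarial negative sampling scheme common to distance-based KGEs. Below is a minimal sketch of that loss, assuming the margin has already been folded into the triple scores; shapes and names are illustrative rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_scores, alpha=1.0):
    """
    Sketch of a self-adversarial negative-sampling loss:
        L = -log sigma(f(h, r, t))
            - sum_i p(h'_i, r, t'_i) * log sigma(-f(h'_i, r, t'_i))
    pos_score  : (batch,)    scores of positive triples (higher = better)
    neg_scores : (batch, k)  scores of k sampled negative triples
    alpha      : sampling temperature for weighting negatives
    """
    # Negative weights: softmax over their scores, detached so they act as
    # a sampling distribution rather than trainable quantities.
    neg_weights = F.softmax(alpha * neg_scores, dim=-1).detach()
    pos_loss = -F.logsigmoid(pos_score)
    neg_loss = -(neg_weights * F.logsigmoid(-neg_scores)).sum(dim=-1)
    return (pos_loss + neg_loss).mean()
```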
Since n ≫ m in most datasets, we can afford to increase the complexity of relation embedding for better link prediction result without significantly increasing the overall space complexity. In Fig. CompoundE significantly outperforms benchmarking methods, even under low dimension setting. Hyperparameters. We conduct two sets of controlled experiments to find the best model configurations for ogbl-wikikg2, FB15k-237, and WN18RR datasets. For the first set, we evaluate the effect of different combinations of learning rates and embedding dimensions while keeping other hyperparameters constant. For the second set, we evaluate the effect of different combinations of the training batch size and the negative sample size, while keeping other hyperparameters constant. The optimal model configurations for three datasets are given in Table A new KGE model called CompoundE was proposed in this work. We showed that quite a few distance-based KGE models are special cases of CompoundE. Extensive experiments were conducted for three different knowledge graph prediction tasks including link prediction, path query answering, and entity typing. Competitive experimental results demonstrate the effectiveness of Com-poundE. We also mathematically prove the properties of CompoundE and its capability of modeling different relation patterns. We also explain the performance difference of different CompoundE forms, especially for the complex relation patterns. We are interested in exploring two topics as future extensions. First, we may consider more complex operations in CompoundE. For example, there is a recent trend to extend 2D rotations to 3D rotations for rotation-based embeddings such as Ro-tatE3D Similar to many knowledge graph embedding models, our proposed method is yet to handle link prediction under inductive settings. One possible future extension is to leverage entity description information to generate textual features and use CompoundE as a decoder to handle unseen entities. Also, the affine operators we use are limited to translation, rotation, and scaling and this may limit the number of different relation patterns we can handle. In the future, we can include all affine transformations and investigate their difference. Also, because we use 2D givens rotation matrix, the embedding dimension setting needs to be a factor of 2. We can explore higher dimensional transformations such as 3D transformations and compare the modeling power. With these conditions, we can compare the Com-poundE scores generated by (h, r 1 , t) and (h, r 2 , t) as follows: This means that (h, r 1 , t) generates a smaller error score than (h, r 2 , t). If (h, r 2 , t) holds, (h, r 1 , t) must also holds. Therefore, r 1 is a sub-relation of r 2 . We investigate the performance difference of Com-poundE variants. Specifically, the different forms of CompoundE have visible difference in different relation types. We conduct experiment on YAGO3-10 dataset and compare the performance of CompoundE-left, CompoundE-right, CompoundE-Complete for 1-to-1, 1-to-N, and Nto-1 relations. In particular, when evaluating the 1-to-N relations, we focus on predicting (?, r, t) while for N-to-1 relations we focus on predicting (h, r, ?) to correctly reflect the performance on respective relation types. Performance comparison is shown in 4. We observe that for CompoundE-Complete has advantage over other forms for 1-to-1 relations. CompoundE-left and CompoundE-right are the better performing forms for 1-to-N and N-to-1 relations respectively. 
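The path-query evaluation described above reports the mean quantile: for each test path, the fraction of irrelevant candidate targets ranked below the ground truth. The sketch below implements that metric as described; tie handling and input format are assumptions of this illustration.

```python
import numpy as np

def mean_quantile(scores, truth_idx):
    """
    scores    : list of 1-D arrays, model scores over candidate targets per path
    truth_idx : list of ints, index of the ground-truth target per path
    Returns the mean, over paths, of the fraction of irrelevant candidates
    whose score is strictly below the ground-truth score.
    """
    quantiles = []
    for s, gt in zip(scores, truth_idx):
        others = np.delete(s, gt)
        if len(others) == 0:
            continue
        quantiles.append(np.mean(others < s[gt]))
    return float(np.mean(quantiles))

# toy usage: ground truth outranks 3 of the 4 irrelevant candidates
print(mean_quantile([np.array([0.9, 0.1, 0.3, 0.8, 0.95])], [0]))  # 0.75
```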
This observation is consistent with the discussion of the modeling capability of CompoundE. It still remains a questions that how different order of operator composition will affect the performance of CompoundE and we will address that in future work. We provide a 2D t-SNE visualization of the entity embedding generated by CompoundE for FB15k-237 in Fig. Besides the histograms shown in the main paper, we add more plots to visualize CompoundE relation embedding values. In Fig. Fig. The path query dataset can be obtained from the link And the shear matrices on two different directions can be defined as B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
1,007
1,967
1,007
Semantic Simplification for Sentiment Classification
Recent work on document-level sentiment classification has shown that the sentiment in the original text is often hard to capture, since the sentiment is usually either expressed implicitly or shifted due to the occurrences of negation and rhetorical words. To this end, we enhance the original text with a sentiment-driven simplified clause to intensify its sentiment. The simplified clause shares the same opinion with the original text but expresses the opinion much more simply. Meanwhile, we employ Abstract Meaning Representation (AMR) for generating simplified clauses, since AMR explicitly provides core semantic knowledge, and potentially offers core concepts and explicit structures of original texts. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of simplified clauses for sentiment classification.
As a critical application of natural language processing, document-level sentiment classification has received considerable attention during the last two decades with the underlying assumption that the entire text has an overall polarity. In the literature, previous studies focus on predicting the overall sentiment from original text using either statistical To tackle the above limitations, we simplify the original text to a simplified clause and employ the simplified clause for sentiment classification. As shown in Figure However, the simplified clause is hard to generate from original text, since we need to reduce the linguistic complexity of the original text, and keep the same polarity as well as the original meaning. Intuitively, such issues can be alleviated by having a structural representation of semantic information, which treats concepts as nodes and builds structural relations between nodes, making it easy to find the important and sentiment-driven content. Explicit structures are more interpretable compared to neural representations and have been shown to be useful in many applications In this study, we employ Abstract Meaning Representation (AMR) Existing work on AMR parsing focuses on the sentence level. However, as shown in the right green box in Figure In summary, we firstly use a sequence-tostructure network to generate the AMR-based semantic graphs from sentences in original text. We then use a simplified graph extraction model to merge the sentence-level semantic graphs and extract a document-level simplified semantic graph. Thirdly, we employ a structure-to-sequence model to generate the simplified clause from the simplified semantic graph. Afterward, we integrate the simplified clause and original review text for sentiment classification. Detailed evaluation shows that our model significantly advances the state-of-the-art performance on several benchmark datasets. The results also show that the simplified clause is very useful for sentiment classification, and indicates AMR is beneficial for simplified clause generation.
In this study, we introduce two related topics of this study: document-level sentiment classification and text simplification. Finally, we employ the pre-trained language model BERT where X is the original text, Y is the generated simplified clause, [CLS] is BERT's special classification token, and [SEP ] is the special token to denote separation. We then employ a multi-layer perceptron to predict the overall polarity based on the representation Ĥ, H P is then used as inputs to a softmax output layer, Here, W h p , b h p , W p , and B p are model parameters, and P P is used to predict the overall polarity from the simplified clause and original text. Text Simplification is the task of reducing the complexity of the vocabulary and sentence structure of the text while retaining its original meaning. Most of the studies can be divided into two categories: lexical simplification and syntactic simplification. Lexical simplification is the process of replacing complex words in a given sentence with simpler alternatives of equivalent meanings Our puppy loves After we learn the AMR-based semantic graphs of sentences in a text, we extract the document-level simplified semantic graph from these sentencelevel semantic graphs. The process of simplified semantic graph extraction can be separated into two stages: document-level semantic graph construction, and graph pruning. Puppy loves to chew up bowls. Structure-to-Sequence The difference between the proposed semantic simplification and vanilla text simplification is that the former one pays more attention to the sentiment of the original text. Meanwhile, after semantic simplification, the simplified clause is more refined in context and more explicit in sentiment than the original text. In this study, we aim to predict the polarity of a given document with its original text and the simplified clause. As shown in Figure In the following, we will illustrate these components of the proposed model, and then discuss the objective function and training process. We first employ a sequence-to-structure network to generate AMR graphs from each sentence in the original text. Since it is much easier to generate a sequence than generate a graph, we linearize AMR graphs to sequences. In particular, AMR graphs are first converted into AMR trees by removing variables and duplicating the co-referring nodes. Then newlines presented in an AMR tree are replaced by spaces to get a sequence Based on the above linearization strategy, the sequence-to-structure model generates the AMR structure via a transformer-based encoder-decoder architecture where each layer of Encoder is a transformer block with the multi-head attention mechanism. After the input token sequence is encoded, the decoder predicts the output structure token-bytoken with the sequential input tokens' hidden vectors. At the i-th step of generation, the selfattention decoder predicts the i-th token y i in the linearized form and decoder state h as: where each layer of Decoder is a transformer block that contains self-attention with decoder state h d i and cross-attention with encoder state H. The generated output structured sequence starts from the start token "⟨bos⟩" and ends with the end token "⟨eos⟩". The conditional probability of the whole output sequence p(R|X) is progressively combined by the probability of each step p(r i |r <i , X): where r <i = {r 1 , ..., r i-1 }, and p(r i |r <i , X) is the probability over target vocabulary V normalized by softmax. 
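The classification component described in this section packs the original text X and the generated simplified clause Y as "[CLS] X [SEP] Y [SEP]", encodes the pair with BERT, and predicts the polarity from the [CLS] representation through an MLP and softmax. The sketch below, using the Hugging Face transformers API, is a minimal illustration under assumed layer sizes and names; it is not the authors' implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

class ClauseEnhancedClassifier(torch.nn.Module):
    def __init__(self, num_labels=2, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]          # [CLS] representation
        return torch.softmax(self.mlp(cls), dim=-1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("Our puppy chewed up the bowls, but we still love him.",  # X
                "puppy loves to chew up bowls",                           # Y
                return_tensors="pt")
model = ClauseEnhancedClassifier()
probs = model(**enc)   # (1, num_labels) polarity distribution
```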
Since all tokens in linearized representations are also natural language words, we adopt the pretrained language model BART The semantic graph of a sentence is represented by a rooted, directed, and acyclic AMR graph A major challenge for understanding the document-level semantic graph is posed by pronouns Since there are lots of duplicate and irrelevant information in the original document-level graph, we then need to prune it into a sentiment-driven simplified semantic graph. The rules of pruning are introduced as below, Concept Merging. We first perform concept merging. Graph nodes representing the same concept, determined by the surface word form, are merged to a single node in the graph. It operates on a very ad-hoc principle (van Noord and Bos, 2017): if two nodes have the same concept, the second one is actually a reference to the first one. Therefore, we replace each node that has already occurred in the AMR graph by the variable of the antecedent node. Given the example in Figure Graph Pruning. We then need to remove the duplicate nodes in the graph. We remove nodes with the same argument and concept under the same parent. We also remove nodes that occur three times or more, no matter their parents. Meanwhile, we remove the irrelevant information in the graph, and make sure that the graph is a sentiment-driven graph. Therefore, apart from 'ARG' and 'op' relations 2 , only 'manner', 'mod', and 'polarity' relations are kept in the graph, and we remove all the other relations. 'manner' relation denotes an action between a noun and a verb. 'mod' means modifying relation, which is always related with a noun and an adjective. 'polarity' is represented as negation logically, which expresses modals with concepts. All of these relations are basic and correlated with the sentiment of a document. We thus keep these relations to construct the sentiment-driven simplified graph. As shown in Figure As shown in Figure We then generate the simplified clause from the simplified semantic graph via the transformerbased encoder-decoder architecture Given the input simplified semantic graph G = {w 1 , w 2 , ..., w n }, which is corresponding to the original token sequence X, the structureto-sequence model outputs the simplified clause Y = {y 1 , ..., y n }. Note that, we linearize the semantic graph into a sequence of nodes and edge labels using depth-first traversal of the graph. Therefore, the structure-to-sequence model computes the hidden vector representation H ′ of the input linearized graph sequence via a multi-layer transformer encoder: where each layer of Encoder is a transformer block with the multi-head attention mechanism. After the input token sequence is encoded, the Decoder predicts the simplified clause token-bytoken with the sequential input tokens' hidden vectors using a self-attention decoder. The conditional probability of the whole output sequence p(Y |G) is then progressively combined by the probability of each step p(y i |y <i , G): where y <i = {y 1 ...y i-1 }, and p(y i |y <i , G) is the probability over the target vocabulary V normalized by a softmax layer. In this subsection, we show the objective functions and training process of the proposed model. Sentiment Simplification. The goal is to maximize the probability of the output sentiment-driven simplified clause Y given the input original text X. 
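The pruning rules above (keep 'ARG' and 'op' relations plus 'manner', 'mod', and 'polarity'; drop duplicates and irrelevant edges) can be illustrated with a small sketch. The graph representation below, a list of (parent, relation, child) triples over concept strings, is an assumption of this illustration; real AMR graphs also carry variables and senses.

```python
# Keep only the relation types named in the text; drop duplicates.
KEPT_PREFIXES = ("ARG", "op", "manner", "mod", "polarity")

def prune_graph(triples):
    kept, seen = [], set()
    for parent, rel, child in triples:
        if not rel.startswith(KEPT_PREFIXES):
            continue                       # irrelevant relation: drop
        key = (parent, rel, child)
        if key in seen:
            continue                       # duplicate child under same parent
        seen.add(key)
        kept.append((parent, rel, child))
    return kept

doc_graph = [
    ("love-01", "ARG0", "puppy"),
    ("love-01", "ARG1", "chew-01"),
    ("chew-01", "ARG1", "bowl"),
    ("bowl", "mod", "plastic"),
    ("chew-01", "time", "yesterday"),      # dropped: 'time' is not kept
    ("love-01", "ARG0", "puppy"),          # dropped: duplicate
]
print(prune_graph(doc_graph))
```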
Therefore, we optimize the negative log-likelihood loss function: where θ is the model parameters, and (X, Y ) is a (original text, simplified clause) pair in training set τ , then where y <i = {y 1 , ..., y i-1 }, and p(y i |y <i , X; θ) is calculated by the decoder. Sentiment Classification. Given a token sequence X from a document, and the corresponding sentiment-driven simplified clause Y generated by the proposed model. Our training objective is to minimize the cross-entropy loss over a set of training examples, with a ℓ 2 -regularization term, where p i and pi are the pre-defined and predicted sentimental labels of the original text X, respectively. θ p is the set of model parameters, and λ is a parameter for ℓ 2 -regularization. In this section, we introduce the datasets used for evaluation and the baseline methods employed for comparison. We then report the experimental results conducted from different perspectives, and analyze the effectiveness of the proposed model with different factors. We conduct our experiments on subsets of sentiment analysis benchmarks from Amazon Product Dataset There are two kinds of datasets in our experiments: one is for sentiment classification, and the other is for simplified clause generation. In sentiment classification dataset, we randomly select 3,000 reviews for each domain, 60% reviews are used as training data, 20% reviews are used as testing data, and the remaining reviews are used as validation data. In simplified clause generation dataset, we select another 12,000 reviews from each domain to train the generation model. The original AMR graph of each sentence is obtained by S2S-AMR-Parser We use BERT The experimental results are obtained by averaging ten runs with the random initialization. We use scikit-learn package Table • LSTM is a basic neural model using LSTM • AGLR • LexicalAT • RGAT • CFSA • BERT-Original employs original text to fine-tune the BERT pre-trained language model • BERT-Clause employs the generated simplified clause to fine-tune BERT. The simplified clause is generated by the proposed semantic simplification model. Comparison with BERT-Original and other stateof-the-art methods, BERT-Clause achieves competitive performance. It indicates that the simplified clause is beneficial to sentiment classification. In addition, our proposed model outperforms the previous state-of-the-art methods significantly (p < 0.05), as the proposed model employs AMRbased semantic representation to generate the simplified clause for sentiment classification. This shows that the semantic simplification architecture is very helpful for generating the simplified clause and predicting the polarity. This subsection analyzes the impact of the simplified clause with different generation models. We employ four kinds of text generation methods to generate simplified clause: TextRank 3) Our proposed model outperforms UniLM and BART significantly (p < 0.05), which indicates that the AMR-based semantic representation is very important for generating the simplified clause. As shown in In addition, if we remove the simplified semantic graph extraction part (-Extraction) of the proposed model, and just employ the document-level AMR graph to generate the simplified clause, the performance drops to 80.1%. It shows that there is a lot of duplicated and irrelevant information in the document-level graph. Furthermore, we also find that both concept merging and graph pruning are beneficial to extract a sentiment-driven simplified graph. 
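The two training objectives described at the start of this passage, a token-level negative log-likelihood for simplified-clause generation and a cross-entropy loss with l2 regularization for polarity classification, can be written compactly as below. Shapes, the padding index, and the regularization coefficient are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def simplification_loss(logits, target_ids, pad_id=1):
    """
    Negative log-likelihood of the simplified clause given the original text:
    logits     : (batch, seq, vocab) decoder outputs
    target_ids : (batch, seq) gold clause token ids (pad_id positions ignored)
    """
    return F.cross_entropy(logits.transpose(1, 2), target_ids,
                           ignore_index=pad_id)

def classification_loss(pred_logits, labels, model, lam=1e-5):
    """Cross entropy over polarity labels plus an l2 regularization term."""
    ce = F.cross_entropy(pred_logits, labels)
    l2 = sum(p.pow(2).sum() for p in model.parameters() if p.requires_grad)
    return ce + lam * l2
```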
If we remove these two components, the performance drops to 80.6% and 81.2% respectively. In this section, we give some analysis and discussion to show the importance of the simplified clause for sentiment classification. Note that, the results in this section are the average of all the domains. In this subsection, we give some statistics to analyze the quality of generated simplified clause compared with original text in Table From Table We choose three examples to illustrate the effectiveness of the proposed model compared with BART-BERT model in Table As shown in Table In this paper, we enhance the original text with a simplified clause for document-level sentiment classification. The simplified clause shares the same opinion with the original text but expresses the opinion much more simply. Meanwhile, we employ AMR for generating the simplified clause, since AMR potentially offers core concepts and explicit structures from the original text. We then integrate the simplified clause with original text for sentiment classification. Empirical studies demonstrate that our model significantly advances the state-of-the-art performance on several benchmark datasets. The results also indicate the simplified clause is very useful for sentiment classification.
898
2,077
898
Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection
A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a posthoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings.
Generic responses which lack specificity have been a major issue in existing dialog models We propose and evaluate an approach for unsupervised knowledge injection into a dialog model's response at decoding time We experiment with two types of knowledge sources: language models, which we treat as parametric knowledge bases We experiment with two scenarios: goaloriented and knowledge-grounded dialog where the training data covers only a fraction of the needed knowledge. Automatic evaluation reveals that our method is capable of generating highly diverse responses in both settings. In some cases, the generated response shows high overlap with the original target response showing that our unsupervised method bridges the knowledge gap between available knowledge and human-written responses present in the existing dialog corpus. An extensive human evaluation confirms that generated responses are indeed engaging, interesting, and human-like without any loss in fluency. To pinpoint the usefulness of knowledge injection in the above settings, we design a real-time study ( §5.3) where users interact with our system to reach a conversational goal (e.g. planning a holiday or knowing more about the solar system). We find that external knowledge enables users to achieve their goals more efficiently. Additionally, we observe that the our approach of sub-selecting relevant but diverse knowledge leads to responses that promote success in achieving conversational goals.
Our goal is to construct a dialog response by injecting knowledge (from external textual sources) at decoding time, without having to retrain the models. Consider a dialog model M from which we can sample a dialog response x d given a dialog history H. We shall refer to the response x d sampled from such a model without any decoding time knowledge injection as the initial response. However, as motivated earlier, samples from such a dialog model often lack detail. To improve such responses, we retrieve and incorporate relevant external knowledge k into the initial response. To achieve our goal, we construct a query using both dialog history H and the initial response x d , and gather a relevant knowledge candidate k from a knowledge source K. The retrieved snippet can provide useful information to the end-user to achieve the conversational goal (see §5.3). We explore both parametric (e.g querying a language model) and non-parametric (e.g. deterministic retrieval using word-overlap) ways to obtain post-hoc knowledge. Pretrained language models (PTLM) are typically trained with a vast amount of text that spans a diverse range of domains. External knowledge in the form of a text corpus can be used as a non-parametric knowledge source available at decoding time. Compared to parametric knowledge sources, such sources do not generate text as knowledge snippets, but offer the advantage of high quality and reliability of human written text. We consider the dialog history and the initial response as a query to retrieve relevant knowledge instances from the corpus. Next, we identify the top relevant instances in the given corpus with respect to the constructed query using cosine similarity on TF-IDF based representations Effectively utilizing the retrieved knowledge snippets to construct an enriched dialog response encompasses two major challenges. Firstly, it is not practical to use potentially hundreds of knowledge snippets obtained from the retrieval step for a single response generation. Thus, we need to find a relevant but diverse subset of the snippets. Secondly, the dialog model M is trained to condition only on the dialog context, and not on the external knowledge. Hence, to leverage the knowledge snippets, we need a decoding strategy to rewrite the initial response x d such that the resulting final response x f should closely follow the knowledge snippet to be injected without a loss in the fluency and consistency. Thus, our method requires no additional training and only assumes a language model trained on dialog context (i.e. M). We refer to our proposed framework (Figure At each turn, we obtain N knowledge snippets from both the parametric and non-parametric sources. We wish to select a subset of B (out of N ) relevant but diverse knowledge snippets. We define relevance score of a snippet k i with respect to the dialog history H using pointwise mutual information (PMI) as follows: Thus, a high PMI score would imply a larger semantic similarity between the snippet k i and H. To account for redundancy between the snippet pair k i , k j we again use the PMI score as follows: The redundancy score is symmetric i.e. RED ij = RED ji as PMI is a symmetric measure. We estimate probabilities (both conditional and marginal) p(.) 
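The relevance/redundancy trade-off described here is encoded in an N x N DPP kernel whose diagonal holds squared PMI relevance scores and whose off-diagonal entries hold beta times squared pairwise redundancy. The sketch below builds such a kernel and shrinks beta until the matrix is positive semi-definite; the shrinking heuristic is an assumption of this illustration.

```python
import numpy as np

def build_dpp_kernel(rel, red, beta=0.5):
    """
    rel : (N,)   relevance of each knowledge snippet w.r.t. the dialog history
    red : (N, N) symmetric pairwise redundancy scores
    Returns a positive semi-definite kernel D with D_ii = rel_i ** 2 and
    D_ij = beta * red_ij ** 2 for i != j.
    """
    while True:
        D = beta * (red ** 2)
        np.fill_diagonal(D, rel ** 2)
        if np.all(np.linalg.eigvalsh(D) >= -1e-8):
            return D
        beta *= 0.5   # shrink beta until D is positive semi-definite
```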
in the above equations using GPT2 language model, following past work To select B knowledge snippets out of N with a relevance-redundancy trade-off, we use a subset selection process named Determinantal Point Process (DPP) We build an N × N kernel matrix D, which is real, symmetric and positive semi-definite. The diagonal entries D ii are populated by the squared relevance score of the i-th knowledge REL i and the off-diagonal entries D ij are β × squared redundancy scores RED ij . We adjust β in such a way that D always remains positive semi-definite (more details in Choosing B-size submatrix from N -size D is a combinatorial problem and can become prohibitively costly when N is very high. Hence, we use a greedy method Upon selecting B knowledge snippets, we want to individually inject each knowledge snippet into x d to construct a candidate final response x f at inference time. Previous works have addressed the problem of unsupervised modification of already-generated text using gradient-based decoding where τ is the temperature hyperparameter, W is the output embedding matrix (shared with the input), and W z (t) ∈ R V (V is the size of the vocabulary). Following Majumder et al. (2021a), we define a knowledge fidelity objective that encourages x f to be minimally different from the knowledge snippet k. We achieve this by minimizing the cross entropy loss (CE) between knowledge tokens k (1) , . . . , k (T ) as labels and W z (1) , . . . , W z (T ) as the logits. We further notice that injected knowledge can influence the generation in such a way that it contradicts with responses uttered during previous turns. Hence, we also want x f to be entailed with the dialog history H. We build an entailment classifier θ(z, H) that predicts the probability of x f (ideally, the hidden representation z of x f ) entailing H. The classifier θ(z, H) is a bag-of-words classification layer with hidden states z from M and fine-tuned using the DNLI dataset Decoding. In the subsequent forward and backward passes, the hidden representation z is gradually perturbed via gradient ascent on the respective objectives. During backward pass, the objective with constraints is with hyperparameters α and λ. We use back-propagation to update z with the gradient ∇ z L(H, k; z) while the parameters of M remain fixed. The updated latent representations of z after the backward pass are denoted as z bw . A forward pass with M is required to regularize the hidden states z toward the original dialog model objective to obtain z fw . Corresponding to the t th token, the hidden states for the t + 1 th time step are computed via a weighted addition of backward and forward hidden states, i.e., z (t+1 where γ ∈ (0, 1) is a hyperparameter. During generation, we start by sampling the initial response x d with greedy decoding from M. The hidden states z (of x d ) are iteratively updated by alternate backward and forward passes. The final response is sampled as The number of iterations (= 5) and the γ (= 0.45) were chosen by maximizing the Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) in a greedy hyperparameter search. More details are in Appendix B. Several previous works often over-generate and use an additional ranking step in order to select the final candidate in unsupervised text generation For WoW, we use two current-best knowledge-grounded models, KGround Variants of POKI. 
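The greedy subset selection mentioned above can be sketched as a standard greedy MAP approximation over the DPP kernel: repeatedly add the snippet that maximizes the determinant of the kernel restricted to the selected set. The exact greedy variant used by the authors is not spelled out here, so this is an illustrative stand-in.

```python
import numpy as np

def greedy_dpp_select(D, B):
    """
    D : (N, N) positive semi-definite DPP kernel
    B : number of knowledge snippets to select
    Returns the indices of a relevant-but-diverse subset of size B.
    """
    selected = []
    for _ in range(B):
        best_i, best_det = None, -np.inf
        for i in range(D.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            det = np.linalg.det(D[np.ix_(idx, idx)])
            if det > best_det:
                best_det, best_i = det, i
        selected.append(best_i)
    return selected

# toy usage with 4 snippets, selecting B = 2
rel = np.array([0.9, 0.8, 0.7, 0.2])
red = np.array([[0, .9, .1, .1], [.9, 0, .1, .1],
                [.1, .1, 0, .1], [.1, .1, .1, 0]])
D = 0.3 * red ** 2
np.fill_diagonal(D, rel ** 2)
print(greedy_dpp_select(D, 2))   # indices of the selected snippets
```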
To investigate the impact of various decoding constraints in POKI, we consider the following two variants of POKI-w/o Entailment and w/o Knowledge (Kw) Fidelity ( § 3.2). In POKI, we use SimpleTOD as the base dialog model in goal-oriented scenarios and use BART (which is a state-of-the-art model for WoW) as the base dialog model in the knowledge-grounded scenario. For all variants of POKI, we use gradientbased inference for decoding the final response. Our primary goal is to generate responses enriched with relevant external knowledge. Arguably, a system which can effectively leverage additional knowledge at decoding time should generate more diverse responses. We measure percentage of distinct bigrams as Distinct-(D-2) MultiWOZ. Table WoW. Despite all systems for WoW use knowledge explicitly in the knowledge-grounded dialog generation task, Table We conduct a comparative human evaluation with 300 samples to evaluate the quality of generated dialog responses following ACUTE-Eval MultiWOZ. Table In POKI, entailment constraint mostly influences coherence whereas knowledge fidelity constraint is important for engagingness and interestingness. WoW. Table Kohinoor has a vibrant environment in the evening. They are best known for their starters. Do check them out. Also Indian sweets are great as desserts. I do not have an inexpensive restaurant that serves English food in the centre area. I can book a reservation for you at Kohinoor. The table will be reserved for 15 minutes. Do you have a location preference? I have several options for you. Asian cuisines such as Chinese or Indian cuisines are inexpensive. They are value for money since they are known for their great taste. I can book a Chinese or Indian restaurant near centre for you. 👧 : I need a place to eat that is cheap. room for improvement in terms of how knowledge utilized. A large gap in win percentages in favor of POKI for evaluating how 'humanlike' is a response when compared to state-of-the-art methods suggests knowledge injection leads to more natural conversation. Here too, both constraints show similar trends to MultiWOZ. Figure Qualitatively, as seen in Figure Relevant knowledge injection has the benefit of adding more justification to terse dialog outputs and hence influencing the task outcome positively. Mirroring observations from For goal-oriented dialog, we construct speculative goals (e.g. looking for entertainment options) manually from the ground truth for 300 dialog samples. Since we are not using the underlying databases, we made sure speculative goals do not require specific information (e.g. booking availability, flight information, etc.). For knowledgegrounded dialog, we provide the intended topic of Table discussion (e.g. science fiction) present in the data; the speculative goal here is to know more about, or to have an engaging conversation about the topic. Results. First of all, we find that POKI is unanimously preferred by users compared to the baseline during the user study. More importantly, we see that when the user successfully accomplished their goal, 84% of those times they found the additional knowledge helpful in the goal-oriented setting (MultiWOZ) as compared to a baseline (Rewriter) that did not use any external knowledge. Most importantly, POKI takes significantly fewer turns for users to accomplish the goal as compared to Rewriter implicitly indicating injected knowledge (we observe high correlation, 0.67) contributes toward more efficient conversations. 
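For reference, the diversity metric used above (Distinct-2, the percentage of distinct bigrams) can be computed as in the short sketch below; whitespace tokenization is an assumption of this illustration.

```python
def distinct_n(responses, n=2):
    """Ratio of unique n-grams to total n-grams over a set of responses."""
    total, unique = 0, set()
    for resp in responses:
        toks = resp.split()
        ngrams = list(zip(*[toks[i:] for i in range(n)]))
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

print(distinct_n(["the food was great",
                  "the food was really great"]))   # 5 / 7 ~ 0.71
```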
For the knowledge-grounded setting (WoW), both BART and POKI have access to external knowledge sources. However, 89% (compared to 70%) of success scenarios were directly influenced by the additional post-hoc knowledge. For knowledge-grounded dialog, a longer conversation is indicative of engagingness on a particular topic Performance of Knowledge Selection. The knowledge selection step in POKI acts an information bottleneck where the quality of the generated response directly depends on the quality of the selected knowledge Knowledge grounded dialog datasets such as Wizard-of-Wikipedia Improving the diversity of dialog responses by using diversity-promoting sampling has been explored in past work We propose a framework for unsupervised knowledge injection into dialog responses. We show that knowledge can be obtained post-hoc from any knowledge sources that can improve users' ability to reach their conversational goal more effectively. In future, our idea can be generalized to setups where external knowledge can justify model's predictions such as conversational recommendation. MultiWOZ. To compare with previous works, we use MultiWoz 2.0 following WoW For Wizard-of-Wikipedia, all baselines and the original dialog model for POKI use available paired knowledge present in the training data (not a part of our pipeline). However, POKI additionally uses the external knowledge snippets selected via DPP. We open-source our code at: We obtain the MultiWOZ 2.0 from the official release Network architecture For MultiWOZ, we use the SimpleTOD Hyperparameters POKI does not require any training since we perform gradient-based decoding at the inference time. For hyperparameters involved in the decoding stage, we maximize the Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) of the generated response in a greedy fashion to select the best values. For our best method, in objective function L, we use α as 1 and λ as 1. We keep generation length to be 100 to encourage longer generations. We train the entailment classifier using code from PPLM repository Our initial experiments suggests that that knowledge generated from PTLMs can be inappropriate (contains or toxic content) and misleading/nonfactual. Key-phrase extraction Given a sentence from the context, we first extract n-gram (n ∈ 1,2,3,4) key-phrases using YAKE (Yet-Another-Keyword-Extractor) Prompts We curated prompts inspired by various knowledge-seeking situations (such as for: more information, opinion, review) Human Evaluation We hired two Anglophone (Lifetime HIT acceptance % > 85) annotators for every test sample. Figure • Coherent: Which version is more consistent with the dialog history? • Engaging: Which version is more likely to hold your attention and make you want to hear more? • Interesting: Which version arouses your curiosity or tells you something new or useful? • Humanlike: Which version is more natural and personable? All differences in values from human evaluations are significant with p < 0.05 from bootstrap tests on 1000 subsets of size 50. A snapshot of our human evaluation interface is shown in Figure User Study For user study, we similarly recruited 60 Anglophone users who have at least high-school level of education and are comfortable with handling internet-based technologies. Each session (depending on the systems they interacted) lasted on an average 30 minutes (for MultiWOZ) and 60 minutes (for WoW) including on-boarding, performing actual task and answering post-task questions. 
Figure We do not foresee any immediate ethical concerns for our method as we use several constraints (less divergence from the extracted knowledge, consistency with the dialog context) that allow the generation to be restricted to the context. In general, we expect our dialog system to be engaging and accessible to the user. Since we use PTLMs as knowledge source, we inherit the general risk of generating biased or toxic language, which should be carefully filtered. In our work, we perform explicit filtering steps to make sure that the knowledge is appropriate. Furthermore, our selection step promotes more factually correct knowledge to be selected. However, the generations may incorporate biases that are already present in the dialog datasets due to crowd-sourced data collection. Finally, our generations are limited only to the English language. Hence we suggest that a system like ours should likely not be used as a 'black box,' but would best be used in a setting where its outputs can be 'audited'. Carbon footprint: Our system uses post-hoc knowledge injection which refrains from retraining newer dialog models to accommodate dynamically evolving external knowledge. This promotes green NLP applications
1,225
1,477
1,225
Multilingual Speech Translation from Efficient Finetuning of Pretrained Models
We present a simple yet effective approach to build multilingual speech-to-text (ST) translation through efficient transfer learning from a pretrained speech encoder and text decoder. Our key finding is that a minimalistic LNA (LayerNorm and Attention) finetuning can achieve zero-shot crosslingual and crossmodality transfer ability by only finetuning 10 ∼ 50% of the pretrained parameters. This effectively leverages large pretrained models at low training cost such as wav2vec 2.0 for acoustic modeling, and mBART for multilingual text generation. This sets a new state-ofthe-art for 36 translation directions (and surpassing cascaded ST for 30 of them) on the large-scale multilingual ST benchmark CoV-oST 2 (Wang et al., 2020b) (+6.4 BLEU on average for En-X directions and +6.7 BLEU for X-En directions). Our approach demonstrates strong zero-shot performance in a many-to-many multilingual model (+5.6 BLEU on average across 28 directions), making it an appealing approach for attaining highquality speech translation with improved parameter and data efficiency.
Recent advances in pretraining over unlabeled data and then finetuning on labeled data leads to significant performance improvement in text understanding and generation tasks Our contributions are as follows: • We propose a simple and effective approach to combine pretrained single-modality modules to perform speech-to-text translation. With minimal architecture change, we add a crossmodal adaptor to bridge the length discrepancy between audio encoder output and text decoder input. Our approach can also perform multi-task finetuning with both speech-to-text translation and text-to-text translation tasks where we find joint training with the latter brings further gains. • We present an efficient transfer learning strategy by only finetuning the LayerNorm and Attention (LNA) parameters of pretrained models. This approach is not only parameterand data-efficient but also effective for zero-shot crosslingual transfer to unseen languages (train on A → B, test on A → C and C → B). • Our approach is also effective for zero-shot multilingual translation (train on A → B and B → C, test on A → C), which provides an efficient approach for many-to-many speechto-text translation without dependency for parallel data for every direction. • Using a pretrained audio encoder (wav2vec We describe our approach in Section 2, namely pretrained models, length adaptor, LNA finetuning and joint speech-text finetuning as is illustrated in Figure
Our model leverages a pretrained wav2vec 2.0 We add a lightweight adaptor module in between encoder and decoder to better align the two mod-ules pretrained with different modalities. The adaptor module performs projection and downsampling to alleviate length inconsistency between the audio and text sequences. Specifically, we use a stack of n 1-dimensional convolutional layers with stride m to shrink the speech sequence (encoder output) by a factor of m n . Instead of finetuning all parameters in pretrained models, we propose parameter efficient finetuning strategy (LNA) of only finetuning the layer normalization (LayerNorm) and multi-head attention (MHA) parameters. LNA is motivated to bridge the discrepancy between pretraining and downstream (ST) task, which we hypothesize are accounted by the following parameters: LayerNorm parameters from pretrained models were trained based on the statistics of the data used in pretraining and thus need to be adapted to downstream tasks during finetuning. The importance of finetuning LayerNorm has been observed in multilingual (text-only) translation Multi-task learning has been shown as an effective approach to improve the performance of the speech translation task using other related tasks, such as MT and ASR We evaluate our proposed models on two largescale multilingual speech translation benchmarks. Statistics of the datasets and implementation details are reported in the A.2 and A.3. CoVoST 2 We evaluate the following instantiation of the proposed method which is referred to as XMEF (Cross-Modal Efficient Finetuning). Encoder. We initialize the encoder using the opensourced 1 wav2vec 2.0 large architecture pretrained on unlabelled English-only (XMEF-En) audio from LibriVox Joint Training. Two encoders are initialized with the pretrained mBART encoder and wav2vec 2.0 encoder mentioned above, and are used for text and speech input respectively. The last 12 transformer layers in the wav2vec encoder are replaced with 12 mBART encoder layers. Parameters in those 12 layers are shared between the two encoders during joint training From scratch: The first baseline trains a sequenceto-sequence model with Transformer architecture without any pretraining.For CoVoST 2 experiments, we use the same model configuration as is provided by ASRPT+Multi: Pretraining encoder on ASR task was shown to be an effective method to improve speech translation and accelerates convergence First, we evaluate the transfer learning performance of finetuning the entire pretrained model as well as the proposed efficient finetuning (LNA). To separate the additional crosslingual transfer learning from multilingual finetuning, we evalute on bilingual ST (En-De and De-En in CoVoST) task. We first evaluate LNA-Minimalist (69M params), comparing to finetuning all parameters and only top layers which were found effective in transfer learning in NLP tasks with pretrained BERT To assess transfer ability from encoder pretrained on English to other (speech) input languages, we evaluate the performance of XMEF-En on CoV-oST 2 De-En ST task. We investigate the role of finetuning encoder self-attention (LNA-ESA) in facilitating crosslingual transfer. We compare to baselines of finetuning the entire encoder (All), and finetuning feature extractor which are commonly used in adaptation in ASR Results are summarized in Figure Next, we evaluate XMEF's crosslingual transfer performance from multilingual finetuning. To precisely measure the transfer capability, we evaluate the zero-shot setting, i.e. 
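The length adaptor and the LNA finetuning strategy described in this section can be illustrated with the short PyTorch sketch below: a stack of n strided 1-D convolutions shrinks the speech-encoder output by a factor of m^n, and a helper freezes everything except LayerNorm and attention parameters. Kernel size, activation, and the parameter name-matching heuristic are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class LengthAdaptor(nn.Module):
    """Stack of n 1-D convolutions with stride m; shrinks length by m ** n."""
    def __init__(self, dim=1024, n_layers=3, stride=2, kernel=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel, stride=stride, padding=kernel // 2)
             for _ in range(n_layers)])

    def forward(self, x):                 # x: (batch, time, dim)
        x = x.transpose(1, 2)             # -> (batch, dim, time)
        for conv in self.convs:
            x = torch.relu(conv(x))
        return x.transpose(1, 2)          # -> (batch, time / stride**n, dim)

def mark_lna_trainable(model):
    """Freeze all parameters except LayerNorm and multi-head attention."""
    for name, p in model.named_parameters():
        p.requires_grad = any(k in name for k in
                              ("layer_norm", "layernorm",
                               "self_attn", "encoder_attn"))

adaptor = LengthAdaptor()
out = adaptor(torch.randn(2, 800, 1024))
print(out.shape)   # torch.Size([2, 100, 1024]): length reduced by 2 ** 3
```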
finetune XMEF-En with parallel ST data from multiple languages, and evaluate on an unseen language. We study the transfer performance in source (speech) and target (text) separately. Source-side (speech) transfer. We evaluate whether the proposed approach enables positive crosslingual transfer to translate speech from unseen languages in Table (+1.9 BLEU) the previous state-of-the-art for this direction which is a supervised multilingual model. Target-side (text) transfer. Table We evaluate the performance of XMEF with multilingual finetuning on all 36 translation directions in CoVoST 2, respectively all 21 languages into English (many-to-one) and from English into 15 languages (one-to-many). Many to one. Consistent with the observation of source-side crosslingual transfer in Sec 4.1, XMEF-En perform very well on Romance, Germanic and Slavic language families in both high-resource ( ≥ 100 hours training data) and low-resource directions (7 ∼ 44 hours training data) as is summarized in Table One to many. Table improve with LNA finetuning of the decoder were never seen during mBART pretraining. In the many to one case (Table 3), language pairs with reasonable amount speech training data (+ 18 hours) and large amount of parallel text data (+1 million sentences) ("Fr-En", "De-En", "Es-En", "It-En", "Ru-En" and "Fa-En"), outperform the corresponding single task trained models and achieve state-of-art results . However, if the amount of speech data is too small (10 hours or less), joint training is ineffective and may even make the performance worse. In one to many case ("En-X"), where there are 364 hours English audio data for training, joint training improves the results further by another 0.6 BLEU (Table Finally, we evaluate how the proposed approach performs in zero-shot multilingual translation (translating X → Y after training on X → En and En → Y. We apply LNA-D multilingual finetuning using En-X and X-En training data only from the Europarl corpus. Table Ablation on LNA Finetuning. In Table For adapting to a single language pair downstream ST task (English-German), we find finetuning self attention (+SA) parameters in the decoder did not bring further improvement while significantly increasing the amount of parameters to train. Ablation on Length Adaptor. We study whether the performance is sensitive to downsampling ratio in the adaptor module. We conduct the experiments on CoVoST 2 many-to-one experiments, and report perplexity on dev set of three directions with diverse input languages: German-English (De-En), Chinese-English (Zh-En) and Estonian-English (Et-En). Table Translation. Sequence-to-sequence based speech translation has shown very good potential over the traditional cascaded system Pretraining and Finetuning. Our work is motivated by the recent success of self-supervised learning for NLP and speech processing applications Our work belongs to the second category of efficient finetuning without adding extra parameters (e.g. adaptor modules). Empirical studies shows that finetuning the final layers of BERT account for most of the quality gains on downstream tasks We proposed a simple and effective approach to leverage pretrained single-modality models (such as wav2vec 2.0, mBART) to perform speech-totext translation. On two large-scale multilingual speech translation benchmarks, our approach advances the state-of-the-art (+6.6 BLEU on average for 36 translation directions in CoVoST 2, and +5.6 BLEU for 28 translation directions in Europarl). 
We provide an efficient finetuning strategy which is not only data-and parameter-efficient, but also demonstrates crosslingual transfer ability by only finetuning 10 ∼ 50% of the parameters of large pretrained models.
1,069
1,442
1,069
Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing
Computational inefficiency in transformers has been a long-standing challenge, hindering the deployment in resource-constrained or realtime applications. One promising approach to mitigate this limitation is to progressively remove less significant tokens, given that the sequence length strongly contributes to the inefficiency. However, this approach entails a potential risk of losing crucial information due to the irrevocable nature of token removal. In this paper, we introduce Leap-of-Thought (LoT), a novel token reduction approach that dynamically routes tokens within layers. Unlike previous work that irrevocably discards tokens, LoT enables tokens to 'leap' across layers. This ensures that all tokens remain accessible in subsequent layers while reducing the number of tokens processed within layers. We achieve this by pairing the transformer with dynamic token routers, which learn to selectively process tokens essential for the task. Evaluation results clearly show that LoT achieves substantial improvement on computational efficiency. Specifically, LoT attains up to 25× faster inference time without a significant loss in accuracy 1 .
The advent of Transformer One typical approach to tackle this challenge is to reduce the number of tokens processed within transformer layers In this paper, we propose Leap-of-Thought (LoT) LoT offers several advantages compared to the permanent removal. Primarily, LoT has the potential to mitigate the risk of losing crucial information related to the task, given that the decisions for each token are recoverable in subsequent layers. In addition, LoT provides a higher degree of freedom in token reduction, thereby facilitating the exploration of a diverse search space for greater efficiency, which is similarly observed in network compression To substantiate the efficacy of LoT, we perform evaluations across extensive experiments. Comprehensive results demonstrate that the model employing LoT reveals substantial speedup gains without a significant loss in task accuracy. Additionally, through the analysis of LoT, we provide justification for the efficacy of the dynamic token routing mechanism and illustrate how LoT achieves greater efficiency. In summary, the contributions of the paper include the followings: • We introduce Leap-of-Thought, a novel token reduction approach that enables dynamic token routing within the transformer, which reduces the processed tokens within each layer while preserving crucial information. • We propose a gradient-guided training to steer the dynamic token router towards making more informed decisions about whether the tokens should be processed or leaped over. • We demonstrate the efficacy of LoT through extensive experiments and analysis on various benchmarks, establishing LoT as a promising approach for the token reduction.
In this section, we mainly review the methods that adaptively control the computation in pre-trained language models. Recent approaches can be classified into two categories: width-wise and depthwise approaches. The former focuses on reducing the number of tokens processed by transformers, while the latter aims to decrease the number of computational layers. Figure Given that the computational costs of the transformer are heavily influenced by the length of the input sequence As such, TR-BERT Figure card input tokens, which might lead to a potential loss of crucial information. Moreover, the search space for token removal is proportionally constrained by the number of remaining tokens, thereby restricting flexibility in optimizing reduction strategies. In contrast, since LoT allows the model to revisit all tokens, the crucial information can be better preserved within the transformer. Besides, the ability to revisit tokens endows LoT with a higher degree of flexibility in exploring diverse reduction space that potentially offers greater efficiency. The principle behind depth-wise approach is to allocate minimal layer computations to easy samples while dedicating more layer computations to difficult samples (Figure Instead of implementing an exit strategy, Layer-Drop While these works allow adaptive computation on different inputs to achieve efficiency, the level of granularity in the depth-wise approach is constrained by the number of layers. This could result in the sub-optimal efficiency and difficulty in assigning fine-grained computations to a diverse set of samples. 3 Leap-of-Thought: Dynamic Token Routing for Accelerating Transformer In this section, we elaborate on Leap of Thought (LoT), which dynamically routes tokens across layers to improve computational efficiency. To this end, we introduce a dynamic token router in learning to decide which token should be processed in the current layer or leaped forward to the subsequent layer (Section 3.1). To ensure that the token router makes well-informed decisions, each token router is trained by a gradient-based token importance (Section 3.2). The overall process of LoT is illustrated in Figure In order to enable tokens to leap across transformer layers, we introduce a dynamic token routing mechanism that adaptively selects tokens for utilizing in the current layer, while pushing the unused tokens forward to subsequent layers for potential use. Dynamic Token Router. To initiate the routing mechanism, we start by the definition of a dynamic token router, a lightweight module located between every transformer layers. Each router takes token representations as the input (i.e., embedding or outputs from the previous layer) and learns to produce a binary decision for each token: "1" denotes that it is processed at the current layer, and "0" denotes that it leaps to the next layer. The dynamic token router is formulated as follows: where w is a token representation, W and b denote the weights and biases for linear transformation, respectively, σ 1 and σ 2 indicate the GeLU activation and softmax function, respectively, and LN (•) denotes the layer normalization We then derive the routing decision based on the prediction of the router. where the subscript of u(w) represents the probability for each actions (i.e., process or leap the layer). Routing Tokens. Once the token router is established, the routing decision is applied to all tokens before they are fed into the transformer computation. 
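The dynamic token router defined above is a lightweight module combining linear transformations, GeLU, layer normalization, and a softmax over the two routing actions. The sketch below is a minimal PyTorch rendering under an assumed stacking order and hidden size; it is meant to convey the shape of the module, not the exact parameterization.

```python
import torch
import torch.nn as nn

class TokenRouter(nn.Module):
    """Maps each token representation to a distribution over {process, leap}."""
    def __init__(self, dim=768, hidden=128):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, 2)   # [p(process), p(leap)]

    def forward(self, w):                 # w: (batch, seq, dim)
        h = torch.nn.functional.gelu(self.fc1(self.norm(w)))
        return torch.softmax(self.fc2(h), dim=-1)

router = TokenRouter()
u = router(torch.randn(2, 16, 768))
keep = u[..., 0] > u[..., 1]              # hard routing decision per token
print(u.shape, keep.shape)                # (2, 16, 2) (2, 16)
```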
Formally, let the token representations in the l-th layer be denoted as w n-1 , where n is the length of an input. The routing decision is made for each token where R (l) (•) is the routing function on the l-th layer, ⊙ indicates the Hadamard product, and c (l) is the context vector used to make the routing decision by considering the current context information. Notably, we employ the [CLS] token (i.e., w (l) 0 ) as the context vector, given that it serves as a contextual memory, being retained throughout all layers. However, training the router in an end-to-end manner is non-trivial due to the non-differentiable nature of the routing function. To circumvent this, we utilize the Gumbel-softmax reparameterization where g is a sample from a Gumbel distribution, and τ is the temperature parameter controlling the smoothness of the approximation. During the backward pass, we replace the gradient of the nondifferentiable function with that of the Gumbelsoftmax using straight-through-estimator Token Merging. While the routing ability allows the model to preserve crucial information, maintaining the minimal information of unused tokens can be beneficial. We thus introduce token merging mechanism. Formally, the merged token is constructed as follows: ) where 1[x] is the indicator function that returns one if the statement x is true; otherwise zero, and m is the number of tokens to be leaped. The merged token is appended to the input and only utilized in the self-attention layer. In the next layer, the token is replaced with a new merged token based on the new routing results (i.e., R (l+1) ). To steer the token router towards making informed decisions, we also introduce a gradient-guided router training, which directly provides the supervision of the significant tokens to the routers. Guidance Derivation. As a guidance for the router, the gradients of the token representations are leveraged, given that the gradient information can encode the sensitivity of the output to the input tokens, providing insight into which tokens are being more influential for prediction Based on the gradient-weighted token representations, we derive the importance by the magnitude of each CAT. Specifically, we aggregate the token importance from all layers since it can provide a better identification for the important tokens Lastly, we need to identify which range of token importance should be considered as significant. To this end, we simply select the tokens whose cumulative sum of their sorted and normalized importance scores falls below a pre-defined threshold p, similar to the candidate set of nucleus sampling Training Objective. The dynamic token routers are trained to process only the significant tokens which are selected from the above procedure. Let ŵi be the selection decision for the i-th token given the selected tokens with a value of one otherwise zero, the objective for the router is formulated as follows: ), (8) The overall objective function for the downstream task can be formulated as follows: where L task is the task-specific loss function (e.g., cross entropy for the classification), and a harmony coefficient λ to balance the two loss terms. In this section, we evaluate the proposed method on a series of downstream tasks. We specifically demonstrate that introducing the leap action results in a more favorable computational efficiency compared to the prior methods. We perform diverse tasks to verify the general applicability. 
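To make the hard process/leap decision trainable, the paper relies on Gumbel-softmax with a straight-through estimator and merges the leaped tokens into a single auxiliary token for self-attention. The sketch below is a minimal PyTorch rendering of that step; the plain averaging used for the merged token and the omission of the [CLS] context vector are our simplifying assumptions, since the corresponding equations are not fully reproduced above.

```python
import torch
import torch.nn.functional as F

def route_and_merge(tokens: torch.Tensor,
                    router_logits: torch.Tensor,
                    tau: float = 1.0):
    """Straight-through Gumbel-softmax routing with token merging.

    tokens:        (batch, seq_len, hidden) token representations w^(l)
    router_logits: (batch, seq_len, 2) unnormalized router scores, where
                   channel 1 = "process" and channel 0 = "leap"
    Returns the routed tokens (leaped positions zeroed via the Hadamard
    product), the hard 0/1 decisions, and one merged token summarizing the
    leaped tokens (appended to the self-attention input in the paper).
    """
    # hard=True yields one-hot decisions in the forward pass while the
    # backward pass uses the soft Gumbel-softmax gradient (straight-through).
    decisions = F.gumbel_softmax(router_logits, tau=tau, hard=True)[..., 1]  # (B, L)

    # Hadamard product: keep processed tokens, zero out leaped ones.
    routed = tokens * decisions.unsqueeze(-1)

    # Merge the leaped tokens into a single token; a plain average is our
    # assumption -- the paper's exact merging formula is not shown above.
    leap_mask = 1.0 - decisions
    num_leaped = leap_mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    merged = (tokens * leap_mask.unsqueeze(-1)).sum(dim=1) / num_leaped     # (B, H)

    return routed, decisions, merged
```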
These tasks involve scenarios where the model needs to comprehend a single sequence, as well as cases that requires understanding the semantic relationship between multiple sequences. For the single input tasks, we use SST-2 Following the prior work, we use the pre-trained BERT base Following the recent prior work PoWER-BERT We implement the proposed method using PyTorch. For the hyper-parameters associated with LoT (i.e., threshold p in Eq. ( The hyper-parameters are listed in the Appendix. Singe input tasks. In Table Multiple input tasks. We also highlight the results on the tasks that involve pairs of distinct sentences in Table Trade-off. To confirm the better computational efficiency of LoT, we show the trade-off curves between task accuracy and speedup gains on two representative datasets in Figure In this section, we analyze the behavior of LoT in detail. We specifically focus on how LoT achieves a greater efficiency than other baselines. In Table We also analyze the routing distribution across different layers for various datasets. Figure The token routers in LoT are supervised directly from the aggregated gradient information. To verify the significance of the supervised router training, we compare LoT with an alternative version that learns to decide the leap action without the guidance. To implement this baseline, we replace the guidance loss (i.e., Eq. ( Lastly, we examine the behavior of LoT through case studies. Figure In this work, we have proposed Leap-of-Thought (LoT), a novel token reduction strategy that enables the dynamic routing of tokens within the transformer layers. Unlike the previous works that permanently remove tokens, LoT learns to decide whether the given token should be processed in the current layer or leaped forward to the next layer. This ensures that all tokens remain accessible in subsequent layers while reducing the number of tokens processed within layers. Through the guidance from the gradient information, each router learns to process only the significant tokens to the task while bypassing the less contributing tokens. The comprehensive evaluations have convincingly supported the superiority of the proposed method by showing that LoT achieves substantial speedup gains over state-of-the-art methods with the comparable task accuracy. The analysis also have strongly supported that introducing the leap action leads to the substantially improved efficiency. While the proposed method allows transformerbased pre-trained models to achieve greater computational efficiency, there are a few potential limitations. -Interpretability Several existing methods for interpretability, such as layer-wise analysis -Router Overhead In comparison to the vanilla backbone, LoT employs token routers to perform the dynamic computation, which imposes extra model parameters and computation overhead, similar to other baselines Since LoT requires the dynamic token routers in the transformer, it imposes additional computation cost on our method. This is why we design the router to be a lightweight module, which takes only 2% of the FLOPs from the entire model. Here, we analyze the trade-off between the capacity of the router and total speed-up. Specifically, we set the target performance as fixed and evaluate the total speedup gains with the varying capacity 7 of the router. Figure To verify the scalability of LoT, we performed the additional experiments on smaller model (i.e., Tiny-7 For the capacity variation, we adjust the dimension of hidden layers of the routers. 
BERT). To assess the speedup gains in a specific computational environment, we measured the inference time on a single NVIDIA V100 GPU. We observed that the real-time speedup gain (2.2x; Base: 37 ms, LoT: 17 ms) is consistent with the gain in FLOPs (2.3x). This observation aligns with previous findings.
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning
In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets. To address this issue, researchers have proposed various adaptation approaches. Prompt-based tuning is arguably the most common way, especially for larger models. Previous research shows that adding contrastive learning to prompt-based fine-tuning is effective as it helps the model generate embeddings that are more distinguishable between classes, and it can also be more sample-efficient as the model learns from positive and negative examples simultaneously. One of the most important components of contrastive learning is data augmentation, but unlike computer vision, effective data augmentation for NLP is still challenging. This paper proposes LM-CPPF, Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models, which leverages prompt-based few-shot paraphrasing using generative language models, especially large language models such as GPT-3 and OPT-175B, for data augmentation. Our experiments on multiple text classification benchmarks show that this augmentation method outperforms other methods, such as easy data augmentation, back translation, and multiple templates.
Pre-trained language models (PLMs) are trained on large-scaled corpora in a self-supervised fashion. They have fundamentally changed the NLP community in the past few years by achieving impressive results in various Tasks By the introduction of GPT-3 Prompt-based fine-tuning is a method for adapting PLMs to specific tasks or domains by providing a prompt Building on the success of LM-BFF and considering contrastive learning's promising results both in computer vision In this paper, we show that while SCL at the feature space can be beneficial, the use of different templates can limit the full potential of this approach. We propose LM-CPPF (Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models), in which we integrate the knowledge of LLMs like GPT-3 and OPT-175B
LLMs like GPT-3 Paraphrasing is the task of expressing the same meaning with different words or structures. It can be used to create training data with increased diversity and naturalness for NLP tasks, such as text classification Background Contrastive learning's success relies on data augmentation, which creates new views of the input data. Contrastive learning has been utilized for various tasks in deep learning Few-shot paraphrasing Paraphrasing is one of the best methods for data augmentation in NLP. One of the most popular approaches for paraphrasing is back-translation (BT) To avoid violating the prompt-based fine-tuning settings, we do not include any additional task data in generating our paraphrases. Following the fewshot setting in LM-BFF, we assume to have access to a PLM M , datasets D train , and D test with label space Y where there are only K = 16 examples per class in D train . We use this setting for both promptbased few-shot paraphrasing and fine-tuning. To 2 OPT-175B: opt.alpa.ai and GPT-3: openai.com/api generate paraphrases, excluding the one sample that we want to paraphrase, we use QuillBot Contrastive prompt-based fine-tuning LM-CPPF consists of two steps. The first step involves calculating the Masked Language Modeling (MLM) loss by using the target sentence in the given template, the specific demonstrations in the prompt, and the verbalizer matched with the target sentence's label. We calculate the supervised contrastive loss in the second step by comparing the target prompt with another sample with the same template but different random demonstrations. This comparison sample can be in the same or a different class as the target prompt. When the comparison sample belongs to a different class, it is randomly sampled from the dataset. However, in cases where the comparison sample belongs to the same class, an alternative approach is employed. This involves either selecting another sample from the same class Evaluation datasets and protocol Our method is evaluated on six different classification tasks from LM-BFF For the fine-tuning of GPT-2 specifically for paraphrasing, we utilized the ParaNMT-50M This section presents the results of our fine-tuning approach using paraphrasing on various NLP tasks. As shown in Table We also compare the effect of using GPT-3, OPT-175B, and GPT-2 as our language model for fewshot paraphrasing. We did two experiments with GPT-2 large: (I) Using a pre-trained version of GPT-2 where the weights are not tuned at all (II) Fine-tuned GPT-2 where the model has been finetuned on the ParaNMT-50M dataset. The results in Table In this section, we present an experimental comparison of the performance of the few-shot paraphrasing approach and other data augmentation methods, including BT and EDA. The results are shown in than EDA. We believe the reason is that BT can be more effective for longer sequences because longer sequences usually contain more context and nuanced meaning. Moreover, EDA employs additional knowledge from another PLM in certain actions, such as synonym substitution, similar to BT and few-shot paraphrasing. The few-shot paraphrasing approach introduced in this work outperforms both BT and EDA. This confirms that using PLM's knowledge properly in paraphrasing is an effective and efficient data augmentation method. In few-shot paraphrasing, we instruct the model to generate paraphrases that differ in lexicalization and sentence structure. 
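To make the few-shot paraphrasing step concrete, the following sketch assembles a prompt from demonstration pairs and hands it to an LLM. The template wording is only illustrative (the paper's actual demonstration and instruction templates are given in its appendix), and `llm_generate` is a placeholder for whatever GPT-3 / OPT-175B completion call is available, not an actual API of those services.

```python
from typing import List, Tuple

def build_paraphrase_prompt(demos: List[Tuple[str, str]], target: str) -> str:
    """Assemble a few-shot paraphrasing prompt for a large LM.

    demos:  (original, paraphrase) pairs used as in-context demonstrations
            (in the paper these come from the 16-shot training set, with the
            demonstration paraphrases produced outside the task data).
    target: the sentence we want the LLM to paraphrase.

    The exact template wording is not reproduced here; this layout is only
    illustrative of the instruction-plus-demonstrations style.
    """
    lines = ["Paraphrase the following sentences."]
    for original, paraphrase in demos:
        lines.append(f"Original: {original}")
        lines.append(f"Paraphrase: {paraphrase}")
    lines.append(f"Original: {target}")
    lines.append("Paraphrase:")
    return "\n".join(lines)

# `llm_generate` is a stand-in callable (prompt -> completion string); it is
# hypothetical and not an actual GPT-3 or OPT-175B client function.
def paraphrase(target: str, demos: List[Tuple[str, str]], llm_generate) -> str:
    prompt = build_paraphrase_prompt(demos, target)
    return llm_generate(prompt).strip()
```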
As the heart of our method is the few-shot paraphrase generation done by LLMs, we investigate the impact of different paraphrasing prompt demonstrations and instruction templates on the performance of our model. Table Our experiments demonstrated the effectiveness of using few-shot paraphrasing as a data augmentation method for contrastive prompt-based fine-tuning of PLMs. It outperformed other data augmentation methods in text classification tasks, such as EDA, multiple templates, and back translation. We also found that our approach is effective with GPT-3 or OPT-175b models in generating paraphrases. Overall, LM-CPPF improves the performance of LM-BFF by large margins using contrastive learning applied on paraphrases generated by LLMs. Our approach relies on the performance of the fewshot paraphrasing. This results in two limitations for our approach. One limitation is the difficulty in accessing GPT-3 and OPT-175b models. These models currently need to be more widely available. OPT-175B has a free version but it is very slow. Another limitation is the need for annotated demonstrations for few-shot paraphrasing. While there are available models and tools, like QuillBot, that can be used for this purpose, their quality is not comparable to GPT-3 and OPT-175b. This can limit the power of these tools in our approach. Using human knowledge to paraphrase the demonstration can help these large models generate high-quality paraphrases but it is expensive. We used a learning rate of 1e -5 for MLM loss like LM-BFF. Although contrastive learning algorithms often perform better with larger batch training, due to resource limitations, we had to use half the batch size suggested in We show the batch size and learning rate for SupCon in Table We fine-tuned with a batch size that fits into GPU memory and is divisible by the total number of examples in the task. Experiments were conducted on one NVIDIA RTX-3090 with 24 GB memory using the RoBERTa-base model. Furthermore, as per LM-BFF, we fine-tuned for a maximum of 1000 steps. For the GPT-2 experiments in Table The primary prompts utilized for each task in our experiments are displayed in Table To find the best prompt for paraphrasing, we checked different corpus available online and found out how the paraphrasing examples are introduced. We generated our prompts by using this information and our manual modification in these templates. In this demonstration prompt, we did not provide any explanations or descriptions for the specific transformation applied to the input to produce the output. Instead, we labeled the original sample and its paraphrase. For instance, we used the token In instruction for prompts, we provided examples and simple instructions to the language models. The instructions were used to ask the model to generate paraphrases before presenting them with examples. Table x in 2 =Concat(T (Par(sent)), T (demo in 2 )) ▷ MLM Learning: (1) where the weight vector of the MLM head is denoted by w. In LM-BFF, the authors add demonstrations to the input x prompt to improve the model's understanding of verbalizers. As a result, the input to LM-BFF is in the following form: T (x in ) ⊕ T (x 1 in , y 1 ) ⊕ ... ⊕ T (x k in , y k ) (2) where T (x i in , y i ) illustrates the i-th demonstration in the template mathcalT with where the actual verbalizer of the samples replaces the L M LM = (x in ,y)∈D train -log[p(y|x in )] (3) Supervised Contrastive Loss. 
Supervised Contrastive Learning is a specific form of contrastive learning where x ′ 1 and x ′ 2 are the augmented version of the input batch x and y is the actual label of the batch. To use SupCon on multiple views of an input text, we first need to obtain two views of the text: x in 1 = T (sent) ⊕ T (demo 1 ) ⊕ T (demo 2 ) (5) x in 2 = T (P ar(sent)) ⊕ T (demo 3 ) ⊕ T (demo 4 ) (6) where x in 1 is the same as x prompt+demo in LM-BFF and T is a function that formats the sentence according to a specific template. Instead of using a new template in which the newly generated sample does not provide a new perspective, we use the few-shot paraphrasing (P ar) function. Also, verb stands for the verbalizer used for the actual label of the sample. Now using Equation 4 on two views, we can calculate the total loss: Algorithm D.1 shows an overview of our method which uses contrastive few-shot fine-tuning with few-shot paraphrasing. It is important to mention that learning from L SupCon requires one additional forward and backward pass, which increases the computational cost by a factor of 1.5. However, the cost is still the same as
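Equation 4 (the supervised contrastive loss) is referenced above but not reproduced. The sketch below gives a standard SupCon formulation over a batch that contains both views of each example, i.e., the original template with demonstrations and its paraphrased counterpart; the features would typically be the encoder representations of the prompts (e.g., at the mask position), though that choice and the way this term is weighted against the MLM loss are not spelled out in the text above.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over a batch of prompt embeddings.

    features: (N, D) embeddings; the batch holds both views of each example
              (original prompt with demonstrations and its paraphrase).
    labels:   (N,) class labels; same-class examples are positives.
    """
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                      # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other examples (exclude self-similarity)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)    # avoid -inf * 0

    # average log-probability over each anchor's positives
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    return loss.mean()
```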
Deep Neural Model Inspection and Comparison via Functional Neuron Pathways
We introduce a general method for the interpretation and comparison of neural models. The method is used to factor a complex neural model into its functional components, which are comprised of sets of co-firing neurons that cut across layers of the network architecture, and which we call neural pathways. The function of these pathways can be understood by identifying correlated task level and linguistic heuristics in such a way that this knowledge acts as a lens for approximating what the network has learned to apply to its intended task. As a case study for investigating the utility of these pathways, we present an examination of pathways identified in models trained for two standard tasks, namely Named Entity Recognition and Recognizing Textual Entailment.
Interpretation of neural models is a difficult task because the knowledge learned within neural networks is distributed across hundreds of thousands of parameters. Interpreting the significance of any individual neuron is tantamount to reconstructing a forest based on a single pine needle. More specifically, the contribution of each individual neuron is a minuscule part in the overall representation of the learned solution, and the mapping between neurons and function may be many-to-many This method, which can be applied simply in a purely post-hoc analysis, independent of the training process, can enable both understanding of individual models and comparison across models. The interpretation process enables investigation of which identified functional groups correspond to linguistic or task level heuristics that may be employed in well understood non-neural methods for performing the task. Furthermore, it enables comparison across very different architectures in terms of the extent and the manner in which each architecture has approximated use of such knowledge. In so doing, the method can also be used to formulate explanations for differences in performance between models based on relevant linguistic or task knowledge that is identified as learned or not learned by the models. This approach builds on and extends prior work using linguistic and task knowledge to understand the behavior and the results of modern neural models In the remainder of the paper we review common techniques for network interpretation followed by a detailed description of the neural pathways approach. Next, we apply the neural pathways approach to previously published neural models, namely models for the task of named entity recognition (NER)
Our work falls under the broad topic of neural network interpretation. Recently, in this area of research a wide variety of models have been the target of investigation, including additive classifiers We observe that neural interpretation approaches fall within several broad categories: visualizations and heatmaps Recent attempts to understand the functioning of trained neural models have limited themselves to investigations of the function of individual neurons or individual architectural components. An early way to probe the function of target components, as More recently, While these approaches have mainly focused on explaining the predictions and performances of a single network at a time, few if any prior attempts have been made to use these techniques for comparison across different network architectures, as we do in this paper. Many previous approaches have analyzed individual neurons or architectures of specific neural networks with gradient methods As this is an interpretation method, there is an assumed set of information about the model, the dataset, and the task that must be known in order to apply the techniques effectively. Namely, there should be a reference set of heuristic knowledge, either at the linguistic or task level, that is associated with the dataset on an instance-by-instance level for at least some subset of the data. The differences in predictions for the task are used as the metric of interest. This is a binary value for each data instance where it is 1 if the two models did not produce the same response and 0 otherwise (correct or not). Neurons from across layers were used for the NER task analysis. Task Knowledge: For our external knowledge, we use a set of features inspired by The proposed neural pathways method is a post-training analytic approach, and thus it requires the existence of pretrained models, that will be the target of the interpretation process. This stands in contrast to previous co-training approaches, where the mechanism for interpretation is trained simultaneously with the networks that are of interest. Task Knowledge: Our interpretation method is built on the assumption that the researcher has external knowledge of the task that their model is being applied to. This can be as straightforward as simply having a feature engineered baseline, as with our named entity recognition example (Section 4.2). However, it can also be as nuanced as having access to an analysis of the types of required knowledge to accurately predict certain instances in the data, as in our recognizing textual entailment example where we use an alternate validation set for the MultiNLI corpus where subsets have been earmarked as of interest for specific kinds of task and linguistic knowledge (Section 4.1). The external knowledge that is brought to the interpretation process will directly affect what conclusions can be drawn from the neural model as this method does not generate new knowledge, but validates the relevance of external knowledge for explaining network function. If the knowledge brought to the process is only partial, then only partial understanding of network function will be possible. However, as one iterates through the interpretation process, the potential relevance of additional knowledge may emerge, and the process can be repeated with the expanded set. This is an advantage of not requiring the interpretation mechanism to be trained along side the model in question. 
Extracting Activations: As a preparatory step for the interpretation process, an activation matrix is constructed where the columns represent individual neurons, the rows represent instances, and the value of each cell is the activation of the associated neuron in the associated instance. Part of this method's flexibility is that the set of probed neurons can be arbitrarily large or small. This way, the sets can be specified to analyze the pathways within certain subsections of the model or in the model as a whole. This flexibility allows researchers to ignore parts of the model that may already be well explained by other neural interpretation techniques (e.g. low-level feature extraction in convolutional neural networks in image recognition, or attention heatmaps). For our analysis, we selected the number of pathways for each model so that they explain ≈ 75% of the total variance in the model. This number was chosen arbitrarily as a balance between the total variance explained by the dimensionality reduction and the quantity of pathways required. Further experimentation may reveal an optimal balance. For the entailment models, the total variance explained for the decomposable attention model was 76.9% over 15 pathways and for the BiLSTM encoder model variance explained was 76.5% over 175 pathways. This result clearly shows that the representation learned by the decomposable attention model has significantly more internal coherence as compared to the BiLSTM encoder. For the NER models, 74.5% of the variance was explained for the CNN-BiLSTM-CRF with 40 pathways and 75.1% of the variance was explained by 35 pathways in the BiLSTM-BiLSTM-CRF. This shows a that both models have similar amounts of observable structure within them. Entailment: From the linear comparisons for the decomposable attention model, three pathways had a correlation coefficient greater than 0.25 (p < 0.001). However, in the LSTM model, there were NER: Similarly, for the NER task, the differences in predictions for the CNN based character encoder model and the BiLSTM based character encoder via the linear comparisons, were explained by several pathways. For the CNN-BiLSTM-CRF, the top 5 predictive pathways for the differences be-tween the two models' predictions have an average of 0.025 higher correlation coefficient (p < 0.001) than the BiLSTM-BiLSTM-CRF. Neural pathways are a way to abstract the problem of interpreting single neurons in a neural model to interpreting the functional groups of neurons. In isolation, the pathways are not meaningful, though grounded to task-related information via linear probes and rank correlation, the learned representations within the neural model can be evaluated. Linear Probes: Like Conneau et al. ( From each of the linear models, we store the weight vector, which represents the importance of each neuron for predicting the types of task-specific phenomena learned by the linear model and the performance of the linear model which indicates the degree to which that information is embedded in the neural model. Rank Correlation: Using both the factor loadings of the neurons from Section 3.2 and the weights from the linear probes discussed above, we can connect the pathways to known task information. Intuitively, if a neural pathway was approximating a function similar to one of the phenomena examined by the linear probes, then the loadings of each neuron in the pathway would be similar in relative shape to the weights of the relevant linear probe. 
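A compact sketch of the pathway-extraction step described above: the activation matrix (instances by neurons) is factored, and the smallest number of components reaching roughly 75% cumulative explained variance is retained. PCA is used here purely for illustration; the paper's factorization method, which yields the factor loadings referred to above, may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_pathways(activations: np.ndarray, target_variance: float = 0.75):
    """Factor an (instances x neurons) activation matrix into pathways.

    Keeps the smallest number of components whose cumulative explained
    variance reaches ~75%, matching the setup described above.  PCA stands
    in for the paper's dimensionality-reduction method.
    """
    pca = PCA()
    scores = pca.fit_transform(activations)          # per-instance pathway activations
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    k = int(np.searchsorted(cumvar, target_variance)) + 1
    k = min(k, len(cumvar))
    loadings = pca.components_[:k]                   # (k pathways, n_neurons)
    return loadings, scores[:, :k], float(cumvar[k - 1])
```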
That is, if the pathway and the probe are viewing the same phenomenon, the neurons with stronger weights in the probe should have higher loadings in the pathway and vice versa. To measure the relatedness of each pathway's loadings to each linear model's weights, we use Spearman's rank correlation coefficient (ρ) For the entailment models, the experiment was designed to explore the predictive behavior of each model for the task. The linear probes indicate that the information about what type of reasoning is required for a task, which is hypothesized to be encoded in the models, was distinctly encoded in each model, but to a greater extent in the decomposable attention model. The connection between the pathways and the linear probes was less strong, however. This indicates that despite the models having an encoding of the knowledge observed by the probe, it is likely a byproduct of a different function that is being approximated by the neural network. The pathways were created by analyzing which neurons behave cohesively, indicating a subprocess within the network. However, these subprocesses do not correspond strongly to any of the tested features. Consequences of this finding could be an indication that the model is 'cheating' on the task and has some inductive bias that is beneficial to the task independent from the task as envisioned by the creators. Otherwise, if many models demonstrate this behavior, the task or dataset may be insufficient to induce the desired learning behavior in neural models. This is consistent with recent highly domain specific analyses of this task The NER model analysis was set up to understand the factors contributing to the differences between the two models rather than the factors influencing the prediction accuracy. Many of the surface features that were tested were present in the models, although there were not significant differences as to which of these features were encoded in one model or the other. Examination of the correlation of each pathway to the prediction differences between the models indicate that the differences were primarily explained by pathways that had high amounts of explained variance. Strong linear probe results, in conjunction with a mismatch between which pathways correlated to the metric of interest and which pathways correlated well to each surface feature that was probed, indicate that each of the models learned the surface features from the data and that other functions are responsible for differences. This can guide future examination of these models to pinpoint exactly what knowledge the model is using for the task. For example, a high variance pathway for the CNN-BiLSTM-CRF included some neurons from the CNN and some from the LSTMs and was typically activated by words with capital letters. However, it also activated on notable exceptions such as "van" and "de" that serve as a lowercase part of some names indicated that it had memorized those exceptions to the broader heuristic. No such pathway was identified in the BiLSTM-BiLSTM-CRF model. 
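The comparison between pathway loadings and probe weights described above reduces to computing Spearman's ρ between two vectors over neurons. A minimal sketch follows; comparing magnitudes is our assumption, since the sign conventions of loadings and probe weights are arbitrary.

```python
import numpy as np
from scipy.stats import spearmanr

def pathway_probe_correlations(loadings: np.ndarray, probe_weights: np.ndarray):
    """Rank-correlate each pathway's neuron loadings with each probe's weights.

    loadings:      (n_pathways, n_neurons) from the factorization step
    probe_weights: (n_probes, n_neurons) weight vectors of the linear probes
    Returns matrices of Spearman's rho and the associated p-values.
    """
    n_path, n_probe = loadings.shape[0], probe_weights.shape[0]
    rho = np.zeros((n_path, n_probe))
    pval = np.zeros((n_path, n_probe))
    for i in range(n_path):
        for j in range(n_probe):
            # magnitudes are compared because signs are arbitrary (assumption)
            rho[i, j], pval[i, j] = spearmanr(np.abs(loadings[i]),
                                              np.abs(probe_weights[j]))
    return rho, pval
```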
To evaluate our interpretation technique on real world data, we applied our method on four trained models over two tasks: recognizing textual entailment using the Multi-genre Natural Language Inference corpus Recognizing textual entailment is a task comprised of deciding whether the concepts presented in one text can be determined to be true given some context or premise in a different text Given an input sequence, the NER task involves predicting a tag for each token in the sequence that denotes whether the token is an entity or not, as well as what type of entity it is. An example of such a tag might be PER for a "person" entity or ORG for an "organization" entity. We implemented two neural models for our experiments: the first (Figure the NER models was done using DyNet We used the CoNLL 2003 dataset Table Linear Probes: The results from the linear probes are presented in Table For the NER task, 13 out of 50 features are almost perfectly predicted by the activation probes (i.e. greater than 0.90 F1) and there are no significant differences between higher performing probes for the BiLSTM-CRF with the CNN character encoder versus the BiLSTM character encoder. The main difference seen in the results is that the CNN trades off storing information about plural nouns and adjectives for storing clearer representations for parentheses and digits. Rank Correlation: Presented in Table For the NER analysis, the pathways that correspond with the surface features represent a very small amount of the variance within the model (with few exceptions). A notable difference between the two models is that the BiLSTM character encoder seems to have a considerably more organized pathway corresponding to title case than the CNN based character encoder. In this paper, we have demonstrated an approach for neural interpretation using neural pathways on recognizing textual entailment and named entity recognition. By abstracting away from individual neurons and combining linear probes, task knowledge, and correlation techniques, insight into the knowledge learned by the neural models have been made more transparent. This general interpretation method draws similar conclusions to highly domain-specific analyses, and while it will not replace the need for deep analysis, it provides a much simpler starting point for a broad class of models. Future work can improve this method further by examining the effects of different dimensionality reduction methods with varying properties on extracting the most informative pathways from the activations.
Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources
In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method, based on an adversarial neural network with the Wasserstein distance, yields improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.
Vector representations of words (embeddings) have become the cornerstone of modern Natural Language Processing (NLP), as learning word vectors and utilizing them as features in downstream NLP tasks is the de facto standard. Word embeddings To summarize, our contributions in this work are as follows: • We introduce a set of new linguistic constraints (i.e. synonyms and antonyms) created with BabelNet for three languages: English, German and Italian. * Equal contribution • We introduce an improved post-specialization method (dubbed WGAN-postspec), which demonstrates improved performance as compared to state-of-the-art DFFN • We show that the proposed approach achieves performance improvements on an intrinsic task (word similarity) as well as on a downstream task (dialog state tracking).
Numerous methods have been introduced for incorporating structured linguistic knowledge from external resources to word embeddings. Fundamentally, there exist three categories of semantic specialization approaches: (a) joint methods which incorporate lexical information during the training of distributional word vectors; (b) specialization methods also referred to as retrofitting methods which use post-processing techniques to inject semantic information from external lexical resources into pre-trained word vector representations; and (c) post-specialization methods which use linguistic constraints to learn a general mapping function allowing to specialize the entire distributional vector space. In general, joint methods perform worse than the other two methods, and are not model-agnostic, as they are tightly coupled to the distributional word vector models (e.g. Word2Vec, GloVe). Therefore, in this work we concentrate on the specialization and post-specialization methods. Approaches which fall in the former category can be considered local specialization methods, where the most prominent examples are: retrofitting On the other hand, the latter group, postspecialization methods, performs global specialization of distributional spaces. We can distinguish: explicit retrofitting In this paper, we propose an approach that builds upon previous works In this step a subspace of distributional vectors for words that occur in the external constraints is specialized. To this end, fine-tuning of seen words can be performed using any specialization method. In this work, we utilize Attract-Repel model The negative examples serve the purpose of pulling synonym pairs closer and pushing antonym pairs further away with respect to their corresponding negative examples. For synonyms: where τ is the rectifier function, and δ att is the similarity margin determining the distance between synonymy vectors and how much closer they should be comparing to their negative examples. Similarly, the equation for antonyms is given as: A distributional regularization term is used to retain the quality of the original distributional vector space using L 2 -regularization. where λ reg is a L 2 -regularization constant, and x i is the original vector for the word x i . Consequently, the final cost function is formulated as follows: Once the initial specialization is completed, postspecialization methods can be employed. This step is important, because local specialization affects only words seen in the constraints, and thus just a subset of the original distributional space X d . While post-specialization methods learn a global specialization mapping function allowing them to generalize to unseen words X u . Given the specialized word vectors X s from the vocabulary of seen words V S , our proposed method propagates this signal to the entire distributional vector space using a generative adversarial network (GAN) Our proposed post-specialization approach is based on the principles of GANs, as it is composed of two elements: a generator network G and a discriminator network D. The gist of this concept, is to improve the generated samples through a minmax game between the generator and the discriminator. In our post-specialization model, a multi-layer feed-forward neural network, which trains a global mapping function, acts as the generator. Consequently, the generator is trained to produce predictions G(x; θ G ) that are as similar as possible to the corresponding initially specialized word vectors x s . 
Therefore, a global mapping function is trained using word vector pairs, such that On the other hand, the discriminator D(x; θ D ), which is a multilayer classification network, tries to distinguish the generated samples from the initially specialized vectors sampled from X s . In this process, the differences between predictions and initially specialized vectors are used to improve the generator, resulting in more realistically looking outputs. In general, for the GAN model we can define the loss L G of the generator as: While the loss of the discriminator L D is given as: In principle, the losses with Wasserstein distance can be formulated as follows: and An alternative scenario with a gradient penalty (WGAN-GP) requires adding gradient penalty λ coefficient in the Eq. ( Pre-trained Word Embeddings. In order to evaluate our proposed approach as well as to compare our results with respect to current state-ofthe-art post-specialization approaches, we use popular and readily available 300-dimensional pretrained word vectors. Word2Vec Let us discuss in more detail how the lists of constraints were constructed. In this work, we use two sets of linguistic constraints: external and babelnet. The first set of constraints was retrieved from WordNet Similarly, we refer to the work of Scheible and Schulte im Walde ( Initial Specialization and Post-Specialization. Although, initially specialized vector spaces show gains over the non-specialized word embeddings, linguistic constraints represent only a fraction of their total vocabulary. Therefore, semantic specialization is a two-step process. Firstly, we perform initial specialization of the pre-trained word vectors by means of Attract-Repel (see §2) algorithm. The values of hyperparameter are set according to the default values: λ reg = 10 -9 , δ sim = 0.6, δ ant = 0.0 and k 1 = k 2 = 50. Afterward, to perform a specialization of the entire vocabulary, a global specialization mapping function is learned. In our WGAN-postspec proposed approach, the post-specialization model uses a GAN with improved loss functions by means of the Wasserstein distance and gradient penalty. Importantly, the optimization process differs depending on the algorithm implemented in our model. In the case of a vanilla GAN (AuxGAN), standard stochastic gradient descent is used. While in the WGAN model we employ RMSProp We report our experimental results with respect to a common intrinsic word similarity task, using standard benchmarks: SimLex-999 and WordSim-353 for English, German and Italian, as well as SimVerb-3500 for English. Each dataset contains human similarity ratings, and we evaluate the similarity measure using the Spearman's ρ rank correlation coefficient. In Table In the tasks we report scores for Original (nonspecialized) word vectors, initial specialization method Attract-Repel The results suggest that the post-specialization methods bring improvements in the specialization of the distributional word vector space. Overall, the highest correlation scores are reported for the models with adversarial losses. We also observe that the proposed WGAN-postspec achieves fairly consistent correlation gains with GLOVE vectors on the SimLex dataset. Interestingly, while exploiting additional constraints (i.e. external + babelnet) generally boosts correlation scores for German and Italian, the results are not conclusive in the case of English, and thus they require further investigation. 
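For concreteness, the sketch below spells out the WGAN-GP variant of the generator and critic (discriminator) losses described earlier in this section for post-specialization. It follows the standard Wasserstein-with-gradient-penalty formulation; the generator and critic architectures, and the penalty coefficient, are illustrative rather than the paper's exact settings, and in training the two losses would be optimized in alternating steps.

```python
import torch

def wgan_gp_losses(generator, critic, x_distributional, x_specialized,
                   gp_lambda: float = 10.0):
    """Wasserstein losses with gradient penalty for post-specialization.

    generator: maps distributional vectors to predicted specialized vectors
    critic:    scores vectors; trained to separate real specialized vectors
               from generated ones under a soft 1-Lipschitz constraint
    """
    fake = generator(x_distributional)

    # critic loss: real specialized vectors vs. generated ones
    d_real = critic(x_specialized).mean()
    d_fake = critic(fake.detach()).mean()

    # gradient penalty on random interpolates between real and fake samples
    eps = torch.rand(x_specialized.size(0), 1, device=x_specialized.device)
    interp = (eps * x_specialized + (1 - eps) * fake.detach()).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    gradient_penalty = ((grad.norm(2, dim=1) - 1) ** 2).mean()

    d_loss = d_fake - d_real + gp_lambda * gradient_penalty

    # generator loss: make generated vectors score high under the critic
    g_loss = -critic(fake).mean()
    return g_loss, d_loss
```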
We also evaluate our proposed approach on a dialog state tracking (DST) downstream task. This task is a standard language understanding task, which allows to differentiate between word similarity and relatedness. To perform the evaluation we follow previous works In our experiments, we report results with a standard joint goal accuracy (JGA) score. The results in Table In this work, we presented a method to perform semantic specialization of word vectors. Specifically, we compiled a new set of constraints obtained from BabelNet. Moreover, we improved a state-of-theart post-specialization method by incorporating adversarial losses with the Wasserstein distance. Our results obtained in an intrinsic and an extrinsic task, suggest that our method yields performance gains over current methods. In the future, we plan to introduce constraints for asymmetric relations as well as extend our proposed method to leverage them. Moreover, we plan to experiment with adapting our model to a multilingual scenario, to be able to use it in a neural machine translation task. We make the code and resources available at:
Penn & BGU BabyBERTa+ for Strict-Small BabyLM Challenge
The BabyLM Challenge aims at pre-training a language model on a small-scale dataset of inputs intended for children. In this work, we adapted the architecture and masking policy of BabyBERTa (Huebner et al., 2021) to solve the strict-small track of the BabyLM challenge. Our model, Penn & BGU BabyBERTa+, was pre-trained and evaluated on the three benchmarks of the BabyLM Challenge. Experimental results indicate that our model achieves higher or comparable performance in predicting 17 grammatical phenomena, compared to the RoBERTa baseline.
With the emergence of deep-learning techniques
In this section, we describe our BabyBERTa+ model, including its architecture, tokenizer, and training objectives. As shown in Table Following previous work To train the masked language model, the standard RoBERTa masking strategy replaces 80% of the corrupted tokens with the "<mask>" token, while 10% of the tokens are replaced with random tokens, and the remaining 10% are left unchanged. The unmasking removal policy proposed in Huebner et al. ( In this section, we evaluate our pre-trained models on the tasks of the strict-small track of the BabyLM challenge. There are three different evaluation benchmarks: BLiMP test suites We use the default hyperparameters as defined in In this study, we propose a model named BabyBERTa+ by adapting BabyBERTa
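The definition of the unmasking removal policy is cut off above. One common reading, following BabyBERTa, is that corrupted positions that are left unchanged are simply not used as prediction targets. The sketch below implements standard RoBERTa-style 80/10/10 corruption with a flag that applies this reading; the flag's semantics should be treated as our assumption rather than the authors' exact policy.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size,
                mask_prob=0.15, remove_unmasking=True, seed=None):
    """RoBERTa-style corruption with an optional unmask-removal policy.

    Standard policy: of the selected positions, 80% -> <mask>, 10% -> random
    token, 10% -> left unchanged (but still predicted).
    With remove_unmasking=True the leave-unchanged case is dropped from the
    prediction targets -- our reading of the policy adapted from BabyBERTa.
    Returns corrupted ids and labels (-100 = ignore, Hugging Face convention).
    """
    rng = random.Random(seed)
    corrupted, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() >= mask_prob:
            continue
        labels[i] = tok
        r = rng.random()
        if r < 0.8:
            corrupted[i] = mask_id
        elif r < 0.9:
            corrupted[i] = rng.randrange(vocab_size)
        elif remove_unmasking:
            labels[i] = -100      # do not predict the unchanged token
        # else: keep the token unchanged but still predict it
    return corrupted, labels
```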
ArgAnalysis35K : A large-scale dataset for Argument Quality Analysis
Argument Quality Detection is an emerging field in NLP which has seen significant recent development. However, existing datasets in this field suffer from a lack of quality, quantity and diversity of topics and arguments, specifically the presence of vague arguments that are not persuasive in nature. In this paper, we leverage a combined experience of 10+ years of Parliamentary Debating to create a dataset that covers significantly more topics and has a wide range of sources to capture more diversity of opinion. With 34,890 high-quality argument-analysis pairs (a term we introduce in this paper), this is also the largest dataset of its kind to our knowledge. In addition to this contribution, we introduce an innovative argument scoring system based on instance-level annotator reliability and propose a quantitative model of scoring the relevance of arguments to a range of topics.
Parliamentary Debate is an extemporaneous form of debating. One of the major intersections of Natural Language Processing and Debating was IBM Project Debater The dimension that we introduce here is a detailed explanation of why the argument made is true, applicable or impactful, henceforth referred to as "analysis". Analysis is defined as logical links provided to defend a statement, an example of which can be seen in Table • Reason 1: It's neither a claim nor a premise: while we can say that "arguments" as we use it is equivalent to a claim used in argumentation, the same cannot be said for "analysis". In the context of parliamentary debating, analysis can be a combination of one claim and multiple premises, just a premise, multiple claims and multiple premises, and so on. Premise would be a part of "analysis" but may not be all of it. An example of this is given below: Argument (claim) : Education is the basis of everything a person achieves. -Analysis: Educated people are 80% more likely to be in the top 10% of the richest people in the world. (Analysis as a premise) -Analysis: Rich people send their kids to private schools and better colleges. This leads to them getting better jobs and being rich. (Analysis as a claim and one premise) -Analysis: If you get a good primary education, you are more likely to get into an Ivy League. If you get into an Ivy league, you are more likely to get a higher paying job. With this job, you have a higher chance of sending your kids to private schools, who then go on to achieve the same things. You and your family are then likely to be the top 10% of the richest people in the world. (Analysis as multiple claims and premises) These logical links need to be seen as one "analysis" instead of multiple claims and subclaims because each subsequent link needs to be seen in the context of the links that come before to build the overall reason for defending the argument. (good primary education → ivy league → high paying job → generational wealth). Here, each individual sub-claim does not defend the overall argument, but rather the collection of links in order that performs that function. • Reason 2: Premises, as presented in Argument Motions a child is still growing, physically and mentally, cosmetic surgery should not be considered until they are an adult and able to make these decisions We should ban cosmetic surgery for minors
We should end racial profiling Argument relevance is an important indicator of persuasiveness according to Application of Instance-based annotator reliability to argumentation is another important contribution described in this paper. Some annotators might know a lot more about art than about the criminal justice system, hence might judge certain arguments as more or less persuasive using their knowledge; secondly, because of the element of bias that comes in when ranking arguments. Annotators might be biased about a certain argument on race, for example, because of the strong sentiments they feel towards them in their daily life, but they may not be biased when judging an argument on art. We propose a system that enables us to keep the scores of these annotators instead of dropping them, like previous systems have, and show how this leads to a better overall dataset with a more uniform distribution of scores. The dataset is crucial to designing systems that can interact efficiently with humans. Arguments generated with this system can analyze arguments better, and create effective rebuttals using high scoring arguments of the other side. The dataset can also be used to judge a debate by assigning scores to arguments as per their level. Any interactive system, such as IBMs Project Debater needs this dataset as a preliminary base to analyze and win debates with a human. In summary, our major contributions detailed in this paper are: (1) Argument-analysis pairs collected from a variety of sources on a variety of topics; (2) Introduction of a relevance model that enables the use of multiple arguments in different contexts; (3) Introduction of an instance based annotator scoring system that reduces bias and makes argument scores more accurate. There have been several datasets in the field of argument quality using empirical methods that focus on finding arguments and evidence. Lastly, a major contribution in this work is the proposal of a relevance model. Argument Collection for ArgAnalysis35K was primarily done through two ways. were collected through contribution by a set of active debaters of varying levels of expertise. These people were recruited at debating tournaments, through active debate circuits, debating facebook groups and contacts of past/current debaters. • Experts: Won 5+ tournaments at a global or regional level or have 3+ years of active debating experience. Experts contributed around 22% of our argumentanalysis pairs. • Intermediate: Won 2+ tournaments at a global or regional level or have 1-3 years of active debating experience. Intermediates contributed around 22% of our argument-analysis pairs. • Novice: Not won a tournament or < 1 year of debating experience. Novice debaters contributed around 15% of our argument-analysis pairs. 2. ∼ 40% of argument-analysis pairs were extracted from speeches given in the outrounds of tournaments. We took an automatically generated transcript of the speech and manually heard the debates to correct minute errors. We then wrote down the argument analysis statements verbatim as the speakers said it. The tournaments considered were regional majors (EUDC, UADC, etc.) or global majors (Worlds University Debating Championships While collecting arguments from contributors, we used the following procedure. Each contributor was presented with a single motion at a time and asked to contribute one argument for and one argument against the motion. It was explained that an argument is a statement in defence of or against the motion presented. 
Then, the contributor was asked to come up with analysis statements defending the arguments. An analysis statement was explained to be a reason why we find the specific argument persuasive. We also set a character limit of 20-210 for each argument and 35-400 for each analysis point. This limit was set taking into consideration that an argument is expected to be a mere statement that is short and impactful, and analysis is expected to have more content as it defends the argument. All argument contributions were on a non-compensated volunteer basis and the workload for each volunteer was kept to a maximum of 20 minutes. 200 individuals were involved in the annotation process for the dataset. The annotators chosen had participated in at least one debate at a school or college level. The experience level was set in order to better deal with the additional complexity of annotating argument-analysis pairs, since this concept is part of the fundamental training that is required to participate in a debate. They came from debating circuits all around the world to ensure that diversity (in arguments, thoughts, etc) is being expressed in the dataset. Considering the relatively high experience level of the annotators, each argument was annotated by three annotators. 1. Is the argument something you would recommend a friend use as-is in a speech supporting/opposing a topic, regardless of personal opinion? 2. Would you recommend a friend use the analysis to defend the argument as it is? The questions are designed in a way that detaches the annotator and their opinions from the content. We also found this element of detachment to be standard NLP practice in papers that asked subjective questions of this nature Annotator-Rel score is required for the calculation of the Weighted Average scoring function proposed by (2020a)'s reported value of 0.83, we find that our task-average κ value is 0.89. We hypothesise that this high value is due to the lower number of annotators involved and the comparatively higher and consistent experience level of the annotators. All annotation was done on a non-compensated volunteer basis. Scoring an argument-analysis pair is an inherently subjective task. In order to make it as objective as possible, we have reduced the annotator involvement to two binary questions. However in order to make our dataset usable and interfaceable with others in the field To determine how dependable annotators are, we use MACE-P. Since we have asked two questions, one related to argument and one to analysis, cor-respondingly, we have two scores generated per argument-analysis pair. We denote these scores as MACE-P Arg and MACE-P Analysis . By combining the annotators' opinions, the technique predicts the ground truth and enables the identification of reliable annotators. Each annotator's reliability score is estimated by MACE, which is subsequently used to weigh this annotator's conclusions. In order to learn from redundant annotations, MACE does not necessary require that all annotators provide answers on all data, but it does require at least that a sizable pool of annotators annotate a portion of the same data. In our method, each argument is annotated by multiple individuals, thus making it a good use case for the application of MACE. As mentioned previously, we utilize the annotator reliability we have calculated in order to compute Weighted Average scores for the two questions. 
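A small sketch of this scoring step: the binary annotator answers to each question are aggregated with reliability weights (a reliability-weighted mean is our reading; the exact WA formula is not spelled out above), and the argument and analysis scores are then combined by multiplication, as in Equation (1) given shortly below.

```python
import numpy as np

def weighted_average_score(answers, reliabilities) -> float:
    """Aggregate binary annotator answers, weighted by annotator reliability.

    answers:       0/1 responses to one of the two annotation questions
    reliabilities: per-annotator reliability estimates (e.g., from MACE)
    A reliability-weighted mean is assumed here; the paper does not spell
    out the exact weighting used for the WA scores.
    """
    answers = np.asarray(answers, dtype=float)
    reliabilities = np.asarray(reliabilities, dtype=float)
    return float((answers * reliabilities).sum() / reliabilities.sum())

def pair_score(score_arg: float, score_analysis: float) -> float:
    """Equation (1): overall score of an argument-analysis pair."""
    return score_arg * score_analysis
```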
As before, we get two scores per argument-analysis pair -WA arg and WA analysis We have applied a third scoring function to our dataset considering the following assumptions: • Since we are selecting our annotators with a baseline level of expertise in the field of debating and have ruled out unattentive people, the remaining annotators are unlikely to be incompetent. • Annotators are human and have human biases. They are likely to be biased, prejudiced and unreliable in specific instances Considering these assumptions, we decided to apply the scoring function proposed by Since we are scoring arguments and analysis separately, we have come up with two scores per scoring function discussed so far. Arguments and analysis are linked intrinsically in the context of debate. A good argument defended badly is non-persuasive, as is a bad argument defended well. In order to model this behaviour, we propose that to get the overall score of an argument analysis pair, we multiply the two scores together to get an overall score as shown in equation 1. Score pair = Score arg * Score analysis (1) Here, we have compared the three scoring functions described by performing two experiments. In all experiments, delta indicates the difference between the scores under consideration. Additional details about these experiments can be found in the appendix. Here, we paired up argument-analysis pairs where we see a difference in scoring between MACE-P, WA and IA scoring functions. Annotators were asked to pick the argument-analysis pair that they would prefer to recommend to someone regardless of personal bias to use as-is. We then look at the agreement between the different annotators on each of the pairs. For those pairs differing in WA and IA, annotators preferred IA in 68% of the pairs. Similarly, for those pairs differing in IA and MACE-P, annotators preferred IA in 64% of the pairs. Ideally, a scoring function should be consistent across the dataset. This means that if we were to sample the dataset and follow the same procedure of creating and scoring argument analysis pairs, we should end up with similar scores for the arguments. In order to perform this experiment, we Standard Arguments, showing that the higher the delta between the scores, the higher is the precision value for annotators recognizing the higher rated pair, i.e. the difference between an argument scoring 0.2 and an argument scoring 0.8 (delta 0.6) is easier to recognize than the difference between an argument scoring 0.8 and 0.9 (delta 0.1) . randomly sample 500 argument-analysis pairs from our dataset and send them to a different set of annotators following the same procedure. We then calculate the Spearman's Rank Correlation Coefficient between the scores calculated using the new annotations and the scores calculated originally. We find that there is a strong correlation for all three scoring functions in terms of the argument scores, but that correlation gets slightly weaker when it comes to analysis scores. This can be explained due to the slightly more subjective nature of the analysis. In terms of the scoring functions, we find that there is a slightly higher correlation for weighted average as opposed to the other two methods, which is an observation that agrees with the previous experiment's findings. These results are shown in Table In this section, we describe the relevance model that quantifies the applicability of each argumentanalysis pair to a topic. 
The underlying assumption is that each argument-analysis pair has a degree of applicability to at least one and likely more topics. This assumption is made on the basis of the personal experience that we have gathered while debating and discussions with experts in the field, where we often find that arguments repeat across multiple topics and motions. In order to build our relevance model, we utilize the following algorithm. 1. We generate a list of 24 topics (Table 2. In order to get more nuance on these topics, we asked 50 annotators to come up with a list of 5 keywords (also referred to as subtopics) per topic resulting in 250 keywords per topic. We observed that this process generated keywords that provided holistic coverage of the topics. Moreover, the repetition we noticed with the keywords showed us that asking annotators to come up with any more keywords would not have been productive. The annotators chosen for this task were the ones scoring the highest in the previous tasks we set. 3. The keywords were then aggregated for similarity and reduced to the simplest representation 4. The list of keywords was then sent to the experts who were asked to classify them into two bins: one bin containing keywords that they perceived to be highly relevant to the topic and one bin containing keywords that they perceived to be not as relevant. The weight of the keyword was taken to be the percentage of experts placing the keyword in the high relevance bin. 5. The probability of each argument-analysis pair belonging to the topics was then calculated. This was achieved by applying W2V and BERT to generate a list of scores per argument-analysis pair and subtopic, which indicates the probability of the pair belonging to that topic. 6. These scores are then combined via the following formula to generate the overall relevance score of a particular argument-analysis pair to the main topic. (2) We observe a small degree of overlap (approximately 15% of keywords having more than one non zero relevance score) in the keyword generation process, i.e. the same keyword being generated for different topics. We take this as evidence that there is a significant overlap of themes when it comes to debate. In this case they were assigned different weights for the different topics depending on the percentage of experts that placed the word in the high relevance bin for that particular topic. This created a set of 84 unique keywords with different weights for the 24 topics. In order to validate the relevance model we propose a simple experiment. The hypothesis is that as the delta of relevance scores increases, it will be easier for annotators to identify which of the pair of arguments is more relevant to the given topic. 1. To make the comparisons fairer, we randomly select a topic for which the relevance scores will be considered. 3. We then randomly sample 150 pairs and send them for pairwise annotations to a set of 50 people (highest scoring annotators and experts). Each annotator was asked to pick the more relevant argument for the given topic and the percentage of annotators picking the higher ranked argument was noted as the precision. 4. If sufficient agreement (> 80%) between annotators was not achieved, the pair was dropped. This procedure was followed for two more randomly sampled topics to ensure coverage of the dataset and the agreements with the relevance scores are recorded in Table 7 Experimental Results We use several methods to learn the task of ranking the quality of arguments. 
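Before turning to the ranking methods evaluated below, the relevance computation described above can be sketched as follows. The exact combination formula (equation 2) is not reproduced in this excerpt, so the weight-normalised average and the token-overlap stand-in for the W2V/BERT similarity are illustrative assumptions only.

```python
# Hypothetical sketch of the topic-relevance computation: keyword weights come
# from the expert binning step, per-keyword scores from a similarity model.
# The weight-normalised average below is an assumed stand-in for equation (2).

def keyword_similarity(pair_text: str, keyword: str) -> float:
    # Placeholder for the W2V/BERT-based probability that the argument-analysis
    # pair belongs to this subtopic; token overlap keeps the sketch runnable.
    tokens = set(pair_text.lower().split())
    kw_tokens = set(keyword.lower().split())
    return len(tokens & kw_tokens) / max(len(kw_tokens), 1)

def topic_relevance(pair_text, topic_keywords):
    """topic_keywords: dict keyword -> expert weight (share of experts who
    placed the keyword in the high-relevance bin for this topic)."""
    total = sum(topic_keywords.values())
    score = sum(w * keyword_similarity(pair_text, kw)
                for kw, w in topic_keywords.items())
    return score / total if total else 0.0

# Illustrative usage with invented keywords and weights.
keywords = {"privacy": 0.9, "surveillance": 0.7, "data": 0.4}
print(topic_relevance("mass surveillance erodes privacy because ...", keywords))
```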
We evaluate the following methods, some accepted standard baselines, some taken from

• Arg Length: We evaluate the effect the length of an argument has on its scores, to see whether there is a correlation between the two, or whether annotators are biased towards scoring longer arguments higher.

• Bi-LSTM GloVe: We implemented the model proposed by Levy et al. with a dropout of 0.10 and an LSTM layer of size 128. 300-dimensional GloVe embeddings were used as input features.

For the purpose of evaluating our methods on the ArgAnalysis35K dataset, we split the dataset 70-20-10: 70% for training, 10% for tuning hyperparameters (used as a dev set), and 20% for testing. To keep the experiments consistent for comparing results with The results are presented in Table

We then look at the agreement between the different annotators on each of the pairs, similar to the experiment performed to compare the different scoring functions. We found that annotators preferred an ArgAnalysis35K argument 71% of the time, showing that the arguments in ArgAnalysis35K are more relevant in the context of parliamentary debating, and that an argument is more persuasive when followed by analysis.

7.4 Comparing the relative effect of argument and analysis on the overall score

One of the major purposes of asking annotators to answer two questions and reporting two separate scores for argument and analysis is to answer the question of what makes an argument persuasive: the argument itself, or the explanation and analysis given for it. To test this, we plot histograms of argument and analysis scores separately against the score distribution (additional graphs are attached in the appendix). We find that analysis points have more scores above 0.7 than arguments alone, indicating that logical links and explanations are critical to increasing the persuasiveness of an argument.

In this work, we create ArgAnalysis35K and validate it using a variety of methods. This system can be integrated with existing models to create a system that is able to debate more efficiently, be more persuasive, and as a result win more debates. The collection and verification of this work required help from over 250 annotators. This makes the dataset difficult to replicate, as is the case with many dataset papers. We have selected annotators carefully, considering relevant experience and using techniques to determine annotator quality in order to minimise the subjective variance. We have tried to cover the arguments involved in debating by talking to experts and people from debate circuits across the world, with different experiences and expertise. However, due to the nature of this activity, it is possible that there are arguments and experiences that have not been covered in the dataset. These could be experiences of marginalized communities, underrepresented debate circuits, etc. Moreover, some debate motions used are relevant to the time period in which the motion was most prominent (for example, motions about Trump and his actions, certain policy decisions, wars and their outcomes, etc.). Our dataset does not account for changes that might have taken place pertinent to those issues after the generation of arguments. We have attempted to ensure that the broader impact of this work is positive to the best of our ability. We have validated our list using data from multiple tournaments, experts, and Core adjudicators to ensure that the maximum possible amount of diversity is incorporated.
We have included a large number of high-quality arguments, unlike other similar projects, to increase the possibility of creating a system capable of winning against a human, a chance that is otherwise missing with other datasets. The number of annotators used to create and validate the dataset and its functions is small (200 at most), but we find that this is on par with similar projects. We have compensated all annotators as applicable. Lastly, even though arguments were taken from WUDC speeches by watching and recording them, they were anonymized by removing names and paraphrasing the argument, making it otherwise unrecognizable, so that even an expert debater could not point out where an argument came from.

IA, on the other hand, tends to provide a much smoother curve, as we attempt to preserve as much of each annotator's contribution as possible, leading to a more representative annotation set. Furthermore, Weighted Average tends to generate a continuous scoring scale while MACE-P tends to cluster argument-analysis pairs around either of the two extremes; we observe that IA offers a middle-ground approach that gets as close to the true value of an argument as possible while still maintaining a smooth, continuous scoring curve. However, in order to make our dataset interfaceable with others in the field and to not lose out on the value generated by the other two scoring functions, we report all six scores in the final dataset.
890
2,401
890
HIT-SCIR at MRP 2019: A Unified Pipeline for Meaning Representation Parsing via Efficient Training and Effective Encoding
This paper describes our system (HIT-SCIR) for the CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing. We extended the basic transition-based parser with two improvements: a) efficient training, by implementing parallel training for the stack LSTM; and b) effective encoding, by adopting the deep contextualized word embeddings of BERT (Devlin et al., 2019). Overall, we propose a unified pipeline for meaning representation parsing, comprising framework-specific transition-based parsers, BERT-enhanced word representations, and post-processing. In the final evaluation, our system ranked first according to ALL-F1 (86.2%) and, in particular, ranked first on the UCCA framework (81.67%).
The goal of the CoNLL 2019 shared task Recently, many semantic graphbanks have arisen, which differ in the design of their graphs Most semantic parsers are only designed for one or a few specific graphbanks, due to the differences in annotation schemes. For example, the currently best parser for SDP is graph-based Therefore, the main challenge in the cross-framework semantic parsing task is that the frameworks differ in how they map the surface string to graph nodes, which leads to incompatibility among framework-specific parsers. To address this, we propose to use a transition-based parser as our basic parser, since it is more flexible in realizing this mapping (node generation and alignment) than a graph-based parser, and we improve it from two aspects: 1) Efficient Training: aligning homogeneous stack LSTM operations within a batch and computing them simultaneously; 2) Effective Encoding: fine-tuning the parser with pretrained BERT Our contributions can be summarised as follows:

• We proposed a unified parsing framework for cross-framework semantic parsing.

• We designed a simple but efficient method to realize stack LSTM parallel training.

• We showed that the semantic parsing task benefits substantially from adopting BERT.

• Our system was ranked first among 16 teams in the CoNLL 2019 shared task according to ALL-F1.
Our system architecture is shown in Figure In order to design the unified transition-based parser, we refer to the following frameworkspecific parsers: A tuple (S, L, B, E, V ) is used to represent parsing state, where S is a stack holding processed words, L is a list holding words popped out of S that will be pushed back in the future, and B is a buffer holding unprocessed words. E is a set of labeled dependency arcs. V is a set of graph nodes include concept nodes and surface tokens. The initial state is , where STACK LSTM(s) encodes the state s into a vector, g a and b a are embedding vector, bias vector of action a respectively. The oracle transition action sequence is obtained through transition system, proposed in in Section 3. Figure They will be merged into a batch once batch-processing is triggered. After that, new LSTM states will be pushed to corresponding stacks. Kiperwasser and Goldberg (2016) shows that batch training increases the gradient stability and speeds up the training. Delaying the backward to simulate mini-batch update is a simple way to realize batch training, but it fails to compute over data in parallel. To solve this, we propose a method of maintaining stack LSTM structure and using operation buffer. stack LSTM The stack LSTM augments the conventional LSTM with a 'stack pointer'. And it supports the operation including: a) INSERT adds elements to the end of the sequence; b) POP moves the stack pointer to the previous element; c) QUERY returns the output vector where the stack pointer points. Among these three operation, POP and QUERY only manipulates the stack without complex computing, but INSERT performs lots of computing. Batch Data in Operation-Level Like conventional LSTM can't form a batch inside a sequence due to the characteristics of sequential processing, stack LSTM can't either. Thus, we collect undercomputed operations between different pieces of data to form a batch. In other words, we construct batch data on operation-level other than data-level in tradition. After collecting a batch of operation, we compute them simultaneously. Operation Buffer To be more efficient, we adopt a buffer to collect operations and let it trigger the computing of those operations automatically (batch-processing), as shown in Figure To ensure correctness, batch-processing will only be triggered when satisfy some conditions. More specifically, when a) operation INSERT comes and there is already an INSERT in the buffer; b) operation POP or QUERY comes. To clarify, the depth of buffer per data is 1. Neural parsers often use pretrained word embeddings as their primary input, i.e. word2vec We adopt BERT in our model, which uses the language-modeling objective and trained on unannotated text for getting deep contextualized embeddings. BERT differs from ELMo in that it employs a bidirectional Transformer To encode the whole sentence, we extract the first piece s k,1 of each token w k , with applying a scalar mix on all L layers of transformer, to represent the corresponding token w k . Semantic graphs in all frameworks can be broken down into 'atomic' component pieces, i.e. tuples capturing (a) top nodes, (b) node labels, (c) node properties, (d) node anchoring, (e) unlabeled edges, (f) edge labels, and (g) edge attributes. Not all tuple types apply to all frameworks, however. Our transition-based parser can provide the edge information, while the other node information, such as pos, frame and lemma, require us to use additional tagger models to label the sentence sequence. 
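Before turning to the taggers and the framework-specific transitions, the operation-level batching just described can be sketched as follows. This is not the authors' implementation: the class and method names are hypothetical, and the batched LSTM step is abstracted into a single callable.

```python
# Hypothetical sketch of operation-level batching for stack LSTMs: cheap POP /
# QUERY operations run immediately, expensive INSERTs are buffered across the
# parser instances in a batch and computed together.

class BatchedStackLSTM:
    def __init__(self, lstm_step, batch_size):
        self.lstm_step = lstm_step        # batched LSTM cell: (inputs, prev_states) -> new_states
        self.stacks = [[] for _ in range(batch_size)]   # per-instance stacks of LSTM states
        self.pending = {}                 # instance id -> buffered INSERT input (depth 1 per instance)

    def insert(self, idx, x):
        if idx in self.pending:           # a second INSERT for this instance triggers batch-processing
            self._flush()
        self.pending[idx] = x

    def pop(self, idx):
        self._flush()                     # POP triggers batch-processing first
        return self.stacks[idx].pop()

    def query(self, idx):
        self._flush()                     # QUERY triggers batch-processing first
        return self.stacks[idx][-1]

    def _flush(self):
        if not self.pending:
            return
        ids = sorted(self.pending)
        inputs = [self.pending[i] for i in ids]
        prev = [self.stacks[i][-1] if self.stacks[i] else None for i in ids]
        new_states = self.lstm_step(inputs, prev)    # one batched computation over all buffered INSERTs
        for i, state in zip(ids, new_states):
            self.stacks[i].append(state)
        self.pending.clear()

# Toy usage with a dummy "LSTM" that simply echoes its inputs.
stack = BatchedStackLSTM(lambda xs, prev: xs, batch_size=2)
stack.insert(0, "h0"); stack.insert(1, "h1")
print(stack.query(0))   # triggers one batched computation, then returns "h0"
```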
The tagger we adopted is directly imported from AllenNLP library, which only models the dependency between node and label (emission score), not models the dependency between labels (transition score). The details about integrating and converting system output into MRP format will be introduced in Section 4. Building on previous work on parsing reentrancies, discontinuities, and non-terminal nodes, we define an extended set of transitions and features that supports the conjunction of these properties. To solve cross-arc problem, we use list-based arceager algorithm for DM, PSD, and EDS framework as Node Properties Nodes in DM and PSD are labeled with lemmas and carry two additional properties that jointly determine the predicate sense, viz. pos and frame. We use two taggers to handle this problem. Top Nodes At first, we construct an artifact node called ROOT. Then we add an edge (node, ROOT, ROOT) where the node is enumerated from top nodes. Node Label We copy the lemmas from additional companion data and set it as node labels. Top Nodes There is only one top node in UCCA, which used to initialize the stack. Meanwhile, top node is the protect symbol of stack (never be popped out). Edge Properties UCCA is the only framework with edge properties, used as a sign for remote edges. We treat remote edges the same as primary edge, except the edge label added with a special symbol, i.e. star(*). Node Anchoring Refer to the original UCCA framework design, we link the the node in layer 0 to the surface token with edge label 'Terminal'. In post-processing, we combine surface token and layer 0 nodes via collapsing 'Terminal' edge to extract the alignment or anchor information. Based on the work of To clarify, w i is the top element in stack and w j is the top element in buffer. Moreover, w i could only be concept node (stack and list only contain concept node), and w j could be concept node or surface token. • SHIFT and REDUCE operations are the same as DM and PSD. • LEFT-EDGE X and RIGHT-EDGE X add an arc with label X between w j and w i . (w j is the concept node) • DROP pops w j . Then push all elements in list into stack. (w j is the surface token). • REDUCE is performed only when w i has head and is not the head or child of any node in buffer B, which pops w i out of stack S. • NODE-START X generates a new concept node with label X and set it's alignment starting from w j . (w j is the surface token) • NODE-END set the alignment of w i ending in w j . (w j is the surface token) • PASS is performed when neither SHIFT nor REDUCE l can be performed, which moves w i to the front of list . • FINISH pops the root node and marks the state as terminal. Alignment There is no anchor between tokens from surface string and nodes from AMR graph. So we have to know which token aligns to which node, or we cannot train our model. Actually, finding alignment is a quite hard problem so that we could only get approximate solutions through heuristic searching. Although basic alignments have been contained in the companion data, we decide to use an enhanced rule-based aligner TAMR TAMR recalls more alignments by matching words and concepts from the view of semantic and morphological. (a) semantic match: Glove embedding represents words in some vector space. Considering a word and a concept striping off trailing number, we think them matching if their cosine similarity is small enough. 
(b) morphological match: Morphosemantic database in the Word-Net project provides links connecting noun and verb senses, which helps match words and concepts. Top Nodes There is exact one top node in AMR. For the convenience of processing, we add a guard element to the stack and use operation LEFT-EDGE ROOT between guard element and concept nodes to predict top nodes. Node Labels Node label appears as the name of each concept which is parameter of operation EN- MRP stands for cross-framework evaluation metric. LF1 stands for SDP Labeled F1 as nodes and edges, we need an extra procedure to recognize which nodes should be properties in the final result. Once recognized, node along with the corresponding edge will be converted to the property of its parent node, edge label for the key, and node label for the value. We write some rules to perform the recognizing procedure. Rules come from 2 basic facts. (a) attribute node: Numbers, URLs, and other special tokens like '-'(value of 'polarity') should be values of properties. (b) constant relation: When an edge has a label like 'value', 'quant', 'op x ' and so on, it is usually a key to property. We treat it as property if there is an edge of constant relation connecting to an attribute node. The TOP operation will set the first concept node in buffer as top nodes. Node Labels We train a tagger to handle this. Although there are many node labels exists, the result shows our system performs well on this. Node Properties The only framework-specific property used on EDS nodes is carg (for constant argument), a string-valued parameter that is used with predicates(node label) like named or dofw, for proper names and the days of the week, respectively. We write some rules to convert the surface token into properties value, such as converting million(token) to 1000000(value) when card(node label). Node Anchoring We obtain alignment information through NODE START and NODE END operation, In this section, we will show the basic model setup including BERT fine-tuning, and results including overall evaluation, training speed. More details about training, including model selection, hyperparameters and so on, are contained in supplementary material. Our work uses the AllenNLP library built for the PyTorch framework. We split parameters into two groups, i.e., BERT parameters and the other parameters (base parameters). The two parameter groups differ in learning rate. For training we use Adam Fine-Tuning BERT with Parser Based on Overall Evaluation We list the evaluation results on Table In recent years, graph-based parser holds the stateof-the-art in dependency parsing area due to its ability in the global decision, compared with transition-based parser. However, when we concatenated those models with BERT, we receive the similar performance, which shows that powerful representation could eliminate the gap between structure or parsing strategy. Our system extends the basic transition-based parser with the following improvements: 1) adopting BERT for better word representation; 2) realizing batch-training for stack LSTM to speed up the training process. And we proposed a unified pipeline for meaning representation parsing, suitable for main stream graphbanks. In the final evaluation, we were ranked first place in CoNLL 2019 shared task according to ALL-F1 (86.2%) and especially ranked first in UCCA framework (81.67%).
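The two-learning-rate setup described above can be expressed in PyTorch roughly as follows; the parameter-grouping heuristic and the learning-rate values are illustrative placeholders rather than the configuration used in the shared-task system.

```python
# Sketch of splitting parameters into BERT and base groups with separate
# learning rates (illustrative values only, not the shared-task settings).
import torch

def build_optimizer(model, bert_lr=2e-5, base_lr=1e-3):
    bert_params, base_params = [], []
    for name, param in model.named_parameters():
        # Heuristic grouping: any parameter under a module whose name mentions
        # "bert" goes into the BERT group; everything else is a base parameter.
        (bert_params if "bert" in name.lower() else base_params).append(param)
    return torch.optim.Adam([
        {"params": bert_params, "lr": bert_lr},
        {"params": base_params, "lr": base_lr},
    ])
```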
680
1,332
680
Information-Theoretic Probing for Linguistic Structure
The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually "know" about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotations in that linguistic task from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that simpler models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic operationalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate, and thus reveal more of the linguistic information inherent in the representation. The experimental portion of our paper focuses on empirically estimating the mutual information between a linguistic property and BERT, comparing these estimates to several baselines. We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research, plus English, totalling eleven languages. Our implementation is available in
Neural networks are the backbone of modern stateof-the-art natural language processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the natural operationalization of probing is estimating the mutual information Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones In the experimental portion of the paper, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task We also remark that operationalizing probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT
Following Let S be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary V is finite and, thus, the values S can take are in V * . We write s ∈ S as s = s 1 • • • s |s| for a specific sentence, where each s i ∈ V is a specific token in the sentence at the position i ∈ Z + . We also define the random variable W that ranges over the vocabulary V. We define both a sentence-level random variable S and a word type-level random variable W since each will be useful in different contexts during our exposition. Next, let T be a random variable whose possible values are the analyses t that we want to consider for token s i in its sentential context, s = In the discussion, we focus on predicting the part-of-speech tag of the i th word s i , but the same results apply to the dependency label of an edge between two words. We denote the set of values T can take as the set T . Finally, let R be a representation-valued random variable for a token s i derived from the entire sentence s. We write r ∈ R d for a value of R. While any given value r is a continuous vector, there are only a countable number of values R can take. Next, we assume there exists a true distribution p(t, s, i) over analyses t (elements of T ), sentences s (elements of V * ), and positions i (elements of Z + ). Note that the conditional distribution p(t | s, i) gives us the true distribution over analyses t for the i th word token in the sentence s. We will augment this distribution such that p is additionally a distribution over r, i.e., p(r, t, s, i) = δ(r | s, i) p(t, s, i) (1) where we define the augmentation as: Since contextual embeddings are a deterministic function of a sentence s, the augmented distribution in eq. ( where we define the deterministic distribution The task of supervised probing is an attempt to ascertain how much information a specific representation r tells us about the value of t. This is naturally operationalized as the mutual information, a quantity from information theory: where we define the entropy, which is constant with respect to the representations, as and we define the conditional entropy as where the point-wise conditional entropy inside the sum is defined as Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq. (1). The desired conditional entropy, H(T | R) is not readily available, but with a model q θ (t | r) in hand, we can upper-bound it by measuring their empirical cross entropy: = - expected estimation error where H q θ (T | R) is the cross-entropy we obtain by using q θ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information This bound gets tighter, the more similar-in the sense of the KL divergence-q θ (• | r) is to the true distribution p(• | r). Bigger Probes are Better. If we accept mutual information as a natural operationalization for how much representations encode a target linguistic task ( §2.2), the best estimate of that mutual information is the one where the probe q θ (t | r) is best at the target task. In other words, we want the best probe q θ (t | r) such that we get the tightest bound to the actual distribution p(t | r). This paints the question posed in We will consider two different control functions. 
Each is defined as the composition c = e • id with a different look-up function: • e fastText returns a language specific fastText embedding • e onehot returns a one-hot embedding. These functions can be considered type level, as they remove the influence of context on the word. We focus on type-level control functions in this paper. These functions have the effect of decontextualizing the embeddings, being related to the common trend of analyzing probe results in comparison to input layer embeddings Assumption 1. Every contextualized embedding is unique, i.e., for any pair of sentences s, s ∈ V * , we have We note that Assumption 1 is mild. Contextualized word embeddings map words (in their context) to R d , which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in R d that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. Corollary 1. There exists a function id : R d → V that maps a contextualized embedding to its word type. The function id is not a bijection since multiple embeddings will map to the same type. Using Corollary 1, we can show that any noncontextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function e : V → R d that maps a word to a word embedding. This embedding may be onehot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fast-Text This result We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function c(•). We term how much more information the contextualized embeddings have about a task than a control variable the gain, G, which we define as The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function c. We will empirically estimate this value in §6. Interestingly enough, the gain has a straightforward interpretation. Proposition 1. The gain function is equal to the following conditional mutual information Proof. The jump from the first to the second equality follows since R encodes, by construction, all the information about T provided by c(R). Proposition 1 gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations. The gain, as defined in eq. ( these cross-entropies can be empirically estimated. We will assume access to a corpus {(t i , r i )} N i=1 that is human-annotated for the target linguistic property; we further assume that these are samples (t i , r i ) ∼ p(•, •) from the true distribution. This yields a second approximation that is tractable: This approximation is exact in the limit N → ∞ by the law of large numbers. We note the approximation given in eq. ( where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative by the data-processing inequality, but we have yet to devise a remedy. We justify the approximation in eq. ( Corollary 3. 
We have the following lower-bound on the gain The conjunction of Corollary 2 and Corollary 3 suggest a simple procedure for finding a good approximation: We choose q θ1 (• | r) and q θ2 (• | r) so as to minimize eq. ( In §3, we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in §4.1 that contextualized word embeddings, by construction, contain no more information about a wordlevel syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in §4.2, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech. To start, we note the following corollary Corollary 4. It directly follows from Assumption 1 that BERT is a bijection between sentences s and sequences of embeddings r 1 , . . . , r |s| . As BERT is a bijection, it has an inverse, which we will denote as BERT -1 . Theorem 1. BERT(S) cannot provide more information about T than the sentence S itself. Proof. ≥ I(T ; BERT -1 (BERT(S))) This implies I(T ; S) = I(T ; BERT(S)). While Theorem 1 is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, Theorem 1 is a cynical statement: under our operationalization, the endeavour of finding syntax in contextualized embeddings sentences is nonsensical. This is because, under Assumption 1, we know the answer a priori-the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself. Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Indeed, the same reasoning used in Corollary 1 can be used to devise a function id s (r) which maps a contextual representation of a token back to its sentence. For a typelevel control function c, by the data-processing inequality, we have that I(T ; W ) ≥ I(T ; c(R)). Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is I(T ; S) -I(T ; W ) = I(T ; S | W ). This quantity is interpreted as the amount of knowledge we gain about the word-level task T by knowing S (i.e., the sentence) in addition to W (i.e., the word type). Therefore, a perfect probe provides insights about language and not about the actual representations. We do acknowledge another interpretation of the work of Although for perfect probes the above results should hold, in practice id(•) and c(•) may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. Learning a random embedding control function, for example, would require a dataset containing all words in the vocabulary V -in an open vocabulary setting an infinite dataset would be required! 
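A minimal sketch of the empirical gain estimate discussed above is given below: train one probe on the contextual representations and another on the control representations, then subtract their test cross-entropies. The helper assumes probes that return a distribution over labels; the training loop and the specific probe family are omitted, and the function names are hypothetical.

```python
# Sketch: the gain is estimated as the difference between the test
# cross-entropies of a probe trained on control representations c(R) and a
# probe trained on the contextual representations R. Each cross-entropy
# upper-bounds the corresponding conditional entropy, so the difference is
# only an approximation of the true gain, as discussed above.
import math

def empirical_cross_entropy(probe, examples):
    """examples: list of (representation, gold_label); probe(r) returns a
    dict label -> probability. Returns cross-entropy in bits per example."""
    total = 0.0
    for r, t in examples:
        total += -math.log2(max(probe(r)[t], 1e-12))
    return total / len(examples)

def estimated_gain(probe_contextual, probe_control, test_contextual, test_control):
    h_control = empirical_cross_entropy(probe_control, test_control)
    h_contextual = empirical_cross_entropy(probe_contextual, test_contextual)
    return h_control - h_contextual
```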
"Better" representations should make their respective probes easily learnable-and consequently their encoded information is more accessible We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously-even though we do not attempt this ourselves. As previously argued by We agree with Hewitt and Liang (2019)-and with both Hewitt and Liang (2019) introduces control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from What is more, there is a closed-form solution for an optimal, retrieval-based "probe" that has zero learned parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-mostfrequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words. Hewitt and Liang (2019) proposes that probes should be optimized to maximize accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximising selectivity leads to a selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization, however memorization also plays a key role Hewitt and Liang (2019) acknowledges that for the more complex task of dependency edge prediction, a MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, Despite our discussion in §4, we still wish to empirically vet our estimation technique for the gain and we use this section to highlight the need to formally define ease of extraction (as argued in §4.3). We consider the tasks of POS and dependency labeling, using the universal POS tag As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as an m-layer neural network with the non-linearity σ(•) = ReLU(•). The initial projection matrix is W (1) ∈ R r 1 ×d and the final projection matrix is , where r i = r 2 i-1 . The remaining matrices are W (i) ∈ R r i ×r i-1 , so we halve the number of hidden states in each layer. We optimize over the hyperparameters-number of layers, hidden size, one-hot embedding size, and dropout-by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy in the test set is then used as our entropy estimate. 
For dependency labeling, we follow We know BERT can generate text in many languages. Here we assess how much it actually "knows" about syntax in those languages-or at least how much we can extract from it given as powerful probes as we can train. We further evaluate how much it knows above and beyond simple type-level baselines. We propose an information-theoretic operationalization of probing that defines it as the task of estimating conditional mutual information. We introduce control functions, which put in context our mutual information estimates-how much more informative are contextual representations than some knowledge judged to be trivial? We further explored our operationalization and showed that, given perfect probes, probing can only yield insights into the language itself and cannot tell us anything about the representations under investigation. Keeping this in mind, we suggest a change of focus-instead of concentrating on probe size or information, we should pursue ease of extraction going forward. On a final note, we apply our formalization to evaluate multilingual BERT's syntactic knowledge on a set of eleven typologically diverse languages. Although it does encode a large amount of information about syntax-more than 76% and 65%, respectively, about POS and dependency labels in all languages = G q θ (T, R, e) estimated gain + KL q θ1 (T | R) -KL q θ2 (T, c(R))
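To make the probe architecture used in the experiments concrete, a sketch in PyTorch is given below: an m-layer ReLU MLP whose hidden width is halved at each layer, followed by a final projection to the label space. The hyperparameter values shown are placeholders, since the paper selects them by random search.

```python
# Sketch of the m-layer MLP probe with halved hidden sizes (placeholder
# hyperparameters; the paper tunes these by random search).
import torch.nn as nn

def build_probe(rep_dim, num_labels, num_layers=3, first_hidden=256, dropout=0.2):
    layers, in_dim, hidden = [], rep_dim, first_hidden
    for _ in range(num_layers):
        layers += [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(dropout)]
        in_dim, hidden = hidden, max(hidden // 2, 2)   # halve the width at each layer
    layers.append(nn.Linear(in_dim, num_labels))       # final projection to label space
    return nn.Sequential(*layers)
```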
1,375
1,376
1,375
ELLEIPO: A module that computes coordinative ellipsis for language generators that don't
Many current sentence generators lack the ability to compute elliptical versions of coordinated clauses in accordance with the rules for Gapping, Forward and Backward Conjunction Reduction, and SGF (Subject Gap in clauses with Finite/Fronted verb). We describe a module (implemented in JAVA, with German and Dutch as target languages) that takes non-elliptical coordinated clauses as input and returns all reduced versions licensed by coordinative ellipsis. It is loosely based on a new psycholinguistic theory of coordinative ellipsis proposed by Kempen. In this theory, coordinative ellipsis is not supposed to result from the application of declarative grammar rules for clause formation but from a procedural component that interacts with the sentence generator and may block the overt expression of certain constituents.
Coordination and coordinative ellipsis are essential tools for the sentence aggregation component of any language generator. Very often, when the aggregator chooses to combine several clauses into a single coordinate structure, the need arises to eliminate unnatural reduplications of coreferential constituents. In the literature, one often distinguishes four major types of clause-level coordinative ellipsis: • Gapping (as in (1)), with a special variant called Long-Distance Gapping (LDG). In LDG, the second conjunct consists of constituents stemming from different clauses -in (2), the main clause and the complement. • Forward Conjunction ( The subscripts denote the elliptical mechanism at work: g=Gapping, gl=LDG, f=FCR, s=SGF, b=BCR. We will not deal with VP Ellipsis and VP Anaphora because they generate pro-forms rather than elisions and are not restricted to coordination (cf. the title of the paper). In current sentence generators, the coordinative ellipsis rules are often inextricably intertwined with the rules for generating nonelliptical coordinate structures, so that they cannot easily be ported to other grammar formalisms -e.g., The module (dubbed ELLEIPO, from Greek Ἐλλείπω 'I leave out') we present here, is less formalism-dependent and, in principle, less liable to over-or undergeneration than its competitors. In Section 2, we sketch the theoretical background. Section 3 and the Appendix describe our implementation, with examples from German. Finally, in Section 4, we discuss the prospects of extending the module to additional constructions.
ELLEIPO is loosely based on Kempen's (subm.) psycholinguistically motivated syntactic theory of clausal coordination and coordinative ellipsis. It departs from the assumption that the generator's strategic (conceptual, pragmatic) component is responsible for selecting the concepts and conceptual structures that enable identification of discourse referents (except in case of syntactically conditioned pronominalization). The strategic component may conjoin two or more clauses into a coordination and deliver as output a non-reduced sequence of conjuncts. 1 The concepts in these conjuncts are adorned with reference tags, and identical tags express coreferentiality. 2 Structures of this kind serve as input to the (syn)tactical component of the generator, where they are grammatically encoded (lexicalized and given syntactic form) without any form of coordinative ellipsis. The resulting non-elliptical structures are input to ELLEIPO, which computes and executes options for coordinative ellipsis. ELLEIPO's functioning is based on the assumption that coordinative ellipsis does not result from the application of declarative grammar rules for clause formation but from a procedural component that interacts with the sentence generator and may block the overt expression of certain constituents. Due to this feature, ELLEIPO can be combined, at least in principle, with various grammar formalisms. However, this advantage is not entirely gratis: The module needs a formalism-dependent interface that converts gen-1 The strategic component is also supposed to apply rules of logical inference yielding the conceptual structures that underlie "respectively coordinations." Hence, the conversion of clausal into NP coordination (such as Anne likes biking and Susi likes skating into Anne and Susi like biking and skating, respectively is supposed to arise in the strategic, not the (syn)tactical component of the generator. This also applies to simpler cases without respectively, such as John is skating and Peter is skating versus John and Peter are skating. The module presented here does not handle these conversions (see erator output to a (simple) canonical form. This sketch presupposes and-coordinations of only n=2 conjuncts. Actually, ELLEIPO handles and-coordinations with n 2 conjuncts if, in every pair of conjuncts, the major constituents embody the same pattern of coreferences and contrasts. ELLEIPO takes as input a non-elliptical syntactic structure that should meet the following four canonical form criteria (see Fig. (7) Susi hörte dass Hans einen Unfall hatte Susi heard that Hans an accident had und dass f Hans f sterben könnte and that Hans die might 'Susi heard that Hans had an accident and might die' • Categorial (phrasal and lexical) nodesbolded in Fig. • The conjuncts are sister nodes separated by coordinating conjunctions; we call these configurations coordination domains. The order of the conjuncts and their constituents is defined. • Every categorial node of the input tree is immediately dominated by a functional node. • Each clausal conjunct is rooted in an S-node whose daughter nodes (immediate constituents) are grammatical functions. Within a clausal conjunct, all functions are represented at the same hierarchical level. Hence, the trees are "flat," as illustrated in Fig. ELLEIPO starts by demarcating "superclauses." Kempen (subm.) introduced this notion in his treatment of Gapping and LDG. 
An S-node dominates a superclause iff it dominates the entire sentence or a clause beginning with a subordinating conjunction (CNJ). In Fig. clauses. Note that S 12 includes clause S 13 , which is not a superclause. Then, ELLEIPO checks all coordination domains for elision options, as follows: • Testing for forward ellipsis: Gapping (including LDG), FCR, or SGF. This involves inspecting (recursively for every S-node) the set of immediate constituents (grammatical functions) of the two conjuncts, and their reference tags. Complete constituents of the right-hand conjunct may get marked for elision, depending on the specific conditions listed in the Appendix. • Testing for BCR. ELLEIPO checks -wordby-word, going from right to left -the coreference tags of the conjuncts. As a result, complete or partial constituents in the right-hand periphery of the left conjunct may get marked for elision. The final step of the module is ReadOut. After all coordination domains have been processed, a (possibly empty) subset of the terminal leaves of the input tree has been marked for elision. In the examples below, this is indicated by subscript marks. E.g., the subscript "g" attached to esst 'eat' in (9b) indicates that Gapping is allowed. ReadOut interprets the elision marks and, in 'standard mode,' produces the shortest elliptical string(s) as output (e.g. ( Example (10) illustrates a combination of Gapping and BCR, with the three licensed elliptical output strings shown in (10c). In (11), Gapping combines with BCR in the subordinate clauses. The fact that here, in contrast with (10), the subordinate clauses do not start their own superclauses, now licenses LDG. However, ReadOut prevents LDG to combine with BCR, which would have yielded the unintended string Anne versucht Bücher und Susi Artikel. Currently, ELLEIPO can handle all major types of clausal coordinative ellipsis in German and Dutch. However, further finetuning of the rules is needed, e.g., in order to take subtle semantic conditions on SGF and Gapping into account. We expect further improvements by allowing for interactions between the ellipsis module and the generator's pronominalization strategy. Work on porting ELLEIPO to related languages, in particular English, and to coordinations of non-clausal constituents (NP, PP, AP) is in progress.
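ELLEIPO itself is implemented in JAVA; the fragment below is only a schematic Python rendering of the final ReadOut step in 'standard mode', in which leaves marked for elision are simply omitted to produce the shortest output string. The data representation and the toy Gapping sentence are invented for illustration and are not taken from the module.

```python
# Schematic rendering (not the JAVA module itself) of ReadOut in 'standard
# mode': each leaf carries a set of elision marks ('g', 'gl', 'f', 's', 'b');
# the shortest output simply omits every marked leaf.

def read_out(leaves):
    """leaves: list of (word, set_of_elision_marks)."""
    return " ".join(word for word, marks in leaves if not marks)

# Invented Gapping example: "Hans isst Kekse und Susi isst Bonbons"
sentence = [("Hans", set()), ("isst", set()), ("Kekse", set()),
            ("und", set()), ("Susi", set()), ("isst", {"g"}), ("Bonbons", set())]
print(read_out(sentence))   # -> "Hans isst Kekse und Susi Bonbons"
```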
825
1,576
825
Re-evaluating the Role of BLEU in Machine Translation Research
We argue that the machine translation community is overly reliant on the Bleu machine translation evaluation metric. We show that an improved Bleu score is neither necessary nor sufficient for achieving an actual improvement in translation quality, and give two significant counterexamples to Bleu's correlation with human judgments of quality. This offers new potential for research which was previously deemed unpromising by an inability to improve upon Bleu scores.
Over the past five years progress in machine translation, and to a lesser extent progress in natural language generation tasks such as summarization, has been driven by optimizing against n-grambased evaluation metrics such as Bleu However, there is a question as to whether minimizing the error rate with respect to Bleu does indeed guarantee genuine translation improvements. If Bleu's correlation with human judgments has been overestimated, then the field needs to ask itself whether it should continue to be driven by Bleu to the extent that it currently is. In this paper we give a number of counterexamples for Bleu's correlation with human judgments. We show that under some circumstances an improvement in Bleu is not sufficient to reflect a genuine improvement in translation quality, and in other circumstances that it is not necessary to improve Bleu in order to achieve a noticeable improvement in translation quality. We argue that Bleu is insufficient by showing that Bleu admits a huge amount of variation for identically scored hypotheses. Typically there are millions of variations on a hypothesis translation that receive the same Bleu score. Because not all these variations are equally grammatically or semantically plausible there are translations which have the same Bleu score but a worse human evaluation. We further illustrate that in practice a higher Bleu score is not necessarily indicative of better translation quality by giving two substantial examples of Bleu vastly underestimating the translation quality of systems. Finally, we discuss appropriate uses for Bleu and suggest that for some research projects it may be preferable to use a focused, manual evaluation instead.
The rationale behind the development of Bleu

The way that Bleu and other automatic evaluation metrics work is to compare the output of a machine translation system against reference human translations. Machine translation evaluation metrics differ from other metrics that use a reference, like the word error rate metric that is used

Table (four reference translations and a hypothesis translation for one source sentence):
Reference: Orejuela appeared calm as he was led to the American plane which will take him to Miami, Florida.
Reference: Orejuela appeared calm while being escorted to the plane that would take him to Miami, Florida.
Reference: Orejuela appeared calm as he was being led to the American plane that was to carry him to Miami in Florida.
Reference: Orejuela seemed quite calm as he was being led to the American plane that would take him to Miami in Florida.
Hypothesis: Appeared calm when he was taken to the American plane, which will to Miami, Florida.

Bleu attempts to capture allowable variation in word choice through the use of multiple reference translations (as proposed in

Bleu's n-gram precision is modified to eliminate repetitions that occur across sentences. For example, even though the bigram "to Miami" is repeated across all four reference translations in Table

Counting punctuation marks as separate tokens, the hypothesis translation given in Table

Because Bleu is precision-based, and because recall is difficult to formulate over multiple reference translations, a brevity penalty is introduced to compensate for the possibility of proposing high-precision hypothesis translations which are too short. The brevity penalty is calculated as:

BP = 1 if c > r, and BP = e^(1 - r/c) if c <= r,

where c is the length of the corpus of hypothesis translations, and r is the effective reference corpus length. The overall score is the brevity penalty multiplied by the geometric mean of the modified n-gram precisions:

Bleu = BP * exp( sum_{n=1}^{N} w_n log p_n )

A Bleu score can range from 0 to 1, where higher scores indicate closer matches to the reference translations, and where a score of 1 is assigned to a hypothesis translation which exactly matches one of the reference translations. A score of 1 is also assigned to a hypothesis translation which has matches for all its n-grams (up to the maximum n measured by Bleu) in the clipped reference n-grams, and which has no brevity penalty. The primary reason that Bleu is viewed as a useful stand-in for manual evaluation is that it has been shown to correlate with human judgments of translation quality. In the next section we discuss theoretical reasons why Bleu may not always correlate with human judgments.

While Bleu attempts to capture allowable variation in translation, it goes much further than it should. In order to allow some amount of variant order in phrases, Bleu places no explicit constraints on the order that matching n-grams occur in. To allow variation in word choice in translation, Bleu uses multiple reference translations, but puts very few constraints on how n-gram matches can be drawn from the multiple reference translations. Because Bleu is underconstrained in these ways, it allows a tremendous amount of variation, far beyond what could reasonably be considered acceptable variation in translation. In this section we examine various permutations and substitutions allowed by Bleu. We show that for an average hypothesis translation there are millions of possible variants that would each receive a similar Bleu score. We argue that because the number of translations that score the same is so large, it is unlikely that all of them will be judged to be identical in quality by human annotators. This means that it is possible to have items which receive identical Bleu scores but are judged by humans to be worse.
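As a concrete reference point for the discussion that follows, the modified n-gram precision and brevity penalty described above can be sketched as below. This is an illustrative re-implementation, not the official scoring script; in particular, the effective reference length is taken here to be the closest-length reference, which is one common choice, and uniform n-gram weights are assumed.

```python
# Illustrative re-implementation (not the official scoring script) of Bleu's
# clipped n-gram precision, brevity penalty, and geometric mean.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypotheses, reference_sets, max_n=4):
    """hypotheses: list of token lists; reference_sets: list of lists of
    reference token lists (one set of references per hypothesis)."""
    log_precisions = []
    for n in range(1, max_n + 1):
        matches = total = 0
        for hyp, refs in zip(hypotheses, reference_sets):
            hyp_counts = ngrams(hyp, n)
            clipped = Counter()
            for ref in refs:
                clipped |= ngrams(ref, n)      # max count of each n-gram over the references
            matches += sum(min(c, clipped[g]) for g, c in hyp_counts.items())
            total += max(len(hyp) - n + 1, 0)
        log_precisions.append(math.log(matches / total) if matches and total else float("-inf"))
    hyp_len = sum(len(h) for h in hypotheses)
    # Effective reference length: closest-length reference per hypothesis (one common choice).
    ref_len = sum(len(min(refs, key=lambda r: abs(len(r) - len(hyp))))
                  for hyp, refs in zip(hypotheses, reference_sets))
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(sum(log_precisions) / max_n)   # uniform weights w_n = 1 / max_n

# Toy usage on a single sentence with one reference.
hyp = "Appeared calm when he was taken to the American plane".split()
refs = ["Orejuela appeared calm as he was led to the American plane".split()]
print(round(bleu([hyp], [refs]), 3))
```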
It is also therefore possible to have a higher Bleu score without any genuine improvement in translation quality. In Sections 3.1 and 3.2 we examine ways of synthetically producing such variant translations. One way in which variation can be introduced is by permuting phrases within a hypothesis translation. A simple way of estimating a lower bound on the number of ways that phrases in a hypothesis translation can be reordered is to examine bigram mismatches. Phrases that are bracketed by these bigram mismatch sites can be freely permuted because reordering a hypothesis translation at these points will not reduce the number of matching ngrams and thus will not reduce the overall Bleu score. Here we denote bigram mismatches for the hypothesis translation given in Table If b is the number of bigram matches in a hypothesis translation, and k is its length, then there are possible ways to generate similarly scored items using only the words in the hypothesis translation. In addition to the factorial number of ways that similarly scored Bleu items can be generated by permuting phrases around bigram mismatch points, additional variation may be synthesized by drawing different items from the reference ngrams. For example, since the hypothesis trans- This problem is made worse by the fact that Bleu equally weights all items in the reference sentences The problem is further exacerbated by Bleu not having any facilities for matching synonyms or lexical variants. Therefore words in the hypothesis that did not appear in the references (such as when and taken in the hypothesis from Table The lack of recall combined with naive token identity means that there can be overlap between similar items in the multiple reference translations. For example we can produce a translation which contains both the words carry and take even though they arise from the same source word. The chance of problems of this sort being introduced increases as we add more reference translations. Bleu's inability to distinguish between randomly generated variations in translation hints that it may not correlate with human judgments of translation quality in some cases. As the number of identically scored variants goes up, the likelihood that they would all be judged equally plausible goes down. This is a theoretical point, and while the variants are artificially constructed, it does highlight the fact that Bleu is quite a crude measurement of translation quality. A number of prominent factors contribute to Bleu's crudeness: • Synonyms and paraphrases are only handled if they are in the set of multiple reference translations. • The scores for words are equally weighted so missing out on content-bearing material brings no additional penalty. • The brevity penalty is a stop-gap measure to compensate for the fairly serious problem of not being able to calculate recall. Each of these failures contributes to an increased amount of inappropriately indistinguishable translations in the analysis presented above. Given that Bleu can theoretically assign equal scoring to translations of obvious different quality, it is logical that a higher Bleu score may not Fluency How do you judge the fluency of this translation? 5 = Flawless English 4 = Good English 3 = Non-native English 2 = Disfluent English 1 = Incomprehensible Adequacy How much of the meaning expressed in the reference translation is also expressed in the hypothesis translation? 
5 = All 4 = Most 3 = Much 2 = Little 1 = None Table The NIST Machine Translation Evaluation exercise has run annually for the past five years as part of DARPA's TIDES program. The quality of Chinese-to-English and Arabic-to-English translation systems is evaluated both by using Bleu score and by conducting a manual evaluation. As such, the NIST MT Eval provides an excellent source of data that allows Bleu's correlation with human judgments to be verified. Last year's evaluation exercise The manual evaluation conducted for the NIST MT Eval is done by English speakers without reference to the original Arabic or Chinese documents. Two judges assigned each sentence in Iran has already stated that Kharazi's statements to the conference because of the Jordanian King Abdullah II in which he stood accused Iran of interfering in Iraqi affairs. n-gram matches: 27 unigrams, 20 bigrams, 15 trigrams, and ten 4-grams human scores: Adequacy:3,2 Fluency:3,2 Iran already announced that Kharrazi will not attend the conference because of the statements made by the Jordanian Monarch Abdullah II who has accused Iran of interfering in Iraqi affairs. n-gram matches: 24 unigrams, 19 bigrams, 15 trigrams, and 12 4-grams human scores: Adequacy:5,4 Fluency:5,4 Reference: Iran had already announced Kharazi would boycott the conference after Jordan's King Abdullah II accused Iran of meddling in Iraq's affairs. Table Table We investigated this by performing a manual evaluation comparing the output of two statistical machine translation systems with a rule-based machine translation, and seeing whether Bleu cor- Figure We then performed a manual evaluation where we had three judges assign fluency and adequacy ratings for the English translations of 300 French sentences for each of the three systems. These scores are plotted against the systems' Bleu scores in Figure A number of projects in the past have looked into ways of extending and improving the Bleu metric. In this paper we have shown theoretical and practical evidence that Bleu may not correlate with human judgment to the degree that it is currently believed to do. We have shown that Bleu's rather coarse model of allowable variation in translation can mean that an improved Bleu score is not sufficient to reflect a genuine improvement in translation quality. We have further shown that it is not necessary to receive a higher Bleu score in order to be judged to have better translation quality by human subjects, as illustrated in the 2005 NIST Machine Translation Evaluation and our experiment manually evaluating Systran and SMT translations. What conclusions can we draw from this? Should we give up on using Bleu entirely? We think that the advantages of Bleu are still very strong; automatic evaluation metrics are inexpensive, and do allow many tasks to be performed that would otherwise be impossible. The important thing therefore is to recognize which uses of Bleu are appropriate and which uses are not. Appropriate uses for Bleu include tracking broad, incremental changes to a single system, comparing systems which employ similar translation strategies (such as comparing phrase-based statistical machine translation systems with other phrase-based statistical machine translation systems), and using Bleu as an objective function to optimize the values of parameters such as feature weights in log linear translation models, until a better metric has been proposed. 
Inappropriate uses for Bleu include comparing systems which employ radically different strategies (especially comparing phrase-based statistical machine translation systems against systems that do not employ similar n-gram-based approaches), trying to detect improvements for aspects of translation that are not modeled well by Bleu, and monitoring improvements that occur infrequently within a test corpus. These comments do not apply solely to Bleu. Meteor Finally, that the fact that Bleu's correlation with human judgments has been drawn into question may warrant a re-examination of past work which failed to show improvements in Bleu. For example, work which failed to detect improvements in translation quality with the integration of word sense disambiguation
468
1,707
468
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. CRE models usually suffer from the catastrophic forgetting problem, i.e., the performance on old relations seriously degrades when the model learns new relations. Most previous work attributes catastrophic forgetting to the corruption of the learned representations as new relations arrive, with an implicit assumption that the CRE models have adequately learned the old relations. In this paper, through empirical studies we argue that this assumption may not hold, and that an important reason for catastrophic forgetting is that the learned representations are not robust against the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA can consistently improve the performance of state-of-the-art CRE models on two popular benchmarks. Our code is available at
Relation extraction (RE) aims to detect the relation between two given entities in a sentence. Traditional RE models are trained on a fixed dataset with a predefined relation set, which cannot handle the real-life situation where new relations are constantly emerging. To this end, continual relation extraction (CRE) has been proposed, in which, at each step, the model needs to learn some new relations and is evaluated on all seen relations. Like other continual learning systems, CRE models also suffer from catastrophic forgetting, i.e., the performance on previously learned relations seriously degrades when learning new relations. The mainstream research in CRE With a series of empirical studies, we observe that catastrophic forgetting mostly happens on some specific relations, and significant performance degradation tends to occur when their analogous relations appear. Based on our observations, we find another reason for catastrophic forgetting, i.e., CRE models do not learn sufficiently robust representations of relations in the first place due to the relatively easy training task. Taking "child" in Figure Recently, adversarial data augmentation has emerged as a strong baseline to prevent models from learning shortcuts from an easy dataset We summarize our contributions as follows: 1) we conduct a series of empirical studies on two strong CRE methods and observe that catastrophic forgetting is strongly related to the existence of analogous relations; 2) we find an important reason for catastrophic forgetting in CRE that is overlooked in all previous work: CRE models learn shortcuts to identify new relations, which are not robust enough against the appearance of their analogous relations; 3) we propose an adversarial class augmentation mechanism to help CRE models learn more robust representations. Experimental results on two benchmarks show that our method can consistently improve the performance of two state-of-the-art methods.
Relation Extraction Conventional Relation Extraction (RE) focuses on extracting the predefined relation of two given entities in a sentence. Recently, a variety of deep neural networks (DNN) have been proposed for RE, mainly including: 1) Convolutional or Recurrent neural network (CNN or RNN) based methods Continual Learning Continual Learning (CL) aims to continually accumulate knowledge from a sequence of tasks Shortcuts Learning Phenomenon Shortcuts learning phenomenon denotes that DNN models tend to learn unreliable shortcuts in datasets, leading to poor generalization ability in real-world applications In CRE, the model is trained on a sequence of tasks (T 1 , T 2 , ..., T k ). Each task T i can be represented as a triplet (R i , D i , Q i ), where R i is the set of new relations, D i and Q i are the training and testing set, respectively. Every instance (x j , y j ) ∈ D i ∪ Q i belongs to a specific relation y j ∈ R i . The goal of CRE is to continually train the model on new tasks to learn new relations, while avoiding forgetting of previously learned ones. More formally, in the i-th task, the model learns new relations R i from D i , and should be able to identify all seen relations, i.e., the model will be evaluated on the all seen testing sets i j=1 Q j . To alleviate catastrophic forgetting in CRE, previous work Characteristics and the Cause In this section, we conduct a series of empirical studies on two state-of-the-art CRE models, namely EMAR We use Forgetting Rate (FR) where pd j r and F 1 j r are the performance degradation and F1 score of r after the model trains on task j, respectively. The sequence length k is 10 for both FewRel and TACRED. We divide all relations into three equal-sized groups based on their FR from small to large. As shown in Table Where catastrophic forgetting happens? With careful comparison between G1 and G3, we find that relations in G3 seem to have analogous relations in the dataset. For example, "mother" belongs to G3, and there are its semantically analogous relations, such as "spouse", in the dataset. To confirm our finding, we first define the similarity for a pair of relations as the cosine distance of their prototypes, i.e., the mean vanilla BERT sentence embedding of all corresponding instances. Then, for a certain relation, we compute its max similarity (MS) to all the other relations in the dataset. As shown in Table When catastrophic forgetting happens? We also observe that the performance of the relations with high FR always has a sudden drop in some tasks. To explore the characteristic of the task with severe performance drop, we run two CRE models on 50 different task sequences, and record all the bad cases where catastrophic forgetting happens (the F1 scores of a relation degrades greater than 10 points after the model learns a new task). Given a certain relation r and its corresponding bad cases, we mark cases where exist top-5 most similar relations of r. As shown in Table All of the previous CRE works attribute the catastrophic forgetting to the corruption of the learned knowledge during the continual learning process, with the assumption that the CRE models have adequately learned the previous relations. However, we argue that this assumption may not hold. 
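To make the analysis concrete: a prototype here is the mean sentence embedding of a relation's instances, and a relation's max similarity (MS) is its highest cosine similarity to any other relation's prototype. A minimal sketch follows, with random NumPy vectors standing in for the vanilla BERT sentence embeddings used in the paper.

```python
import numpy as np

def relation_prototypes(embeddings_by_relation):
    """Prototype of a relation = mean of its instance embeddings."""
    return {rel: np.mean(np.stack(vecs), axis=0)
            for rel, vecs in embeddings_by_relation.items()}

def max_similarity(prototypes):
    """For each relation, the maximum cosine similarity to any other relation's prototype."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return {r: max(cos(p, q) for s, q in prototypes.items() if s != r)
            for r, p in prototypes.items()}

# Toy stand-ins for BERT sentence embeddings of instances per relation.
rng = np.random.default_rng(0)
data = {
    "mother": [rng.normal(size=8) for _ in range(5)],
    "spouse": [rng.normal(size=8) for _ in range(5)],
    "located_in": [rng.normal(size=8) for _ in range(5)],
}
print(max_similarity(relation_prototypes(data)))
```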
In CRE, models are continually trained on a sequence of stand-alone easy training datasets, where each dataset usually only consists of very few new re- To confirm our hypothesis, we propose a retrieval test: after the CRE model is trained to identify a specific relation r, we use the trained model to retrieve instances of r from the whole test set according to the similarity of representations Recently, adversarial data augmentation has shown promise for avoiding models from learning shortcuts in the easy dataset Our ACA is model-agnostic and utilizes popular state-of-the-art CRE models as the backbone. Therefore, we first briefly introduce the two-stage training process of these CRE models. CRE model aim to finish a sequence of tasks (T 1 , T 2 , ..., T k ). Without loss of generality, we represent CRE model with two components: 1) an encoder, which maps an input instance x into a representation vector; 2) a classifier, which produces a probability distribution over all seen relations till current task as the prediction for x. As shown in Figure Orthogonal to all previous CRE models, our ACA instead focuses on the first initial training stage to improve the robustness of newly learned relation representations. Specifically, when a new task T i comes, ACA first augments the new relations R i based on the new training set D i , and then trains the original relations and synthesized classes together. Given N new relations, we pair them randomly and get ⌊N/2⌋ relation pairs. We construct hybrid synthetic classes based on these relation pairs. Specifically, for a relation pair {r i , r j } with the relations r i and r j , we use two instances, x i from r i and x j from r j , to generate a hybrid instance x hybrid for the extra synthetic class r ij . As shown in Figure We classify all relations into two categories, symmetric and asymmetric relations. The symmetric relation means that the order of the head and tail entities does not matter, e.g, "sibling" and "spouse" (please refer to Appendix E for details of symmetric relations on two datasets). In contrast, the semantic of the asymmetric relations is related to the choice of head and tail entities, e.g., "located in" and "mother". As shown in Figure Datasets Following previous works Evaluation Metrics Following Baselines We consider the following baselines: EA-EMR Implement Details Our ACA is model-agnostic, and we choose two state-of-the-art CRE models, EMAR and RP-CRE as our backbone to evaluate ACA. The number of stored instances of each relation in the memory bank is 10. All hyperparameters of EMAR and RP-CRE are the same as that of their origin paper. ACA does not introduce any model hyperparameters. We run our code on a single NVIDIA A40 GPU with 48GB memory, and report the average result of 5 different task sequences. The performances of our ACA and baselines are shown in Table To further explore the effectiveness of our proposed two class augmentation methods, we conduct an ablation study. Table Our proposed ACA aims to learn robust representations that can better distinguish analogous relations. To further confirm the effectiveness of our method, we first reproduce the retrieval test introduced in our pilot experiments (see Appendix C for more details). Table We also conduct a case study to intuitively show the effectiveness of our method. 
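The class augmentation step described above pairs the N new relations into ⌊N/2⌋ random pairs and adds one synthetic hybrid class per pair, built from one instance of each member relation. A sketch of that bookkeeping follows; how the two instances are actually spliced is shown only in the paper's figure, so plain token concatenation serves as a placeholder here, and the reversed-class augmentation for asymmetric relations is omitted for brevity.

```python
import random

def build_hybrid_classes(new_relations, instances_by_relation, seed=0):
    """Pair the N new relations randomly into floor(N/2) pairs and create one
    synthetic hybrid class r_ij per pair. The hybridization operator itself is
    an assumption here: token concatenation stands in for the paper's scheme."""
    rng = random.Random(seed)
    relations = list(new_relations)
    rng.shuffle(relations)
    pairs = [(relations[i], relations[i + 1]) for i in range(0, len(relations) - 1, 2)]
    augmented = {}
    for r_i, r_j in pairs:
        x_i = rng.choice(instances_by_relation[r_i])
        x_j = rng.choice(instances_by_relation[r_j])
        augmented[f"{r_i}+{r_j}"] = x_i + x_j  # placeholder hybrid instance
    return augmented

instances = {
    "mother": [["Alice", "is", "the", "mother", "of", "Bob"]],
    "spouse": [["Carol", "is", "married", "to", "Dan"]],
    "located_in": [["Paris", "is", "located", "in", "France"]],
    "sibling": [["Eve", "is", "the", "sister", "of", "Frank"]],
}
print(build_hybrid_classes(instances.keys(), instances))
```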
We consider two analogous relations, P25 ("mother") and P26 ("spouse") Memory size is the number of memorized instances for each relation, which is a key factor for the model performance of rehearsal-based CRE methods. Therefore, in this section, we study the influence of memory size on our ACA. We compare the performance of EMAR and EMAR+ACA with memory sizes 5, 10 and 20. As shown in Figure In this section, we conduct an error analysis to show the effectiveness of ACA and the challenge of CRE. Through our analysis of catastrophic forgetting, we find that the performance of relations is highly related to their max similarity. Therefore, we equally divide the relations into three groups according to their max similarity. As shown in Table In this paper, we conduct a series of empirical study to analyze catastrophic forgetting in CRE, and observe that catastrophic forgetting mostly happens on some specific relations, and significant performance degradation tends to occur when their analogous relations appear in subsequent tasks. Based on our observations, we find an important reason for catastrophic forgetting in CRE that all previous works overlooked, i.e., the CRE models suffer from learning shortcuts to identify new relations, which are not robust enough against the appearance of their analogous relations. To this end, we propose a simple yet effective adversarial class augmentation mechanism to help CRE models learn more robust representations. Extensive experiments on two benchmarks show that our method can further improve the performance of two state-of-the-art CRE models. Our paper has several limitations: 1) Although we provide a new perspective from the shortcut learning to explain catastrophic forgetting, and utilize a retrieval test to confirm our hypothesis, we do not explore which types of shortcuts are learned by CRE models; 2) Our ACA with two class augmentation methods is specially designed for CRE. However, our findings about catastrophic forgetting in this paper may be common in the context of continual learning. Therefore, it would be better if we can propose more universal adversarial training methods which can be adapted to all continual learning systems; 3) ACA conducts the class augmentation before the initial training stage, which introduces extra computational overhead on top of backbone CRE models. We show the performance of two strong baselines on TACRED in Table As illustrated in Figure As discussed in Section 4.2, a potential reason for catastrophic forgetting in CRE is the model only learns the spurious shortcuts in the continual learning setting. In order to evaluate the representation ability of the CRE model, we propose a retrieval test analysis. Given an instance x, a CRE method utilizes an encoder f to encode its semantic features for learning and classifying relations, (3) For a relation, if its F1 score degrades greater than 0.1 after the model learning a new task, we consider it as an relation suffering severe forgetting. We group all relations suffering severe forgetting into a set R f . For a relation r ∈ R f , we additionally randomly sample 7 relations {r * 1 , ..., r * 7 } (3 relations for TACRED) from R \ R f to build a pseudo task T * containing instances from R * = {r, r * 1 , ..., r * 7 }, where R is the relation set of the entire dataset. 
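A small sketch of that pseudo-task construction, under the assumption that the distractor relations are drawn uniformly at random from R \ R_f (the relation identifiers below are made up for illustration):

```python
import random

def build_pseudo_task(relation, forgotten_relations, all_relations,
                      n_distractors=7, seed=0):
    """Build the pseudo task T* used by the retrieval test: the probed relation r
    plus n_distractors relations sampled from R \\ R_f (7 for FewRel, 3 for TACRED)."""
    rng = random.Random(seed)
    candidates = [rel for rel in all_relations
                  if rel not in forgotten_relations and rel != relation]
    return [relation] + rng.sample(candidates, n_distractors)

all_rels = [f"P{i}" for i in range(30)]
severely_forgotten = {"P25", "P26", "P3"}   # R_f: relations with a >10-point F1 drop
print(build_pseudo_task("P25", severely_forgotten, all_rels))
```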
After training the CRE model on our built pseudo task T * , we obtain the prototype p r of the relation r, that is, the mean embedding of all instances belonging to r, where |r| is the number of instances of relation r. We also obtain embeddings of each instance in the entire test set, Then we compute the cosine similarity between p r and h i ∈ I: We consider rank-based metrics and use the mean precision at k (P@k), which is the proportion of instances whose label is r in the top-k similar set. Specifically, for FewRel, we use P@100 as metric. For TACRED, because this dataset has a severe imbalance problem and some relations only have less than 50 instances, we use mean P@|Q r | as metric, where |Q r | is the size of test set corresponding to relation r. If the retrieval precision is high, we can say that the model learns robust representations. Following previous work In our reversed-class augmentation, we divide all relations into two categories, symmetric relation and asymmetric relation. The symmetric relation denotes the relation semantic is independent of which of the two given entities is the head or tail entity, and the relations except symmetric relations are asymmetric relations. (1) In FewRel, we find 2 symmetric relations, "P26 (spouse)" and "P3373 (sibling)". (2) In TACRED, we find 5 symmetric relations, "per:siblings", "org:alternate names", "per:spouse", "per:alternate names" and "per:other family". In this section, we provide more cases to intuitively show the effectiveness of our look-ahead learning for learning robust representation. Please refer to Figure As discussed in Section 4, we divide all relations into three equal-sized groups based on their FR from small to large. In this section, we show example relations in Group 1 and Group 3 of FewRel in Table
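Continuing the retrieval test, the precision-at-k computation reduces to ranking test instances by cosine similarity to the relation prototype. A sketch with random vectors standing in for encoder embeddings:

```python
import numpy as np

def precision_at_k(prototype, test_embeddings, test_labels, target_relation, k):
    """P@k for the retrieval test: fraction of the k instances most similar to
    the relation prototype (by cosine similarity) that actually belong to it."""
    proto = prototype / np.linalg.norm(prototype)
    embs = test_embeddings / np.linalg.norm(test_embeddings, axis=1, keepdims=True)
    sims = embs @ proto
    top_k = np.argsort(-sims)[:k]
    return float(np.mean([test_labels[i] == target_relation for i in top_k]))

rng = np.random.default_rng(1)
labels = ["mother"] * 20 + ["spouse"] * 20 + ["located_in"] * 60
embeddings = rng.normal(size=(100, 16))
prototype = embeddings[:20].mean(axis=0)   # mean embedding of the "mother" instances
print(precision_at_k(prototype, embeddings, labels, "mother", k=20))
```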
1,150
1,928
1,150
Predicting and Using Target Length in Neural Machine Translation
Attention-based encoder-decoder models have achieved great success in neural machine translation tasks. However, the lengths of the target sequences are not explicitly predicted in these models. This work proposes length prediction as an auxiliary task and sets up a sub-network to obtain the length information from the encoder. Experimental results show that the length prediction sub-network brings improvements over a strong baseline system and that the predicted length can be used as an alternative to length normalization during decoding.
In recent years, neural network (NN) models have achieved great improvements in machine translation (MT) tasks. Despite the success achieved in neural machine translation (NMT), current NMT systems do not model the length of the output explicitly, and thus various length normalization approaches are often used in decoding. Length normalization is a common technique used in the beam search of NMT systems to enable a fair comparison of partial hypotheses with different lengths. Without any form of length normalization, regular beam searches will prefer shorter hypotheses to longer ones on average, as a negative logarithmic probability is added at each step, resulting in lower (more negative) scores for longer sentences. The simplest way is to normalize the score of the current partial hypothesis (e i 1 ) by its length (|i|): where f J 1 is the source sequence. To use a softer approach, the denominator |i| can also be raised to the power of a number between 0 and 1 or replaced by more complex functions, as proposed in In addition to investigating various types of length normalization, their rationality is rarely explored. Although length normalization appears to be simple and effective, it is still an additional technique to help a "weak" machine translation model that cannot handle the hypothesis length properly. In this work it is proposed to model the target length using the neural network itself in a multi-task learning way. The estimated length information can either be implicitly included in the network to "guide" translation, or it can be used explicitly as an alternative to length normalization during decoding. The experimental results on various datasets show that the proposed system achieves improvements compared to the baseline model and the predicted length can easily be used to replace the length normalization.
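The normalization formula itself was lost in extraction: the simplest variant divides the accumulated log-probability of the partial hypothesis by its length, and the softer variants raise the length to a power between 0 and 1. A short sketch showing why unnormalized scores favour shorter hypotheses and how the exponent changes that:

```python
def normalized_score(log_probs, alpha=1.0):
    """Length-normalize a partial hypothesis score in beam search.

    log_probs : per-token log-probabilities of the hypothesis e_1^i
    alpha     : 0 gives no normalization, 1 divides by the length |i|,
                values in between give the softer variant mentioned in the text.
    """
    score = sum(log_probs)              # log p(e_1^i | f_1^J), accumulated per step
    return score / (len(log_probs) ** alpha)

short_hyp = [-0.5, -0.4, -0.6]                  # 3 tokens
long_hyp = [-0.5, -0.4, -0.6, -0.3, -0.35]      # 5 tokens
for a in (0.0, 0.6, 1.0):
    print(a, round(normalized_score(short_hyp, a), 3), round(normalized_score(long_hyp, a), 3))
```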
Multi-task learning is an important training strategy that aims to improve the generalization performance of the main task with some other related tasks To predict the target length based on the standard transformer architecture We predict the length of the target sequence by a classifier in the range of We also embed the length of source sequence into a 201 dimension vector with a length embedding matrix, which is initialized by the empirical distribution of the length. This length embedding is then concatenated with the output logit of the length prediction sub-network. Again, this concatenated vector is projected through a linear layer onto a vector s with 201 dimensions. Finally, the length distribution q l is given by a softmax over s. And the predicted length l pred is the l with the highest probability. The complete structure of the proposed length prediction sub-network is illustrated in Figure When we train the model with the translation and length prediction tasks jointly, the gradient of the length model will propagate to the translation model (referred to as no-connection in this paper). Thus, these two models will influence each other during multi-task training. In addition, the translation model could benefit from concatenating the length prediction output vector s to the outputs of each decoder layer (referred to as cross-concat in this paper). After the vector is concatenated, a linear projection is run through to maintain the feature dimension of the vector as the original one, so that it can be used without modifying the rest of the original transformer model. Here we detach s from the backpropagation graph so that the length prediction is not affected by this connection. In this method, we think that with the concatenation, the length information could be passed to the decoder and used implicitly. During training, Kullback-Leibler (KL) divergence where q l is the probability from model output. Suppose l target is the actual length of the target sequence, p l is the target distribution given by a Gaussian function added with a neighborhood reward d(l, l target ). Formally, p l is given as: where where (5) here σ is a constant and is used to control the shape of the distribution. In contrast to cross entropy with label smoothing, in which there is only one true label with a high probability and others are treated equally, the probability p l becomes smaller if l is further away from l target , which creates the desired relationship between each class in the classifier. We use cross entropy with label smoothing as the training loss for the translation task. We linearly combine the translation loss with the length loss, so that the training loss is given by Loss all = λ 1 Loss translation + λ 2 Loss length (6) Besides using the length information implicitly (as the two methods mentioned above), we can also guide the decoding step with the length prediction explicitly. With the help of the length prediction, we have a mathematically reasonable control of the output length in comparison to the length normalization in beam search. Since the predicted target length cannot be 100% accurate and a source sentence can have multiple possible translations of different lengths, we control the length of the inference by penalizing the score (logarithmic probability) of the end-of-sentence (EOS) token during beam search, rather than forcing the length of the inference to match the predicted length. 
More specifically, if the length of the hypothesis is shorter than the predicted length, the EOS token score is penalized; if the hypothesis is longer than the predicted length, the EOS token score is rewarded to facilitate the selection of the EOS token in beam search to finalize the hypothesis. A logarithmic linear penalty is introduced, which is added to the score of EOS token at each time step during beam search: where L hyp is the length of the hypothesis, L pred is the predicted length of the target sentence, and α is a hyperparameter to control the penalty. We first conduct experiments on a relatively small dataset, IWSLT2014 German→English (160k sentence pairs) For the length prediction task, the inference length does not have to correspond exactly to the reference length, since there can be multiple correct translations with different lengths. Therefore, we consider the predictions that fulfill |l predict -l target |/l target ≤ T to be accurate, where T is a threshold. Table language pair es-en it-en nl-en ro-en baseline 41.2 32.6 37.8 38.4 no-connection 41.3 32.8 37.8 38.8 cross-concat 41.3 32.7 38.3 38.7 Table We use no-connection, cross-concat model to train on other language pairs with the same hyperparameters as on IWSLT de-en to test the performance, as shown in Table Figure In this paper, we propose a length prediction subnetwork based on the transformer architecture, and a method of using the length prediction information on the decoder side, namely cross-concat. In decoding, we use the predicted length to calculate a logarithmic linear penalty in the beam search in order to replace the length normalization. Experimental results show that the sub-network can predict target length well and further improve translation quality. In addition, the predicted length can be used to replace the length normalization with a better and more mathematically explainable control of the output length. For future work, the use of length prediction in positional encoding
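Two pieces of the description above lost their equations in extraction: the Gaussian-smoothed target distribution p_l with its KL loss, and the logarithmic-linear adjustment to the EOS score. The sketch below fixes one plausible reading of each; the exact functional forms, the value of σ, and the loss weights λ1 and λ2 are assumptions, not the paper's settings.

```python
import math
import numpy as np

def target_length_distribution(l_target, num_classes=201, sigma=2.0):
    """One plausible reading of the target p_l: a Gaussian centred on the true
    length, normalized over the 201 length classes, so that classes near
    l_target receive more probability than distant ones (unlike flat label
    smoothing, which treats every wrong class equally)."""
    lengths = np.arange(num_classes)
    p = np.exp(-0.5 * ((lengths - l_target) / sigma) ** 2)
    return p / p.sum()

def kl_length_loss(q_logits, l_target, sigma=2.0):
    """KL divergence between the smoothed target p_l and the predicted softmax q_l."""
    q = np.exp(q_logits - q_logits.max())
    q /= q.sum()
    p = target_length_distribution(l_target, len(q_logits), sigma)
    eps = 1e-12
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def eos_score_adjustment(l_hyp, l_pred, alpha=1.0):
    """Hypothetical logarithmic-linear term added to the EOS score at each step:
    negative (a penalty) while the hypothesis is shorter than the predicted
    length, positive (a reward) once it is longer."""
    return alpha * math.log((l_hyp + 1) / l_pred)

rng = np.random.default_rng(0)
logits = rng.normal(size=201)
loss_len = kl_length_loss(logits, l_target=23)
loss_all = 1.0 * 2.5 + 0.1 * loss_len   # Loss_all = lambda1*Loss_translation + lambda2*Loss_length
print(round(loss_len, 3), round(loss_all, 3))
for l_hyp in (4, 11, 12, 20):
    print(l_hyp, round(eos_score_adjustment(l_hyp, l_pred=12), 3))
```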
546
1,852
546
Lexical Morphology in Machine Translation: a Feasibility Study
This paper presents a feasibility study for implementing lexical morphology principles in a machine translation system in order to resolve unknown words. Multilingual symbolic treatment of word formation is appealing but requires an in-depth analysis of every step that has to be performed. The construction of a prototype is first presented, highlighting the methodological issues of such an approach. Second, an evaluation is performed on a large set of data, showing the benefits and the limits of the approach.
Formalising morphological information to deal with morphologically constructed unknown words in machine translation seems attractive, but it raises many questions about the resources and the prerequisites (both theoretical and practical) that would make such symbolic treatment efficient and feasible. In this paper, we describe the prototype we built to evaluate the feasibility of such an approach. We focus on the knowledge required to build such a system and on its evaluation. First, we delimit the issue of neologisms amongst the other unknown words (section 2), and we present the limited related work in NLP research (section 3). We then explain why implementing morphology in the context of machine translation (MT) is a real challenge and what aspects need to be taken into account (section 4), showing that translating constructed neologisms is not a purely mechanical decomposition but requires more fine-grained analysis. We then describe the methodology developed to build a prototype translator of constructed neologisms (section 5), with all the extensions that have to be made, especially in terms of resources. Finally, we concentrate on the evaluation of each step of the process and on the global evaluation of the entire approach (section 6). This last evaluation highlights a set of methodological criteria that are needed to exploit lexical morphology in machine translation.
Unknown words are a problematic issue in any NLP tool. Depending on the studies Usually, three main groups of unknown words are distinguished: proper names, errors, and neologisms, and the possible solution highly depends on the type of unknown word to be solved. In this paper, we concentrate on neologisms which are constructed following a morphological process. The processing of unknown "constructed neologisms" in NLP can be done by simple guessing (based on the sequence of final letters). This option can be efficient enough when the task is only tagging, but in a multilingual context (like in MT), dealing with constructed neologisms implies a transfer and a generation process that require a more complex formalisation and implementation. In the project presented in this paper, we propose to implement lexical morphology phenomena in MT. Implementing lexical morphology in a MT context has seldom been investigated in the past, probably because many researchers share the following view: "Though the idea of providing rules for translating derived words may seem attractive, it raises many problems and so it is currently more of a research goal for MT than a practical possibility" Even in monolingual contexts, lexical morphology is not very often implemented in NLP. Morphological analyzers like the ones described in Since morphological processes are regular and exist in many languages, we propose an approach where constructed neologisms in source language (SL) can be analysed and their translation generated in a target language (TL) through the transfer of the constructional information. For example, a constructed neologism in one language (e.g. ricostruire in Italian) should firstly be analysed, i.e. find (i) the rule that produced it (in this case <reiteration rule>) and (ii) the lexeme-base which it is constructed on (costruire, with all morphosyntactic and translational information). Secondly, through a transfer mechanism (of both the rule and the base), a translation can be generated by rebuilding a constructed word, (in French reconstruire, Eng: to rebuild). On a theoretical side, the whole process is formalised into bilingual Lexeme Formation Rules (LFR), as explained below in section 4.3. Although this approach seems to be simple and attractive, feasibility studies and evaluation should be carefully performed. To do so, we built a system to translate neologisms from one language into another. In order to delimit the project and to concentrate on methodological issues, we focused on the prefixation process and on two related languages (Italian and French). Prefixation is, after suffixation, the most productive process of neologism, and prefixes can be more easily processed in terms of character strings. Regarding the language, we choose to deal with the translation of Italian constructed neologisms into French. These two languages are historically and morphologically related and are consequently more "neighbours" in terms of neologism coinage. In the following, we firstly describe precisely the phenomena that have to be formalized and then the prototype built up for the experiment. Like in any MT project, the formalisation work has to face different issues of contrastivity, i.e. highlighting the divergences and the similarities between the two languages. In the two languages chosen for the experiment, few divergences were found in the way they construct prefixed neologisms. However, in some cases, although the morphosemantic process is similar, the item used to build it up (i.e. 
the affixes) is not always the same. For example, to coin nouns of the spatial location "before", where Italian uses the prefix retro, French uses rétro and arrière. A deeper analysis shows that Italian retro is used with all types of nouns, whereas in French, rétro only forms processual nouns (derived from verbs, like rétrovision, rétroprojection). For the other type of nouns (generally locative nouns), arrière is used (arrière-cabine, arrière-cour). Other problematic issues appear when there is more than one prefix for the same LFR. For example, the rule for "indeterminate plurality" provides in both languages a set of two prefixes (multi/pluri in Italian and multi/pluri in French) with no known restrictions for selecting one or the other (e.g. both pluridimensionnel and multidimensionnel are acceptable in French). For these cases, further empirical research have to be performed to identify restrictions on the rule. Another important divergence is found in the prefixation of relational adjectives. Relational adjectives are derived from nouns and designate a relation between the entity denoted by the noun they are derived from and the entity denoted by the noun they modify. Consequently, in a prefixation such as anticostituzionale, the formal base is a relational adjective (costituzionale), but the semantic base is the noun the adjective is derived from (costituzione). The constructed word anticostituzionale can be paraphrased as "against the constitution". Moreover, when the relational adjective does not exist, prefixation is possible on a nominal base to create an adjective (squadra antidroga). In cases where the adjective does exist, both forms are possible and seem to be equally used, like in the Italian collaborazione interuniversità / collaborazione interuniversitaria. From a contrastive point of view, the prefixation of relational adjectives exists in both languages (Italian and French) and in both these languages prefixing a noun to create an adjective is also possible (anticostituzione (Adj)). But we notice an important discrepancy in the possibility of constructing relational adjectives (a rough estimation performed on a large bilingual dictionary (Garzanti IT-FR All these divergences require an in-dept analysis but can be overcome only if the formalism and the implementation process are done following a rigorous methodology. In order to evaluate the approach described above and to concretely investigate the ins and outs of such implementation, we built up a prototype of a machine translation system specialized for constructed neologisms. This prototype is composed of two modules. The first one checks every unknown word to see if it is potentially constructed, and if so, performs a morphological analysis to individualise the lexeme-base and the rule that coined it. The second module is the actual translation module, which analyses the constructed neologism and generates a possible translation. The whole prototype relies on one hand on lexical resources (two monolingual and one bilingual) and on a set of bilingual Lexeme Formation Rules (LFR). These two sets of information helps the analysis and the generation steps. When a neologism is looked-up, the system checks if it is constructed with one of the LFRs and if the lexeme-base is in the lexicon. If it is the case, the transfer brings the relevant morphological and lexical information in the target language. 
The generation step constructs the translation equivalent, using the information provided by the LFR and the lexical resources. Consequently, the whole system relies on the quality of both the lexical resources and the LFR. The whole morphological process in the system is formalised through bilingual Lexeme Formation Rules. Their representation is inspired by Such rules match together two monolingual rules (to be read in columns). Each monolingual rule describes a process that applies a series of instructions on the different sections of the lex-eme : the surface section (G and F), the syntactic category (SX) and the semantic (S) sections. In this theoretical framework, affixation is only one of the instructions of the rule (the graphemic and phonological modification), and consequently, affixes are called "exponent" of the rule. reiterativity (V it '(...)) reiterativity (V fr '(...)) where V it ' = V fr ', translation equivalent This formalisation is particularly useful in a bilingual context for rules that have more than one prefix in both languages: more than one affix can be declared in one single rule, the selection being made according to different constraints or restrictions. For example, the rule for "indeterminate plurality" explained in section 4.1 can be formalised as follows: In this kind of rules with "multiple exponents", the two possible prefixes are declared in the surface section (G and F). The selection is a monolingual issue and cannot be done at the theoretical level. Such rules have been formalised and implemented for the 56 productive prefixes of Italian As in any MT system, the acquisition of bilingual knowledge is an important issue. In morphology, the method should be particularly accurate to prevent any methodological bias. To formalise translation rules for prefixed neologisms, we adopt a meaning-to-form approach, i.e. discovering how a constructed meaning is morphologically realised in two languages. We build up a tertium comparationis (a neutral platform, see Prefixes of both languages are then literally "projected" (or classified) onto the tertium. For each terminal sub-class, we have a clear picture of the prefixes involved in both languages. For example, the LFR presented in figure At the end of the comparison, we end up with more than 100 LFRs (one rule can be reiterated according the different input and output categories). From a computing point of view, constraints have to be specified and the lexicon has to be adapted consequently. Implementation of the LFR is set up as a database, from where the program takes the information to perform the analysis, the transfer and the generation of the neologisms. In our approach, LFRs are simply declared in a tab format data-base, easily accessible and modifiable by the user, as shown below: Implemented LFRs describe (i) the surface form of the Italian prefix to be analysed, (ii) the category of the base, (iii) the category of the derived lexeme (the output), (iv) a reference to the rule implied and (v) the French prefix(es) for the generation. The surface form in (i) should sometimes take into account the different allomorphs of one prefix. Consequently, the rule has to be reiterated in order to be able to recognize any forms (e.g. 
the prefix in has different forms according to the initial letter of the base, and four rules have to be implemented for the four allomorphs In (ii), the information of the category of the base has been "overspecified", to differentiate qualitative and relational adjectives, and deverbal nouns and the other ones (a_rel/a or n_dev/n). These overspecifications have two objectives: optimizing the analysis performance (reducing the noise of homographic character strings that look like constructed neologisms but that are only misspellings -see below in the evaluation section), and refining the analysis, i.e. selecting the appropriate LFR and, consequently, the appropriate translation. To identify relational adjectives and deverbal nouns, the monolingual lexicon that supports the analysis step has to be extended. Thereafter, we present the symbolic method we used to perform such extension. Our MT prototype relies on lexical resources: it aims at dealing with unknown words that are not in a Reference lexicon and these unknown words are analyzed with lexical material that is in this lexicon. From a practical point of view, our prototype is based on two very large monolingual data- bases (Mmorph As stated above, identifying the prefix and the base is not enough to provide a proper analysis of constructed neologisms which is detailed enough to be translated. The main information that is essential for the achievement of the process is the category of the base, which has to be sometimes "overspecified". Obviously, the Italian reference lexicon does not contain such information. Consequently, we looked for a simple way to automatically extend the Italian lexicon. For example, we looked for a way to automatically link relational adjectives with their noun bases. Our approach tries to take advantage of only the lexicon, without the use of any larger resources. To extend the Italian lexicon, we simply built a routine based on the typical suffixes of relational adjectives (in Italian: -ale, -are, -ario, -ano, -ico, -ile, -ino, -ivo, -orio, -esco, -asco, -iero, -izio, -aceo A similar extension is performed for the deverbal aspect, for the lexicon should also distinguish deverbal noun. From a morphological point of view, deverbalisation can be done trough two main productive processes: conversion (a command to command) and suffixation. If the first one is relatively difficult to implement, the second one can be easily captured using the typical suffixes of such processes. Consequently, we considere that any noun ending with suffixes like ione, aggio,or mento are deverbal. Thanks to this extended lexicon, overspecified input categories (like a_rel for relational adjective or n_dev for deverbal noun) can be stated and exploited in the implemented LFR as shown in figure Once the prototyped MT system was built and the lexicon adapted, it was applied to a set of neologisms (see section 6 for details). For example, unknown Italian neologisms such as arcicontento, ridescrizione, deitalianizzare, were automatically translated in French: archi-content, redescription, désitalianiser. The divergences existing in the LFR of <locative position before> are correctly dealt with, thanks to the correct analysis of the base. 
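A toy sketch of the pipeline described in this section may help: the lexicon is first overspecified by suffix matching, and prefixed neologisms are then analysed, transferred and generated through the LFR table. The three LFR rows, the three-word lexicon and the reduced suffix list are illustrative stand-ins for the project's actual resources; the examples reproduce the ricostruire / retrodiffusione / retrobottega cases discussed in the text.

```python
# Toy lexicon: Italian word -> (coarse category, French equivalent).
RAW_LEXICON = {
    "costruire":  ("v", "construire"),
    "diffusione": ("n", "diffusion"),
    "bottega":    ("n", "boutique"),
}

# Typical deverbalisation suffixes listed in the text (reduced here for brevity).
DEVERBAL_NOUN_SUFFIXES = ("ione", "aggio", "mento")

def overspecify(lexicon):
    """Rewrite coarse categories with the overspecified tags used by the LFRs,
    e.g. nouns ending in typical deverbalisation suffixes become n_dev."""
    extended = {}
    for word, (cat, fr) in lexicon.items():
        if cat == "n" and word.endswith(DEVERBAL_NOUN_SUFFIXES):
            cat = "n_dev"
        extended[word] = (cat, fr)
    return extended

# Illustrative LFR rows: (it_prefix, base_category, output_category, rule_id, fr_prefix).
LFR_TABLE = [
    ("ri",    "v",     "v", "reiteration",     "re"),
    ("retro", "n_dev", "n", "locative_before", "rétro"),
    ("retro", "n",     "n", "locative_before", "arrière-"),
]

def translate_neologism(word, lexicon):
    """Analyse an unknown Italian word against the LFR table, transfer the rule
    and the base, and generate a French candidate; None if no rule applies."""
    for it_prefix, base_cat, _out_cat, rule_id, fr_prefix in LFR_TABLE:
        if word.startswith(it_prefix):
            base = word[len(it_prefix):]
            if base in lexicon and lexicon[base][0] == base_cat:
                return fr_prefix + lexicon[base][1], rule_id
    return None, None

lexicon = overspecify(RAW_LEXICON)
for w in ("ricostruire", "retrodiffusione", "retrobottega"):
    print(w, "->", translate_neologism(w, lexicon))
```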
For example, in the neologism retrobottega, the lexeme-base is correctly identified as a locative noun, and the French equivalent is constructed with the appropriate prefix (arrière-boutique), while in retrodiffusione, the base is analysed as deverbal, and the French equivalent is correctly generated (rétrodiffusion). For the analysis of relational adjectives, the overspecification of the LFRs and the extension of the lexicon are particularly useful when there is no French equivalent for Italian relational adjectives because the corresponding construction is not possible in the French morphological system. For example, the Italian relational adjective aziendale (from the noun azienda, Eng: company) has no adjectival equivalent in French. The Italian prefixed adjective interaziendale can only be translated in French by using a noun as the base (interentreprise). This translation equivalent can be found only if the base noun of the Italian adjective is found (interaziendale, in-ter+aziendale azienda, azienda = entreprise, interentreprise). The same process has been applied for the translation of precongressuale, post-transfuzionale by précongrès, posttransfusion. Obviously, all the mechanisms formalised in this prototype should be carefully evaluated. The advantages of this approach should be carefully evaluated from two points of view: the evaluation of the performance of each step and of the feasibility and portability of the system. As previously stated, the system is intended to solve neologisms that are unknown from a lexicon with LFRs that exploit information contained in the lexicon. To evaluate the performance of our system, we built up a corpus of unknown words by confronting a large Italian corpus from journalistic domain (La Repubblica Online As we previously stated, the analysis step can actually be divided into two tasks. First of all, the program has to identify, among the unknown words, which of them are morphologically constructed (and so analysable by the LFRs); secondly, the program has to analyse the constructed neologisms, i.e matching them with the correct LFRs and isolating the correct base-words. For the first task, we obtain a list of 42 673 potential constructed neologisms. Amongst those, there are a number of erroneous words that are homographic to a constructed neologism. For example, the item progesso, a misspelling of progresso (Eng: progress), is erroneously analysed as the prefixation of gesso (eng: plaster) with the LFR in pro. In the second part of the processing, LFRs are concretely applied to the potential neologisms (i.e. constraints on categories and on overspecified category, phonological constraints). This stage retains 30 376 neologisms. A manual evaluation is then performed on these outputs. Globally, 71.18 % of the analysed words are actually neologisms. But the performance is not the same for every rule. Most of them are very efficient: among all the rules for the 56 Italian prefixes, only 7 cause too many erroneous analyses, and should be excluded -mainly rules with very short prefixes (like a, di, s), that cause mistakes due to homograph. As explained above, some of the rules are strongly specified, (i.e. very constrained), so we also evaluate the consequence of some con-straints, not only in terms of improved performance but also in terms of loss of information. Indeed, some of the constraints specified in the rule exclude some neologisms (false negatives). 
For example, the modality LFRs with co and ri have been overspecified, requiring deverbal base-noun (and not just a noun). Adding this constraint improves the performance of the analysis (i.e. the number of correct lexemes analysed), respectively from 69.48 % to 96 % and from 91.21 % to 99.65 %. Obviously, the number of false negatives (i.e. correct neologisms excluded by the constraint) is very large (between 50 % and 75 % of the excluded items). In this situation, the question is to decide whether the gain obtained by the constraints (the improved performance) is more important than the un-analysed items. In this context, we prefer to keep the more constrained rule. Un-analysed items remain unknown words, and the output of the analysis is almost perfect, which is an important condition for the rest of the process (i.e. transfer and generation). Generation can also be evaluated according to two points of view: the correctness of the generated items, and the improvement brought by the solved words to the quality of the translated sentence. To evaluate the first aspect, many procedures can be put in place. The correctness of constructed words could be evaluated by human judges, but this kind of approach would raise many questions and biases: people that are not expert of morphology would judge the correctness according to their degree of acceptability which varies between judges and is particularly sensitive when neologism is concerned. Questions of homogeneity in terms of knowledge of the domain and of the language are also raised. Because of these difficulties, we prefer to centre the evaluation on the existence of the generated neologisms in a corpus. For neologisms, the most adequate corpus is the Internet, even if the use of such an uncontrolled resource requires some precautions (see Concretely, we use the robot Golf Because of the uncontrolled aspect of the resource, we distinguish three groups of reported frequencies: 0 occurrence, less than 5 occurrences and more than 5. The threshold of 5 helps to distinguish confirmed existence of neologism (> 5) from unstable appearances (< 5), that are closed to hapax phenomena. The table below summarizes some results for some prefixed neologisms. Globally, most of the generated prefixed neologisms have been found in corpus, and most of the time with more than 5 occurrences. Unfound items are very useful, because they help to point out difficulties or miss-formalised processes. Most of the unfound neologisms were illanalysed items in Italian. Others were due to misuses of hyphens in the generation. Indeed, in the program, we originally implemented the use of the hyphen in French following the established norm (i.e. a hyphen is required when the prefix ends with a vowel and the base starts with a vowel). But following this "norm", some forms were not found in corpus (for example antibraconnier (Eng: antipoacher) reports 0 occurrence). When re-generated with a hyphen, it reports 63 occurrences. This last point shows that in neology, usage does not stick always to the norm. The other problem raised by unknown words is that they decrease the quality of the translation of the entire sentence. To evaluate the impact of the translated unknown words on the translated sentence, we built up a test-suite of sentences, each of them containing one prefixed neologism (in bold in table 2). We then submitted the sentences to a commercial MT system (Systran©) and recorded the translation and counted the number of mistakes (FR1 in table 2 below). 
On a second step, we "feed" the lexicon of the translation system with the neologisms and their translation (generated by our prototype) and resubmit the same sentences to the system (FR2 in table 2). For the 60 sentences of the test-suit (21 with an unknown verb, 19 with an unknown adjective and 20 with a unknown noun), we then counted the number of errors before and after the introduction of the neologisms in the lexicon, as shown below (errors are underlined). For a global view of the evaluation, we classified in the table below the number of sentences according to the number of errors "removed" thanks to the resolution of the unknown word. Most of the improvements concern only a reduction of 1, i.e. only the unknown word has been solved. But it should be noticed that improvement is more impressive when the unknown words are nouns or verbs, probably because these categories influence much more items in the sentence in terms of agreement. In two cases (involving verbs), errors are corrected because of the translation of the unknown words, but at the same time, two other errors are caused by it. This problem comes from the fact that adding new words in the lexicon of the system requires sometimes more information (such as valency) to provide a proper syntaxctic generation of the sentence. The relatively good results obtained by the prototype are very encouraging. They mainly show that if the analysis step is performed correctly, the rest of the process can be done with not much further work. But at the end of such a feasibility study, it is useful to look objectively for the conditions that make such results possible. The good quality of the result can be explained by the important preliminary work done (i) in the extension/specialisation of the lexicon, and (ii) in the setting up of the LFRs. The acquisition of the contrastive knowledge in a MT context is indeed the most essential issue in this kind of approach. The methodology we proposed here for setting these LFR proves to be useful for the linguist to acquire this specific type of knowledge. Lexical morphology is often considered as not regular enough to be exploited in NLP. The evaluation performed in this study shows that it is not the case, especially in neologism. But in some cases, it is no use to ask for the impossible, and simply give up implementing the most inefficient rules. We also show that the efficient analysis step is probably the main condition to make the whole system work. This step should be implemented with as much constraints as possible, to provide an output without errors. Such implementation requires proper evaluation of the impact of every constraint. It should also be stated that such implementation (and especially knowledge acquisition) is time-consuming, and one can legitimately ask if machine-learning methods would do the job. The number of LFRs being relatively restrained in producing neologisms, we can say that the effort of manual formalisation is worthwhile for the benefits that should be valuable on the long term. Another aspect of the feasibility is closely related to questions of "interoperability", because such implementation should be done within existing MT programs, and not independently as it was for this feasibility study. Other questions of portability should also be considered. As we stated, we chose two morphologically related languages on purpose: they present less divergences to deal with and allow concentrating on the method. 
However, the proposed method (especially that contrastive knowledge acquisition) can clearly be ported to another pair of languages (at least inflexional languages). It should also be noticed that the same approach can be applied to other types of construction. We mainly think here of suffixation, but one can imagine to use LFRs with other elements of formation (like combining forms, that tend to be very "international", and consequently the material for many neologisms). Moreover, the way the rules are formalised and the algorithm designed allow easy reversibility and modification. This feasibility study presents the benefit of implementing lexical morphology principles in a MT system. It presents all the issues raised by formalization and implementation, and shows in a quantitative manner how those principles are useful to partly solve unknown words in machine translation. From a broader perspective, we show the benefits of such implementation in a MT system, but also the method that should be used to formalise this special kind of information. We also emphasize the need for in-dept work of knowledge acquisition before actually building up the system, especially because contrastive morphological data are not as obvious as other linguistic dimensions. Moreover, the evaluation step clearly states that the analysis module is the most important issue in dealing with lexical morphology in multilingual context. The multilingual approach of morphology also paves the way for other researches, either in representation of word-formation or in exploitation of multilingual dimension in NLP systems.
513
1,404
513
DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization
Dialogue summarization has recently garnered significant attention due to its wide range of applications. However, existing methods for summarizing dialogues have limitations because they do not take into account the inherent structure of dialogue and rely heavily on labeled data, which can lead to poor performance in new domains. In this work, we propose DIONYSUS (dynamic input optimization in pre-training for dialogue summarization), a pre-trained encoder-decoder model for summarizing dialogues in any new domain. To pretrain DIONYSUS, we create two pseudo summaries for each dialogue example: one from a fine-tuned summarization model and the other from important dialogue turns. We then choose one of these pseudo summaries based on information distribution differences in different types of dialogues. This selected pseudo summary serves as the objective for pre-training DIONYSUS using a self-supervised approach on a large dialogue corpus. Our experiments show that DIONYSUS outperforms existing methods on six datasets, as demonstrated by its ROUGE scores in zero-shot and few-shot settings.
Text summarization aims to produce concise and accurate summaries of long texts. Recent research on pre-trained neural language models has shown success in summarizing monologues Self-supervised text summarization models To address these challenges, we propose DIONY-SUS, a pre-trained sequence-to-sequence model designed to summarize dialogues in any domain, even with a lack of labeled data. It uses pseudo summaries as its pre-training objective, which can be dynamically selected from two sources. First, for daily chats where multiple dialogue turns are not sufficient to summarize the dialogue, we train a summary helper using high-quality dialogue summarization datasets to generate pseudo summaries for these types of dialogues. On the other hand, for dialogues like meeting minutes, interviews, and debates, which can be summarized through a selection of essential turns, we use a method inspired by the gap sentence generation (GSG) technique in PEGASUS to select these turns as pseudo summaries for training. For instance, choosing the final few turns in a conversation can effectively summarize meeting minutes. We have improved upon the GSG method by using the generated summaries from the summary helper as references during gap sentence selection, as they tend to have less noise compared to the full dialogue context. We refer to this source of pseudo summaries as "Principal" and refer to our improved method as GSG+. We find that our improved method outperforms previous methods in low-resource settings across different domains, such as daily chats, emails, and customer service dialogues. Additionally, we study different objective strategies for selecting the pseudo summary as a pre-training objective from the generated summary and the "Principal." We evaluate DIONYSUS on six dialogue summarization datasets. Our best model trained on 19 dialogue corpora surpasses PEGASUS LARGE in a zero-shot setting across all domains. We also found that the best performance is achieved by selecting the source with the highest ROUGE score as the objective strategy. Our main contributions are: • The development of DIONYSUS, a pretrained sequence-to-sequence model for summarizing dialogues in any domain in a zeroshot or few-shot setting. • The introduction of new self-supervised pretraining objectives for dialogue summarization using a summary helper and GSG+. • The demonstration that DIONYSUS outperforms baselines on six domains in low-resource settings, and can be fine-tuned with only 10 training examples to outperform vanilla T5
In certain types of dialogue, such as daily chats, it can be challenging to gather all necessary information from just a few dialogue turns due to the dispersed nature of dialogue information. To address this problem, we have created a summary helper model that generates pseudo summaries for each training example in our pre-training corpus. We build our summary helper upon the T5 (Raffel et al., 2022) model. To capture essential information in a dialogue, we train our helper on summaries derived from the MultiWoz dataset (DS2) and on DialogSum, both described below.

Algorithm 1 GSG+
1: P ← ∅
2: for j ← 1 to m do
3:   k := argmax_i {s_i}, i = 1..n
5:   P := P ∪ {x_k}
6: end for

Dialogues in certain settings, such as meetings and medical dialogues, often include summary turns that summarize the entire conversation. For example, a participant may summarize a meeting, or a doctor may explain the outcome. These summary turns can be used as a pre-training objective because they highlight the main points of the dialogue and provide a concise overview of the topic discussed. In order to make DIONYSUS more adaptable to these scenarios, we improve the independent principal selection of the GSG method and select dialogue turns (§2.2) as the "Principal" (P), as sketched in Algorithm 1. To generate the final pseudo summary S for each specific dialogue training example, we consider three strategies. These strategies are based on the generated pseudo summary G and the extracted "Principal" P; the selected summary serves as the pre-training objective for the dialogue training example. All G (S = G): we always select the generated summary from the summary helper as the pre-training objective. All P (S = P): we always select the "Principal" as the pre-training objective. Better ROUGE: we use either G or P, based on the recall of information from the dialogue, to determine the pre-training objective. We utilize Algorithm 2 to get the pre-training objective by calculating the ROUGE1-F1 score of each pseudo summary against the dialogue excluding the "Principal", D \ P. It is important to note that we use the same reference to ensure a fair comparison. For pre-training with the above strategies, if we choose G as the pseudo summary, we input the full dialogue. If we choose P, we input the dialogue excluding the "Principal", D \ P, to create an abstractive summary. However, we also include the "Principal" with a certain probability, using a copying mechanism to create an extractive summary. More information about this copy mechanism can be found in Section 5.4. It is important to note that we do not combine these two pseudo summaries for a single training example. Each example in our pre-training corpus has either G or P as its designated pseudo summary. To train DIONYSUS, we utilize 19 conversational corpora that do not come with pre-defined dialogue summaries. We employ a self-supervised approach by using pseudo summaries as the pre-training objective. Conversational Corpora: we collect 19 available conversational corpora consisting of 1.7M examples after truncation for pre-training. Corpus information (e.g., CaSiNo (Chawla et al., 2021) with 1,030 examples, and Chromium discussions) is listed in the corpus table. We train our summary helper with a rule-based dialogue summarization dataset (DS2) and an abstractive summarization dataset (DialogSum). DS2: this dataset provides templated, rule-based summaries for task-oriented dialogues. DialogSum: this dataset provides human-written abstractive summaries for real-life spoken dialogues. We evaluate our methods on three public dialogue summarization datasets or benchmarks: SAMSum, ConvoSumm, and TweetSumm. SAMSum contains messenger-style daily conversations paired with human-written summaries. ConvoSumm is a benchmark of four domains: New York Times comments, StackExchange, W3C email, and Reddit.
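Before moving on to the data, here is a minimal sketch of the GSG+ turn selection and the "Better ROUGE" objective choice described above. It uses a simple whitespace-tokenized ROUGE-1 in place of the official scorer, and the function names are illustrative rather than taken from the released code:

from collections import Counter

def rouge_n_scores(candidate, reference, n=1):
    """Return (precision, recall, f1) of n-gram overlap between two strings."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    if not cand or not ref:
        return 0.0, 0.0, 0.0
    overlap = sum((cand & ref).values())
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def select_principal(turns, helper_summary, m):
    """GSG+-style selection: score each dialogue turn against the helper-generated
    summary (instead of the noisier full dialogue) and keep the top-m turns."""
    scored = [(rouge_n_scores(t, helper_summary, n=1)[2], i) for i, t in enumerate(turns)]
    keep = sorted(sorted(scored, reverse=True)[:m], key=lambda x: x[1])  # restore dialogue order
    return [turns[i] for _, i in keep]

def choose_objective(turns, helper_summary, m):
    """"Better ROUGE" strategy: compare G (helper summary) and P (principal turns)
    by ROUGE-1 F1 against the dialogue with the principal removed (D \\ P)."""
    principal = select_principal(turns, helper_summary, m)
    rest = " ".join(t for t in turns if t not in principal)
    p_text = " ".join(principal)
    score_g = rouge_n_scores(helper_summary, rest, n=1)[2]
    score_p = rouge_n_scores(p_text, rest, n=1)[2]
    return ("G", helper_summary) if score_g >= score_p else ("P", p_text)

Scoring turns independently and taking the top m matches the "independent principal" reading of Algorithm 1; the released implementation may differ in tie-breaking and tokenization.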
Dialogues are extracted from publicly available data, and each domain has 500 dialogues. Crowdsource workers on Amazon Mechanical Turk were hired to annotate the dialogue summaries. TweetSumm: this dataset contains 1,100 reconstructed real-world customer support dialogues from Twitter. Each dialogue has human-annotated abstractive and extractive summaries. We only use the abstractive summaries in the dataset as references in our experiments. We report ROUGE-1, ROUGE-2, and ROUGE-L scores. We compare our methods with three competitive baselines. T5v1.1: an improved version of the original T5 model. PEGASUS: a pre-trained abstractive summarization model. GSG*: we use the independent principal strategy of the GSG training objective in PEGASUS. We focus on low-resource dialogue summarization settings because it is difficult to collect enough training examples. We evaluate DIONYSUS with the "All G", "All P", and "Better ROUGE" strategies in zero-shot and few-shot settings and compare it to the baselines. In order to evaluate the effectiveness of DIONYSUS, we conduct a zero-shot test on DIONYSUS LARGE with all strategies and the other baselines; the results are reported in the corresponding table. We also investigate reducing annotation labor in dialogue summarization tasks by using few-shot dialogue summarization. We report ROUGE1-F1, ROUGE2-F1, ROUGEL-F1, and ROUGELSum-F1 scores to evaluate model performance. Specifically, we fine-tune DIONYSUS LARGE, PEGASUS LARGE, and T5v1.1 LARGE with the first 1/10/100/1K/10K training examples from the SAMSum dataset and plot the results for the varying training data sizes. In GSG+, we can choose a fixed number of turns in the dialogue as a training objective or select turns with a compression ratio. We define the compression ratio at the dialogue-turn level as the number of selected turns over the total number of turns in the dialogue (N_principal / N_dialogue). A low compression ratio selects fewer turns in the dialogue as the objective, making pre-training less challenging. However, the resulting "Principal" tends to have a lower ROUGE1-F1 score with the remaining dialogue turns, meaning the "Better ROUGE" strategy selects more generated summaries as the objective. Choosing a high compression ratio, in contrast, makes pre-training more challenging; nevertheless, the "Principal" then has a higher ROUGE score than the generated summaries, leading to more "Principal" selections under the "Better ROUGE" strategy. We report the zero-shot performance on the development sets of the SAMSum and TweetSumm datasets with compression rates from 10% to 60%. In order to ensure a fair comparison, we also check for overlap between pre-training and downstream test datasets. This is done by calculating the similarity between all pairs of test set targets in the SAMSum dataset and pre-training documents using the ROUGE2-recall measure, which is calculated as the number of overlapping bigrams divided by the total number of bigrams in the test target. We then count the number of test set examples that have a similarity to any pre-training example above a certain threshold; the overlap table reports these counts per threshold for the ConvoKit, DS2, and DialogSum corpora. Dialogue summarization is a rapidly growing area of research that focuses on automatically generating concise and informative summaries of conversations, and much of the recent progress builds on pre-trained Transformer-based models. Another line of work focuses on pre-trained models for dialogues, such as DialoGPT. We present DIONYSUS, a pre-trained encoder-decoder model for zero-shot dialogue summarization in any new domain.
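As a side note to the data-overlap analysis above, here is a minimal sketch of the ROUGE2-recall contamination check (whitespace tokenization in place of the official scorer; names are illustrative):

from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

def rouge2_recall(test_target, pretrain_doc):
    """Overlapping bigrams divided by the total number of bigrams in the test target."""
    tgt, doc = bigrams(test_target), bigrams(pretrain_doc)
    total = sum(tgt.values())
    return sum((tgt & doc).values()) / total if total else 0.0

def count_overlapping(test_targets, pretrain_docs, threshold=0.8):
    """Count test targets whose similarity to ANY pre-training document exceeds threshold."""
    return sum(
        1 for t in test_targets
        if any(rouge2_recall(t, d) >= threshold for d in pretrain_docs)
    )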
We pre-train using a self-supervised approach that generates pseudo summaries for large dialogue corpora as the pre-training objective. We investigate the impact of various pre-training objective strategies and model sizes on dialogue summarization performance. Our experiments show that DIONYSUS outperforms state-of-the-art models on six datasets in a zero-shot setting. Furthermore, DIONYSUS can be fine-tuned with only 10 examples to outperform vanilla T5 fine-tuning with 1,000 examples. This makes dialogue summarization more practical and easier to use in various contexts with minimal effort. We plan to extend this method to abstractive summarization tasks to develop a general zero-shot summarization model. Training Data: our pre-training data is sourced from 19 existing dialogue datasets. However, it is important to note that these datasets may contain noise, such as harmful content, irrelevant file names, and URL links. Despite utilizing multiple automatic tools to filter out this content during preprocessing, there is still a chance that some noise may be present in our pre-training data. This could potentially impact the performance of DIONYSUS, making it important to monitor and improve the pre-processing steps continuously. We also acknowledge the potential drawbacks of constructing pseudo summaries using the GSG method, which may lead to unnatural summaries for dialogue data. To mitigate this, we introduced the Summary Helper in Section 2.1, which is specifically trained on two dialogue summarization datasets containing natural summaries. This approach enables more realistic pseudo summaries and enhances zero-shot performance. We also employ top-m turns as an additional source of pseudo summaries (see the corresponding figure). Training Resources: to improve our model's performance, we employ the "Better ROUGE" strategy, which calculates the ROUGE score for both candidates and selects the better one as the final training objective. This data pre-processing can be quite time-consuming, taking approximately one day to complete for our pre-training data when utilizing 100 threads. Additionally, we utilize 16 Nvidia V100 GPUs to train our models, which may not be accessible or reproducible for all researchers. This could present a significant obstacle for those looking to replicate or build upon our work. Test Data: another potential concern is the test datasets used to evaluate DIONYSUS. The test set size is relatively small, which may not fully represent the breadth of dialogue types that a general dialogue summarization model should be able to handle. This could lead to the model performing well on the test set but not generalizing to other unseen dialogue types. Further, our analysis did not include the assessment of long dialogue summarization, such as lengthy meetings. Our gratitude goes out to Microsoft Research for providing us with computational resources. We would also like to thank Kun Qian for valuable discussions and the Columbia NLP and Microsoft Deep Learning Group members for their feedback and discussions. Additionally, we thank the Mechanical Turk workers for conducting the human evaluation. We could use two possible orders to arrange the dialogue turns in the principal. The first is to order the turns by their ROUGE1-F1 score. The second is to keep the principal in the order of the original dialogue; that is, the principal is arranged in the same order as in the original dialogue, without rearrangement.
This option helps preserve the original flow and structure of the dialogue. We compare these two orderings of the principal in the GSG* baseline; the comparison is reported in the corresponding table. To evaluate the performance of DIONYSUS during pre-training, we measured the ROUGE1-F1, ROUGE2-F1, ROUGEL-F1, and ROUGELSum-F1 scores on the SAMSum dataset (plotted in the corresponding figure). To qualitatively evaluate DIONYSUS, we randomly selected model output examples.
1,104
2,551
1,104
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages, potentially containing evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method significantly improves when increasing the number of retrieved passages. This is evidence that sequence-to-sequence models offer a flexible framework to efficiently aggregate and combine evidence from multiple passages.
Recently, several works have shown that factual information can be extracted from large-scale language models trained on vast quantities of data. Retrieval-based approaches were previously considered in the context of open domain question answering with extractive models: these systems first retrieve support documents, before extracting the answer from these documents. Different retrieval techniques have been considered, either using sparse representations based on TF/IDF or using dense embeddings. Aggregating and combining evidence from multiple passages is not straightforward when using extractive models, and multiple techniques have been proposed to address this limitation. In this paper, we explore a simple approach that combines the best of both worlds, building on the exciting developments in generative modeling and retrieval for open domain question answering. This method proceeds in two steps, by first retrieving supporting passages using either sparse or dense representations. Then, a sequence-to-sequence model generates the answer, taking as input the retrieved passages in addition to the question. While conceptually simple, this method sets new state-of-the-art results on the TriviaQA and NaturalQuestions benchmarks. In particular, we show that the performance of our method significantly improves when the number of retrieved passages increases. We believe that this is evidence that generative models are good at combining evidence from multiple passages, compared to extractive ones.
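As background for the first step, here is a small stand-in for the sparse retrievers mentioned above, using TF-IDF via scikit-learn rather than the Lucene BM25 or dense FAISS index used in the paper; it is purely illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def retrieve(question, passages, k=5):
    """Rank support passages for a question with sparse TF-IDF similarity."""
    vectorizer = TfidfVectorizer(stop_words="english")
    passage_vecs = vectorizer.fit_transform(passages)
    query_vec = vectorizer.transform([question])
    scores = linear_kernel(query_vec, passage_vecs).ravel()
    top = scores.argsort()[::-1][:k]
    return [(passages[i], float(scores[i])) for i in top]

passages = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Berlin is the capital of Germany.",
]
print(retrieve("What is the capital of France?", passages, k=2))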
Open domain question answering is the task of answering general domain questions, in which the evidence is not given as input to the system; it is a longstanding problem in natural language processing. Passage retrieval is an important step in open domain question answering, and is an active area of research to improve QA systems. Initially, sparse representations based on TF/IDF were used to retrieve support documents. Generative question answering was mostly considered in previous work for datasets requiring free-form answers, such as NarrativeQA. In this section, we describe our approach to open domain question answering. It proceeds in two steps, first retrieving support passages before processing them with a sequence-to-sequence model. Reading. Our generative model for open domain QA is based on a sequence-to-sequence network, pretrained on unsupervised data, such as T5 or BART. The model takes as input the question, as well as the support passages, and generates the answer. More precisely, each retrieved passage and its title are concatenated with the question, and processed independently from other passages by the encoder. We add the special tokens question:, title: and context: before the question, title and text of each passage. Finally, the decoder performs attention over the concatenation of the resulting representations of all the retrieved passages. The model thus performs evidence fusion in the decoder only, and we refer to it as Fusion-in-Decoder. By processing passages independently in the encoder, but jointly in the decoder, this method differs from previous approaches. (Footnotes: lucene.apache.org, spacy.io, github.com/facebookresearch/faiss.) In this section, we report empirical evaluations of Fusion-in-Decoder for open domain QA. Datasets. We consider the NaturalQuestions and TriviaQA datasets, and use the same setting as previous work. Technical details. We initialize our models with the pretrained T5 models. Comparison to state-of-the-art. In Table 1, we compare the results obtained by Fusion-in-Decoder with existing approaches for open domain question answering. We observe that while conceptually simple, this method outperforms existing work on the NaturalQuestions and TriviaQA benchmarks. In particular, generative models seem to perform well when evidence from multiple passages needs to be aggregated, compared to extractive approaches. Our method also performs better than other generative models, showing that scaling to a large number of passages and processing them jointly leads to improvement in accuracy. Second, we observe that providing generative models with additional knowledge through retrieval leads to important performance gains. On NaturalQuestions, the closed-book T5 model obtains 36.6% accuracy with 11B parameters, while our approach obtains 44.1% with 770M parameters plus Wikipedia with BM25 retrieval. Both methods use roughly the same amount of memory to store information, indicating that text-based explicit memories are competitive for knowledge retrieval tasks. Scaling with the number of passages. As reported in the corresponding figure, the accuracy of our method keeps improving as the number of retrieved passages grows. On the other hand, the performance of most extractive models seems to peak around 10 to 20 passages. Impact of the number of training passages. In the previous section, the model was trained and evaluated with the same number of passages. To reduce the training computational budget, a simple solution consists in training the model with fewer passages.
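A minimal PyTorch sketch of the Fusion-in-Decoder flow described above: each (question, title, passage) block is encoded independently, and the concatenated encoder states are fused by the decoder's cross-attention. The module sizes and names are placeholders, not the released T5-based implementation:

import torch
import torch.nn as nn

def format_passage(question, title, text):
    # Special markers are added before each field, as described above.
    return f"question: {question} title: {title} context: {text}"

class FusionInDecoder(nn.Module):
    """Toy encoder-decoder showing independent passage encoding + joint decoding."""
    def __init__(self, vocab_size=32128, d_model=256, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, passage_ids, answer_ids):
        # passage_ids: (batch, n_passages, passage_len); answer_ids: (batch, answer_len)
        b, n, l = passage_ids.shape
        enc = self.encoder(self.embed(passage_ids.reshape(b * n, l)))  # encode independently
        enc = enc.reshape(b, n * l, -1)                                # concatenate for fusion
        t = answer_ids.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        dec = self.decoder(self.embed(answer_ids), memory=enc, tgt_mask=causal)
        return self.lm_head(dec)

# Toy usage: 2 questions, 3 retrieved passages each, 20 tokens per passage, 8 answer tokens.
model = FusionInDecoder()
logits = model(torch.randint(0, 32128, (2, 3, 20)), torch.randint(0, 32128, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 32128])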
The corresponding table reports the results of training with fewer passages than are used at test time. In this paper, we study a simple approach to open domain question answering, which relies on retrieving support passages before processing them with a generative model. We show that, while conceptually simple, this approach is competitive with existing methods, and that it scales well with the number of retrieved passages. In future work, we plan to make this model more efficient, in particular when scaling to a large number of support passages. We also plan to integrate retrieval into our model, and to learn the whole system end-to-end.
748
1,479
748
"The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition
Humor plays an important role in human languages, and it is essential to model humor when building intelligent systems. Among different forms of humor, puns perform wordplay for humorous effects by employing words with double entendre and high phonetic similarity. However, identifying and modeling puns is challenging, as puns usually involve implicit semantic or phonological tricks. In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect whether a sentence contains puns, and locate them in the sentence. PCPR derives a contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols. Extensive experiments are conducted on two benchmark datasets. Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in the pun detection and location tasks. In-depth analyses verify the effectiveness and robustness of PCPR.
During the last decades, social media has promoted the creation of a vast amount of humorous web contents These two forms of puns have been studied in literature from different angles. To recognize puns in a sentence, word sense disambiguation techniques (WSD) In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to jointly model the contextualized word embeddings and phonological word representations for pun recognition. To capture the phonological structures of words, we break each word into a sequence of phonemes as its pronunciation so that homophones can have similar phoneme sets. For instance, the phonemes of the word pun are {P, AH, N}. In PCPR, we construct a pronunciation attentive module to identify important phonemes of each word, which can be applied in other tasks related to phonology. We jointly encode the contextual and phonological features into a self-attentive embedding to tackle both pun detection and location tasks. We summarize our contributions as following. • To the best of our knowledge, PCPR is the first work to jointly model contextualized word embeddings and pronunciation embeddings to recognize puns. Both contexts and phonological properties are beneficial to pun recognition.
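The phoneme sequences can come from any pronunciation dictionary; the paper does not prescribe a specific resource, so as an illustration here is a lookup based on NLTK's CMU Pronouncing Dictionary with stress digits stripped:

# Requires: pip install nltk, then nltk.download("cmudict") once.
from nltk.corpus import cmudict

PRONOUNCING_DICT = cmudict.dict()  # word -> list of phoneme sequences (with stress digits)

def phonemes(word):
    """Return the phoneme sequence of a word, e.g. 'pun' -> ['P', 'AH', 'N'].
    Stress digits are stripped; out-of-vocabulary words yield an empty list."""
    entries = PRONOUNCING_DICT.get(word.lower())
    if not entries:
        return []
    return [p.rstrip("0123456789") for p in entries[0]]  # take the first pronunciation

print(phonemes("pun"))                      # ['P', 'AH', 'N']
print(phonemes("sail"), phonemes("sale"))   # homophones share the same phoneme sequence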
Pun Recognition and Generation To recognize puns, In this section, we first formally define the problem and then introduce the proposed method, PCPR. Suppose the input text consists of a sequence of N words {w 1 , w 2 , • • • , w N }. For each word w i with M i phonemes in its pronunciation, the phonemes are denoted as R(w where r i,j is the j-th phoneme in the pronunciation of w i . These phonemes are given by a dictionary. In this paper, we aim to recognize potential puns in the text with two tasks, including pun detection and pun location, as described in the following. Task 1: Pun Detection. The pun detection task identifies whether a sentence contains a pun. Formally, the task is modeled as a classification problem with binary label y D . Task 2: Pun Location. Given a sentence containing at least a pun, the pun location task aims to unearth the pun word. More precisely, for each word w i , we would like to predict a binary label y L i that indicates if w i is a pun word. In addition to independently solving the above two tasks, the ultimate goal of pun recognition is to build a pipeline from scratch to detect and then locate the puns in texts. Hence, we also evaluate the end-to-end performance by aggregating the solutions for two tasks. Figure The context is essential for interpreting a word in the text. Hence, we propose to apply contextualized word embeddings to derive word representations. In the framework of PCPR, any contextualized word embedding method, such as BERT Pun BERT deploys a multi-layer bidirectional encoder based on transformers with multi-head selfattention To learn the phonological characteristics of words, PCPR models the word phonemes. For each phoneme r i,j of the word w i , we project r i,j to a d P -dimensional embedding space as a trainable vector u i,j to represent its phonological properties. Based on the phoneme embeddings of a word, we apply the attention mechanism where F P (•) is a fully-connected layer with d A outputs and d A is the attention size; v s is a d Adimensional context vector that estimates the importance score of each pronunciation embedding. Finally, the pronunciation embeddings T P i can be represented as the weighted combination of phoneme embeddings as follows: Moreover, we can further derive the joint embedding T J i to indicate both word semantics and phonological knowledge for the word w i by concatenating two different embeddings as follows: Note that the joint embeddings are d J -dimensional vectors, where d J = d C + d P . For the task of pun detection, understanding the meaning of input text is essential. Due to its advantages of interpretability over convolutional neural network , where F S (•) is the function to estimate the attention for queries, and d is a scaling factor to avoid extremely small gradients. Hence, the self-attentive embedding vector is computed by aggregating joint embeddings: Note that the knowledge of pronunciations is considered by the self-attentive encoder but not the contextualized word encoder. Finally, the pronunciation-attentive contextualized representation for the whole input text can be derived by concatenating the overall contextualized embedding and the self-attentive embedding: Moreover, each word w i is benefited from the selfattentive encoder and is represented by a joint embedding: Based on the joint embedding for each word and the pronunciation-attentive contextualized embedding for the whole input text, both tasks can be tackled with simple fully-connected layers. Pun Detection. 
Pun detection is modeled as a binary classification task. Given the overall embedding for the input text T J [CLS], the prediction ŷD is generated by a fully-connected layer and the softmax function, where F D (•) derives the logits of the two classes in binary classification. Pun Location. For each word w i, the corresponding self-attentive joint embedding T J i,[ATT] is applied as features for pun location. Similar to pun detection, the prediction ŷL i is generated by a fully-connected layer and the softmax function, where F L (•) derives two logits for classifying whether a word is a pun word. Since both tasks are binary classification, we optimize the model with the cross-entropy loss. In this section, we describe our experimental settings and explain the results and interpretations. We verify two basic assumptions of this paper: (1) the contextualized word embeddings and pronunciation embeddings are both beneficial to the pun detection and location tasks; (2) the attention mechanism can improve the performance. Experimental Datasets. We conducted experiments on the SemEval 2017 shared task 7 dataset and the PTD dataset. For pun detection, the SemEval dataset consists of 4,030 and 2,878 examples for pun detection and location, respectively, while each example with a pun can be a homographic or heterographic pun. In contrast, the PTD dataset contains 4,826 examples without labels of pun types. We adopt BERT to derive the contextualized word embeddings. To tune the hyperparameters, we search the phoneme embedding size d P and the attention size d A over {8, 16, 32, 64, 128, 256, 512}, as shown in the corresponding figure. For the SemEval dataset, nine baseline methods are compared in the experiments, including Duluth, UWaterloo, UWAV, and PunFields. For the PTD dataset, four baseline methods with reported performance are selected for comparison, including MCL. Pun Detection. The detection results are reported in the corresponding table. We notice that some of the baseline models, such as UWaterloo, UWAV and PunFields, have poor performance. These methods consider the word position in a sentence or calculate the inverse document frequency of words. The ultimate goal of pun recognition is to establish a pipeline to detect and then locate puns; the pipeline results are reported in the corresponding table. Ablation Study. To better understand the effectiveness of each component in PCPR, we conduct an ablation study on the homographic puns of the SemEval dataset; the results are reported in the corresponding table. The corresponding figure shows attention examples from heterographic puns in the SemEval dataset: "A busy barber is quite harried.", "I phoned the zoo but the lion was busy.", and "The boating store had its best sail ever." The word highlighted in the upper sentence (marked in pink) is a pun, while we also color each word of the lower sentence in blue according to the magnitude of its attention weights. The deeper colors indicate higher attention weights. In the first example, busy has the largest weight because it has the most similar semantic meaning to harried. The word barber also has a relatively high weight; we suppose it is related to hairy, which should be the other word of this double entendre. Similarly, zoo corresponds to lion, while phone and busy indicate line for the pun. Moreover, boating confirms sail while store supports sale. Interpreting the weights from our self-attentive encoder explains the significance of each token when the model detects the pun in the context.
The phonemes are essential in these cases because they strengthen the relationship among words with distant semantic meanings but similar phonological expressions. Sensitivity to Text Lengths. The corresponding figure analyzes how performance varies with sentence length. Case Study and Error Analysis. The corresponding table presents representative cases and errors. In this paper, we propose a novel approach, PCPR, for pun detection and location by leveraging a contextualized word encoder and modeling phonemes as word pronunciations. As future work, we plan to apply the proposed model to other problems, such as general humor recognition, irony discovery, and sarcasm detection.
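A compact sketch of the pronunciation-attentive representation and the two classification heads described in this paper. The layer sizes, the tanh inside the attention scorer, and the use of the first token as [CLS] are illustrative assumptions, not the authors' exact implementation:

import torch
import torch.nn as nn

class PronunciationAttention(nn.Module):
    """Attention pooling over phoneme embeddings (T^P_i), concatenated with the
    contextualized word embedding to form the joint embedding T^J_i."""
    def __init__(self, n_phonemes=84, d_p=64, d_a=64):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_p, padding_idx=0)
        self.proj = nn.Linear(d_p, d_a)                 # F_P(.)
        self.context = nn.Parameter(torch.randn(d_a))   # context vector v_s

    def forward(self, phoneme_ids, contextual_emb):
        # phoneme_ids: (batch, words, max_phonemes); contextual_emb: (batch, words, d_c)
        u = self.embed(phoneme_ids)                               # phoneme embeddings u_{i,j}
        scores = torch.tanh(self.proj(u)) @ self.context          # per-phoneme importance
        scores = scores.masked_fill(phoneme_ids == 0, -1e9)       # ignore padding phonemes
        alpha = torch.softmax(scores, dim=-1).unsqueeze(-1)       # attention weights
        t_p = (alpha * u).sum(dim=-2)                             # pronunciation embedding T^P_i
        return torch.cat([contextual_emb, t_p], dim=-1)           # joint embedding T^J_i

class PunHeads(nn.Module):
    """Binary heads for sentence-level pun detection (F_D over the [CLS] embedding)
    and word-level pun location (F_L over each word's self-attentive embedding)."""
    def __init__(self, d_joint):
        super().__init__()
        self.detect = nn.Linear(d_joint, 2)
        self.locate = nn.Linear(d_joint, 2)

    def forward(self, cls_emb, word_embs):
        return self.detect(cls_emb), self.locate(word_embs)

# Toy shapes: 2 sentences, 12 words, up to 6 phonemes per word, 768-dim contextual encoder.
attn = PronunciationAttention()
joint = attn(torch.randint(0, 84, (2, 12, 6)), torch.randn(2, 12, 768))   # (2, 12, 832)
heads = PunHeads(d_joint=joint.size(-1))
det_logits, loc_logits = heads(joint[:, 0], joint)                         # first token as [CLS]
loss = nn.CrossEntropyLoss()(det_logits, torch.tensor([1, 0]))             # detection loss example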
1,008
1,256
1,008
A Relational Memory-based Embedding Model for Triple Classification and Search Personalization
Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems. To this end, we introduce a novel embedding model, named R-MeN, that explores a relational memory network to encode potential dependencies in relationship triples. R-MeN considers each triple as a sequence of 3 input vectors that recurrently interact with a memory using a transformer self-attention mechanism. Thus R-MeN encodes new information from interactions between the memory and each input vector to return a corresponding vector. Consequently, R-MeN feeds these 3 returned vectors to a convolutional neural network-based decoder to produce a scalar score for the triple. Experimental results show that our proposed R-MeN obtains state-of-the-art results on SEARCH17 for the search personalization task, and on WN11 and FB13 for the triple classification task.
Knowledge graphs (KGs) -representing the genuine relationships among entities in the form of triples (subject, relation, object) denoted as (s, r, o) -are often insufficient for knowledge presentation due to the lack of many valid triples Early embedding models such as TransE Existing embedding models are showing promising performances mainly for knowledge graph completion, where the goal is to infer a missing entity given a relation and another entity. But in real applications, less mentioned, such as triple classification To this end, we leverage the relational memory network • We present R-MeN -a novel KG embedding model to memorize and encode the potential dependencies among relations and entities for two real applications of triple classification and search personalization. • Experimental results show that R-MeN obtains better performance than up-to-date embedding models, in which R-MeN produces new state-of-the-art results on SEARCH17 Let G be a KG database of valid triples in the form of (subject, relation, object) denoted as (s, r, o). KG embedding models aim to compute a score for each triple, such that valid triples obtain higher scores than invalid triples. We denote v s , v r and v o ∈ R d as the embeddings of s, r and o, respectively. Besides, we hypothesize that relative positions among s, r and o are useful to reason instinct relationships; hence we add to each position a positional embedding. Given a triple (s, r, o), we obtain a sequence of 3 vectors {x 1 , x 2 , x 3 } as: where W ∈ R k×d is a weight matrix, and p 1 , p 2 and p 3 ∈ R d are positional embeddings, and k is the memory size. We assume we have a memory M consisting of N rows wherein each row is a memory slot. We use M (t) to denote the memory at timestep t, and M (t) i,: ∈ R k to denote the i-th memory slot at timestep t. We follow ] with M (t+1),h i,: where H is the number of attention heads, and ⊕ denotes a vector concatenation operation. Regarding the h-th head, W h,V ∈ R n×k is a valueprojection matrix, in which n is the head size and k = nH. Note that {α i,j,h } N j=1 and α i,N +1,h are attention weights, which are computed using the softmax function over scaled dot products as: where W h,Q ∈ R n×k and W h,K ∈ R n×k are query-projection and key-projection matrices, respectively. As following i,: to a multi-layer perceptron followed by a memory gating to produce an encoded vector y t ∈ R k for timestep t and the next memory slot M (t+1) i,: for timestep (t + 1). As a result, we obtain a sequence of 3 encoded vectors {y 1 , y 2 , y 3 } for the triple (s, r, o). We then use a CNN-based decoder to compute a score for the triple as: where we view [y 1 , y 2 , y 3 ] as a matrix in R k×3 ; Ω denotes a set of filters in R m×3 , in which m is the window size of filters; w ∈ R |Ω| is a weight vector; * denotes a convolution operator; and max denotes a max-pooling operator. Note that we use the max-pooling operator -instead of the vector concatenation of all feature maps used in ConvKB We illustrate our proposed R-MeN as shown in Figure where G and G are collections of valid and invalid triples, respectively. G is generated by corrupting valid triples in G. 3 Experimental setup 3.1 Task description and evaluation
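A minimal sketch of the CNN-based decoder scoring described above: the three memory-encoded vectors are viewed as a k x 3 matrix, convolved with window-size-m filters spanning all three positions, ReLU-activated, max-pooled per feature map, and projected to a scalar with the weight vector w. Treating each filter as an m x 3 kernel over the embedding axis is our reading of the notation, not a verbatim copy of the released code:

import torch
import torch.nn as nn

class CNNTripleDecoder(nn.Module):
    """Scores a triple from the three memory-encoded vectors [y1, y2, y3]."""
    def __init__(self, k=128, num_filters=64, m=3):
        super().__init__()
        # Each filter covers m consecutive embedding dimensions across all 3 positions.
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(m, 3))
        self.w = nn.Linear(num_filters, 1, bias=False)

    def forward(self, y1, y2, y3):
        # y1, y2, y3: (batch, k) encoded vectors for subject, relation, object.
        mat = torch.stack([y1, y2, y3], dim=-1).unsqueeze(1)   # (batch, 1, k, 3)
        feats = torch.relu(self.conv(mat))                     # (batch, filters, k-m+1, 1)
        pooled = feats.amax(dim=(2, 3))                        # max-pool each feature map
        return self.w(pooled).squeeze(-1)                      # scalar score per triple

decoder = CNNTripleDecoder()
score = decoder(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
print(score.shape)  # torch.Size([4]) -- one score per triple in the batch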
Table Regarding TransE, we obtain the second-best accuracy of 89.2% on WN11 and a competitive accuracy of 88.1% on FB13. Figure Method WN11 FB13 Avg. NTN Table Method MRR H@1 SE (Original rank) 0.559 38.5 CI Table Next, we present in Figure For the last experiment, we compute and report our ablation results over 2 factors in Table We propose a new KG embedding model, named R-MeN, where we integrate transformer self-attention mechanism-based memory interactions with a CNN decoder to capture the potential dependencies in the KG triples effectively. Experimental results show that our proposed R-MeN obtains the new state-of-the-art performances for both the triple classification and search personalization tasks. In future work, we plan to extend R-MeN for multihop knowledge graph reasoning. Our code is available at:
943
3,244
943
Weakly-Supervised Questions for Zero-Shot Relation Extraction
Zero-Shot Relation Extraction (ZRE) is the task of Relation Extraction where the training and test sets have no shared relation types. This very challenging setting is a good test of a model's ability to generalize. Previous approaches to ZRE reframed relation extraction as Question Answering (QA), allowing for the use of pre-trained QA models. However, this method required manually creating gold question templates for each new relation. Here, we do away with these gold templates and instead learn a model that can generate questions for unseen relations. Our technique can successfully translate relation descriptions into relevant questions, which are then leveraged to generate the correct tail entity. On tail entity extraction, we outperform the previous state-of-the-art by more than 16 F1 points without using gold question templates. On the RE-QA dataset, where no previous baseline for relation extraction exists, our proposed algorithm comes within 0.7 F1 points of a system that uses gold question templates. Our model also outperforms the state-of-the-art ZRE baselines on the FewRel and WikiZSL datasets, showing that QA models no longer need template questions to match the performance of models specifically tailored to the ZRE task. Our implementation is available at
Building models that capture abstract knowledge rather than just memorizing data is one of the inspirations for zero-shot benchmarks In ZRE, the test relations do not appear in the training data, so one cannot apply typical relation classification approaches. One method for ZRE is to reframe the task as a Question-Answering (QA) problem by manually creating question templates for each relation type. Extracting the tail entity is accomplished by finding the answer span for the corresponding question template We treat ZRE as a tail entity generation task for which we consider the dual training of question and answer generators. We create question and answer generators by pre-training the publicly available T5 models Our experiments show that our off-policy sampling technique is critical for creating semantically relevant questions. For ZRE, we show that our weakly-supervised questions produce a model with competitive F1 score of 65.4 compared to the F1 score of 66.1 achieved by using gold question templates. Our contributions can be summarized as the following: • We propose a new learning objective that combines off-policy sampling and MML optimization. • We can successfully generate semantically relevant questions for a given relationship signal. • We report a new state-of-the-art ZRE performance on the RE-QA dataset
In this work, we train models to extract facts from unstructured text. Facts are represented as triplets (e 1 , r, e 2 ) where e 1 is the head entity, e 2 is the tail entity, and r is the relation keywords. We explore relation extraction To transfer the models pre-trained on QA corpora to the task of TE, we can provide a natural question for the given head entity and the relation keywords. However, providing question templates for every relation type is infeasible. We explore the idea of generating questions semantically relevant to the given input context, relation and the head entity. Therefore, we marginalize the joint distribution P (e 2 , q|c, e 1 , r) with respect to the unobserved questions q to learn the tail entity generator: P (e 2 |c, e 1 , r) = q P (e 2 , q|c, e 1 , r). As illustrated by Figure For example, given a context biography about the person "Donald Trump" and the relation keywords "place of birth," we would use P θ Q to first generate the question "Where was Donald Trump born?" and then the answer module P θ A can generate the response "New York." We then re-purpose this TE model for the task of RE. Whether we train the tail entity generator P (e 2 |c, e 1 , r) with gold, pseudo, or generated questions, we perform the RE task by scoring every possible relation of the test data and choosing the highest scoring relation: r = arg max r P (e 2 |c, e 1 , r) (2) In this section we discuss the pre-training steps for the question and answer modules and then explore combinations of four training objectives based on maximizing the marginal likelihood of Equation The answer generator is based on the publicly available T5 model 3) RACE dataset as a multiple-choice QA dataset where we generate the question's correct choice as the answer text in the decoder We now fine-tune a T5 model using the QA corpora. We learn the distribution P (q|c, e 1 , r, e 2 ), and use the answer in each QA instance as a proxy for the tail entity e 2 . To specify the synthetic head entity e 1 , we run a named entity tagger 2 over questions in the QA corpus. For questions with multiple extracted entities, one of the entities are randomly selected as the head entity. To specify the relation r from the question, after ignoring punctuation or interrogative words (e.g., what, where, etc.), at most four tokens (excluding the extracted entities) are sampled. We fine-tune the T5 model on this synthetic dataset, giving gold passage as the context c, one of the extracted named entities as e 1 , the gold answer tokens as e 2 , and the sampled words as the relationship keyword r. We train the model to produce the gold question. We specifically fine-tune the model on the answerable questions from the SQuAD Having the pre-trained answer generator, we optimize log P (e 2 |c, q * ), and log P (e 2 |c, q pseudo ) on RE datasets using the gold q * and pseudo questions q pseudo , respectively. To generate questions and learn from them, we also directly maximize the Marginal Log-Likelihood (MML) to train both the question and answer modules in Equation Similarly, we use the following gradient update for θ A : As we ultimately want to generate the correct tail entity regardless of the input question in the answer module, we can use the best question from P θ Q and then optimize the probability of the tail entity for such a decoded question. This idea results in the following G gradient (G: greedy) update for the answer module: We encountered two issues while training the question generator with the previous MML objective. 
The first issue is that samples become ungrammatical templates as we continue training the question module. We hypothesize that the issue originates from taking direct samples from the question generator P θ Q (q|c, e 1 , r) while simultaneously updating it during training. These spurious questions can hinder the model from generalizing to a new unseen relation. The program synthesis and semantic parsing research have reported a similar issue The second issue in taking samples from P θ Q (q|c, e 1 , r) is that omitting the tail entity in the question generator while using the input context c sometimes results in multiple plausible questions. For example, consider the context "The Facebook brand was replaced by Meta in November 2021." For the ambiguous relationship word "replace", given the head entity "Facebook", we can generate two valid questions: a) "Which brand replaced Facebook?", b) "When was the brand Facebook replaced?". To resolve the first issue, we apply an off-policy sampling technique To resolve the second issue, we feed the tail entity along with the context into the search module; however, we cannot feed the tail entity into the question generator P θ Q (q|c, e 1 , r) as the correct tail entity is unknown during the test phase. Furthermore, to augment the information of the relationship keyword r, we append the relation description for each r and feed it to the question generator P θ Q and the sampling module S(q). We assume that our search module S(q) is a fixed model approximating the question posterior P (q|c, e 1 , r, e 2 ), which can be our pre-trained question generator. Thus, we use two identical copies of our pre-trained question generator. One is fixed and will be used to provide samples for training. The other serves as the initial network for P θ Q (q|c, e 1 , r), and we update its parameters during the training phase. The search module will also receive the tail entity e 2 as input, whereas the P θ Q only receives c, e 1 , and r. During the test phase, we only generate questions from P θ Q . Using the off-policy samples, the new MML gradient updates will have the following forms, where S(q) is our pre-trained question module and ϕ(q) approximates the true question posterior according to Equation 3: Table To generate the tail entity on the test data, regardless of fine-tuning the pre-trained answer module with gold, pseudo, or generated questions, we use greedy decoding to find the top-scoring tokens. To generate questions from P θ Q (q|c, e 1 , r), we use the top-scoring question found with beam search decoding having the beam size of 8. During training, to estimate the G gradient vector listed in Table For all the datasets, if possible, we retrieve the relation description from the wikidata For the TE task, as we generate a single tail entity e ′ 2 for each input sentence, the extracted triple (e 1 , r, e ′ 2 ) matches the ground truth triple (e 1 , r, e 2 ) only if e ′ 2 = e 2 (case insensitive). For the negative examples, we generate a special 'NO_ANSWER' output specifying the null tail entity. We then use the official evaluation script 4 to compute the Precision, Recall, and F1-score for the TE task when there are negative examples in the test data. Precision is the true positive count divided by the number of times the system generates a non-null tail entity. Recall is the true positive count divided by the number of positive examples having the tail entity given e 1 and r. 
For the ZRE performed by the inference objective 2, we compute the macro Precision, Recall, and F1-score; averaged across relation types only on positive examples. Few related works have focused on ZRE. The ZS-BERT model We also provide the reported ZRE results on the FewRel and WikiZSL datasets from Tran et al. ( For pre-training/fine-tuning T5 on QA and RE datasets, we use the Adafactor optimizer, which requires less memory compared to Adam For fine-tuning our models and all the baselines on all the RE datasets, we train them for one epoch using the batch size of 16 on the RE-QA, 4 on the FewRel, and 16 on the WikiZSL datasets, respectively. Every 100 training steps, we evaluate the models on the dev sets, and then we report the performance of the dev set's best model on the test split. For our models, on all the three datasets, we pre-train/fine-tune T5-small with 6 transformer blocks and 512 hidden states. Due to GPU constraints, we use eight samples or a beam size of 8 in the top-p sampling and beam search decoding algorithms, respectively. In our first experiment, we compare the different training objectives listed in Table Table In Table We now compare systems trained with the OffMML-G training objective with the two primary baselines of using pseudo or gold questions to fine-tune the answer generator. For the tail entity generation task, Table For the ZRE task, as listed in Table In Table Weakly-Supervised Semantic Parsing: The objective in Equation 1 follows a similar searchand-learn framework used in the recent works of weakly-supervised program synthesis and semantic parsing Reading Comprehension for RE: The joint ex-traction of entities and relations has also been reduced as a multi-hop QA task using pre-defined question templates for a few entity and relation types present in the ACE and the CoNLL04 datasets Several works have done event argument extraction through QA Question Generation: The Question Generation (QG) research aims at generating natural questions given a document such that the answer modules can find the answers to these questions. Recent systems build end-to-end neural sequence-tosequence models for QG Earlier work suggests dual training of the QG and answer-sentence selection tasks, however, they could marginally improve both of the QG and QA tasks However, its rules are not comprehensive for generating questions for complex relations. Apart from training the QG models on supervised data, recent work uses policy gradient reinforcement learning to a) Optimize rewards related to the fluency and answer-ability of the generated questions Despite the previous studies on QG, we treat questions as latent variables without having access to gold questions given the input context and the relation triplets. This work introduces the OffMML-G training objective to fine-tune the question and answer generators for RE. Our method generates semantically relevant questions for the answer module given the head entity and the relationship keywords. We demonstrated that with these weakly-supervised questions, one could fine-tune QA models on the RE corpora, achieving competitive results in detecting unseen relations. Our future direction would deploy the technique on document-level entityrelation extraction, further exploiting the inference capabilities of QA models. 
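For concreteness, here is a sketch of an OffMML-style update in the spirit of the objective described above. Since Equation 3 is not reproduced in this text, the posterior weighting below (reweighting off-policy question samples by how well they explain the gold tail entity) is an assumption, not the paper's exact estimator:

import torch

def offmml_loss(question_logps, answer_logps):
    """Approximate OffMML update from K off-policy question samples.

    question_logps: (K,) log P_thetaQ(q_k | c, e1, r)  -- trainable question generator
    answer_logps:   (K,) log P_thetaA(e2 | c, q_k)     -- trainable answer generator
    The samples q_k come from a FIXED copy of the pre-trained question generator
    (the search module), which also sees the gold tail entity; the weights are
    treated as constants (detached) during the gradient update.
    """
    with torch.no_grad():
        weights = torch.softmax(answer_logps, dim=0)   # approximate question posterior
    # Maximize the weighted marginal log-likelihood of (question, tail entity) pairs.
    return -(weights * (question_logps + answer_logps)).sum()

# Toy usage with K = 4 sampled questions (log-probs would come from T5 scoring).
q_lp = torch.tensor([-1.6, -2.3, -0.9, -1.2], requires_grad=True)
a_lp = torch.tensor([-0.7, -3.0, -1.2, -1.9], requires_grad=True)
loss = offmml_loss(q_lp, a_lp)
loss.backward()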
A major limitation of our method is that we need to provide three distributions during training: a) P(q | c, e1, r) as the question generator, b) P(e2 | c, e1, r, q) as the answer module, and c) S(q) as our fixed search module over questions. This generative approach (i.e., generating the tail entity) to relation extraction requires more compute resources compared to a direct discriminative approach that learns P(r | c, e1, e2); however, such a direct approach cannot be used to transfer QA models into the RE task. We have used T5-small models in all our experiments. Further gains could be achieved by switching to T5-large models; however, we leave those large-scale experiments for future work. Finally, artifacts of the training data may affect the RE results that we present here (e.g., producing poor performance for certain kinds of relations, or for entities with names different from the training data).
1,285
1,337
1,285
Self-Adaptive Scaling Approach for Learnable Residual Structure
Residual connections have been widely applied to build deep neural networks with enhanced feature propagation and improved accuracy. In the literature, multiple variants of the residual structure have been proposed. However, most of them are manually designed for particular tasks and datasets, and the combination of existing residual structures has not been well studied. In this work, we propose the Self-Adaptive Scaling (SAS) approach that automatically learns the design of the residual structure from data. The proposed approach makes the best of various residual structures, resulting in a general architecture covering several existing ones. In this manner, we construct a learnable residual structure which can be easily integrated into a wide range of residual-based models. We evaluate our approach on various tasks concerning different modalities, including machine translation (IWSLT-2015 EN-VI and WMT-2014 EN-DE, EN-FR), image classification (CIFAR-10 and CIFAR-100), and image captioning (MSCOCO). Empirical results show that the proposed approach consistently improves the residual-based models and exhibits desirable generalization ability. In particular, by incorporating the proposed approach into the Transformer model, we establish a new state of the art on the IWSLT-2015 EN-VI low-resource machine translation dataset.
Recently, residual learning attracts considerable attention in training deep neural networks, and many efforts have been devoted to study the utilization of residual structure in tasks across a broad span of fields, including but not limited to computer vision Generally, the residual structures (as illustrated in Figure where x denotes the input (i.e., the skip connection), F denotes the residual function (i.e., residual branch) parameterized by W , and y is the output of the residual block. The balance between x and F is governed by the weights α and β, followed by G, which could be either identity mapping or normalization. Previous works on residual structure designing, which differ in the way that the information flows are regulated, mainly concern two elements, namely the mapping formulation (weight assignment) and the normalization mechanism. As shown in Figure Despite their respective advantages and success in certain fields, we argue that the structures are only particular cases of a more general one, which necessitates further insights into possible combinations. However, the determination of an effective combination may require prior knowledge of the data distribution, which is not always available, or extensive hyper-parameter exploration, which is inefficient. In this paper, we aim at constructing a comprehensive and flexible residual structure. To this end, we propose the Self-Adaptive Scaling approach. In the residual structure, the proposed approach automatically computes scaling factors to adjust the mapping formulation and the normalization mechanism, respectively. By assigning different importance to the skip connection, the residual branch and a normalized result, the scaling factors adaptively controls the topology of the residual building blocks. As a result, the structure learned by our proposed approach can be easily generalized to various kinds of tasks and data, dispensing with the timeconsuming architecture search, to some extent. The proposed learnable residual structure can be easily integrated into existing residual-based models. We evaluate the proposed approach on representative residual models for various tasks. The experiment results and analyses attest to our argument and the effectiveness of the proposal. Overall, the contributions are summarized as followed: • We proposed a novel self-adaptive scaling (SAS) approach to acquire a learnable residual structure, which allows deep neural models to automatically learn the residual structure and can cover different types of existing ones. • The proposed approach is simple and can be easily applied to a wide range of existing residual-based models. According to our empirical studies, the SAS can enable existing models to achieve consistent performance gains, demonstrating its generalization ability to a wide range of existing systems. • The experimental results on the IWSLT-2015 EN-VI show that SAS helps the Transformer-Base model to perform even better than the Transformer-Big model and, encouragingly, we establish a new state-of-the-art on this lowresource machine translation dataset.
In recent years, the application of residual structure to deep neural networks has become an active research topic, and two problems are central to this line of work. The first is how the information from the skip connection and the residual branch should be balanced so that the best improvements can be achieved. The second is how a neural network with residual connections should be optimized so that its representation capability can be fully exploited. These two types of problems are mainly addressed by designing an appropriate mapping formulation and normalization mechanism, respectively, and we refer to them as the connection problem and the optimization problem. On the connection problem. There are roughly three lines of methods to control the balance in residual connections: identity mapping, a constant scaling ratio, and an adjusted scaling ratio. On the optimization problem. In the realm of computer vision, PreAct-ResNet rearranges normalization and activation within the residual block to ease optimization. Layer normalization is widely believed to be helpful for stabilizing training and facilitating convergence. According to our experiments and analyses, layer normalization can indeed facilitate optimization and therefore improve the overall performance of the model. Different from existing work, we summarize the combination of normalization and residual connection in existing works with a general form y = α * x + β * F + γ * LN(x + F), where the mapping formulation and the normalization mechanism are both taken into account. By changing the scaling factors α, β and γ, the topology of the residual block can be adaptively adjusted, resulting in a learnable residual structure. The learned architecture distinguishes itself from the previous ones with generality and flexibility. Our work is also related to the line of research on neural architecture search, although our approach dispenses with an explicit, time-consuming search. In this section, we first briefly introduce the Scaling Gate in Section 3.1, which is used to predict the scaling factors for the mapping formulation and the normalization mechanism. Then, based on the scaling factors, in Section 3.2, we describe how to adaptively make the best of different types of residual structure to build a learnable residual structure. The Scaling Gate should be able to predict reasonable scaling factors. Our motivation stems from the superior performance of the Feed-Forward Network used in the Transformer: the gate feeds the concatenation of x and F through two layers, where [;] denotes the concatenation operation, W_f ∈ R^{2h×h} and W_ff ∈ R^{h×1} are the parameters to be learned, and S(x, F) is followed by a Sigmoid activation function. In Highway Net, by contrast, the gate is computed from the input x alone rather than from both x and F. It is intuitive to combine different types of residual structure through adjustable scaling factors. Therefore, we reformulate the residual block as y = α * x + β * F + γ * LN(x + F), where α, β and γ can be predicted by scaling gates with different parameters, and LN stands for layer normalization. By choosing certain values for α, β and γ, we can recover several special cases, as summarized in the corresponding table: the residual structure of the Transformer, of ResNet, of Inception-v4, shortcut-only gating, and a combination with Highway Net, where α and β act as the scaling factors predicted as aforementioned. In this section, we evaluate the proposed approach on three representative tasks in the natural language processing field, the computer vision field, and a cross-modal scenario, that is, machine translation, image classification, and image captioning. We first briefly introduce the baseline models for comparison, the datasets, the metrics, and implementation details, followed by discussions about the experimental results.
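A minimal sketch of the Scaling Gate and the learnable residual block described above. The intermediate ReLU inside the gate and tying gamma = (1 - alpha)(1 - beta) are assumptions consistent with the analysis later in the paper, not a verbatim reproduction of the authors' implementation:

import torch
import torch.nn as nn

class ScalingGate(nn.Module):
    """Predicts a scalar gate in (0, 1) from the concatenated skip connection
    and residual branch, following S(x, F) described above."""
    def __init__(self, h):
        super().__init__()
        self.w_f = nn.Linear(2 * h, h)
        self.w_ff = nn.Linear(h, 1)

    def forward(self, x, f):
        return torch.sigmoid(self.w_ff(torch.relu(self.w_f(torch.cat([x, f], dim=-1)))))

class SASResidualBlock(nn.Module):
    """y = alpha*x + beta*F(x) + gamma*LN(x + F(x)), with gamma tied to
    (1 - alpha)(1 - beta) as one reading of the reformulated block above."""
    def __init__(self, h, residual_fn):
        super().__init__()
        self.residual_fn = residual_fn          # e.g. a feed-forward or attention sublayer
        self.gate_alpha = ScalingGate(h)
        self.gate_beta = ScalingGate(h)
        self.ln = nn.LayerNorm(h)

    def forward(self, x):
        f = self.residual_fn(x)
        alpha, beta = self.gate_alpha(x, f), self.gate_beta(x, f)
        gamma = (1 - alpha) * (1 - beta)
        return alpha * x + beta * f + gamma * self.ln(x + f)

# Toy usage: wrap a position-wise feed-forward sublayer of width 64.
block = SASResidualBlock(64, nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64)))
print(block(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])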
Since our major concern is the combination of different components in residual units, we keep the internal structure (i.e., the residual function F(x, W)) of each component unaffected. The training and inference strategies also remain the same as the original models. For more details, please refer to the cited publications. Baselines, Datasets, Metrics and Settings. For the task of machine translation, we adopt the popular Transformer There pairs in the IWSLT-2015 English-Vietnamese (EN-VI) Results. The results of machine translation are presented in Table Baselines, Datasets, Metrics and Settings. In the computer vision field, we benchmark our proposed learnable residual structure with residualbased image classification systems, i.e., Pre-Activated ResNet (PreAct-ResNet) The remaining weights are initialized in the same way as in Results. As can be seen in Table Baselines, Datasets, Metrics and Settings. To further demonstrate the generalization ability of our proposed approach, we conduct experiments on the task of image captioning. The experiments are based on the multi-head attention mechanism There are several datasets that consist of imagesentence pairs. Our reported results are evaluated on the popular Microsoft COCO (MSCOCO) We adopt SPICE, CIDEr, BLEU, METEOR and ROUGE for testing. They are previously used as evaluation methods for image captioning, we report the results using the MSCOCO captioning evaluation toolkit Results. The results on Karpathy test split In this section, we conduct several analyses to give further insights into our proposed approach, which are based on the image classification task and we adopt PreAct-ResNet-110 Analysis on Scaling Gate. Gate in Highway Net and our Scaling Gate to PreAct-ResNet-110, as well as the results of the vanilla model. As we can see, when equipped with Transform Gate, the effect is counter-productive on CIFAR-10 dataset. This indicates that the information from x along is not robust and effective enough to predict the scaling factors in the residual structure. The single layer version of our Scaling Gate takes into account the residual branch F, thereby improving over the Transform Gate. It is worth mentioning that compared with the Transform Gate (T (x) = xW T f + b T f ), which has (h × h) + h learnable parameters, Scaling Gate (Single Layer) only introduces (2h×1)+1 learnable parameters, which is much more efficient. By modeling the scaling factor with Scaling Gate (Full Model), a 0.16 points promotion is achieved over the baseline on CIFAR-10, which further demonstrates the advantages and effectiveness of the proposed Scaling Gate. Analysis on Self-Adaptive Scaling. Averaging over 10,000 experimental examples, we display in Figure x + F (Baseline) Table very helpful to information transfer and eases the optimization of deep neural networks. This can be attributed to the facilitated backward propagation of error signals by identity mappings. It is shown in the second column that as the number of network layers increases, the value of β grows simultaneously, which indicates that the representation ability of the residual branch F is stronger when it comes closer to the output of the model. The main reason is that the error signal passed to F is more adequate in the upper blocks, which is beneficial to optimization. As can be seen from the third column, more importance is assigned to the normalized result when it comes to the lower parts of the entire architecture, which means that the value of (1 -α)(1 -β) is larger. 
This is because, in the lower blocks of deep neural networks, the guidance from the error signal is weak and the optimization is unstable, making the introduction of layer normalization necessary. In all, the proposed approach regulates the information from individual components with scaling factors to build the learnable residual structure, which helps make the best of different types of residual structure, resulting in an effective combination and better performance. Analysis on Using Batch Normalization. Batch normalization is commonly used in the field of computer vision. Therefore, we replace the layer normalization with batch normalization in the proposed approach to see the difference. As shown To understand how the proposed approach helps the optimization of deep neural models, we inspect the gradient norm of the output of each residual block in the pre-trained PreAct-ResNet-110 on CIFAR-10. The gradients are averaged over 10,000 randomly selected training examples. As shown in the left plot of Figure In this work, we focus on building a learnable residual structure, which automatically learns the design of the residual structure from data instead of relying on the hand-crafted designs in previous work. We propose the Self-Adaptive Scaling approach to achieve this goal, which combines various residual structures via the predicted scaling factors, resulting in a general residual structure that covers several existing models. The proposed approach is simple and can be easily integrated into existing residual-based models. Experiments on the machine translation, image classification and image captioning tasks validate the effectiveness of the proposed method, which improves the performance of all the strong baselines. This also demonstrates the generalization ability of our method. In particular, when applied to the recently proposed Transformer model, our approach establishes new state-of-the-art results on the IWSLT EN-VI low-resource machine translation task, which further substantiates its effectiveness. Detailed analyses show that the proposed approach also improves the optimization of deep neural networks and helps exert the expressive power of existing models.
1,311
3,119
1,311
Elastic weight consolidation for better bias inoculation
The biases present in training datasets have been shown to affect models for sentence pair classification tasks such as natural language inference (NLI) and fact verification. While fine-tuning models on additional data has been used to mitigate them, a common issue is that of catastrophic forgetting of the original training dataset. In this paper, we show that elastic weight consolidation (EWC) allows finetuning of models to mitigate biases while being less susceptible to catastrophic forgetting. In our evaluation on fact verification and NLI stress tests, we show that fine-tuning with EWC dominates standard fine-tuning, yielding models with lower levels of forgetting on the original (biased) dataset for equivalent gains in accuracy on the fine-tuning (unbiased) dataset.
A number of recent works have illustrated shortcomings in sentence-pair classification models that are used for Natural Language Inference (NLI). These arise from limited or biased training data and the lack of suitable inductive bias in models. Such biases also affect fact verification Symmetric (Counterfactual) Evidence To mitigate these undesirable behaviors For fine-tuning specifically, the reduction in accuracy on the original task called catastrophic forgetting On all experiments on the FEVER dataset, finetuning with symmetric counterfactual data
Fine-tuning broadly refers to approaches where a model is initially trained on one dataset and then further improved by training on another. We refer to these datasets as fine-tuning training and test data as FT-train and FT-test respectively. This technique is commonly used to mitigate model biases Elastic Weight Consolidation For efficiency, we use the empirical Fisher We assess the application of EWC to minimize catastrophic forgetting when mitigating model biases in the context of two sentence-pair classi-fication tasks: fact verification and natural language inference. We compare the untreated model (original), fine-tuning (FT), FT+EWC, FT+L2, and merging instances from the FT-train dataset when training (Merged). Each model is first trained using the original dataset and splits from the respective task, using the AllenNLP implementations The FEVER This is mitigating the claim-only bias by reducing the mutual information between claims and labels. The availability of counterfactual data meant that it is possible to experiment with fine-tuning as a mitigation strategy, using the published dev and test data as FT-train and FT-test respectively. Following Schuster et al. ( Mitigating model limitations in NLI stress tests: The MultiNLI Fine-tuning the models, rather than merging datasets, yielded the greatest improvements in accuracy on FT-test. All improvements from the untreated model were significant (p < 0.05, denoted #). Without L2 or EWC, catastrophic forgetting occurs due to the shift in label distribution between the FEVER and FT-train dataset, which only contains 2 of the original 3 label classes. Training a model where the FEVER training and FT-train were merged yielded modest improvements on the FT-test without harming the original FEVER task accuracy. We attribute this to the impact of these 700 instances being diluted by the large number of training instances in FEVER (FT-train is <1% the size of FEVER). Fine-tuning can be applied to any task using a small amount of bias-mitigating labeled data, whereas explicit modeling of hypothesis-only biases The trade-off between accuracy on the original and FT-test datasets is visualized in Figure In a separate experiment, we apply EWC to a different domain. We inoculate biases on the ESIM model for natural language inference reported in The ESIM model was sensitive to fine-tuning, attaining near perfect accuracy (top row of Figure The antonym stress-test only contains instances labeled contradiction: a change in label distribution that causes catastrophic forgetting. Without EWC, accuracy on MultiNLI fell to just above chance levels as the model learned only to predict contradiction (yellow dashed line). However, using an appropriate EWC penalty attained near-perfect accuracy with a smaller reduction in MultiNLI accuracy (purple dashed line). The ESIM model was sensitive to fine-tuning to introduce numerical reasoning behaviours to the model. As the difference in label distribution in the inoculation dataset was less severe than the Antonym dataset, the catastrophic forgetting was less severe. Nevertheless, FT+EWC minimized catastrophic forgetting at the expense of reducing sample efficiency: accuracy on MultiNLI fell from 77.9% to 75.4% without EWC and 76.8% with EWC. Fine-tuning can be used to mitigate model bias but has the risk that the model catastrophically forgets the data it was originally trained on. 
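Since EWC fine-tuning is only described at a high level above, the following is a minimal sketch, assuming PyTorch, of fine-tuning with an EWC penalty based on the diagonal empirical Fisher (mean squared gradients on the original training data). The penalty weight lambda_ewc, the optimizer choice and the plain (inputs, labels) batch format are illustrative assumptions, not the paper's AllenNLP configuration.

# Minimal sketch of fine-tuning with an EWC penalty and an empirical Fisher.
import torch


def empirical_fisher(model, data_loader, loss_fn):
    """Diagonal empirical Fisher: mean squared gradient per parameter on the original task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    n_batches = 0
    for inputs, labels in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), labels).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    model.zero_grad()
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


def ewc_penalty(model, anchor_params, fisher):
    """Quadratic penalty keeping parameters close to their values before fine-tuning."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return penalty


def finetune_with_ewc(model, ft_loader, orig_loader, loss_fn, lambda_ewc=1e3, lr=1e-5, epochs=3):
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    fisher = empirical_fisher(model, orig_loader, loss_fn)      # computed on the original data
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, labels in ft_loader:                        # the unbiased FT-train data
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels) + lambda_ewc * ewc_penalty(model, anchor, fisher)
            loss.backward()
            optimizer.step()
    return model

Setting lambda_ewc to zero recovers standard fine-tuning; larger values trade accuracy on the fine-tuning data for less forgetting of the original task.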
Incorporating elastic weight consolidation (EWC) when finetuning minimizes catastrophic forgetting, yielding higher accuracy on the original task. We show this holds for both the NLI stress-tests, as well debiasing fact-verification systems We use the FEVER dataset in this paper due to the availability of the symmetric counterfactual data released by If there was no mutual information between claims (without evidence) and labels, this should be 33%. For FEVER, this bias is introduced through synthetic generation of the claims and is more problematic than the biases that occur in the datasets consisting of naturally occurring claims. In Liar and MultiFC, the claims arise from real-world events and the biases in the data reflect political viewpoints, rather than cognitive shortcuts taken by the FEVER annotators. The RoBERTa model, trained on only the claims outperforms the sentence-pair setup for
782
558
782
How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation?
Existing approaches for the Table-to-Text task suffer from issues such as missing information, hallucination and repetition. Many approaches to this problem use Reinforcement Learning (RL), which maximizes a single manually defined reward, such as BLEU. In this work, we instead pose the Table-to-Text task as an Inverse Reinforcement Learning (IRL) problem. We explore using multiple interpretable unsupervised reward components that are combined linearly to form a composite reward function. The composite reward function and the description generator are learned jointly. We find that IRL outperforms strong RL baselines only marginally. We further study the generalization of learned IRL rewards in scenarios involving domain adaptation. Our experiments reveal significant challenges in using IRL for this task.
Table -to-Text generation focuses on explaining tabular data in natural language. This is increasingly relevant due to the vast amounts of tabular data created in domains including e-commerce, healthcare and industry (for example, infoboxes in Wikipedia, tabular product descriptions in online shopping sites, etc.). Table-to-Text can make data easily accessible to non-experts and can automate certain pipelines like auto-generation of product descriptions. Traditional methods approached the general problem of converting structured data to text using slot-filling techniques However, defining a single reward that addresses all of the above-described issues is difficult. To use multiple reward components with RL, one has to manually find an optimal set of weights of each component either through a trial-and-error approach or expensive grid search which gets infeasible as the number of such reward components increases. Inverse Reinforcement Learning
The training data for this task consists of pairs of tables and corresponding natural language descriptions, as shown in Figure We pose Table-to-Text under the IRL framework where we aim to jointly learn a policy for generating description from the table and the underlying composite reward function. At the core of our approach, we have a neural description generator that we adapt from where φ is a weight vector, C t is the vector of reward component values at step t in a generated description and τ denotes total steps. Following the MaxEnt IRL paradigm, we assume the expert descriptions come from a loglinear distribution (p φ (D)) on reward values. The objective of the reward approximator (J r (φ)) is to maximize the likelihood of the expert descriptions. The partition function for this distribution (p φ (D)) is approximated by using importance sampling from the learned description generation policy. For sake of brevity, we skip the mathematical derivation here. Please refer to Appendix A.1 for detailed derivation. We draw N expert descriptions and M descriptions from the learned policy. The gradient of the objective (J r (φ)) w.r.t. reward function parameters φ is then the difference between the expected expert reward and expected reward obtained by the policy where D i and D j are drawn from the training data and the learned policy respectively and β's are importance sampling weights. The linear functional form of the reward simplifies individual weight updates as a simple difference of the expected expert and the expected roll out reward component from policy. Weight update for component c is: where c i is total value of reward component over all steps for i th expert description and c j is total value of reward component over all steps for j th generated description. To stabilize training when learning the policy for description generation we mix in weighted MLE loss with the policy gradient loss before backpropagation. Please refer to supplementary material (Appendix A.5) for model training details. We aim to find a reward function that can combine multiple characteristics present in a good description such as faithfulness to the table and fluency. To encourage faithfulness, we use recall and reconstruction as reward components, while to characterize grammatical correctness and fluency we use repetition and perplexity. We also consider BLEU score as a reward component. BLEU is a supervised reward component as it requires ground-truth descriptions for its computation. However, all other reward components are unsupervised. • Recall: Fraction of slot values in the table mentioned in the description. • Reconstruction: We use QA models to extract answers from the description against a few "extractor" slot types (for example, "What is the name of the person in the description?" is used as a question for the slot type "Name ID"). Details about other extractor slot types are provided in Appendix A.3. Reconstruction score is the average of lexical overlap scores between predicted and true slot values, corresponding to the extractor slot types present in the table. • Repetition: Fraction of unique trigrams in the description. • Perplexity: This is the normalized perplexity of the description calculated using GPT-2 model In this section we describe our experiments and their results in detail. Wang et al. ( For evaluation, we report BLEU Table In Table We also find that having more reward components does not help IRL improve significantly. 
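To make the weight update above concrete, the following is a minimal sketch of two of the unsupervised reward components (repetition as the fraction of unique trigrams, recall as the fraction of slot values mentioned) and of the per-component gradient step, which moves each weight toward the difference between the expected expert component value and the importance-weighted expected policy component value. The learning rate, the uniform importance weights in the example, and the plain-Python data structures are illustrative assumptions.

# Minimal sketch of reward components and the per-component IRL weight update.
def repetition_reward(tokens):
    """Fraction of unique trigrams in a generated description (higher = less repetition)."""
    trigrams = list(zip(tokens, tokens[1:], tokens[2:]))
    if not trigrams:
        return 1.0
    return len(set(trigrams)) / len(trigrams)


def recall_reward(slot_values, description):
    """Fraction of table slot values that appear verbatim in the description."""
    if not slot_values:
        return 1.0
    return sum(v.lower() in description.lower() for v in slot_values) / len(slot_values)


def update_reward_weights(weights, expert_components, policy_components, importance, lr=0.01):
    """Gradient step on phi_c: expected expert component minus expected policy component."""
    z = sum(importance) or 1.0
    new_weights = {}
    for name, w in weights.items():
        expert_mean = sum(c[name] for c in expert_components) / len(expert_components)
        policy_mean = sum(b * c[name] for b, c in zip(importance, policy_components)) / z
        new_weights[name] = w + lr * (expert_mean - policy_mean)
    return new_weights


# Example with two components, two expert descriptions and two sampled descriptions.
weights = {"repetition": 0.5, "recall": 0.5}
expert = [{"repetition": 0.95, "recall": 0.9}, {"repetition": 0.97, "recall": 0.85}]
sampled = [{"repetition": 0.6, "recall": 0.4}, {"repetition": 0.7, "recall": 0.5}]
print(update_reward_weights(weights, expert, sampled, importance=[1.0, 1.0]))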
We note that IRL using all reward components gets the best BLEU but suffers a marginal drop in ROUGE. To evaluate if rewards learned using IRL generalize better to unseen data distributions, we evaluate it for scenarios involving domain adaptation. For this, we divide the dataset into disjoint subsets of categories involving people in sports, academia, art, etc. (category details in Appendix A.2). Each category has different table schemas. We train RL and IRL models on one category and test them on a different category. Since training on a single category limits the amount of labelled data, we consider training with unsupervised rewards that do not rely on the ground truth. Table We highlight some challenges with IRL training that potentially hinder IRL to get significantly better than RL baselines. Further, we discuss qualitative differences between RL and IRL models. Importance of reward components: During training, for most reward components, their values for expert and generated descriptions are close. However, the values of BLEU for generated descriptions are quite smaller than the BLEU value for expert descriptions. This shadows the contribution of other reward components irrespective of the weights assigned to them. Since BLEU optimizes for n-gram overlap with the expert text, it is undesirable to drop this component as it leads to text degeneration. As described in Section 3.2, we use adaptive multipliers to alleviate its dominance. However, its effect is limited and the method does not correspond to optimizing a fixed objective. Unstable training: To stabilize training, we mix the weighted MLE loss (cross-entropy loss) and the policy gradient objective. However, these losses can differ largely in scale. Having a larger weight to MLE loss diminishes the contribution of reward components, while larger weight to policy gradient leads to degeneration. These observations indicate the need for future research on training paradigms and better-designed reward components to address these challenges. Using only BLEU as a reward leads to generated descriptions that fit a general template resembling descriptions from the most common category ("Sports"). Including other reward components helps the model avoid this behavior. We still observe hallucination from both IRL and RL fine-tuned models. However, hallucinated information generated from IRL fine-tuned models often matches the overall theme (for example, it generates incorrect football league names but gets the name of the club mentioned in the table correct). Appendix A.7 shows an example of description generated by IRL (All) model. We present an approach using IRL for Table-to-Text generation using a set of interpretable reward components. While the approach outperforms RL, improvements are marginal, and we identify several challenges. In particular, using metrics like BLEU as reward components is problematic, since they affect weight convergence for IRL. Based on our study, the application of IRL for Table-to-Text generation would broadly benefit from designing better-calibrated reward components and improvements in training paradigms. We hope our exploration encourages the community to engage in interesting directions of future work. of a description is sum of rewards at each step. Let q θ (D) be the policy for description generation. We maximise the log-likelihood of the samples in the training set (Equation The gradient w.r.t. 
reward parameters is given by The partition function requires enumerating all possible descriptions which makes this intractable. This is tackled by approximating the partition function by sampling descriptions from the policy using importance sampling. The importance weight β i for a generated description D i is given by The gradient is now approximated as: where D i and D j are drawn from training data and q θ (D) respectively. We split the entire dataset as 80%, 10% and 10% for training, validation and testing respectively. Table • Reconstruction: We use Question Answering models to extract answers from the description corresponding to few slot types. For example, to extract the name from the description we ask a question "What is the name of the person?". The questions corresponding to each slot type is pre-determined. We extract values for four most common slot types occurring in the dataset -"name", "place of birth", "place of death" and "country". We will refer to these slots as "extraction slot types". The questions for these extractor slot types are "What is the name of the person in the description?", "What is the place of birth of the person in the description?", "What is the place of death of the person in the description?" and "Which country does the person in the description belong to?" respectively. All extraction slot types are not present in every table of the dataset (example, "place of death" is not present for a living sportsperson). Following SQUAD-like where Perplexity high and Perplexity low are the maximum and minimum perplexity of expert texts and texts generated by pretrained MLE model respectively. Let us assume that after the i th iteration of IRL, we have the multiplier value as m i . Let b be the average BLEU score obtained by the model. For (i + 1) th iteration we update the multiplier value as In case the change in weight is less than 0.00001, we instead increase multiplier value by 0.1. The maximum of multiplier value is 1. We start with initial multiplier value (m 0 ) as 1. Model parameters We follow the same training scheme and model parameters from
807
957
807
Making Use of Latent Space in Language GANs for Generating Diverse Text without Pre-training
Generating diverse texts is an important goal in unsupervised text generation. One approach is to produce diverse texts conditioned on a sampled latent code. Although several generative adversarial networks (GANs) have been proposed thus far, these models still suffer from mode-collapsing if the models are not pre-trained. In this paper, we propose a GAN model that aims to improve the approach to generating diverse texts conditioned by the latent space. The generator of our model uses the Gumbel-Softmax distribution for the word sampling process. To ensure that the text is generated conditioned upon the sampled latent code, a reconstruction loss is introduced in our objective function. The discriminator of our model iteratively inspects incomplete partial texts and learns to distinguish whether they are real or fake by using the standard GAN objective function. Experimental results using the COCO Image Captions dataset show that, although our model is not pre-trained, its performance is quite competitive with the existing baseline models, which require pre-training.
Generative adversarial networks (GANs) For language GANs, the diversity of the generated texts is an important evaluation metric. There are mainly two approaches to produce the diversity of texts by the generative models. One approach, which includes SeqGAN In this paper, we propose a GAN model that aims to improve the approach to generating diverse texts from the latent space. As for TextGAN and FM-GAN, the generator almost decisively selects each word using soft-argmax approximation to generate a sentence depending on the latent space information. To avoid mode-collapsing, instead of using standard GAN objective function, the discriminator of each model respectively measures the Maximum Mean Discrepancy (MMD) or the Feature Mover's Distance (FMD) between the true text representations and fake ones. These models succeed in generating diverse texts if the generators are pre-trained by a Variational Autoencoder. However, it is verified that these methods still fall into mode-collapsing if the generator is not pre-trained (Section 4.3.1). One of the possible reasons for the mode-collapsing is the deterministic word sampling process through a soft-argmax approximation from the beginning of the training. Deterministic word sampling process hinders the generator from exploring a variety of text generation, which may lead the generator to fall into sub-optimal point. The second possible reason is that the discriminator tries to discriminate the completed sentences. Generating a good-completed sentence from the beginning of the training is too difficult for the generator because the possible number of combinations of words increases exponentially if the number of words sampled becomes large. Therefore, there is a possibility that, without pre-training, the discriminator does not serve useful signals to the generator if the discriminator looks at only completed sentences. Based on these assumptions, our model adopts the following approach: The generator randomly samples words depending on the word probability distribution using the Gumbel-Softmax distribution We trained our model using the COCO Image Captions dataset
The language GANs, in which the generator and discriminator optimize their objective functions in an adversarial manner to generate realistic texts, have two main perspectives. As the first perspective, a reinforcement learning approach is used for optimizing the generator. SeqGAN As the second perspective, the model is end-toend differentiable from the discriminator to the generator. TextGAN Other text generation approaches beyond those described above also exist, such as a VAE-based model To the best of our knowledge, our approach is the first GAN model that does not require variational Autoencoder pre-training and is able to generate texts conditioned by the latent code Figure Our goal is to generate sentences conditioned by the latent space in a GAN framework. When training language GANs, if the discriminator only looks at the complete sentences, the generator obtains no learning signals early in the training be- Gumbel Softmax + Vector Quantization (=hard onehot) Gumbel Softmax (=soft onehot) Figure (1) where x <t := {x 1 , .., x t-1 } denotes a sequence of words before timestep t, and p * is the real data distribution. In a practical sense, the typical word sampling process makes the differentiability from the discriminator to the generator impossible because sampling a word from a word probability distribution is equivalent to creating a non-differentiable onehot vector. As a workaround, we use the Gumbel-Softmax distribution where Here, g is a randomly sampled value from the Gumbel distribution Gumbel(0, 1), o is the output from the generator. Note that τ 1 is the Gumbel-Softmax temperature, and τ 2 is the word probability distribution temperature. To ensure that the texts generated are conditioned by the sampled latent code, our model introduces a reconstructor R ψ , which is fed the generated text and outputs a reconstructed latent code to minimize the reconstruction loss between the latent code and the reconstructed code. The generator p θ and reconstructor R ψ are optimized simultaneously. Therefore, the objective function of the reconstructor is added to that of the generator multiplied by a coefficient λ. where N z denotes the dimension size of the latent code z. It should be noted that the joint distribution p θ (x 1 ...x t ) is decomposed into the iterative conditional distribution p θ (x 1 ...x t ) = p θ (x 1 )p θ (x 2 |x 1 )...p θ (x t |x <t ) such that conditional sampling can be executed using the Gumbel-Softmax distribution described above. We found several heuristic approaches for stabilizing the training. As the first, vector quantization First, we describe the data setting and evaluation metrics for the experiment. Second, we describe the experimental results to better evaluate the performance of our model. We experimentally evaluated the quality and diversity of our generated models using a real sentence dataset, i.e., COCO Image Captions The quality and diversity of the generated text were measured using the Negative BLEU score and the Self-BLEU score We compared the performance of our model with the baseline models: VAE, FM-GAN, and TextGAN. VAE used the CNN-LSTM autoencoder architecture as in We observe the effect of changing the value of the reconstruction coefficient λ in equation ( This paper proposed a GAN model that aims to improve the approach to generating diverse texts conditioned by a latent space. 
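Before turning to the quantitative results, the word sampling process described above can be summarized in the following minimal sketch, assuming PyTorch: Gumbel noise is added to the temperature-scaled generator logits, a soft one-hot vector is produced with the Gumbel-Softmax temperature tau1, and vector quantization turns it into a hard one-hot vector while the straight-through estimator keeps the path differentiable. The exact placement of the two temperatures tau1 and tau2 follows one plausible reading of the formulation and should be treated as an assumption.

# Minimal sketch of Gumbel-Softmax word sampling with straight-through vector quantization.
import torch
import torch.nn.functional as F


def gumbel_softmax_word_sample(logits: torch.Tensor, tau1: float = 1.0, tau2: float = 1.0,
                               hard: bool = True) -> torch.Tensor:
    """Sample a (near) one-hot word vector that stays differentiable w.r.t. the logits."""
    scaled = logits / tau2                                     # word distribution temperature
    gumbel = -torch.log(-torch.log(torch.rand_like(scaled) + 1e-20) + 1e-20)   # Gumbel(0, 1)
    soft_onehot = F.softmax((scaled + gumbel) / tau1, dim=-1)  # Gumbel-Softmax temperature
    if not hard:
        return soft_onehot
    # Vector quantization: forward pass uses the hard one-hot, backward pass uses the
    # soft distribution (straight-through estimator).
    index = soft_onehot.argmax(dim=-1, keepdim=True)
    hard_onehot = torch.zeros_like(soft_onehot).scatter_(-1, index, 1.0)
    return hard_onehot + soft_onehot - soft_onehot.detach()


if __name__ == "__main__":
    logits = torch.randn(4, 5000, requires_grad=True)   # batch of 4, vocabulary of 5000
    onehot = gumbel_softmax_word_sample(logits, tau1=0.5, tau2=1.0)
    onehot.sum().backward()                              # gradients flow back to the generator
    print(onehot.shape, logits.grad is not None)

The resulting (near) one-hot vectors can be multiplied with the embedding matrix and fed to the discriminator and the reconstructor, whose reconstruction loss is added to the generator objective with the coefficient lambda.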
In a quantitative experiment using the COCO Image Captions dataset, it was shown that although our model is not pretrained, its performance is quite competitive with the existing baseline models, which require pretraining. Future work will include further improvements to the performance of our model, and application of our model to other tasks that need to transform the data between domains through a latent space, such as improving the quality and diversity of machine translation or multi-modal learning related to text generation. Real Data a bicycle replica with a clock as the front wheel. a black honda motorcycle parked in front of a garage. a room with blue walls and a white sink and door. a car that seems to be parked illegally behind a legally parked car a large passenger airplane flying through the air. there is a gol plane taking off in a partly cloudy sky. blue and white color scheme in a small bathroom. this is a blue and white bathroom with a wall sink and a lifesaver on the wall. a blue boat themed bathroom with a life preserver on the wall VAE a bathroom sink with only the tub in the bathroom a large boy and a plane sitting on the landing . a clock tower with pots and windows a car at an open door leading to a bunch of foot . office space force force jet on display during day . an image of benches on a street and chairs being terminal . a large airplane flying in the open of a kitchen . a couple of an airplane flying through the clear blue oven . an airplane with some chairs on a table by the counter . a man wearing two sheep on a blue umbrella a group of birds standing around a table in a forest . a bathroom with a vanity , sink , and white and shower . a building with a clock on a clock tower a large white plane sits on a sidewalk in the kitchen . a row of cars are parked outside the street at an intersection . a woman looks plays in the kitchen an orange and woman walking around a park bench . a man standing in the kitchen at a tv . TextGAN a person with a football standing in front of a house an old airplane flying through a blue sky above a house . a man sitting on a bed with a dog and fries inside a car . a group of people riding bicycles down a city street . two motorcycles lined up with green seats in snow . a man wearing glasses wearing glasses and black bookbag riding a horse down a street . a bathroom with a toilet , shower , and toilet , trash can on the wall a cat drinking the back of a white toilet paper a man and motorcycle riders are riding on the road Our Model a small bird sit on a white bathroom with a mirror seat . two chefs counter standing in front of a toilet . a modern black and white checkered oven underneath area . looking off from you doors from doors . a white kitchen with chrome space at cabinets . a racing plane in a sky by land on a track . there is a yellow bathroom stands next to a toilet under a mirror . a kitchen with wooden appliances in flight a bathroom that has a mirror and a wall and basket . an image of men are crossing from the car . • For the generator, reconstructor, and discriminator, we used long short-term memory (LSTM) • Two different fully connected layers are set to linearly transform z into the initial states C 0 and H 0 respectively for the LSTM network of the generator. • A dropout is applied to the word embeddings before the word embeddings are fed into each LSTM network of the discriminator. 
• All trainable parameters are optimized using Adam • The prior distribution of Latent space is defined as Gaussian distribution G(0, 1). • LSTM feature size of the discriminator: 64 • LSTM feature size of the generator: 128 • LSTM feature size of the reconstructor: 128 • Dimension size of latent code z: 8 • Learning rate for Adam: 0.0002 • β 1 for Adam: 0.5 • β 2 for Adam: 0.999
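For reference, the hyperparameters listed above can be collected into a single configuration object; this is a convenience sketch rather than code released with the paper.

# Convenience sketch of the stated hyperparameters.
from dataclasses import dataclass


@dataclass
class GANConfig:
    disc_lstm_size: int = 64      # LSTM feature size of the discriminator
    gen_lstm_size: int = 128      # LSTM feature size of the generator
    recon_lstm_size: int = 128    # LSTM feature size of the reconstructor
    latent_dim: int = 8           # dimension of latent code z (prior: Gaussian(0, 1))
    lr: float = 0.0002            # learning rate for Adam
    adam_beta1: float = 0.5
    adam_beta2: float = 0.999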
1,104
2,146
1,104
Discovering Dialogue Slots with Weak Supervision
Task-oriented dialogue systems typically require manual annotation of dialogue slots in training data, which is costly to obtain. We propose a method that eliminates this requirement: We use weak supervision from existing linguistic annotation models to identify potential slot candidates, then automatically identify domain-relevant slots by using clustering algorithms. Furthermore, we use the resulting slot annotation to train a neural-network-based tagger that is able to perform slot tagging with no human intervention. This tagger is trained solely on the outputs of our method and thus does not rely on any labeled data.
Task-oriented dialogue systems typically use annotation based on slots to represent the meaning of user utterances Getting raw data for dialogue system training is not difficult, especially if we restrict the target domain. A requirement for dialogue state labels makes this process much more costly. However, both traditional pipeline systems In this paper, we present a novel approach to discovering a set of domain-relevant dialogue slots and their values given a set of dialogues in the target domain (such as transcripts from a call center). Our approach requires no manual annotation at all in order to tag slots in dialogue data. This substantially simplifies dialogue system design and training process, as the developer no longer needs to design a set of slots and annotate their occurrences in training data. We discover slots by using unsupervised clustering on top of annotation obtained by domain-independent generic models such as a semantic frame parser or a named entity recognizer (NER). To illustrate our approach, let us consider an example given in Figure Find a chinese restaurant that's cheap.
Figure Although the annotation is descriptive, it contains concepts irrelevant for the domain under consideration. Our method selects only relevant slot candidates (depicted in blue). Slots discovered by our approach can then be used to design or adapt the database backend for the target domain. Our contributions can be summarized as follows: NER Frame parser ... Figure 2: Illustration of our pipeline. First, we analyze an unlabeled in-domain corpus with supplied domainagnostic linguistic annotation models, such as a frame-semantic parser or NER 1. Selecting domain-relevant slots from candidates provided by weak supervision from domain-generic linguistic annotation tools. We use Training a standalone slot tagger for the selected slots. Based on the discovered slots, we train a slot tagger to annotate in-domain utterances. After it is trained, the slot tagger can be used as a standalone component -it does not need the original annotation tools for prediction, and is able to improve on their results. 3. Evaluation on multiple domains. We show that our approach is domain-independent. We achieve state-of-the-art results for slot tagging without manual supervision in four different domains, with a 6-16% absolute F1 score increase over the previous benchmark. 4. Downstream task application. We evaluate our approach in a full dialogue response generation task. Our slots can be directly used to perform dialogue state tracking by merging annotations from consecutive turns. We train an end-to-end neural dialogue system using our automatically discovered slots in the restaurant domain and demonstrate that our approach improves performance over an unsupervised model, finding the correct venue in 5% more cases (35% more when no restaurant ontology is provided). Our experimental code is available on GitHub. The idea of using weak supervision to perform finegrained language understanding based on domainrelevant (slot-like) attributes was proposed by Chen et al. ( Unsupervised and semi-supervised methods were also investigated for predicting intents (user Figure input sentence types). Most applications of unsupervised or semisupervised methods to end-to-end dialogue response generation avoid explicit dialogue state modeling (e.g., In contrast, Our slot discovery method has three main stages: (1) We obtain weak supervision labels from auto-matic domain-generic annotation. (2) We identify domain-relevant slots based on the annotation labels by iteratively (a) merging and (b) ranking and selecting most viable candidates (Section 3.2). (3) we use the discovered slots to train an independent slot tagger (Section 3.3). Figure Subsequent steps identify domain-relevant slots based on candidates provided by the automatic annotation. The slot discovery process is iterativein each iteration, it: (1) merges similar candidates, (2) ranks candidates' relevance and eliminates irrelevant ones. Once no more frames are eliminated, the process stops and we obtain slot labels, which are used to train a slot tagger (see Section 3.3). We refer to the automatically tagged tokens as (slot) fillers, and the tags are considered slot candidates. We use generic precomputed word embeddings as word representation in both steps. We further compute slot embeddings ( ) for each distinct slot as word embedding averages over all respective slot fillers, weighted proportionally by filler frequency. The slot embeddings need to be re-computed after each iteration due to the merging step. We will now describe the individual steps. 
Since automatic annotation may have a very fine granularity, where sim is a cosine similarity and sim ctx ( 1 , 2 ) is a normalized number of occurrences of 1 and 2 with the same dependency relation. If the similarity exceeds a pre-set threshold sim , the candidates are merged into one. The main goal of this step is to remove irrelevant slot candidates and select the viable ones only. We hypothesize that different slots are likely to occur in different contexts (e.g., addresses are requested more often than stated by the user). To preserve relevant slots that only occur in rarer contexts, we cluster the data according to verb-slot pairs. We then rank candidates within each cluster (see details below). We consider candidates with a score higher than -fraction of a given cluster mean to be relevant and select them for the next rounds. If a slot candidate is selected in at least one of the clusters, it is considered viable overall. Clustering the data We process the data with a generic SRL tagger. Each occurrence of a filler is thus associated with a head verb whose semantic argument the corresponding word is, if such exists. We then compute embeddings of the formed verb-filler pairs as average of the respective token embeddings. The pairs are then clustered using agglomerative (bottom-up) hierarchical clustering with average linkage according to cosine distance of their embeddings. Candidate Ranking criteria We use the following metrics to compute the ranking score: where 2 is a set of all pairs of fillers for the slot candidate s. We follow Chen et al. ( Our method described in Section 3.2 can give us a good set of dialogue slots. However, using the merged and filtered slots directly may result in low recall since the original annotation models used as weak supervision are not adapted to our specific domain. Therefore, we use the obtained labels to train a new, domain-specific slot tagger to improve performance. The tagger has no access to better labels than those derived by our method; however, it has a simpler task, as the set of target labels is now much smaller and the domain is much narrower. We model the slot tagging task as sequence tagging, using a convolutional neural network that takes word-and character-based embeddings of the tokens as the input and produces a sequence of respective tags the most probable predicted tag is 'O' (i.e., no slot) and the second most probable tag has a probability higher than a preset threshold tag , the second tag is chosen as a prediction instead. As we discuss in Section 6, this threshold is crucial for achieving substantial recall improvement. To improve the robustness of our model, we only use 10% of the original in-domain training set (with labels from Section 3.1) to train the slot tagger model. The rest of the training set is used for a grid search to determine model hyperparameters (hidden layer size, dropout rate and tag threshold). We choose the parameters that yield the best F1 score when compared against the automatic slot discovery results (i.e., no manual annotation is needed here, the aim is at good generalization). To verify the usefulness of the labels discovered by our method, we use them to train and evaluate an end-to-end task-oriented dialogue system. We choose Sequicity The default Sequicity model uses gold-standard dialogue state annotation. However, a compatible state representation is directly obtainable from our labels, simply by concatenating the labels aggregated in each turn from user utterances. 
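Returning to the candidate merging and clustering steps described earlier in this section, the following is a minimal sketch assuming NumPy/SciPy and precomputed embeddings. How the embedding similarity and the dependency-context similarity sim_ctx are combined (here, an unweighted average), the merge threshold, and the number of clusters are illustrative assumptions; the clustering itself follows the stated choice of agglomerative clustering with average linkage and cosine distance.

# Minimal sketch of slot candidate merging and verb-filler pair clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def merge_candidates(slot_embeddings, context_similarity, sim_threshold=0.8):
    """Greedily merge slot candidates whose combined similarity exceeds the threshold.

    slot_embeddings: dict name -> vector; context_similarity: dict (a, b) -> sim_ctx value.
    """
    names = list(slot_embeddings)
    merged = {n: n for n in names}                     # candidate -> representative slot
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sim = 0.5 * cosine(slot_embeddings[a], slot_embeddings[b]) \
                + 0.5 * context_similarity.get((a, b), 0.0)
            if sim > sim_threshold:
                merged[b] = merged[a]                  # b is absorbed into a's slot
    return merged


def cluster_verb_filler_pairs(pair_embeddings, n_clusters=10):
    """Agglomerative clustering (average linkage, cosine distance) of verb-filler pairs."""
    X = np.stack(pair_embeddings)
    Z = linkage(X, method="average", metric="cosine")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

Candidate ranking and selection is then performed within each returned cluster, so that slots occurring in rarer contexts are not crowded out by the globally most frequent candidates.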
Whenever a new value for a slot is found in user input by our tagger, it is either appended to the state representation, or it replaces a previous value of the same slot. This artificial supervision thus allows us to provide a learning signal to the Sequicity model even without manually labeled examples. We evaluate our approach to slot discovery by comparing the resulting slot labels to gold-standard supervised slot annotation. Additionally, we evaluate the structure of clusters created during the selection process (Section 3.2.2) by comparing it to gold-standard user intents. We also test the use-fulness of our labels in a full dialogue response generation setup (Section 4), where we compare to gold-standard dialogue tracking labels. We use the following datasets for our experiments: • CamRest676 (CR) • ATIS (AT) Slot merging and selection parameters were set heuristically in an initial trial run on the Cam-Rest676 data and proved stable across domains. Slot tagger hyperparameters are chosen according to grid search on a portion of the training data, as described in Section 3.3. We test multiple ablation variants of our method: • Ours-full is the full version of our method (full annotation setup and trained slot tagger). The measures are evaluated using a manual slot mapping to the datasets' annotation, which is not needed for the methods themselves (see Section 5.3). * Note that supervised setups are not directly comparable to our approach. • Ours-nothr does not use the recall-increasing second-candidate rule in the slot tagger (cf. Section 3.3). • Ours-notag excludes the slot tagger, directly using the output of our merging and selection step. • Ours-nocl further excludes the clustering step; slot candidate ranking and selection is performed over all candidates together (cf. Section 3.2.2). We also compare to previous work of Chen et al. ( As an intrinsic evaluation of the verb-slot pair clusters formed for slot ranking in Section 3.2.2, we compare to gold-standard intent annotation with respect to the following baselines: (1) a majority baseline (assigning the most frequent intent class to all instances), and (2) a simple method that represents the utterances as averages of respective word embeddings and performs sentence-level intent clustering. All the slots in a given utterance are then assumed to have the same intent. The dialogue generation task is evaluated by comparing to For evaluation, we construct a handcrafted reference mapping between our discovered slots and the respective ground-truth slots and intents. The mapping is domain-specific, but it is very easy to construct even for an untrained person -the process takes less than 10 minutes for each of our domains. It amounts to matching slots from the domain ontology against slots output by our approach, which are represented by FrameNet labels. Most importantly, the mapping is only needed for evaluation, not by our method itself. We provide an example mapping in Appendix B. We use the following evaluation metrics: • Slot F1 score: To reflect slot tagging performance, we measure precision, recall, and F1 for every slot individually. An average is then computed from slot-level scores, weighted by the number of slot occurrences in the data. We measure slot F1 both on standalone user utterances (slot tagging) and in the context of a dialogue system (dialogue tracking). • Slot-level Average Precision (AP). 
The slot candidates picking task is a ranking problem and we use the average precision metric following where 1 is an indicator function that equals one if slot has a reference mapping defined and @ ( ) is precision at of the ranked list . slots (following the reference mapping). We first evaluate the main task of slot tagging and include a manual error analysis, then present detailed results for subtasks (slot candidate ranking and merging) and additional tasks (intent clustering and full response generation). Slot tagging is evaluated in Table Error analysis: We conducted a manual error analysis of slot tagging to gain more insight about the output quality and sources of errors. In general, we found that the tagger can generalize and capture unseen values (cf. Figure One source of errors is the relatively low recall of the frame-semantic parsers used. We successfully address this issue by introducing the slot tagger, however, many slot values remain untagged. This is expected as our method's performance is inherently limited by the input linguistic annotation quality. Another type of errors is caused by the can- didate merging procedure (see also below). Due to frequent co-occurrence, it might happen that two semantically unrelated candidates are merged and therefore some tokens are wrongly included as respective slot fillers. Nevertheless, the merging step is required in order to obtain a reasonable number of slots for a dialogue domain. Our approach does leave some room for improvements, especially regarding the consistency of results across different slots, which can be imbalanced. For instance, on the WOZ-hotel data, we observe a difference of up to 0.5 F1 score among individual slots (see Appendix A.2). Slot candidate ranking results are given in Table 2. Our pipeline significantly outperforms Chen et al. ( In addition, we include a detailed evaluation of the contribution of the individual slot candidate ranking scores described in Section 3.2.2. Results in Table Slot merging evaluation is shown in Table Clustering evaluation: Table We explore the influence that our labels have on sequence-tosequence dialogue response generation in an experiment on the CamRest676 data (see Table We present a novel approach for weakly supervised natural language understanding in dialogue systems that discovers domain-relevant slots and tags them in a standalone fashion. Our method removes the need for annotated training data by using off-theshelf linguistic annotation models. Experiments on five datasets in four domains mark a signifi-cant improvement in intrinsic NLU performance over previous weakly supervised approaches; in particular, we vastly improve the slot recall. The usefulness of slots discovered by our method is further confirmed in a full dialogue response generation application. Code used for our experiments is available on GitHub.
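The two slot-level metrics used above can be sketched as follows, assuming the standard form of average precision over the ranked candidate list and per-slot F1 scores weighted by slot frequency; the exact normalization of AP is an assumption, since the formula is only summarized informally in this section.

# Minimal sketch of slot-level average precision and weighted slot F1.
def average_precision(ranked_candidates, has_mapping):
    """has_mapping(slot) is 1 if the discovered slot maps to a gold slot, else 0."""
    hits, precisions = 0, []
    for k, slot in enumerate(ranked_candidates, start=1):
        if has_mapping(slot):
            hits += 1
            precisions.append(hits / k)      # precision@k at each relevant position
    return sum(precisions) / max(hits, 1)


def weighted_slot_f1(per_slot_f1, slot_counts):
    """Average of slot-level F1 scores weighted by the number of slot occurrences."""
    total = sum(slot_counts.values())
    return sum(per_slot_f1[s] * slot_counts[s] for s in per_slot_f1) / max(total, 1)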
628
1,115
628
RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees
Interpreting the reasoning process from questions to answers poses a challenge in approaching explainable QA. A recently proposed structured reasoning format, the entailment tree, manages to offer explicit logical deductions with entailment steps in a tree structure. To generate entailment trees, prior single-pass sequence-to-sequence models lack visible internal decision probabilities, while stepwise approaches are supervised with extracted single-step data and cannot model the tree as a whole. In this work, we propose RLET, a Reinforcement Learning based Entailment Tree generation framework, which is trained utilising the cumulative signals across the whole tree. RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, from which the training signal is accumulated across the tree with an elaborately designed aligned reward function that is consistent with the evaluation. To the best of our knowledge, we are the first to introduce RL into the entailment tree generation task. Experiments on three settings of the EntailmentBank dataset demonstrate the strength of using an RL framework.
Reasoning over explicitly given knowledge and generating detailed deduction steps are important challenges towards the goal of automated reasoning in AI community One line of previous work considers the entailment trees as linearised sequences and adopt sequence-to-sequence (Seq2Seq) models to generate the entire reasoning chain in a single pass with all given sentences as input The reasoning chain is a sequence of discrete actions. As such, we address the above issues by presenting RLET, a Reinforcement Learning (RL) based Entailment Tree generation framework that models the entire reasoning chain as a Markov Decision Process (MDP). Specifically, we decompose the task into two parts: sentence selection and deduction generation. At each step, the model will first select two sentences (including both given facts and generated intermediate conclusions) for composition, and the deduction generation will combine them into a new intermediate conclusion and add it to the next step. After constructing a whole chain, each step will receive a reward depending on its contribution to the overall correctness and validity of the entire tree. Enjoying the convenience of crafting reward functions in RL, we can flexibly assign evaluationconsistent rewards to the steps, instead of purely relying on exact match with the gold tree. In such a way, model behaviors can be manipulated with more flexibility, getting rid of the rigorous chronological match with ground truth. Supervised by the cumulative rewards, the model is encouraged to find the optimal policy that leads to greater good. Such advantages can not only bring improvements on post-hoc explanation modeling, but also benefit interpretable model decision making, where the reasoning process is integrated into the inference process. Extensive experiments on three settings of the benchmark dataset EntailmentBank
Our goal is to provide a step-by-step reasoning process for commonsense science questions, with prior knowledge of the question, the correct answer and a set of fact sentences. We first describe the task formulation and then explain each part in detail. We formulate the reasoning process as an entailment tree construction task with each step as a logical entailment. The inputs of our task include a collection of fact sentences X = {x 1 , x 2 , • • • , x n } and a hypothesis h, where X consists of both relevant and non-relevant sentences. Specifically, the hypothesis h is the combination of the question and its correct answer, stated in a declarative form. The task aims to construct an entailment tree T from the bottom up, with selected facts from X as leaf nodes, the hypothesis h as the root node and generated intermediate conclusions as internal nodes I. Each intermediate conclusion i k ∈ I is generated by deducing from its immediate children during the construction of the tree. A reasoning step includes selecting the premises and generating the intermediate conclusion. Following The goal of the sentence selection module is to choose two sentences as premises of single step reasoning. At step k, sentence selection takes hypothesis h, fact set X and intermediate set I k as input, where The deduction generation module plays an important role in ensuring the fluency and readability of the reasoning chains. We experiment with model-based and rule-based approaches and find that the latter yields better results. In training, we observe that the finetuned BART model is likely to repeat one of the input premises thus losing useful information To take a step further, we involve a stronger deduction generation module from MetGen Figure X ∪ I k that will make a logical combination in the following module. Deduction Generation Given the selected sentences as input, deduction generation outputs a new intermediate conclusion i k deduced from these two sentences : i k = g(n i , n j ), where g denotes a Seq2Seq model. The conclusion should be well entailed by the premises, and reasons over the information from the given sentences only. The design ethos of our approach is to bridge isolated single steps with cumulative training signals, which fits very well with the nature of RL. To tackle sentence selection task, we model the entailment tree as a Markov Decision Process, which can be denoted as a tuple (S, A, R, T ), where S is a set of states, A(s) is the action space at state s, R(•) is the reward function and T (•) represents the transition function. Our goal is to learn an optimal policy π that decides what action to take at each state. As shown in Figure With a pre-trained DeBERTa where θ is the parameters of policy π, f (•) denotes the contextualised representation of the [CLS] token, [•] denotes concatenation, b i k is the score of action a i k . At step k, we then sample one action a k based on the probability distribution: (3) Given the two sentences within the sampled action, the deduction generator performs logical combination and outputs an intermediate conclusion (i 1 in Figure In summary, we represent a step as (s k , a k , i k ), meaning taking action a k at state s k and generating the intermediate conclusion i k . After undergoing several iterations of the sentence selection and deduction generation, we will obtain a trajectory denoting the reasoning steps we take to construct the entailment tree where K is the length of the reasoning chain. 
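The sentence selection policy at one step can be sketched as follows, assuming PyTorch and a generic encoder that returns a [CLS] vector for a text input (standing in for the DeBERTa encoder). The way the hypothesis and the candidate sentence pair are concatenated into a single input string, and the use of a single linear scoring layer, are assumptions made for illustration.

# Minimal sketch of the sentence-selection policy for one reasoning step.
import itertools
import torch
import torch.nn as nn


class SentenceSelectionPolicy(nn.Module):
    def __init__(self, encoder, hidden: int):
        super().__init__()
        self.encoder = encoder                 # text -> [CLS] embedding of size `hidden`
        self.scorer = nn.Linear(hidden, 1)     # b_i^k: scalar score per candidate action

    def forward(self, hypothesis: str, sentences: list):
        # Each action is an unordered pair of sentences from the current state.
        actions = list(itertools.combinations(range(len(sentences)), 2))
        scores = []
        for i, j in actions:
            text = f"{hypothesis} [SEP] {sentences[i]} [SEP] {sentences[j]}"
            scores.append(self.scorer(self.encoder(text)))
        probs = torch.softmax(torch.cat(scores).squeeze(-1), dim=-1)
        k = torch.multinomial(probs, 1).item()     # sample an action from pi_theta
        return actions[k], torch.log(probs[k])     # log-prob is kept for the policy gradient


if __name__ == "__main__":
    fake_encoder = lambda text: torch.randn(1, 768)    # stands in for DeBERTa's [CLS] output
    policy = SentenceSelectionPolicy(fake_encoder, hidden=768)
    (i, j), logp = policy("the sun is a yellow dwarf with medium size",
                          ["the sun is a kind of star", "a yellow dwarf is medium in size"])
    print((i, j), float(logp))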
Reward Before evaluating entailment trees, an alignment algorithm To address this issue, we involve the alignment algorithm in the design of our reward function. To begin with, for each intermediate node in our trajectory and the gold tree, we gather all leaf nodes in their children respectively. We consider one generated internal conclusion (node) î is aligned to a gold internal node i if the leaf nodes in their children have the maximum Jaccard similarity. If î has zero similarity with every gold internal node, we align it to a blank node with no conclusion. We assume the aligned nodes are similar in semantics since they are reasoned from similar facts. As described in Figure Rewards will be assigned to each reasoning step after the full trajectory is generated. For each step (s k , a k , i k ), we give it an independent reward r k based on the exact match between its action and the premises in the aligned gold step. -1, otherwise. (5) Without the aid of the alignment, the predicted steps in Figure Then the final cumulative reward of each step is gathered along its subsequent steps where K is the length of the trajectory, γ is a discount factor. With the cumulative aligned reward, although the lower steps are not able to get the structure perfectly matched, the subsequent steps can still get awarded by making correct decisions. Correspondingly, the lower steps will get a less severe penalty from the reward accumulation, which guarantees the flexibility in adjusting training signals. Finally, by aligning the trajectory with the gold tree, we shall get a reward R(s k , a k , i k ) for each step and a total reward for the trajectory: Optimization We aim to learn a stochastic policy of sentence selection module π parameterized by θ which maximizes the expected cumulative reward: Following where K n is the length of trajectory τ n , s n k and a n k denote the state and action at step k in τ n . Supervised learning before integrating RL in training can provide efficient parameter update with high-quality signals 3 Experimental Settings We evaluate our approach on EntailmentBank Depending on the composition of the given fact set X, the dataset offers three challenging settings. In Task 1, only gold facts are provided and will all serve as leaf nodes in the tree. In Task 2, for each QA pair, a total of 25 sentences are provided, including both gold facts and distractors. In Task 3, the most challenging setting, the model needs to first retrieve relevant facts from the full fact corpus, and then perform reasoning as in Task 2. EntailmentWriter The sentence selection module is built with DeBERTa-v3-base model Dalvi et al. ( Leaves (F1, AllCorrect) evaluate how well the model performs in identifying facts that are relevant to questions and answers. The F1 score is computed based on the selected leaf nodes in T pred and the gold leaf nodes. AllCorrect is 1 if they are perfectly matched, otherwise 0. Steps (F1, AllCorrect) mainly evaluate the structure correctness of the trees. For each aligned step in T ′ pred , we measure whether its selected sentences (action in trajectory) matches the gold. F1 score is computed based on the number of perfectly matched steps. We assign AllCorrect of 1 to a predicted tree if all steps in T ′ pred exactly match with gold tree steps. Intermediates (BLEURT, AllCorrect) evaluate the generation quality of the intermediate conclusions. 
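Returning to the reward design and the policy-gradient objective described above, the following is a minimal sketch, assuming PyTorch: each predicted step is aligned to a gold step by maximum Jaccard similarity over the leaf facts under its conclusion, receives +1 if its premises exactly match the aligned gold premises and -1 otherwise, and the discounted cumulative rewards weight the log-probabilities in a REINFORCE-style loss. The data structures (sets of leaf identifiers, a dict of gold steps) are illustrative assumptions.

# Minimal sketch of the aligned reward, discounted returns and REINFORCE loss.
import torch


def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)


def align_step(pred_leaves: set, gold_leaves_per_step: dict):
    """gold_leaves_per_step maps each gold step id to the set of leaf facts under its conclusion."""
    best = max(gold_leaves_per_step,
               key=lambda g: jaccard(pred_leaves, gold_leaves_per_step[g]), default=None)
    if best is None or jaccard(pred_leaves, gold_leaves_per_step[best]) == 0:
        return None                                     # aligned to a blank node
    return best


def step_reward(pred_premises: set, aligned_gold_premises) -> float:
    """+1 if the selected premises exactly match the aligned gold step, else -1."""
    return 1.0 if aligned_gold_premises is not None and pred_premises == aligned_gold_premises else -1.0


def discounted_returns(rewards, gamma=0.99):
    """R_k = sum_{t >= k} gamma^(t - k) * r_t, computed backwards over the trajectory."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))


def reinforce_loss(log_probs, returns):
    """Negative expected cumulative reward; minimizing it maximizes J(theta)."""
    return -(torch.stack(log_probs) * torch.tensor(returns)).sum()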
For each aligned step in T ′ pred , we define its intermediate conclusion i is correct if i has a BLEURT As shown in Table For Task 2 and Task 3, our framework outperforms all baselines on the most strict metric Overall AllCorrect, as shown in Table The original settings of EntailmentBank contain hypothesis as the guidance of post-hoc explanation generation. However, in most practical scenarios where only questions are available, the model should ideally reason over knowledge facts to derive the answer while generating an explainable reasoning path. This is defined as Open-ended Commonsense Reasoning (OpenCSR) in We evaluate the structure correctness of the generated explanations in Table We leave these for future work. In practice, high-quality explanation annotations can be costly to obtain, which makes it difficult to train large scale models. An ideal system is expected to have great generalisability even with few annotated explanations for training. To evaluate how RLET can benefit from the RL algorithm under this setting, we experiment with less data under Task 1. We divide the data based on the number of given facts for each QA pair. Detailed statistics of the data split are shown in Table We break down the results by the length of the gold trees in Figure Tree Structured QA Explanations Existing methods on generating entailment trees for QA explanations can be categorized into two branches: single pass generation and stepwise generation. Single pass methods Our work also aligns well with multiple automated reasoning tasks built with RL We presented RLET, a RL-based entailment tree generation framework, which contains sentences selection and deduction generation modules and can be trained with cumulative signals across the entire reasoning tree. Experiments show that RLET outperforms existing baselines on structure correctness and is applicable in practical scenarios. Future directions include applying RL framework on other stepwise methods with more stable and sophisticated RL algorithms. First, sentences are likely to be used more than once when reasoning in real practice. RLET removes used sentences at each time step to reduce the size of action space, which leads to a performance loss of 9.4% on the overall All-Correct. Second, in the sentence selection module, RLET always picks two sentences to merge, while the original dataset contains multi-sentence steps. Though this harms the evaluation results as discussed in Appendix B, this is a minor limitation because the reasoning format is not strictly standardized in real practice. Furthermore, adding [END] token to action space and applying additional fact filter in distractors settings is a naive approach and leaves room for further improvement. Finally, as vanilla policy gradient method is sensitive to hyperparameters and can have large variance, we leave the exploration of more stable RL algorithms in reasoning for future work. For pre-RL supervised training, we set a learning rate of 2e-5, a batch size of 2 and train the model for 20 epochs. For RL training, we set the discount factor γ as {0.9, 0.99}, initial learning rate as 1e-5, warmup ratio as 0.05, and train the model for 20 epochs. γ = 0.99 yields better results. The scheduled sampling ratio decays linearly from 1.0 to 0.5. The total RL training costs 6 hours. The fact filter in Task 2 and 3 is trained with an initial learning rate of 1e-5, warm up ratio of 0.1, for 10 epochs. 
In Task 2 we save top 5 sentences and filter out sentences with similarity scores lower than 0.98 in Task 3, which are selected based on the validation set. The deduction generation module is implemented with BART-Large We use AdamW The automatic evaluation is an underestimation of our approach because RLET only selects two sentences per action while 25.88% trees in test set contain multiple-premise (more than 2 premises) reasoning steps, which will result in 0 in Steps/Overall In Table Authors of MetGen manually annotated the reasoning patterns of 400 separate steps in the training set, and 275 steps in the validation set. We adopt these annotations to finetune a DeBERTa-Large model as our pattern selector, which takes in two premise sentences as input and predicts its corresponding reasoning pattern as substitution, conjunction or if-then. The pattern selector achieves an accuracy of 81.5% on annotated sentences in the validation set. In In this section, we illustrate some examples from Task 1 test set in which the predicted tree can perform a valid reasoning in a different structure with the gold tree. Predicted Tree Figure Though not exactly matched with gold, the predicted tree can also fulfill the reasoning process with twopremise steps. Q: Which best describes the Sun? A: medium yellow dwarf Hypothesis: the sun is a yellow dwarf with medium size
1,137
1,877
1,137
Clique-Based Clustering for improving Named Entity Recognition systems
We propose a system which builds, in a semi-supervised manner, a resource that helps an NER system annotate corpus-specific named entities. The system is based on a distributional approach that uses syntactic dependencies to measure similarities between named entities. The specificity of the presented method, however, is that it combines a clique-based approach with a clustering technique, which amounts to a soft clustering method. Our experiments show that the resource constructed with this clique-based clustering system improves several different NER systems.
In Information Extraction domain, named entities (NEs) are one of the most important textual units as they express an important part of the meaning of a document. Named entity recognition (NER) is not a new domain (see MUC • intra-annotation ambiguity: Wikipedia lists more than 25 cities named Oxford in the world • systematic inter-annotation ambiguity: the name of cities could be used to refer to the university of this city or the football club of this city. This is the case for Oxford or Newcastle • non-systematic inter-annotation ambiguity: Oxford is also a company unlike Newcastle. The main goal of our system is to act in a complementary way with an existing NER system, in order to enhance its results. We address two kinds of issues: first, we want to detect and correctly annotate corpus-specific NEs The paper is organized as follows. We present, in section 2, the global architecture of our system and from §2.1 to §2.6, we give details about each of its steps. In section 3, we present the evaluation of our approach when it is combined with other classic NER systems. We show that the resulting hybrid systems perform better with respect to F-measure. In the best case, the latter increased by 4.84 points. Furthermore, we give examples of successful correction of NEs annotation thanks to our approach. Then, in section 4, we discuss about related works. Finally we sum up the main points of this paper in section 5.
Given a corpus, the main objectives of our system are: to detect potential NEs; to compute the possible annotations for each NE and then; to annotate each occurrence of these NEs with the right annotation by analyzing its local context. We assume that this corpus dependent approach allows an easier NE annotation. Indeed, even if a NE such as Oxford can have many annotation types, it will certainly have less annotation possibilities in a specific corpus. Figure Different methods exist for detecting potential NEs. In our system, we used some lexicosyntactic constraints to extract expressions from a corpus because it allows to detect some corpusspecific NEs. In our approach, a potential NE is a noun starting with an upper-case letter or a noun phrase which is (see • a governor argument of an attribute syntactic relation with a noun as governee argument (e.g. president attribute ----→ George Bush) • a governee argument of a modifier syntactic relation with a noun as a governor argument (e.g. ← ----Coca-Cola). The list of potential NEs extracted from the corpus will be denoted NE and the number of NEs |NE|. The distributional approach aims at evaluating a distance between words based on their syntactic distribution. This method assumes that words which appear in the same contexts are semantically similar To construct the distributional space associated to a corpus, we use a robust parser (in our experiments, we used XIP parser One triple gives two contexts (1.w 1 .R and 2.w 2 .R) and two chunks (w 1 and w 2 ). Then, we only select chunks w which belong to NE. Each point in the distributional space is a NE and each dimension is a syntactic context. CT denotes the set of all syntactic contexts and |CT| represents its cardinal. We illustrate this construction on the sentence "provide Albania with food aid". We obtain the three following triples (note that aid and food aid are considered as two different chunks): We also use an heuristic in order to reduce the over production of chunks and contexts: in our experiments for example, each NE and each context should appear more than 10 times in the corpus for being considered. D is the resulting (|NE| × |CT|) NE-Context matrix where e i : i = 1, . . . , |NE| is a NE and c j : j = 1, . . . , |CT| is a syntactic context. Then we have: D(e i , c j ) = Nb. of occ. of c j associated to e i (1) A clique in a graph is a set of pairwise adjacent nodes which is equivalent to a complete subgraph. A maximal clique is a clique that is not a subset of any other clique. Maximal cliques computation was already employed for semantic space representation For example, Oxford is an ambiguous NE but a clique such as <Cambridge, Oxford, Edinburgh University, Edinburgh, Oxford Univer-sity> allows to focus on the specific annotation <organization> (see Given the distributional space described in the previous paragraph, we use a probabilistic framework for computing similarities between NEs. The approach that we propose is inspired from the language modeling framework introduced in the information retrieval field (see for example We first compute the maximum likelihood estimation for a NE e i to be associated with a context c j : This leads to sparse data which is not suitable for measuring similarities. In order to counter this problem, we use the Jelinek-Mercer smoothing method: D (e i , c j ) = λP ml (c j |e i ) + (1λ)P ml (c j |CORP) where CORP is the corpus and P ml (c j |CORP) = P i D(e i ,c j ) P i,j D(e i ,c j ) . In our experiments we took λ = 0.5. 
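The smoothing step described above can be written compactly; the sketch below (our illustration, with our own variable names) builds the smoothed NE-context matrix D' from the raw count matrix D with λ = 0.5.

```python
# Illustrative sketch of Jelinek-Mercer smoothing over the NE-context count
# matrix D (rows: named entities, columns: syntactic contexts).
import numpy as np

def jelinek_mercer(D, lam=0.5):
    """D: (num_NEs x num_contexts) count matrix; returns the smoothed matrix D'."""
    D = np.asarray(D, dtype=float)
    row_totals = D.sum(axis=1, keepdims=True)
    # P_ml(c_j | e_i): row-normalised counts (rows with no counts stay all-zero)
    p_ml = np.divide(D, row_totals, out=np.zeros_like(D), where=row_totals > 0)
    # P_ml(c_j | CORP): context counts normalised over the whole corpus
    p_corpus = D.sum(axis=0) / D.sum()
    # D'(e_i, c_j) = lam * P_ml(c_j | e_i) + (1 - lam) * P_ml(c_j | CORP)
    return lam * p_ml + (1.0 - lam) * p_corpus

# Toy example: 3 named entities x 4 syntactic contexts
D = [[10, 0, 2, 0], [0, 5, 1, 0], [3, 0, 0, 7]]
D_smooth = jelinek_mercer(D)  # each row is now a smoothed distribution over contexts
```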
Given D , we then use the cross-entropy as a similarity measure between NEs. Let us denote by s this similarity matrix, we have: Next, we convert s into an adjacency matrix denoted ŝ. In a first step, we binarize s as follows. Let us denote {e i 1 , . . . , e i |NE| }, the list of NEs ranked according to the descending order of their similarity with e i . Then, L(e i ) is the list of NEs which are considered as the nearest neighbors of e i according to the following definition: (3) gathers the most significant nearest neighbors of e i by choosing the ones which bring the a most relevant similarities providing that the neighborhood's size doesn't exceed b. This approach can be seen as a flexible k-nearest neighbor method. In our experiments we chose a = 20% and b = 10. Finally, we symmetrize the similarity matrix as follows and we obtain ŝ: 1 if e i ∈ L(e i ) or e i ∈ L(e i ) 0 otherwise (4) Given ŝ, the adjacency matrix between NEs, we compute the set of maximal cliques of NEs denoted CLI. Then, we construct the matrix T of general term: where cli k is an element of CLI. T will be the input matrix for the clustering method. In the following, we also use cli k for denoting the vector represented by (T (cli k , e 1 ), . . . , T (cli k , e |NE| )). Figure We propose to apply the Relational Analysis approach (RA) which is a clustering model that doesn't require to fix the number of clusters In other words, cli k and cli k have more chances to be in the same cluster providing that their similarity measure, S kk , is greater or equal to the mean average of positive similarities. X is the solution we are looking for. It is a binary relational matrix with general term: X kk = 1, if cli k is in the same cluster as cli k ; and X kk = 0, otherwise. X represents an equivalence relation. Thus, it must respect the following properties: As the objective function is linear with respect to X and as the constraints that X must respect are linear equations, we can solve the clustering problem using an integer linear programming solver. However, this problem is NP-hard. As a result, in practice, we use heuristics for dealing with large data sets. The presented heuristic is quite similar to another algorithm described in Basically, this heuristic has a O(nbitr × κ max × |CLI|) computation cost. In general terms, we can assume that nbitr << |CLI|, but not κ max << |CLI|. Thus, in the worst case, the algorithm has a O(κ max × |CLI|) computation cost. Figure For each cluster clu l we provide a score F c (c j , clu l ) for each context c j and a score 5 We only represent the NEs and their frequency in the cluster which corresponds to the number of cliques which contain the NEs. Furthermore, we represent the most relevant contexts for this cluster according to equation ( F e (e i , clu l ) for each NE e i . These scores where 1 {P } equals 1 if P is true and 0 otherwise. Given a NE e i and a syntactic context c j , we now introduce the contextual cluster assignment matrix A ctxt (e i , c j ) as follows: A ctxt (e i , c j ) = clu * where: clu * = Argmax {clu l :clu l e i ;Fe(e i ,clu l )>1} F c (c j , clu l ). In other words, clu * is the cluster for which we find more than one occurrence of e i and the highest score related to the context c j . Furthermore, we compute a default cluster assignment matrix A def , which does not depend on the local context: A def (e i ) = clu • where: In other words, clu • is the cluster containing the biggest clique cli k containing e i . 
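Returning to the graph-construction steps above, the sketch below (our own illustration) shows one way to binarise and symmetrise the similarity matrix and enumerate maximal cliques with NetworkX. The paper's exact neighbourhood criterion (equation 3, with a = 20% and b = 10) is not reproduced; a plain top-b neighbourhood is used as a stand-in.

```python
# Illustrative sketch: build a symmetric adjacency matrix from a similarity
# matrix via a nearest-neighbour rule, then enumerate maximal cliques.
import numpy as np
import networkx as nx

def knn_adjacency(s, b=10):
    """s: (n x n) similarity matrix between NEs; returns a 0/1 adjacency matrix."""
    n = s.shape[0]
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        neighbours = np.argsort(-s[i])                  # descending similarity
        neighbours = [j for j in neighbours if j != i][:b]
        adj[i, neighbours] = 1
    return np.maximum(adj, adj.T)                       # e_i' in L(e_i) OR e_i in L(e_i')

def maximal_cliques(adj):
    graph = nx.from_numpy_array(adj)
    return [frozenset(c) for c in nx.find_cliques(graph)]  # maximal cliques of NEs
```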
So far, the different steps that we have introduced were unsupervised. In this paragraph, our aim is to give a correct annotation to each cluster (hence, to all NEs in this cluster). To this end, we need some annotation seeds and we propose two different semi-supervised approaches (regarding the classification given in Manual annotation of clusters This method is fastidious but it is the best way to match the corpus data with a specific guidelines for annotating NEs. It also allows to identify new types of annotation. We used the ACE2007 guidelines for manually annotating each cluster. However, our CBC system leads to a high number of clusters of cliques and we can't annotate each of them. Fortunately, it also leads to a distribution of the clusters' size (number of cliques by cluster) which is similar to a Zipf distribution. Consequently, in our experiments, if we annotate the 100 biggest clusters, we annotate around eighty percent of the detected NEs (see §3). We suppose in this context that many NEs in NE are already annotated. Thus, under this assumption, we have in each cluster provided by the CBC system, both annotated and non-annotated NEs. Our goal is to exploit the available annotations for refining the annotation of a cluster by implicitly taking into account the syntactic contexts and for propagating the available annotations to NEs which have no annotation. Given a cluster clu l of cliques, #(clu l , e i ) is the weight of the NE e i in this cluster: it is the number of cliques in clu l that contain e i . For all annotations a p in the set of all possible annotations AN, we compute its associated score in cluster clu l : it is the sum of the weights of NEs in clu l that is annotated a p . Then, if the maximal annotation score is greater than a simple majority (half) of the total votes 7 , we assign the corresponding annotation to the cluster. We precise that the annotation <none> 8 is processed in the same way as any other annotations. Thus, a cluster can be globally annotated <none>. The limit of this automatic approach is that it doesn't allow to annotate new NE types than the ones already available. In the following, we will denote by A clu (clu l ) the annotation of the cluster clu l . The cluster annotation matrix A clu associated to the contextual cluster assignment matrix A ctxt and the default cluster assignment matrix A def introduced previously will be called the CBC system's NE resource (or shortly the NE resource). In this paragraph, we describe how, given the CBC system's NE resource, we annotate occurrences of NEs in the studied corpus with respect to its local context. We precise that for an occurrence of a NE e i its associated local context is the set of syntactical dependencies c j in which e i is involved. 7 The total votes number is given by P e i ∈clu l #(clu l , ei). 8 The NEs which don't have any annotation. Given a NE occurrence and its local context we can use A ctxt (e i , c j ) and A def (e i ) in order to get the default annotation A clu (A def (e i )) and the list of contextual annotations {A clu (A ctxt (e i , c j ))} j . 
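The weighted-vote scheme for automatically annotating a cluster, described earlier in this section, can be sketched as a small helper. The inputs are plain dictionaries; the names, and the behaviour when no label reaches a simple majority, are our assumptions.

```python
# Illustrative sketch of the automatic cluster annotation step: each NE votes
# with weight #(clu_l, e_i) for its seed annotation; the cluster receives the
# top annotation only if it wins a simple majority of the total votes.
def annotate_cluster(ne_weights, ne_annotations):
    """ne_weights: {ne: number of cliques in the cluster containing ne}
    ne_annotations: {ne: seed annotation, e.g. '<person>' or '<none>'}"""
    scores = {}
    for ne, weight in ne_weights.items():
        label = ne_annotations.get(ne, "<none>")
        scores[label] = scores.get(label, 0) + weight
    total_votes = sum(ne_weights.values())
    best_label = max(scores, key=scores.get)
    # Assumption: if no annotation wins a simple majority, leave the cluster unannotated.
    return best_label if scores[best_label] > total_votes / 2 else None

# e.g. annotate_cluster({"Oxford": 5, "Cambridge": 4},
#                       {"Oxford": "<organization>", "Cambridge": "<organization>"})
# -> "<organization>"
```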
Then for annotating this NE occurrence using our NE resource, we apply the following rules: • if the list of contextual annotations {A clu (A ctxt (e i , c j ))} j is conflictual, we annotate the NE occurrence as <none>, • if the list of contextual annotations is nonconflictual, then we use the corresponding annotation to annotate the NE occurrence • if the list of contextual annotations is empty, we use the default annotation A clu (A def (e i )). The NE resource plus the annotation process described in this paragraph lead to a NER system based on the CBC system. This NER system will be called CBC-NER system and it will be tested in our experiments both alone and as a complementary resource. We place ourselves into an hybrid situation where we have two NER systems (NER 1 + NER 2) which provide two different lists of annotated NEs. We want to combine these two systems when annotating NEs occurrences. Therefore, we resolve any conflicts by applying the following rules: • If the same NE occurrence has two different annotations from the two systems then there are two cases. If one of the two system is CBC-NER system then we take its annotation; otherwise we take the annotation provided by the NER system which gave the best precision. • If a NE occurrence is included in another one we only keep the biggest one and its annotation. For example, if Jacques Chirac is annotated <person> by one system and Chirac by <person> by the other system, then we only keep the first annotation. • If two NE occurrences are contiguous and have the same annotation, we merge the two NEs in one NE occurrence. The system described in this paper rather target corpus-specific NE annotation. Therefore, our ex-periments will deal with a corpus of recent news articles (see In our experiments, first, we applied the XIP parser The different materials that we obtained constitute the CBC system's NE resource. Our aim now is to exploit this resource and to show that it allows to improve the performances of different classic NER systems. The different NER systems that we tested are the following ones: • CBC-NER system M (in short CBC M) based on the CBC system's NE resource using the manual cluster annotation (line 1 in Table In Table The first two lines of Table This is actually what we obtained in Table These results allow us to show that the NE resource built using the CBC system is complementary to any baseline NER systems and that it allows to improve the results of the latter. In order to illustrate why the CBC-NER systems are beneficial, we give below some examples taken from the test corpus for which the CBC system A had allowed to improve the performances by respectively disambiguating or correcting a wrong annotation or detecting corpus-specific NEs. First, in the sentence "From the start, his parents, Lourdes and Hemery, were with him.", the baseline hybrid system Stanford + XIP annotated the ambiguous NE "Lourdes" as <location> whereas Stanford + XIP + CBC A gave the correct annotation <person>. Second, in the sentence "Got 3 percent chance of survival, what ya gonna do?" The back read, "A) Fight Through, b) Stay Strong, c) Overcome Because I Am a Warrior.", the baseline hybrid system Stanford + XIP annotated "Warrior" as <organization> whereas Stanford + XIP + CBC A corrected this annotation with <none>. 
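The three occurrence-level rules listed at the start of this section translate directly into a short decision function; the sketch below is our paraphrase of that logic with hypothetical argument names.

```python
# Illustrative decision logic for annotating one NE occurrence from its local
# syntactic contexts, following the three rules described above.
def annotate_occurrence(contextual_annotations, default_annotation):
    """contextual_annotations: list of cluster annotations A_clu(A_ctxt(e_i, c_j))
    gathered from the occurrence's local contexts (may be empty);
    default_annotation: A_clu(A_def(e_i))."""
    labels = set(contextual_annotations)
    if len(labels) > 1:           # conflicting contextual annotations
        return "<none>"
    if len(labels) == 1:          # non-conflicting contextual annotation
        return labels.pop()
    return default_annotation     # no contextual evidence: fall back to the default
```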
Finally, in the sentence "Matthew, also a favorite to win in his fifth and final appearance, was stunningly eliminated during the semifinal round Friday when he misspelled "secernent".", the baseline hybrid system Stanford + XIP didn't give any annotation to "Matthew" whereas Stanford + XIP + CBC A allowed to give the annotation <person>. Many previous works exist in NEs recognition and classification. However, most of them do not build a NEs resource but exploit external gazetteers A recent overview of the field is given in Regarding this aspect, From a methodological point of view, our proposal is also close to In this paper, we construct a NE resource from the corpus that we want to analyze. In that context, We propose a system that allows to improve NE recognition. The core of this system is a cliquebased clustering method based upon a distributional approach. It allows to extract, analyze and discover highly relevant information for corpusspecific NEs annotation. As we have shown in our experiments, this system combined with another one can lead to strong improvements. Other applications are currently addressed in our team using this approach. For example, we intend to use the concept of clique-based clustering as a soft clustering method for other issues.
578
1,436
578
TwittIrish: A Universal Dependencies Treebank of Tweets in Modern Irish
Modern Irish is a minority language lacking sufficient computational resources for the task of accurate automatic syntactic parsing of usergenerated content such as tweets. Although language technology for the Irish language has been developing in recent years, these tools tend to perform poorly on user-generated content. As with other languages, the linguistic style observed in Irish tweets differs, in terms of orthography, lexicon, and syntax, from that of standard texts more commonly used for the development of language models and parsers. We release the first Universal Dependencies treebank of Irish tweets, facilitating natural language processing of user-generated content in Irish. In this paper, we explore the differences between Irish tweets and standard Irish text, and the challenges associated with dependency parsing of Irish tweets. We describe our bootstrapping method of treebank development and report on preliminary parsing experiments.
Irish is a minority language spoken mostly in small communities in Ireland called 'Gaeltachtaí' User-generated content (UGC), such as tweets, is a valuable, highly available resource for training syntactic parsers that can accurately process social media text. UGC is a genre with features different from those of both spoken language and standardised written language more traditionally found in natural language processing (NLP) corpora. Plank (2016) notes the advantages of utilising fortuitous data in order to create more adaptive, robust language technology. Given that the accuracy of syntactic parsing tools has been shown to decline when evaluated on noisy UGC data Open-source projects such as UD facilitate collaboration and rapid evolution of ideas among linguists internationally. In order to maintain optimum consistency with other UD treebanks, the annotation methodology employed in this research closely follows the general UD guidelines and the language-specific guidelines for Irish while aiming to incorporate the most up-to-date recommendations We carry out preliminary parsing experiments with TwittIrish, investigating the following two questions: How effective is a parser trained on the Irish UD Treebank The paper is structured as follows: Section 2 details the existing Irish NLP resources we use for our research, Section 3 outlines the development of the treebank, Section 4 describes the characteristics of UGC evident in Irish tweets, and Section 5 presents parsing experiments and error analysis.
We use the following resources: Indigenous Tweets (IT) Lynn Twitter Corpus (LTC) Irish Universal Dependencies Treebank (IUDT) gaBERT We combined 700 POS-tagged tweets from the LTC with 166 tweets more recently crawled by IT in order to leverage previous linguistic annotations while also including newer tweets. This involved converting the LTC annotation scheme to that of the UD framework and then POS-tagging the new raw tweets. We provide further detail in Appendix A. With regard to tokenisation, multiword expressions were automatically split into separate tokens following UD conventions. Only minor manual adjustments were required for lemmatisation to ensure alignment with the IUDT (to enable bootstrapping -see Section 3). Finally, the POS tagset used in the LTC was automatically converted to the UD tagset. Appendix A.2 describes this process. Preprocessing of newly-crawled tweets Due to the lack of a tokeniser designed to deal specifically with UGC in Irish, we compared two tools for this task: UDPipe Syntactic annotation As a method shown to reduce manual annotation efforts in syntactic annotation Orthographic variation refers to deviation from the conventional spelling system of the language and is observed at the token level. Therefore, it can affect the lemmatisation of a token in an NLP pipeline, potentially affecting other downstream areas of annotation. In the TwittIrish dataset, 2.5% of tokens contained some orthographic variation. (2) Bím de ghnáth ach sa bhaile an tseacht seo 'I usually am but home this week' Lengthening This refers to the elongation of a token by repeating one or more characters. This can be thought of as an encoding of sociophonetic information (3) tá siad go léir buuuuuuí 'They are all yelloooooow' Case variation Nonstandard use of upper-and lowercase text is another method of encoding sociophonetic information by focusing attention or emotion on a particular word or phrase. (4) Níl todhchaí na Gaeilge sa Ghaeltacht, ach in aon áit AR DOMHAIN 'The future of Irish is not in the Gaeltacht but anywhere ON EARTH' Transliteration The practice of transliteration, in which a word in one language is written using the writing system of another, is common within the language pair of Irish and English. In the TwittIrish treebank, the English language phrase 'fair play' occurs twice while variations 'fair plé', as shown in Example 5 and 'féar plé' occur once each. (5) Fair plé daoibh ' 'Fair play to you ' Punctuation variation Punctuation is used creatively in UGC to format or emphasise strings of text. However, due to the lack of standardisation, occurrences of unconventional punctuation can make text difficult to parse for both human and machine, as in Example 6 which shows a phrase from an Irish tweet appended by two punctuation characters '-)'. It is unclear whether this should be interpreted as some form of punctuation, creative formatting, or a smiley e.g. ':-)'. (6) sin a dhóthain-) 'That's enough-)' Other spelling variation These are mostly slight variations very close to the intended word and may occur due to typographical error. Typos are very common in UGC due to lack of editing or proofreading and may occur via insertion, deletion, substitution, or transposition of characters. Example 7 shows sraith (season) rendered as *staith. Due to their phonetic dissimilarity and the fact that 't' and 'r' are adjacent on the QWERTY keyboard layout, it is reasonable to infer that the substitution was unintentional. 
Less commonly, disguise or censorship of words or phrases may occur to encrypt profanity or taboo language. (7) tus staith 6 de Imeall 'start of season 6 of Imeall' Just 38.32% of the set of unique lemmata that make up the vocabulary of the TwittIrish treebank occur in the IUDT training data. Table Code-switching vs. borrowing 66.74% of tokens in the TwittIrish treebank are in Irish, 4.85% of tokens are in English and the remainder (consisting of punctuation, meta language tags, etc.) Grammatical phenomena observed in Irish tweets are described in this section. As these idiosyncrasies occur at the phrasal rather than token level, they may directly affect the structure of the parse tree. Some phenomena, such as contraction and over-splitting, cause difficulty during the tokenisation stage, potentially having a negative downstream effect on parsing. Table Contraction Much like abbreviation at the token level, contraction is defined here as the fusion of several tokens for the purpose of brevity, sometimes mimicking spoken pronunciation. Figure The inclusion of extra white space within tokens is often observed in Irish tweets e.g. Níl mé ró chinnte. The prefix ró-('too') is conventionally fused with the adjective it precedes in standardised text and so such tokens are annotated with the goeswith label as shown in Figure cé hé an t-athair? 'who is the father?' Syntax-level code-switching Alternational codeswitching or congruent lexicalisation We compare the performance of two widely used neural dependency parsers on the TwittIrish test set, and examine the effect of using pre-trained contextualised word embeddings from a monolingual Irish BERT model (gaBERT). We report parsing performance broken down by sentence/tweet length, UPOS tags, and dependency labels and carry out a manual error analysis. Further information is detailed in Appendix B. We experiment with two neural dependency parsing architectures: UDPipe To leverage the substantial advances in accuracy achieved in dependency parsing by the use of pretrained contexualised word representations Table Analysis was carried out on the AllenNLP parser with gaBERT embeddings using Dependable The mean sentence length of the IUDT is 23.5 tokens, whereas the mean tweet length in TwittIrish is 17.8. Figure We observe a larger proportion of PROPN, SYM, and PUNCT tags in Irish tweets in comparison to standardised Irish text, which contains a higher proportion of NOUN, DET, and ADP tags. This reflects the observations of Our analysis of the dependency relation distribution of standard English, German, and Italian text compared to that of tweets in those languages reveals that the parataxis, vocative, and advmod relations are more frequent in tweets and that the case, det, and nmod relations are more frequent in standard text. We observe that this same effect is present in Irish tweets. Figure Error Analysis In order to assess the effect of the UGC phenomena present in Irish tweets, we analyse the most and least accurate parses as shown in Table Presented in this paper is the novel resource, Twit-tIrish, the first Universal Dependencies treebank for Irish UGC. Table Table Table annotation. Further, in the miscellaneous column, the label 'SpaceAfter=No' encodes information about which tokens have a space after them in the original text for detokenisation purposes enabling automatic conversion from raw text to tree and vice versa. 
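The treebank statistics discussed above (mean sentence length and dependency-relation distributions) can be computed directly from CoNLL-U files; the helper below is an illustrative sketch, not part of the released TwittIrish tooling.

```python
# Illustrative helper: compute mean sentence length and dependency-relation
# frequencies from a CoNLL-U file, the kind of statistics compared between
# TwittIrish and the IUDT above.
from collections import Counter

def conllu_stats(path):
    lengths, deprels = [], Counter()
    n_tokens = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                      # blank line = sentence boundary
                if n_tokens:
                    lengths.append(n_tokens)
                n_tokens = 0
            elif not line.startswith("#"):    # skip sentence-level comments
                cols = line.split("\t")
                if cols[0].isdigit():         # skip multiword-token and empty-node lines
                    n_tokens += 1
                    deprels[cols[7]] += 1     # DEPREL is the 8th CoNLL-U column
    if n_tokens:
        lengths.append(n_tokens)
    mean_len = sum(lengths) / len(lengths) if lengths else 0.0
    return mean_len, deprels
```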
In order to assess the accuracy of the dependency annotation, a subset of the annotated data, consisting of 46 trees (773 tokens), was reviewed for errors by another Irish speaker trained in linguistic annotation. The task of the reviewer was to flag possible errors in the form of a token with an incorrect head and/or label. 46 possible errors were identified by the reviewer. The possible errors were then discussed by a team of two expert annotators to confirm whether the possible errors were true errors. 32 possible errors were confirmed as true errors. The overall accuracy of the treebank annotation can be estimated as 95.86% by dividing the number of correctly annotated tokens (741) by the total number of tokens in the review (773). 16 tokens (2.07% of all tokens in the review) had an incorrect label and a correct head. 12 tokens (1.55% of all tokens in the review) had an incorrect head and a correct label. The most common error (5 instances) was incorrect punctuation attachment. Only 4 tokens (0.52%) were identified as having both an incorrect head and label. As shown in the corresponding figure, root, csubj:cleft, and punct are associated with higher than average LAS in the IUDT test set but lower than average in the TwittIrish set, while xcomp:pred, advmod, obl, acl:relcl, nmod, and xcomp are associated with higher than average LAS in the TwittIrish test set but lower than average LAS in the IUDT test set.
962
1,528
962
Linking Surface Facts to Large-Scale Knowledge Graphs
Open Information Extraction (OIE) methods extract facts from natural language text in the form of ("subject"; "relation"; "object") triples. These facts are, however, merely surface forms, the ambiguity of which impedes their downstream usage; e.g., the surface phrase "Michael Jordan" may refer to either the former basketball player or the university professor. Knowledge Graphs (KGs), on the other hand, contain facts in a canonical (i.e., unambiguous) form, but their coverage is limited by a static schema (i.e., a fixed set of entities and predicates). To bridge this gap, we need the best of both worlds: (i) high coverage of free-text OIEs, and (ii) semantic precision (i.e., monosemy) of KGs. In order to achieve this goal, we propose a new benchmark with novel evaluation protocols that can, for example, measure fact linking performance on a granular triple slot level, while also measuring if a system has the ability to recognize that a surface form has no match in the existing KG. Our extensive evaluation of several baselines shows that detection of out-of-KG entities and predicates is more difficult than accurate linking to existing ones, thus calling for more research efforts on this difficult task. We publicly release all resources (data, benchmark and code) 1 .
Open Information Extraction (OIE) methods extract surface ("subject"; "relation"; "object")triples from natural language text in a schema-free manner Knowledge Graphs (KGs), on the other hand, are inventories of canonical facts in the form of (subject; predicate; object)-triples, where each slot is a unique (i.e., unambiguous) concept To combine the best of both worlds, we need to bridge the gap between the schema-free (but ambiguous) surface facts extracted from text and the schema-fixed (but unambiguous) KG knowledge. However, existing benchmarks and models only partially address the problem. One line of work Figure Contributions and Findings. We move away from the unrealistic and incomplete assumptions of prior work and propose 1 a novel large-scale benchmark for OIE-to-KG linking; 2 multifaceted evaluation protocols that cover all aspects of linking OIE facts to KGs (for an overview, see Fig. Through our experimental study, we found that the methods (i) perform well transductively but (ii) their performance deteriorates in an inductive evaluation. Further, we find that (iii) a dedicated OIE-to-KG fact re-ranking model improves the linking performance of both inductive and polysemous OIEs, and that (iv) we obtain high performance by training models solely on a synthetic variant of our dataset (i.e., with the KG as the only human-annotated data). Lastly, we investigate the largely underexplored issue of detecting Out-of-Knowledge-Graph extractions. We show that (v) it is possible to detect Out-of-KG entities to an extent, however, the same does not hold for predicates: a task that our experiments identify as a difficult open problem, which requires more research attention.
For a given surface-form OIE triple t 1 = ("s"; "r"; "o"), the goal is to link each slot to a canonical concept in a KG (if the corresponding concept exists in the KG): "s" → e 1 ∈ E; "r" → p ∈ P; "o" → e 2 ∈ E, with E and P as the (fixed) sets of KG entities and predicates. Importantly, our problem definition (and consequently evaluation) focuses on linking at the fact level, where each OIE slot is contextualized with the other two OIE slots. Transductive Fact Linking. In transductive linking, we measure how well the models link OIEs to KG facts consisting of entities and predicates seen during training (as components of training KG facts). Note that the testing KG facts (as whole triples) are not in the training data. Consider, for example, the extraction t 1 in Fig. In other words, this setup tests the generalization of fact linking models over entities. In Fig. Polysemous Fact Linking. We focus on OIEs for which the "s" and "o" slots are ambiguous w.r.t. the KG, i.e., in isolation they refer to a set of KG entities rather than a single entity. The mention "Michael Jordan" from either t 1 and t 2 (Fig. Here, the other two OIE slots offer the disambiguation signal that is necessary for successful linking. Out-of-KG Detection. We introduce a novel outof-KG detection task, in which the models are to recognize that an OIE triple component (i.e., "s", "r" or "o") cannot be linked because they do not have a corresponding KG concept (e.g., the relation "grew up in" from the triple t 2 in Fig. 3 FaLB: Fact Linking Benchmark We set up an automatic data processing pipeline to derive an OIE-to-KG fact linking benchmark, which supports all four evaluation facets from §2. We refer to both the process (i.e., pipeline) and the resulting benchmark as FaLB. FaLB's input is a dataset with (gold) alignments between natural language sentences and KG facts entailed by the sentence; i.e., each data instance is (sentence, KG fact) pair. Consequently, the creation of FaLB entails five design decisions: selection of (i) sentenceto-KG fact dataset(s), and (ii) a reference KG; (iii) generating OIE triples, (iv) high-precision OIE-KG fact alignments, and (v) a data augmentation strategy to increase the diversity of the data. Below is an example instance of the FaLB dataset: Example Data Instance from FaLB Sentence: "M. J., who was born in Brooklyn, played for the Bulls." OIE surface facts: t1 =("M. J."; "played for"; "the Bulls") and t2 =("M. J. Reference KG. We use Wikidata Generating OIE Triples. We use four state-ofthe-art OIE methods to obtain a set of OIE triples for each of the sentences in the dataset. To increase diversity, we use two state-of-the-art rule-based OIE methods: MinIE High-precision OIE-KG Fact Alignments. Next, we need to match the extracted OIE triples against the set of KG facts associated with the sentences. Crucially, this automatic step needs to have a high-precision in order to create a high quality benchmark. As per the distant supervision assumption of Our goal is to link surface-form ("s"; "r"; "o") OIE triples, to canonical Knowledge Graph facts (e 1 ; p; e 2 ), where e 1 , e 2 ∈ E, and p ∈ P. Each KG entity or predicate is represented as its surfaceform KG label (e.g., "Michael Jordan"), and its KG-provided description (e.g., "American basketball player and businessman"). We henceforth refer to the entities and predicates as KG entries. 
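To make the benchmark's structure concrete, the sketch below shows one hypothetical way a FaLB-style instance could be represented in code; the field names, classes, and placeholder identifiers are our own and need not match the released data schema.

```python
# Hypothetical representation of one OIE-to-KG fact linking instance.
# Field names and identifiers are illustrative, not the released FaLB schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OIETriple:
    subject: str
    relation: str
    object: str

@dataclass
class KGFactLink:
    subject_id: Optional[str]    # None marks an out-of-KG subject
    predicate_id: Optional[str]  # None marks an out-of-KG predicate
    object_id: Optional[str]     # None marks an out-of-KG object

instance = {
    "sentence": "M. J., who was born in Brooklyn, played for the Bulls.",
    "oie": OIETriple("M. J.", "played for", "the Bulls"),
    # Placeholder KG identifiers (not real Wikidata IDs):
    "kg_link": KGFactLink("Q-MICHAEL-JORDAN", "P-MEMBER-OF-TEAM", "Q-CHICAGO-BULLS"),
}
```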
Intuitively, since both data streams-the OIE triples and the KG facts-are in natural language, we opt to obtain their representations (i.e., embeddings) with a pre-trained language model. We decouple the OIE-to-KG linking in two steps: pre-ranking and re-ranking (see §4.1 and §4.2, respectively), We denote this model as OIE pre ranker . It aims to generate OIE slot embeddings and KG entry embeddings, such that an OIE slot embedding yields higher similarity with its aligned KG entry's embedding compared to the other KG entries. Therefore, during training, we contrast the positive pairs against a set of negatives, thereby training the model to generate embeddings for a matching OIE slot and KG entry that lie close in the latent space. The motivation for such formulation is two-fold: (i) the number of entities is large, and could practically grow further, therefore computing the softmax over all KG entities during training is prohibitive; (ii) there may be unseen KG entries that we encounter during inference, therefore posing the problem as a standard classification inevitably leads to the model ignoring them. We use RoBERTa Figure 2: Left: OIE pre ranker which performs pre-ranking of the OIE slots to KG entries independently (i.e., no context between them is considered). Trained with negatives sampled from the whole KG; Right: Fact re ranker which attends between the whole OIE triple and KG fact to output their similarity; Trained with hard-negatives. and obtain RoBERTa token embeddings. Finally, we pool only the special tokens' embeddings subsequently linearly projected in the desired latent space: ôi = Linear(RoBERTa( t)), where ô is the slot embedding, and i ∈ R 3 is the OIE slot index. KG Embeddings. We represent both the entities and predicates as their label followed by their description (if available in the KG); e.g., the entity (e) Michael Jordan is represented as "Michael Jordan <DESC> American basketball player and businessman...", and the predicate (p) place of birth is represented as "place of birth <DESC> most specific known birth location of a person...". The <DESC> special token indicates the start of the description. We then tokenize the representation (ê for entity, p for predicate), and obtain embeddings using the same RoBERTa model. We pool the <CLS> token embedding and linearly project it in the OIE representation space as: kj = Linear(RoBERTa(b j )), where j ∈ {ê, p}, k is the KG entry embedding, and b is the tokenized KG entry representation. Linking OIEs to KG Facts. Given an OIE slot embedding o i and a KG entry embedding k j , we obtain their dot-product as: ŝpre = o T i k j , where o i and k j are norm-scaled (i.e., ŝpre represents the OIE pre ranker cosine similarity). During inference, we link an OIE to a KG fact by selecting the most similar KG entry for each of the OIE slots. ranker Training. We train the model using standard contrastive loss: we sample N -1 in-batch negative KG entries for each positive OIE slot ↔ KG entry pair, where N is the batch size. As per standard practice (van den where τ is the temperature. Note that during training, we only sample negative KG entry embeddings for each OIE slot, but not the other way around. Sampling only in-batch negatives presents an issue as the training data represents only a limited subset of the whole KG (i.e., only the KG entries with paired OIE). During inference, however, we contrast each OIE slot against the whole KG to find the KG entry with which it exhibits the highest similarity. 
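A minimal, PyTorch-style sketch of this pre-ranking objective is given below; the tensor names, the temperature value, and the omission of the additionally sampled KG negatives are our simplifications, not the authors' exact implementation.

```python
# Minimal sketch of the bi-encoder pre-ranking objective: cosine similarity
# between OIE-slot embeddings and KG-entry embeddings, trained with in-batch
# negatives via a temperature-scaled cross-entropy (InfoNCE-style) loss.
import torch
import torch.nn.functional as F

def pre_ranking_loss(slot_emb, kg_emb, tau=0.05):
    """slot_emb, kg_emb: [N, D] tensors; row i of kg_emb is the positive KG entry
    for row i of slot_emb, and all other rows act as in-batch negatives."""
    slot_emb = F.normalize(slot_emb, dim=-1)
    kg_emb = F.normalize(kg_emb, dim=-1)
    logits = slot_emb @ kg_emb.T / tau            # [N, N] scaled cosine similarities
    targets = torch.arange(slot_emb.size(0), device=slot_emb.device)
    return F.cross_entropy(logits, targets)

def link_slot(slot_emb, all_kg_emb):
    """Inference: return the index of the most similar KG entry for each OIE slot."""
    sims = F.normalize(slot_emb, dim=-1) @ F.normalize(all_kg_emb, dim=-1).T
    return sims.argmax(dim=-1)
```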
Therefore, for each OIE slot, we additionally sample e negative entities and p negative predicates at random from the whole KG (e.g., we would sample N-1+p n=1 kn negative predicates). In certain scenarios (e.g., in the case of polysemous OIEs), matching whole OIEs with whole KG facts (i.e., not decoupled per OIE slot) could resolve the ambiguity and thus improve performance. To that end, for each OIE slot, we re-rank the OIE pre ranker topk most probable KG links. We denote this model as Fact re ranker . We perform self-attention between the OIE and the KG fact (both provided as input, separated by a <FACT> special token) with a single RoBERTa transformer, and return their similarity as: ŝre = σ(Linear(RoBERTa(ĉ))), where ĉ is the concatenated OIE and KG fact representation, and ŝre is the sigmoid (σ) normalized similarity. ranker Training. We train by sampling matching OIE ↔ KG fact pairs as positives, and negatives, where we replace some KG fact slots (subject, predicate, object) with incorrect ones, generated as follows: We first obtain embeddings for each KG entry using the OIE pre ranker , and find its top-k most similar candidates w.r.t. all other KG entries. We then corrupt the ground truth KG fact by randomly sampling only from the top-k (hard) negative candidates. Lastly, with 50% probability, we randomly mask (i.e., replace with a <mask> token) the description of the KG fact entries. We measure accuracy to evaluate both OIE slot linking to KGs ( §5.1), and Out-of-KG detection of OIE slots ( §5.2). To measure OIE fact linking, we score a hit if all OIE slots are linked correctly. The error bars represent the standard error of the mean. Setup. We explore the extent to which we can link OIE slots to a large-scale KG (Wikidata). We address two main research questions concerning the OIE-to-KG fact linking task: (i) To what extent do methods generalize to different KG entity facets? We consider transductive, inductive, or polysemous entities (see §2 for detailed definition); (ii) What is the performance impact of the KG size? We test two reference KG sizes: Benchmark Restricted KG (∼650k entities, ∼0.6k predicates) and Large KG (∼5.9M entities, ∼4k predicates). Methods. We use the following methods to obtain results for the OIE slot linking task: (i) RAN-DOM: for each OIE slot, we sample a random KG entry; (ii) FREQUENCY: based on the training data statistics, we link each OIE slot to the most frequent KG entry (entity or predicate) in the training set; (iii) SIMCSE: we use a pretrained SimCSE model In Table Training on Synthetic Data Improves Performance. We explore the extent to which we can learn fact linking models using synthetic data. Syn-thIE To measure to what extent we can link OIEs to KG facts using only synthetic data, we train OIE pre ranker models on both REBEL and SynthIE, and report results (in Table Ablation Study: Importance of Entity Alias Augmentation. 
We observe that in current datasets most surface form entities (in the natural language sentences) appear only with their "canonical" la- Transductive 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 Frequency Transductive 0.0 ± 0.0 53.3 ± 0.1 8.1 ± 0.1 0.0 ± 0.0 0.0 ± 0.0 53.3 ± 0.1 8.1 ± 0.1 0.0 ± 0.0 SimCSE Transductive 5.4 ± 0.1 0.0 ± 0.0 0.9 ± 0.1 0.0 ± 0.0 2.6 ± 0.1 0.0 ± 0.0 0.5 ± 0.0 0.0 ± 0.0 OIE pre ranker Transductive 86.8 ± 0.1 93.5 ± 0.1 95.7 ± 0.0 79.1 ± 0.1 78.7 ± 0.2 93.5 ± 0.1 93.1 ± 0.1 70.7 ± 0.2 + Context Transductive 84.9 ± 0.1 92.2 ± 0.1 94.8 ± 0.1 77.7 ± 0.1 77.8 ± 0.1 92.2 ± 0.1 92.2 ± 0.1 70.2 ± 0.1 Frequency Inductive 0.0 ± 0.0 0.1 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 0.1 ± 0.0 0.0 ± 0.0 0.0 ± 0.0 SimCSE Inductive 12.6 ± 0.3 0.4 ± 0.1 6.8 ± 0.3 0.0 ± 0.0 8.1 ± 0.3 0.0 ± 0.0 3.2 ± 0.2 0.0 ± 0.0 OIE pre ranker Inductive 71.9 ± 0.4 69.8 ± 0.5 59.5 ± 0.5 34.9 ± 0.5 62.0 ± 0.5 69.8 ± 0.5 48.2 ± 0.5 25.4 ± 0.4 + Context Inductive 74.5 ± 0.4 73.8 ± 0.4 62.3 ± 0.5 38.9 ± 0.5 64.5 ± 0.5 73.8 ± 0.4 50.8 ± 0.5 29.2 ± 0.5 + Fact re ranker Inductive 76.2 ± 0.4 67.6 ± 0.5 60.8 ± 0.5 40.6 ± 0.5 64.8 ± 0.5 66.5 ± 0.5 54.8 ± 0.5 32.9 ± 0.5 bel. 8 Since the OIEs represent surface-form facts, such lack of diversity prevents the models from learning more complex linking patterns. To overcome this, we perform entity alias augmentation in FaLB by adding the surface form aliases of the entities-available in Wikidata and manually curated-and ablate its impact on the OIE linking task. Besides OIE pre ranker models trained on REBEL and SynthIE with entity alias augmentation, we train additional models without the alias augmented samples. We report results in Table Expectedly, we observe that training with entity aliases allows us to link such OIE entity mentions more successfully than training without them. This was reflected on models trained on both REBEL 8 REBEL is constructed from Wikipedia abstracts, where the references use the canonical form name; SynthIE provides the KG fact (in text format) as is to the LLM, so naturally, the sentence generated does not feature the entity aliases. Therefore, Michael Jordan's synonyms such as M.J., Air Jordan and His Airness, rarely appear in the data. and SynthIE. We further observe that this step hurts the linking of specific OIE entity mentions only moderately, suggesting that OIE linking methods could be trained to be robust w.r.t. entity synsets. Finally, across all OIE slots and fact linking, the macro scores are significantly in favor of the model trained with entity aliases, on both datasets. Ablation Study: Importance of Fact-reranking. We ablate the number of KG facts we rerank (k) with Fact re ranker and report results in Fig. constructs 2 3 OIE-KG fact pairs, reranks the list, and returns the highest scoring KG fact. Setup. Prior works which study the OIE to KG linking problem Prior work Random 50.0 ± 2.8 50.0 ± 2.8 50.0 ± 2.8 12.5 ± 1.8 50.0 ± 1.3 50.0 ± 1.3 50.0 ± 1.3 12.5 ± 1.8 Confidence@1-based Heuristic 71.2 ± 2.3 49.6 ± 2.6 72.1 ± 2.3 30.2 ± 1.5 71.2 ± 2.5 49.6 ± 2.7 69.1 ± 2.5 24.5 ± 1.5 Entropy-based Heuristic 77.5 ± 2.3 49.0 ± 2.7 78.4 ± 2.2 29.9 ± 1.7 73.3 ± 2.3 49.1 ± 2.7 72.8 ± 2.4 27.0 ± 1.7 Query-Key-Value Cross-Attention 70.7 ± 2.5 49.6 ± 2.7 70.9 ± 2.5 25.9 ± 1.0 59.8 ± 2.7 49.4 ± 2.7 60.8 ± 2.5 19.7 ± 0.9 setup). In ReVerb45k In this work, we mitigate these issues and build a benchmark (i) with golden links to KG entities that also (ii) reflects the size of modern KGs. 
Furthermore, all prior work to date has relied on the strict assumption that all OIE surface form slots have a corresponding reference KG entity or predicate Finally, most existing publicly available datasets do not address the problem of OIE-to-KG linking. Popular datasets like T-REx We shed light on the OIE to KG linking problem, allowing us to fuse the surface-form and openedended knowledge found in OIEs, with the canonical real-world KG facts. We introduced a novel multifaceted benchmark which fixes prior work deficiencies, and proposed a set of task-specific baselines. Our experiments uncover that (i) linking inductive or polysemous OIEs to large KGs is challenging; (ii) we can learn OIE linking methods using only synthetic data; and (iii) detecting whether OIEs are Out-of-KG is an open research problem. Notably, the set of models we explore ignore the KG structure to obtain KG entry embeddings. Leveraging the underlying graph structure should, in theory, yield representations which generalize better to zero-shot samples (e.g., as is the case with detecting out-of-KG relations). Such KG entry embeddings could be even trained offline (i.e., as a separate step) with standard KG embedding methods Last but not least, all data, resources, and models used in this work are specific to the English language. Notice however, our approach can be readily extended to languages other than English, while Wikipedia and Wikidata have versions in other languages -which we leave for future work. We are not aware of any direct ethical impact generated by our work. However, in general, care should be taken when applying our technology to sensitive use cases in high risk domains, such as healthcare. In §2, we go over the problem statement, and in §6 we discussed how the different facets of our benchmark relate to prior and closely related work. Here, we provide a broader discussion where we group the related work based on the problem they address, and provide a more detailed discussion w.r.t. the differences with our work. Open Information Extraction (OIE). OIE methods extract structured surface-form factual information from natural language text data, in the form of ("subject"; "relation"; "object")triples Closed Information Extraction. Given an input text, ClosedIE methods extract a set of (subject; relation; object)-triples where each triple can be expressed within the predefined schema-fixed sets of entities and predicates-of the reference KG Open Information Extraction and Knowledge Graphs. The context of OIE facts is used for many tasks for knowledge graphs, such as knowledge graph population Knowledge Graph Link Prediction. These methods Multifaceted Evaluation. NLP and KG tasks are typically evaluated on a held-out test set, by using evaluation frameworks that assign performance scores on a single value; e.g., accuracy In Table We train all models for 10 epochs using AdamW with a learning rate of 5e-5 and weight decay of 1e-3. We use a RoBERTa While REBEL's entities are golden (i.e., obtained as the hyperlinks from Wikipedia abstracts which link to Wikipedia pages), the predicates between them are obtained using a set of heuristics. This leads to imbalanced data, where most predicates occur only few times, and others occur orders of magnitude more. Consequently, such issue is reflected in the OIE-to-KG fact linking data that we obtain. 
To address this problem of REBEL, On the other hand, the issue we observe with SynthIE, is that due to the way the data is provided to LLM, the surface-form entities remain in their canonical text-form (i.e., their canonical denotation in the KG), and thus contain little variation. This diverges from the data that we find "in the wild". Critically, when people refer to entities in free-form natural language, they commonly use synonyms, aliases, abbreviations, nicknames, etc. For example, the former basketball player "Michael Jordan" could be referred to as "Air Jordan" and "M.J.". To cope with this issue, we leverage the entity alias augmentation step in FaLB. By increasing the diversity of the OIE entity surface form, we are able to obtain a high quality OIE-to-KG fact linking synthetic dataset, thus the only human-annotated component remains to be the KG. Importantly, the inductive evaluation is testing OIE linking to KG facts that consist of entities which were not part of the models' training data. In §5.1 In §5.2 we evaluate the ability of the models to detect whether an OIE slot (surface-form entity, or surface-form relation) is present in the Knowledge Graph. Intuitively, this task is more difficult than the OIE linking task, as the models need to generalize beyond the training data distribution to perform well on this task. Namely, when linking OIEs to a KG, all prior work All models that we use for the Out-of-KG detection task in §5.2 are built on top of a OIE pre ranker , which is trained on REBEL. For all models, to obtain an out-of-KG indicator-True (1), or False (0)we threshold the output of the models. For each model, we determine the optimal threshold (the confidence, or the entropy) on a hold-out validation dataset which we build on top of REBEL. CONFIDENCE@1-BASED HEURISTIC: We obtain the KG links for each of the OIE slots using
1,285
1,703
1,285
BOTEVAL: Facilitating Interactive Human Evaluation
Following the rapid progress in natural language processing (NLP) models, language models are being applied to increasingly complex interactive tasks such as negotiation and conversation moderation. Having human evaluators directly interact with these NLP models is essential for adequately evaluating performance on such interactive tasks. We develop BOTEVAL, an easily customizable, open-source evaluation toolkit that focuses on enabling human-bot interactions as part of the evaluation process, as opposed to human evaluators making judgements on static input. BOTEVAL balances flexibility for customization and user-friendliness by providing templates for common use cases that span various degrees of complexity, together with built-in compatibility with popular crowdsourcing platforms. We showcase the numerous useful features of BOTEVAL through a study that evaluates the effectiveness of various chatbots at conversational moderation, and we discuss how BOTEVAL differs from other annotation tools.
As natural language processing (NLP) models become more versatile with the recent advances of language models and their instruction-tuned counterparts As noted by To facilitate accurate human evaluations of complex interactive tasks, we developed BOTEVAL, 1 a comprehensive evaluation toolkit that focuses on enabling human -bot taining generalizability, BOTEVAL strives to maximize user-friendliness by providing templates for frequent use cases that involve human evaluation where a human evaluator must interact with a NLP model, multiple models, or another human being to measure human performance. In addition, it is integrated with Amazon Mechanical Turk (AMT) In summary, BOTEVAL's main contributions are: • An open-source and customizable evaluation tool for interactive NLP tasks that incorporates human-bot and human-human interactions into the evaluation process. • Detailed documentation and templates for various use cases to make modifications easy. • Flexible deployment options with built-in integration with popular crowdsourcing platforms such as AMT and Prolific. • Evaluation task management features that facilitate task monitoring and managing crowdsource workers. • Dynamically configurable interaction logic with custom dialogue manager and multihuman and multi-bot evaluation settings.
BOTEVAL is a web application that provides an evaluation interface, what the human evaluators (i.e., crowdsource workers) see (Section 2.1), and an administrator dashboard, what the administrator uses to manage the evaluation task and evaluators (Section 2.2). We recommend that the bots that evaluators interact with are provided as separate APIs that BOTEVAL can make queries to, as this isolates the management of the bot deployment and BOTEVAL (Section 2.3). Human evaluators can be flexibly set to crowdsource workers from AMT or Prolific or any other evaluators with internet access by having them create an account directly for a deployment of BOTEVAL using a public external link. An evaluation task is configured with a central YAML config file that identifies the frontend components to use, the deployment environment, and the crowdsourcing platform to use. A sample evaluation interface for the case study later described in Section 4.1 is shown in Figure The evaluation interface consists of three main components: 1 Conversation pane: a section where the interaction between the human and the bot takes place. This pane can be easily customized to contain seed conversations to serve as initial starting points for interactions to continue off of or it can instead contain any piece of text or completed conversation without requiring any interactions from the evaluators, making BOTEVAL also suitable for simpler annotation tasks. 2 Instruction pane: this is an optional section that shows the main directions. Evaluators can see detailed instructions by clicking on the detailed instructions button. Administrators can choose to show detailed instructions as part of the consent form if one is needed to make sure that evaluators have read them. 3 Survey pane: this is where the human evaluators provide their evaluations. In the given example, it is configured to only be shown after the human evaluators have interacted with the bot for a set number of turns. The conversation pane and instruction pane is configurable by providing custom HTML scripts, while the survey pane is even more easily customizable by configuring a YAML config file. An example of the YAML config file is shown in Appendix A.1. An optional consent form can be shown to evaluators as well, which is also managed with a separate HTML file. Further detail on how the consent form can be configured is in Appendix A.2. BOTEVAL's administrator dashboard provides numerous features for managing evaluation tasks and evaluators. Its main benefit is a GUI that enables a non-technical user to easily become an administrator for human evaluation tasks. The topics page, shown in Figure After launching tasks, users can use the administrator dashboard to conveniently examine tasks that are completed or in progress with the same interface that the evaluators used to complete the task to easily visualize their work rather than examining a database or JSON file, as shown in Figure In addition to these features, we provide convenient AMT-specific features for managing workers and tasks known as human intelligence tasks (HITs). One of the most convenient features is being able to directly assign and remove qualifica-tions for workers after examining their work without having to leave the administrator dashboard. This is an important convenience feature for ensuring the quality of work for human evaluations are kept to the desired standard by blocking unreliable workers. 
Another is being able to make bonus payments directly after examining the completed task, which is useful when each task is expected to involve variable rewards, such as to account for each HIT taking a different amount of time to complete. Users are given multiple options to choose how they will service the bot that they want to evaluate, but the recommended setup is to set up a separate RESTful API and defining a logic within BOTE-VAL to interface with this API. As shown in 3 in Figure Figure BOTEVAL can be customized to use with any crowdsourcing platform, and it is designed to be directly used with many popular ones such as AMT, Prolific, and Qualtrics. If the goal is to do internal annotations, the setup is even simpler as the user only has to configure BOTEVAL to not use any. Then the user can share their custom URL with the evaluators, where they can sign up and directly work on tasks that are made available to them without going through any other platform. An overview of BOTEVAL's system architecture is shown in Figure The frontend is a simple web interface (i.e., HTML) created with Bootstrap stylesheet. While the majority of the HTML structure is constructed on the server side using Jinja2, some dynamic updates such as responses coming from bots or other participants in the interaction are achieved using AJAX and RESTful APIs. The backend is implemented in Python language using Flask framework, following a model-viewcontroller architecture pattern. Models are implemented using Python classes and stored in a relational database, specifically SQLite. In addition, we use SQLAlchemy, an object-relational mapper, to abstract the mapping between Python classes and database tables. For views, Flask uses Jinja2 for server side templating of HTML pages. Controllers are based on Flask's builtin URL routers and RESTful API constructs. While internally our server is an HTTP server, crowdsourcing platforms such as AMT require annotation interface be served via secure connections (HTTPS). HTTPS can be enabled by obtaining and installing an SSL/TLS certificate. We use free certificates from Certbot, Some scenarios may require several simultaneous instances of BOTEVAL to facilitate multiple annotation tasks, and obtaining SSL certificate for each instance maybe cumbersome. We address this problem by using a different TCP port for each instance, and configuring a single Nginx (with SSL certificate) route requests for all instances. To showcase the usefulness of BOTEVAL, we share a case study that uses BOTEVAL to conduct a study on how effective various zero-shot instruction-tuned language models (ITLM) and dialogue model are in performing conversational moderation (CM) This study makes full use of BOTEVAL as it requires evaluating multiple bots by interacting with them for a preset number of turns (in this case 3), starting with a variety of conversation stubs. The evaluations were conducted with all desired configurations simultaneously to get the most representative and fair results that is not affected by any confounding factors such as recency bias. The evalution was conducted with AMT, and being able to easily monitor evaluations enabled rapid iterations of updating the instructions and giving feedback to the evaluators. Therefore, BOTEVAL was integral in being able to refine the evaluation study efficiently and ultimately collect statistically meaningful results for an interactive evaluation setup. 
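To illustrate the recommended setup of serving the evaluated bot behind its own RESTful API, the sketch below shows a minimal Flask service that an evaluation instance could query for responses. The route, payload fields, and port are hypothetical and are not BOTEVAL's actual interface.

```python
# Hypothetical sketch of a separate bot service that an evaluation toolkit like
# BOTEVAL could query over REST; the endpoint path and JSON fields are
# illustrative and not BOTEVAL's actual API.
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_reply(history):
    # Placeholder for the model under evaluation (e.g. an instruction-tuned LM).
    return "Thanks for sharing -- could you say more about why you feel that way?"

@app.route("/respond", methods=["POST"])
def respond():
    payload = request.get_json()   # e.g. {"conversation": [{"speaker": ..., "text": ...}, ...]}
    reply = generate_reply(payload.get("conversation", []))
    return jsonify({"text": reply})

if __name__ == "__main__":
    app.run(port=5005)
```

Keeping the bot behind a separate endpoint like this isolates model deployment from the evaluation toolkit, which is the decoupling the authors recommend.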
The study showed that prompt-engineered ITLMs outperformed prosocial dialogue models and that a conflict resolution prompt based on the Socratic method was the best performing prompt. In addition, one of this work's central findings was discovering that there are differences between evaluation results when the models were evaluated from a first person point of view (POV) and a third person POV. With BOTEVAL, collecting human evaluations in these two different settings was a simple change of updating the topics file such that the conversation stubs were the completed conversations, rewording the questions such that it is in third person POV, and setting the number of turns required for human evaluators to interact with the bots to zero. BOTEVAL's main differentiation with previous annotation tools and frameworks is that it is focused on, but not limited to, interactive use cases. In other words, it is useful when the annotated data is not static, e.g., bot responses over multiple turns or other dynamic outputs that can change based on user interaction. Therefore, BOTEVAL is appealing for evaluating or collecting data for conversational tasks that usually require multi-turn interactions for fulfilling the goal, rather than a single generated output. Many real-life tasks go through multi-turn interactions, such as negotiations Although BOTEVAL was designed for interac- tive tasks, BOTEVAL can also be easily adapt for simple static annotation tasks by simplifying the conversation pane in Figure In Table A popular general annotation tool is Mephisto (Urbanek and Ringshia, 2023), which started by isolating the crowdsourcing features from ParlAI ParlAI still provides templates for Mephisto for human-bot interactions A prominent set of annotation tools specific to dialogue are centered around task-oriented dialogue We presented BOTEVAL and its usefulness in collecting human evaluations for interactive tasks that require live human-bot interactions through a case study of evaluating various language models on their ability to conversationally moderate online discussions. BOTEVAL provides a customizable interface that can be adapted for various evaluation and annotation use cases while also providing integration with popular crowdsourcing platforms and task management features. We hope that this work will serve as an important foundation for setting up custom interactive human evaluation tasks that facilitate our understanding of more complex NLP systems as they become increasingly sophisticated and capable. We designed BOTEVAL to be modular such that customizing existing templates and modifying the dialogue manager's logic is simple, but it is yet not configured so that the task management process, shown in Figure Another challenge for using BOTEVAL may arise from the difficulty of managing a separate process that serves the bots that the human evaluators will interact with. However, if the BOTEVAL user is able to launch a bot as part of BOTEVAL, refactoring the code for that bot such that its responses are accessed through an API instead is a simple modification with plenty of online tutorials and tools, such as FastAPI.
Contrastive Analysis with Predictive Power: Typology Driven Estimation of Grammatical Error Distributions in ESL
This work examines the impact of cross-linguistic transfer on grammatical errors in English as a Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate that language-specific error distributions in ESL writing can be predicted from the typological properties of the native language and their relation to the typology of English. Our typology-driven model enables us to obtain accurate estimates of such distributions without access to any ESL data for the target languages. Furthermore, we present a strategy for adapting our method to low-resource languages that lack typological documentation, using a bootstrapping approach which approximates native language typology from ESL texts. Finally, we show that our framework is instrumental for linguistic inquiry seeking to identify first language factors that contribute to a wide range of difficulties in second language acquisition.
The study of cross-linguistic transfer, whereby properties of a native language influence performance in a foreign language, has a long tradition in Linguistics and Second Language Acquisition (SLA). Much of the linguistic work on this topic was carried out within the framework of Contrastive Analysis (CA), a theoretical approach that aims to explain difficulties in second language learning in terms of the relations between structures in the native and foreign languages. The basic hypothesis of CA was formulated by Differently from the SLA tradition, which emphasizes manual analysis of error case studies Tested on 14 languages in a leave-one-out fashion, our model achieves a Mean Average Error (MAE) reduction of 21.8% in predicting the language specific relative frequency of the 20 most common ESL structural error types, as compared to the relative frequency of each of the error types in the training data, yielding improvements across all the languages and the large majority of the error types. Our regression model also outperforms a stronger, nearest neighbor based baseline, that projects the error distribution of a target language from its typologically closest language. While our method presupposes the existence of typological annotations for the test languages, we also demonstrate its viability in low-resource scenarios for which such annotations are not available. To address this setup, we present a bootstrap-ping framework in which the typological features required for prediction of grammatical errors are approximated from automatically extracted ESL morpho-syntactic features using the method of Finally, the utilization of typological features as predictors, enables to shed light on linguistic factors that could give rise to different error types in ESL. For example, in accordance with common linguistic knowledge, feature analysis of the model suggests that the main contributor to increased rates of determiner omission in ESL is the lack of determiners in the native language. A more complex case of missing pronouns is intriguingly tied by the model to native language subject pronoun marking on verbs. To summarize, the main contribution of this work is a CA inspired computational framework for learning language specific grammatical error distributions in ESL. Our approach is both predictive and explanatory. It enables us to obtain improved estimates for language specific error distributions without access to ESL error annotations for the target language. Coupling grammatical errors with typological information also provides meaningful explanations to some of the linguistic factors that drive the observed error rates. The paper is structured as follows. Section 2 surveys related linguistic and computational work on cross-linguistic transfer. Section 3 describes the ESL corpus and the typological data used in this study. In section 4 we motivate our native language oriented approach by providing a variance analysis for ESL errors across native languages. Section 5 presents the regression model for prediction of ESL error distributions. The bootstrapping framework which utilizes automatically inferred typological features is described in section 6. Finally, we present the conclusion and directions for future work in section 7.
Cross linguistic-transfer was extensively studied in SLA, Linguistics and Psychology Computational work touching on crosslinguistic transfer was mainly conducted in relation to the Native Language Identification (NLI) task, in which the goal is to determine the native language of the author of an ESL text. Much of this work focuses on experimentation with different feature sets Previous work on grammatical error correction that examined determiner and preposition errors The current investigation is most closely related to studies that demonstrate that ESL signal can be used to infer pairwise similarities between native languages We obtain ESL essays from the Cambridge First Certificate in English (FCE) learner corpus The FCE corpus has an elaborate error annotation scheme 2 We plan to extend our analysis to additional proficiency levels and languages when error annotated data for these learner profiles will be publicly available. 3 Filtered errors that would have otherwise appeared in the top 20 list, with their respective rank in brackets: Spelling (1), Replace Punctuation (2), Replace Verb (3), Missing Punctuation (7), Replace (8), Replace Noun (9) Unnecessary Punctuation (13), Replace Adjective (18), Replace Adverb (20). exemplified in table 1. In addition to concentrating on the most important structural ESL errors, this cutoff prevents us from being affected by data sparsity issues associated with less frequent errors. We use the World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013), a repository of typological features of the world's languages, as our source of linguistic knowledge about the native languages of the ESL corpus authors. The features in WALS are divided into 11 categories: Phonology, Morphology, Nominal Categories, Nominal Syntax, Verbal Categories, Word Order, Simple Clauses, Complex Sentences, Lexicon, Sign Languages and Other. Table An important challenge introduced by the WALS database is incomplete documentation. Previous studies We perform several preprocessing steps in order to select the features that will be used in this study. First, as our focus is on structural features that can be expressed in written form, we discard all the features associated with the categories Phonology, Lexicon To motivate a native language based treatment of grammatical error distributions in ESL, we begin by examining whether there is a statistically significant difference in ESL error rates based on the native language of the learners. This analysis provides empirical justification for our approach, and to the best of our knowledge was not conducted in previous studies. To this end, we perform a Kruskal-Wallis (KW) test As shown in table 1, we can reject the null hypothesis for 16 of the 20 grammatical error types with p < 0.01, where Unnecessary Determiner, Unnecessary Preposition, Wrongly Derived Noun, and Replace Conjunction are the error types that do not exhibit dependence on the native language. Furthermore, the null hypothesis can be rejected for 13 error types with p < 0.001. These results suggest that the relative error rates of the majority of the common structural grammatical errors in our corpus indeed differ between native speakers of different languages. 
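For concreteness, the per-error-type Kruskal-Wallis test described above can be run with SciPy as in the sketch below; the nested data layout (per-document relative error frequencies grouped by native language) is an assumed aggregation of the FCE annotations, and all numbers are toy values.

```python
# Sketch: Kruskal-Wallis test of whether per-document relative error rates
# differ across native languages, run independently for each error type.
# The nested data layout and all numbers are toy placeholders.
from scipy.stats import kruskal

# rates[error_type][native_language] -> per-document relative frequencies
rates = {
    "Missing Determiner": {
        "Russian":  [0.12, 0.10, 0.15, 0.13, 0.11],
        "Japanese": [0.14, 0.11, 0.16, 0.12, 0.15],
        "French":   [0.03, 0.05, 0.04, 0.02, 0.05],
        "Spanish":  [0.04, 0.02, 0.05, 0.03, 0.04],
    },
    "Replace Preposition": {
        "Russian":  [0.06, 0.07, 0.05, 0.06, 0.08],
        "Japanese": [0.05, 0.06, 0.07, 0.06, 0.05],
        "French":   [0.06, 0.05, 0.07, 0.08, 0.06],
        "Spanish":  [0.07, 0.06, 0.05, 0.06, 0.07],
    },
}

for error_type, by_language in rates.items():
    stat, p_value = kruskal(*by_language.values())   # one sample per native language
    verdict = "native-language dependent" if p_value < 0.01 else "no evidence"
    print(f"{error_type:20s} H={stat:6.2f}  p={p_value:.4f}  -> {verdict}")
```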
We further extend our analysis by performing pairwise post-hoc Mann-Whitney (MW) tests Distributions in ESL Given a language l ∈ L, our task is to predict for this language the relative error frequency y l,e of each error type e ∈ E, where L is the set of all native languages, E is the set of grammatical errors, and e y l,e = 1. In order to predict the error distribution of a native language, we train regression models on individual error types: In this equation ŷ l,e is the predicted relative frequency of an error of type e for ESL documents authored by native speakers of language l, and f (t l , t eng ) is a feature vector derived from the typological features of the native language t l and the typological features of English t eng . The model parameters θ l,e are obtained using Ordinary Least Squares (OLS) on the training data D, which consists of typological feature vectors paired with relative error frequencies of the remaining 13 languages: To guarantee that the individual relative error frequency estimates sum to 1 for each language, we renormalize them to obtain the final predictions: ŷl,e = ŷ l,e e ŷ l,e (3) Our feature set can be divided into two subsets. The first subset, used in a version of our model called Reg, contains the typological features of the native language. In a second version of our model, called RegCA, we also utilize additional features that explicitly encode differences between the typological features of the native language, and the and the typological features of English. In the Reg model, we use the typological features of the native language that are documented in WALS. As mentioned in section 3.2, WALS features belong to different variable types, and are hence challenging to encode. We address this issue by binarizing all the features. Given k possible values v k for a given WALS feature t i , we generate k binary typological features of the form: When a WALS feature of a given language does not have a documented value, all k entries of the feature for that language are assigned the value of 0. This process transforms the original 119 WALS features into 340 binary features. In the spirit of CA, in the model RegCA, we also utilize features that explicitly encode differences between the typological features of the native language and those of English. These features are also binary, and take the value 1 when the value of a WALS feature in the native language is different from the corresponding value in English: We encode 104 such features, in accordance with the typological features of English available in WALS. The features are activated only when a typological feature of English has a corresponding documented feature in the native language. The addition of these divergence features brings the total number of features in our feature set to 444. Table An important advantage of our typology-based approach are the clear semantics of the features, which facilitate the interpretation of the model. Inspection of the model parameters allows us to gain insight into the typological features that are potentially involved in causing different types of ESL errors. Although such inspection is unlikely to provide a comprehensive coverage of all the relevant causes for the observed learner difficulties, it can serve as a valuable starting point for exploratory linguistic analysis and formulation of a cross-linguistic transfer theory. aged across the models of different languages, for the error types Missing Determiner and Missing Pronoun. 
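Before turning to the feature analysis, here is a compact sketch of the Reg/RegCA setup just described: binarize the WALS features, append divergence-from-English indicators, fit one OLS regressor per error type, and renormalize the predictions into a distribution. The WALS values and error frequencies are toy placeholders, and scikit-learn's LinearRegression stands in for the OLS estimation.

```python
# Sketch of the typology-driven regression (Reg / RegCA). WALS values, error
# frequencies, and the two error types below are toy placeholders; scikit-learn's
# LinearRegression stands in for the OLS estimation.
import numpy as np
from sklearn.linear_model import LinearRegression

def featurize(lang_wals, eng_wals, feature_values):
    """Binarize WALS features; append 0/1 divergence-from-English indicators (RegCA)."""
    feats = []
    for feat, values in feature_values.items():
        v = lang_wals.get(feat)                  # None if undocumented -> all zeros
        feats.extend(1.0 if v == val else 0.0 for val in values)
    for feat, eng_val in eng_wals.items():       # CA-style divergence features
        v = lang_wals.get(feat)
        feats.append(1.0 if v is not None and v != eng_val else 0.0)
    return np.array(feats)

def predict_error_distribution(train_X, train_Y, test_x):
    """Fit one OLS model per error type, then renormalize predictions to sum to 1."""
    raw = np.array([
        LinearRegression().fit(train_X, train_Y[:, e]).predict(test_x[None, :])[0]
        for e in range(train_Y.shape[1])
    ])
    raw = np.clip(raw, 1e-6, None)               # guard against negative estimates
    return raw / raw.sum()

if __name__ == "__main__":
    feature_values = {"Definite Articles": ["none", "definite word"],
                      "Order of S, O, V": ["SVO", "SOV"]}
    eng = {"Definite Articles": "definite word", "Order of S, O, V": "SVO"}
    train_langs = {
        "Russian":  {"Definite Articles": "none", "Order of S, O, V": "SVO"},
        "Japanese": {"Definite Articles": "none", "Order of S, O, V": "SOV"},
        "French":   {"Definite Articles": "definite word", "Order of S, O, V": "SVO"},
    }
    # toy relative frequencies over two error types
    train_Y = np.array([[0.30, 0.70], [0.35, 0.65], [0.10, 0.90]])
    train_X = np.stack([featurize(w, eng, feature_values) for w in train_langs.values()])
    test_x = featurize({"Definite Articles": "none", "Order of S, O, V": "SOV"},
                       eng, feature_values)
    print(predict_error_distribution(train_X, train_Y, test_x))
```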
In the case of determiners, the model identifies the lack of definite and indefinite articles in the native language as the strongest factors related to increased rates of determiner omission. Conversely, features that imply the presence of an article system in the native language, such as 'Indefinite word same as one' and 'Definite word distinct from demonstrative' are indicative of reduced error rates of this type. A particularly intriguing example concerns the Missing Pronoun error. The most predictive typological factor for increased pronoun omissions is pronominal subject marking on the verb in the native language. Differently from the case of determiners, it is not the lack of the relevant structure in the native language, but rather its different encoding that seems to drive erroneous pronoun omission. Decreased error rates of this type correlate most strongly with obligatory pronouns in subject position, as well as a verbal person marking system similar to the one in English. Thus far, we presupposed the availability of substantial typological information for our target languages in order to predict their ESL error distributions. However, the existing typological documentation for the majority of the world's languages is scarce, limiting the applicability of this approach for low-resource languages. We address this challenge for scenarios in which an unannotated collection of ESL texts au- thored by native speakers of the target language is available. Given such data, we propose a bootstrapping strategy which uses the method proposed in To put this framework into effect, we use the FCE corpus to train a log-linear model for native language classification using morpho-syntactic features obtained from the output of the Stanford Parser (de where l is the native language, x is the observed English document and θ are the model parameters. We then derive pairwise similarities between languages by averaging the uncertainty of the model with respect to each language pair: In this equation, x is an ESL document, θ are the parameters of the native language classification model and D l is a set of documents whose native language is l. For each pair of languages l and l the matrix S ESL contains an entry S ESL l,l which represents the average probability of confusing l for l , and an entry S ESL l ,l , which captures the opposite confusion. A similarity estimate for a language pair is then obtained by averaging these two scores: As shown in In the bootstrapping setup, we train the regression models on the true typology of the languages in the training set, and use the approximate typology of the test language to predict the relative error rates of its speakers in ESL. We present a computational framework for predicting native language specific grammatical error distributions in ESL, based on the typological properties of the native language and their compatibility with the typology of English. Our regression model achieves substantial performance improvements as compared to a language oblivious baseline, as well as a language dependent nearest neighbor baseline. Furthermore, we address scenarios in which the typology of the native language is not available, by bootstrapping typological features from ESL texts. Finally, inspection of the model parameters allows us to identify native language properties which play a pivotal role in generating different types of grammatical errors. 
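The bootstrapping step derives pairwise language similarities from the confusion behavior of the native-language classifier; the sketch below shows that computation for any classifier exposing `predict_proba`-style outputs (a stand-in for the log-linear model), with random values in place of real ESL documents.

```python
# Sketch: ESL-based language similarities from NLI classifier confusions.
# Random probabilities stand in for a real classifier's predict_proba output.
import numpy as np

def esl_similarity(proba, doc_langs, n_langs):
    """
    proba[i, j]  : P(language j | document i) from the NLI classifier
    doc_langs[i] : index of the true native language of document i
    Returns a symmetric n_langs x n_langs similarity matrix.
    """
    S = np.zeros((n_langs, n_langs))
    for l in range(n_langs):
        docs = proba[doc_langs == l]     # documents written by speakers of l
        S[l] = docs.mean(axis=0)         # average confusion of l for every l'
    return (S + S.T) / 2.0               # average the two confusion directions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    languages = ["French", "Spanish", "Russian"]
    doc_langs = rng.integers(0, 3, size=300)
    proba = rng.dirichlet(alpha=[1, 1, 1], size=300)   # stand-in for predict_proba
    print(np.round(esl_similarity(proba, doc_langs, len(languages)), 3))
```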
In addition to the theoretical contribution, the outcome of our work has a strong potential to be beneficial in practical setups. In particular, it can be utilized for developing educational curricula that focus on the areas of difficulty that are characteristic of different native languages. Furthermore, the derived error frequencies can be integrated as native language specific priors in systems for automatic error correction. In both application areas, previous work relied on the existence of error tagged ESL data for the languages of interest. Our approach paves the way for addressing these challenges even in the absence of such data.
Exploiting domain-slot related keywords description for Few-Shot Cross-Domain Dialogue State Tracking
Collecting dialogue data with domain-slot-value labels for dialogue state tracking (DST) can be a costly process. In this paper, we propose a novel framework based on domain-slot related descriptions to tackle the challenge of few-shot cross-domain DST. Specifically, we design an extraction module that extracts domain-slot related verbs and nouns from the dialogue. We then integrate them into the description, which prompts the model to identify the slot information. Furthermore, we introduce a random sampling strategy to improve the domain generalization ability of the model. We utilize a pre-trained model to encode the context and description and generate answers in an auto-regressive manner. Experimental results show that our approach substantially outperforms existing few-shot DST methods on MultiWOZ and achieves strong improvements in slot accuracy compared to existing slot description methods.
Dialogue state tracking (DST) is an essential component of a task-oriented dialogue system. It aims to keep track of users' domain, intent, and slot information at each turn of the conversation, which provides sufficient information for selecting the next system operation (1) Modular methods
In this work, we proposed a simple but efficient framework named Domain-slot Related Information Awareness method (DRIA) based on the domain-slot related keywords extraction module and a random sampling strategy. Specifically, for the extraction module, we first use TF-IDF algorithm Our contributions are summarized as follows:(1) We propose an effective framework to construct domain-slot related keywords descriptions. To the best of our knowledge, we are the first to incor-porate keywords information into the DST task. (2) We design a random sampling training strategy to integrate rich domain-slot related information during the training, which aims to improve generalization ability. (3) Experimental results show that our method outperforms most of the previous methods in the cross-domain few-shot DST settings, especially in the slot accuracy. As shown in the figure The procedure of this extraction module is divided into three steps: (1) We traverse the entire dataset. For each domain-slot, if the domain-slot is mentioned in a turn of dialogue, mark each token's part of speech in this turn In order to improve the domain generalization ability of the model, we propose a random sampling strategy which enriches the content of description during training. As shown in the figure In this section, we define the dialogue history C t which is the accumulation of dialogues from the beginning to the current turn t. Each turn of dialogue is composed of the system and the user's utterance. We record the dialogue history as C t = {M 1 , N 1 , ..., M n , N n }, where t stands for the conversation turn, M and N denotes the system and user, respectively. The i-th input of the model is composed of the dialogue history and the description of the i-th domain-slot: where [sep] indicates connector. The i-th output is the value of the i-th domain-slot corresponding to the description in the conversation status in the turn T. If there is no slot value in the conversation turn, the output is "None": Finally, we use cross entropy as loss function. First,we utilize the extraction module to get the keywords list of each domain-slot. During the training process, the keywords are extracted to build the description according to the random sampling strategy. In each turn, we traverse the description of each slot and connect the description and the context as the input. Then the model outputs the corresponding results. During the evaluation stage, the extracted keywords will be used to build the description, and the other steps are roughly the same as those in training stage. Note that during the few-shot domain fine-tuning, we randomly select (1%, 5%, 10%) of dataset for keyword extraction, and then use the same data for training. We use T5-small 3 Experiments MultiWOZ 2.0 dataset (1) TRADE: Transferable dialogue state generator (2) DSTQA: Dialogue state tracking via question answering over ontology graph To ensure model consistency with T5DST (1) Naive: Simple transformation of the slot name from "domain-slot" pair to "[slot] of the [domain]". (2) Slot Type: A template for each slot type that follows "[slot type] of [slot] of the [domain]" to facilitates the knowledge transfer among different slots. (3) Slot related verbs & noun: The format of the description is" The [domain] of the [slot] which may include {slot_v} or {slot_n}, and its output type is [output_type]". Note that "[output_type]" follows the format of "slot type" Table Slot Accuracy Analysis. Figure Case Studies. 
To further illustrate the effectiveness of our framework, figure In this paper, we propose a simple but effective framework to tackle the few-shot cross-domain DST challenge. Specifically, we propose DRIA based on T5. This framework incorporates domain-slot related information into the description to help the model distinguish the domain-slot more clearly. Further, we propose a random sampling strategy that enriches the content of the description during training to improve the domain generalization ability of the model. Results on the MultiWOZ dataset show that our method outperforms most previous methods in the cross-domain few-shot DST setting. This work has two main limitations: (1) The keywords are obtained with statistical methods, so some dialogues may contain a certain slot even though the corresponding keyword does not appear; in this case, the extracted keyword may be counterproductive for the model. (2) The input length of the T5 model
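For concreteness, the description construction and T5 input format described in this section can be sketched as follows; the template wording follows the "slot related verbs & nouns" format quoted earlier, while the sampling sizes and the `[sep]` handling are illustrative assumptions.

```python
# Sketch of DRIA-style description building with random keyword sampling.
# The template follows the "slot related verbs & nouns" format quoted in the text;
# sampling sizes and the [sep] token handling are illustrative assumptions.
import random

def build_description(domain, slot, slot_verbs, slot_nouns, output_type,
                      k_verbs=2, k_nouns=2, rng=random):
    """Randomly sample keywords (training-time strategy) and fill the template."""
    verbs = rng.sample(slot_verbs, min(k_verbs, len(slot_verbs)))
    nouns = rng.sample(slot_nouns, min(k_nouns, len(slot_nouns)))
    return (f"The {domain} of the {slot} which may include {' or '.join(verbs)} "
            f"or {' or '.join(nouns)}, and its output type is {output_type}")

def t5_input(dialogue_history, description, sep="[sep]"):
    """Concatenate dialogue history and description as the encoder input for T5."""
    return f"{' '.join(dialogue_history)} {sep} {description}"

if __name__ == "__main__":
    random.seed(0)
    history = ["system: how can I help?", "user: I need a cheap hotel in the north."]
    desc = build_description("hotel", "price range",
                             slot_verbs=["pay", "cost"],
                             slot_nouns=["price", "budget"],
                             output_type="categorical")
    print(t5_input(history, desc))   # the target output would be "cheap" (or "None")
```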
Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists
Most current approaches in phylogenetic linguistics require as input multilingual word lists partitioned into sets of etymologically related words (cognates). Cognate identification is so far done manually by experts, which is time-consuming and as of yet only available for a small number of well-studied language families. Automating this step will greatly expand the empirical scope of phylogenetic methods in linguistics, as raw wordlists (in phonetic transcription) are much easier to obtain than wordlists in which cognate words have been fully identified and annotated, even for under-studied languages. Several methods have been proposed in the past, but they are either disappointing in their performance or not applicable to larger datasets. Here we present a new approach that uses support vector machines to unify different state-of-the-art methods for phonetic alignment and cognate detection within a single framework. Training and evaluating this method on a typologically broad collection of gold-standard data shows it to be superior to the existing state of the art.
Computational historical linguistics is a relatively young sub-discipline of computational linguistics which uses computational methods to uncover how the world's 7 000 human languages have developed into their current shape. The discipline has made great strides in recent years. Exciting progress has been made with regard to automated language classification In the typical scenario, the researcher has obtained a collection of multilingual word lists in phonetic transcription (e.g. from field research or from dictionaries) and wants to classify them according to cognacy. Such datasets usually cover many languages and/or dialects (from scores to hundreds or even thousands) but only a small number of concepts (often the 200-item or 100-item Swadesh list or subsets thereof). The machine learning task is to perform cross-linguistic clustering. There exists a growing body of gold standard data, i.e. multilingual word lists covering between 40 and 210 concepts which are manually annotated for cognacy (see Methods section for details). This suggests a supervised learning approach. The challenge here is quite different from most machine learning problems in NLP though since the goal is not to identify and deploy language-specific features based on a large amount of mono-or bi-lingual resources. Rather, the gold standard data have to be used to find cross-linguistically informative features that generalize across arbitrary language families. In the remainder of this paper we will propose such an approach, drawing on and expanding related work such as
Cognate detection is a partitioning task: a clustering task which does not necessarily assume a hierarchy. An early approach An alternative family of approaches to cognate detection circumvents this problem by first calculating distances or similarities between pairs of words in the data, and then feeding those scores to a flat clustering algorithm which partitions the words into cognate sets. This workflow is very common in evolutionary biology, where it is used to detect homologous genes and proteins More important than the clustering algorithm one uses is the computation of pairwise similarity scores between words. Here, different measures have been tested, ranging from simple string distance metrics Benchmark data for training and testing was assembled from different previous studies and considerably enhanced by unifying semantic and phonetic representations and correcting numerous errors in the datasets. Our collection was taken from six major sources covers datasets ranging between 100 and 210 concepts translated into 5 to 100 languages from 13 different language families. Modifications introduced in the process of preparing the datasets included (a) the correction of errata (e.g. orthographic forms in place of phonetic representations), (b) the replacement of non-IPA symbols with their IPA counterparts (e.g. t → ú or ' → P), (c) the removal of non-IPA symbols used to convey meta-information (e.g. %), (d) removal of extraneous phonetic representation variants, and (e) the removal of morphological markers. In addition, all concept labels in the different datasets were linked to the Concepticon ( Unlike many other supervised or semi-supervised clustering tasks, the set of cluster labels to be inferred is disjoint from the gold standard labels. Therefore we chose a two-step procedure: (1) A similarity score for each pair of synonymous words from the same dataset is inferred using supervised learning, and (2) these inferred similarities are used as input for unsupervised clustering. As for subtask (1), the relevant gold standard information are the labels "cognate" and "not cognate" for pairs of synonymous words. The sub-goal is to predict a probability distribution over these labels for unseen pairs of synonymous words. This is achieved by training a Support Vector Machine (SVM), followed by Platt scaling The gold standard data were split into a training set and a test set. Feature selection for subtask (1) and parameter training for subtask (2) were achieved via cross-validation over the train-ing data. For evaluation, we trained an SVM on all training data and used it to perform automatic clustering on the test data. The remainder of this section spells out these steps in detail. Our strategy is to first calculate string similarities and distances between pairs of words denoting the same concept and then inferring a partition of the corresponding words from those similarities or distances via a partitioning algorithm. For word comparison we utilize two recently proposed string similarity measures. The first string similarity measure is the one underlying the above-mentioned LexStat algorithm for automatic cognate detection where S AB is the similarity score of an alignment of two words A and B produced by the SCA method, and S A and S B are the similarity scores produced by the alignment of A and B with themselves. 
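The normalization formula referenced above was lost in extraction; assuming the common SCA/LexStat-style form 2·S_AB / (S_A + S_B), a minimal sketch looks like the following, with a simple identity-based scorer standing in for SCA sound-class scoring.

```python
# Sketch of a normalized alignment similarity in the spirit of LexStat/SCA.
# The normalization 2*S_AB / (S_A + S_B) is an assumption (the formula is elided
# above), and a simple identity-based scorer stands in for SCA sound-class scoring.
def score(a, b, match=1.0, mismatch=-1.0):
    return match if a == b else mismatch

def align_score(w1, w2, gap=-1.0):
    """Needleman-Wunsch global alignment score over two sound sequences."""
    n, m = len(w1), len(w2)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + score(w1[i - 1], w2[j - 1]),
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

def normalized_similarity(w1, w2):
    """S_AB normalized by the self-alignment scores S_A and S_B."""
    s_ab = align_score(w1, w2)
    s_a, s_b = align_score(w1, w1), align_score(w2, w2)
    return 2.0 * s_ab / (s_a + s_b)

if __name__ == "__main__":
    # rough segmented transcriptions: the cognate pair scores higher
    print(normalized_similarity(list("hant"), list("hand")))   # ~0.5
    print(normalized_similarity(list("hant"), list("mano")))   # ~0.0
```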
The PMI score of two sound classes a, b is defined as where s(a, b) is the probability of a and b being aligned to each other in a pair of cognate words, and q(a), q(b) are the probabilities of occurrence of a and b respectively. Sound pairs with positive PMI score provide evidence for cognacy, and vice versa. To estimate the likelihood of sound class alignments, a corpus of probable cognate pairs was compiled from the ASJP data base In the last step, app. 1.3 million probable cognate pairs were used to estimate the final PMI scores. The PMI scores thus obtained are visualized in Figure strings w 1 , w 2 is then defined as minimal aggregate PMI score for all possible alignments. It can be computed efficiently via the Needleman-Wunsch algorithm. There are major conceptual differences on how the two similarity measures are derived. LexStat similarity estimates separate scores between each pair of doculects, thus utilizing regular sound correspondences, while PMI similarity uses the same PMI scores regardless of the doculects compared. LexStat alignments further capture a prosodic tier which allows for a rough modeling of phonetic context and reflects theories on the importance of phonetic strength in sound change processes The joint distribution of LexStat and PMI string similarities for cognate and non-cognate pairs within our training set is visualized in Figure In this study, we utilized both string similarity measures discussed above, as well as a collec- tion of auxiliary predictors pertaining to the similarity of the doculects compared and the differential diachronic stability of lexical meanings, to infer cognate classifications. We chose a supervised learning approach using a Support Vector Machine (SVM) for this purpose. The overall workflow is shown in Figure During the first phase (the upper part in the figure shown in red), a SVM is trained on a set of training data and then used to predict the probability of cognacy between pairs of words from a set of test data. During the second phase (lower part in the figure, shown in green), those probabilities are used to cluster the words from the test set into inferred cognacy classes. The system is evaluated by comparing the inferred classification with the expert classification. We used the three largest data sets at our disposal (cf. the datasets colored in red in Table Each data point during the first phase is a pair of words w 1 , w 2 (i.e., a pair of phonetic strings) from doculects L 1 , L 2 from data set S, both denoting the same concept c. It is mapped to a vector of values for the following features: similarity and doculect similarity across all word pairs denoting concept c within S. The marginal distributions for cognate and noncognate pairs of those features (for the data from As the data points within the training set are mutually non-independent, we randomly chose one word pair per concept and data set for training the SVM. During the training phase, we used crossvalidation over the data sets within the training set (i.e., using one training data set for validation and the other training data sets for SVM training) to identify the optimal kernel and its optimal parameters. This was carried out by completing both phases of the work flow and optimizing the Adjusted Rand Index (see Subsection 4.5) of the resulting classification. 
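Stepping back to the PMI similarity defined at the start of this passage, whose formula is also elided here: assuming the standard form pmi(a, b) = log( s(a,b) / (q(a) q(b)) ), the scores can be estimated from aligned probable-cognate pairs as sketched below. The add-one smoothing is an illustrative choice, not taken from the paper.

```python
# Sketch: estimating sound-class PMI scores from aligned probable-cognate pairs.
# Assumes pmi(a, b) = log( s(a,b) / (q(a) * q(b)) ); add-one smoothing is an
# illustrative choice, not taken from the paper.
import math
from collections import Counter

def estimate_pmi(aligned_columns):
    """aligned_columns: iterable of (sound_a, sound_b) alignment columns (no gaps)."""
    pair_counts, sound_counts = Counter(), Counter()
    for a, b in aligned_columns:
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1          # symmetrize the correspondence counts
        sound_counts[a] += 1
        sound_counts[b] += 1
    total_pairs = sum(pair_counts.values())
    total_sounds = sum(sound_counts.values())
    vocab = list(sound_counts)
    pmi = {}
    for a in vocab:
        for b in vocab:
            s_ab = (pair_counts[(a, b)] + 1) / (total_pairs + len(vocab) ** 2)
            q_a = sound_counts[a] / total_sounds
            q_b = sound_counts[b] / total_sounds
            pmi[(a, b)] = math.log(s_ab / (q_a * q_b))
    return pmi

if __name__ == "__main__":
    columns = [("h", "h"), ("a", "a"), ("n", "n"), ("t", "d"),
               ("h", "h"), ("a", "e"), ("n", "n"), ("d", "d")]
    scores = estimate_pmi(columns)
    # regular correspondence (t:d) gets a positive score; unrelated pair does not
    print(round(scores[("t", "d")], 2), round(scores[("h", "a")], 2))
```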
Training and prediction was carried out using the svm module from the Python package sklearn ( In order to cluster the words into sets of potentially cognate words, we follow recent approaches by For each data set D and each concept c covered in D, a network was constructed. The vertices are all words from D denoting c. Two vertices are connected if and only if the corresponding words are predicted to be cognate with a probability ≥ θ according to SVM prediction + Platt scaling. The optimal value for θ was determined as 0.66 via cross-validation over the training data. Infomap was then applied to this network, resulting in an assignment of class labels to vertices/words. We used two evaluation measures to compare inferred with expert classifications on the test data. The Adjusted Rand Index (ARI, We took the original LexStat algorithm as a baseline with which we compare our results. LexStat provides a good baseline, since it was shown to outperform alternative approaches like the above-mentioned CCM approach The evaluation results are given in Table The plot in Figure In order to get a clearer impression on where our algorithm failed, we compared false positives and negatives in the Indo-European data The classical methods for the identification of cognate words in genetically related languages are based on the general idea that relatedness can be rigorously proven. This requires that the languages under investigation have retained enough similarity to identify regular sound correspondences. The further we go back in time, however, the less similarities we find. The fact that an algorithm like LexStat, which closely mimics the classical comparative method in historical linguistics, needs at least 100 (if not more) concepts in order to yield a satisfying performance reflects this problem of data sparseness in historical linguistics. One could argue that a serious analysis in historical linguistics should never be carried out if data are too sparse. As an alternative to this agnostic attitude, however, one could also try to work on methods that go beyond the classical framework, adding a probabilistic component, where data are too sparse to yield undisputable proof. In this paper, we have tried to make a first step into this direction by testing the power of machine learning approaches with state-of-the-art measures for string similarity in quantitative historical linguistics. The fact that our approach outperforms existing automatic approaches shows that this direction could prove fruitful in future research. and the DFG research fellowship grant 261553824 Vertical and lateral aspects of Chinese dialect history (JML). We also thank all scholars who contributed to this study by sharing their data.
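For reference, the two-phase workflow described in this section, an SVM with Platt scaling for pairwise cognacy probabilities followed by clustering at the threshold θ = 0.66, can be sketched as follows; connected components stand in for Infomap community detection, and the pairwise features and word labels are toy placeholders.

```python
# Sketch of the two-phase workflow: SVM + Platt scaling for pairwise cognacy
# probabilities, then clustering words whose pairwise probability >= 0.66.
# Connected components stand in here for Infomap community detection.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.metrics import adjusted_rand_score

THETA = 0.66

def train_pairwise_svm(X_train, y_train):
    """probability=True enables Platt scaling inside scikit-learn's SVC."""
    return SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def cluster_concept(words, pair_features, svm):
    """Cluster all words for one concept from pairwise cognacy probabilities."""
    g = nx.Graph()
    g.add_nodes_from(range(len(words)))
    for (i, j), feats in pair_features.items():
        p_cognate = svm.predict_proba(np.array(feats)[None, :])[0, 1]
        if p_cognate >= THETA:
            g.add_edge(i, j)
    labels = {}
    for cid, component in enumerate(nx.connected_components(g)):
        for node in component:
            labels[node] = cid
    return [labels[i] for i in range(len(words))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy 2-D "similarity features"; label 1 = cognate pair, 0 = non-cognate pair
    X = np.vstack([rng.normal(1.0, 0.3, (50, 2)), rng.normal(-1.0, 0.3, (50, 2))])
    y = np.array([1] * 50 + [0] * 50)
    svm = train_pairwise_svm(X, y)
    words = ["hand_de", "hand_en", "main_fr", "mano_es"]
    pair_features = {(0, 1): [1.1, 0.9], (0, 2): [-1.0, -1.1], (0, 3): [-0.9, -1.0],
                     (1, 2): [-1.2, -0.8], (1, 3): [-1.1, -0.9], (2, 3): [0.8, 1.2]}
    predicted = cluster_concept(words, pair_features, svm)
    gold = [0, 0, 1, 1]
    print(predicted, "ARI =", adjusted_rand_score(gold, predicted))
```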
Representation and Generation of Machine Learning Test Functions
Writing tests for machine learning (ML) code is a crucial step towards ensuring the correctness and reliability of ML software. At the same time, Large Language Models (LLMs) have been adopted at a rapid pace for various code generation tasks, making it a natural choice for many developers who need to write ML tests. However, the implications of using these models, and how the LLM-generated tests differ from human-written ones, are relatively unexplored. In this work, we examine the use of LLMs to extract representations of ML source code and tests in order to understand the semantic relationships between human-written test functions and LLM-generated ones, and annotate a set of LLM-generated tests for several important qualities including usefulness, documentation, and correctness. We find that programmers prefer LLM-generated tests to those selected using retrieval-based methods, and in some cases, to those written by other humans.
As AI and ML become more and more integrated into everyday processes, ensuring the quality and reliability of these ML models is mandatory, and a critical part of ensuring ML models' performance in production is having good, representative test cases. Traditionally, these tests have been written by engineers and programmers, a process that, while valuable, can be time-consuming and requires extensive experience and expertise in ML methodology. Recognizing the challenges posed by the intricacies of ML code, particularly the distinct nature of ML testing involving both pre-training and post-training tests, our research takes a deliberate focus on this specific domain. This choice serves to constrain the scope of our investigation and allows us to address the unique complexities associated with ML testing, which often deviates from conventional software testing. One possible way to aid programmers is to retrieve existing functions that have been previously implemented, similarly to what has been done for test case selection within a test suite In this work, we make initial steps toward comparing the ML test functions that are generated by LLMs with those generated by human programmers to better anticipate the consequences of a growing number of ML test functions being generated automatically by LLMs. Using a set of approximately 10,000 pairs of ML functions and their tests, we use code embedding methods to explore the semantic relationships between functions and their tests. We then experiment with semantic retrieval-based approaches to find relevant ML tests given an input test function, and finally, we compare several models' ability to generate useful ML test functions and evaluate them using expert human annotations. An overview of the process that we used is presented in Figure
The focus on learning distributed representations of code forms the groundwork of our research. We draw from It's also important to mention the effort on benchmarking datasets like CodeSearchNet Language Models on Source Code Substantial research has been invested in revealing the power of LLMs in dealing with code-related tasks, from code summarization to test generation and beyond. Supported by billions of trainable parameters and extensive publicly available source code, models like StarCoder Additionally, a previous study These works offer valuable insights into the effectiveness of these emerging models, highlighting their capabilities in understanding syntax, pattern recognition, and automation, while also bringing to light their limitations, such as their lack of true understanding, difficulty with complex logic, and challenges with generalizability and interpretability when interacting with code. However, previous applications haven't focused on the unique properties of ML tests We collected a dataset of 56,889 test files extracted from 986 different GitHub ML projects written in Python using the GitHub API In order to link ML functions and their corresponding tests, we applied several heuristics to automate the extraction process: While these rules may filter out some valid test cases, we selected them in order to aim for a high precision in terms of returning a quality set of pairs between focal methods and tests. In this work, we refer to an ML function undergoing testing as a "focal method", and its corresponding ML test case a "test". We also removed some pairs (approximately 150) that contained accents, emojis, or symbols like progress bars, which made them more difficult to process. After applying the heuristics defined above, we were left with 10,324 (focal_method, test) pairs. Around 5% of the focal methods have multiple tests, while the tests themselves are unique to the project and no test is considered to be testing multiple methods. Certain types of pairs could not be collected, e.g., when a test is testing the behavior of a predefined model or functions that are not defined within the project. To evaluate this process, we selected a random sample of 100 (focal_method, test) pairings and manually labeled whether each pairing was correct, meaning that the test does test the function it was associated with, and found that the pairing method was 95% accurate. and Retrieval Task To focus on the relationship between the focal methods and their associated tests, we created embeddings for each focal method using models trained on both code and natural language data. These models included CodeBERT An essential aspect of our exploration involved understanding the semantic relationships between pairs of focal methods' and associated tests' vector representations. Each of the models we used produced embeddings with different shapes (Code-BERT: 768, LLaMA-1: 4096, text-embedding-ada-002: 1536), but for the purpose of visualization, we used Principal component analysis (PCA) to reduce their dimensions to (2). We visualized these pairings using an arrow plot where each focal method embedding is connected to its corresponding test embedding to inspect potential relationships between them. Figure To confirm our visual findings, we ran a permutation test with the text-embedding-ada-002 embeddings. 
The test statistic used in our case was the mean cosine similarity between corresponding vectors in the set of tests and the set of focal methods, and the number of permutations was set to 10,000. In each permutation, each test was assigned a random focal method to be paired with, and the mean cosine similarity was computed between all pairs. Our results showed that: p_value ≈ 0.0, indicating that the mean cosine similarity between the actual pairs was extremely unlikely to have occurred by chance, and there is some significant relationship between the pairs. Therefore, it may be possible to develop a retrieval model that leverages this relationship in order to find relevant test cases given an input focal method. Based on the results of our permutation test, we next sought to explore whether the closest test embedding to a focal method embedding was its corresponding test embedding. To test this, we used KNN with cosine as a distance metric, to find the closest K tests embeddings to each focal method embedding and see if one of them is indeed its corresponding test embedding. We then performed a comparative analysis using top-K accuracy for K ∈ {1, 5, 10}. Our investigation included the evaluation of the performance of Code-BERT, LLaMA-1 7B, and Text-embedding-ada-002 models. The results are shown in the table 1. Results indicated that the OpenAI Textembedding-ada-002 model stood out with the high- est accuracy for each value of K, showcasing its ability to capture effectively the code semantics. In contrast, LLaMA-1's performance was comparatively weaker, while CodeBERT yielded the lowest accuracy. The results we obtained motivated us to explore more and see if we could train an NN to approximate the test embeddings given the focal method embeddings. We constructed an NN using Tensor-Flow's Keras 3 API. We used a sequential NN architecture with five fully connected layers and ReLU activation functions. We used 80% of the data for training, while the remaining 20% was used for testing, and Mean Squared Error (MSE) Loss was used. To evaluate the performance of the NN, we used KNN with cosine metric to find the N closest tests embeddings to the predicted vector given the focal method embedding. We then checked if the corresponding focal method embedding of the test embedding is among those K nearest neighbors and calculated the top-K accuracy scores, and the results are presented in Table Comparing the two tables 2 and 1, we observed that the NN-based approach had lower accuracy scores than the proximity-based approach for the text-embedding-ada-002 model. However, for the LLaMA-1 7B and CodeBERT models, the accuracy scores improved with the NN-based approach. Despite the accuracy improvements for CodeBERT and LLaMA-1 7B with the NN-based approach, all three models maintained the same ranking based on their accuracy rates. 5 Test Cases Generation Task 5.1 Assessing GPT-3.5-Generated Test Cases in Comparison with Human-Generated Tests Given the popularity of LLMs for code generation, especially GPT-3.5, we chose to investigate how well these types of models, can generate test cases for ML code. We generated cases for all of our ML 3 Initial analysis measuring the average lines of code and comments in the test functions, as reported in table 3, unveiled that GPT-3.5 tends to create longer (in terms of number of lines) test cases with fewer comments than humans. Additionally, both GPT-3.5 and humans occasionally omitted the function call within their test cases. 
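The permutation test and the cosine-based top-K retrieval can be sketched as below; random vectors stand in for the real embeddings, and details such as the normalization and the one-sided p-value are assumptions.

```python
# Sketch: permutation test on mean pairwise cosine similarity, plus top-K
# retrieval accuracy for focal-method -> test matching. Random vectors stand
# in for the real CodeBERT / LLaMA / text-embedding-ada-002 embeddings.
import numpy as np

def normalize(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def mean_paired_cosine(F, T):
    return float((normalize(F) * normalize(T)).sum(axis=1).mean())

def permutation_test(F, T, n_perm=10_000, seed=0):
    """Shuffle the test side n_perm times and compare to the observed statistic."""
    rng = np.random.default_rng(seed)
    observed = mean_paired_cosine(F, T)
    null = np.array([mean_paired_cosine(F, T[rng.permutation(len(T))])
                     for _ in range(n_perm)])
    return observed, float((null >= observed).mean())   # one-sided p-value

def top_k_accuracy(F, T, k=5):
    """Fraction of focal methods whose true test is among its k nearest tests."""
    sims = normalize(F) @ normalize(T).T                 # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]
    return float(np.mean([i in topk[i] for i in range(len(F))]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F = rng.normal(size=(300, 256))
    T = F + rng.normal(scale=1.0, size=F.shape)          # noisy paired "tests"
    obs, p = permutation_test(F, T, n_perm=2000)
    print(f"mean cosine = {obs:.3f}, p = {p:.4f}")
    for k in (1, 5, 10):
        print(f"top-{k} accuracy = {top_k_accuracy(F, T, k):.3f}")
```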
Notably, 4.6% of GPT-3.5 tests and 3.28% of human tests lacked the call for the focal method. This can be explained by the diverse scenarios of unanticipated GPT-3.5 test case generation outcomes such as when the test case consisted of a pass statement only, when the generated code was not a test function, or when GPT-3.5 replicated the code of the focal method when tasked with generating a test case. For further investigation, we used the model textembedding-ada-002, since it performed the best with our retrieval task, to generate embeddings for the GPT-3.5-generated test cases as well. Using PCA dimensionality reduction technique, we performed visualization to detect if there are some differences between human-generated test embeddings and GPT-3.5-generated test embeddings that are potentially visible. We created scatter plots of the reduced embeddings, as shown in Figure To quantitatively confirm our findings, we ran a t-test, to determine if there is a significant difference between the means of the embeddings of tests generated by Humans and the tests generated by GPT-3.5. The computed t-statistic values were very close to zero, indicating a minimal variance in means between the Human and GPT-3.5 test embeddings. Consequently, the p-values were nearly 1, far exceeding our significance level of α = 0.05. Consequently, we fail to reject the null hypothesis (There is no difference between the means of our two samples). The outcomes of our t-test suggest that statistically speaking, the means of the Human and GPT-3 test embeddings do not display a sig-nificant statistical difference. This outcome does not imply that they are identical (as there may be divergences in other parameters like standard deviation, minimum, maximum, etc.). However, it does signify that, from a statistical perspective, we lack evidence to affirm their difference. With that being said, GPT-3.5 tests seem to be very similar to human tests, according to what can be measured using embeddings, which might not represent every facet of the tests. As visualization did not help much capture the differences between both test groups, we conducted a survey to understand which test cases developers and data scientists found more helpful for ML test case generation. 6.1 Survey Methodology We created four different variations of the survey with the possibility for one person to respond to more than one. Each variation of the survey had 5 ML functions extracted from 5 different GitHub projects, each with 5 accompanying test cases. So overall, there were 20 different ML functions from 20 different GitHub Projects and a total of 100 test cases. Upon the emergence of newer LLMs such as GPT-4 and LLaMA-2, and recognizing their potential in test case generation for ML code, we aimed to explore their capabilities as well. To manage costs associated with API calls, we opted not to generate test cases for all of our ML functions using GPT-4. Due to the smaller sample size required for the survey, we managed to use both GPT-4 and LLaMA-2 (with 70 billion parameters) in order to compare these other large models with GPT-3.5. The 5 accompanying test cases for each ML function were the human-generated test for that function, the GPT-3.5-generated test, the retrieved test, the LLaMA-2-generated test (70B), and the GPT-4-generated test. Both GPT-4 and LLaMA-2 (70B) tests were generated by invoking the same prompt used to generate tests using GPT-3.5. 
To provide the retrieved test, we followed the method that we described in section 4.2, only this time, when seeking the closest test embedding to the focal method embedding from all human-generated test cases, we purposely excluded the test cases originating from the same project as the focal method embedding. By doing so, we simulated an environment wherein our system had not encountered the project before. The process of selecting the ML functions used in the survey involved a random selection from functions that had a comment section that clarified the function's objective so that it was easier for survey takers to understand the code. Furthermore, we made sure that we were certain that the associated human test was correctly paired, eliminating cases that could be considered as noise. Moreover, participants were not provided with links to the associated GitHub projects. This decision was made to ensure fairness, as both the participants and AI assistants may or may not have had prior exposure to these projects. However, since all functions had comments, participants were able to read about the intended purpose of the function. Our survey starts with inquiries about participants' backgrounds, asking for their experience in ML and software testing, prior usage of AI tools for generating test cases, and more. Afterward, participants were presented with a hypothetical scenario wherein they were tasked with writing a test case for an ML function, and five distinct AI assistants provided example test cases to help them write it. Participants were then requested to evaluate each option based on helpfulness, correctness, and readability. The test cases were labeled as test_A, test_B, test_C, test_D, and test_E. For instance, test_A represented the test generated by humans, while test_B, test_C, test_D, and test_E corresponded to GPT-3.5, retrieved, LLaMA-2 (70B), and GPT-4 generated tests, respectively. Participants did not know the true identity of any of the systems. To eliminate any potential biases, we applied shuffling of system labels across the various survey versions. At the survey's conclusion, participants were asked to indicate their preferred system. Our survey enlisted participants from diverse groups including researchers, students, ML engineers, and software developers. To prevent any potential bias, individuals within the same group responded to distinct survey variations. This approach ensured that each survey variant collected responses from a range of groups, avoiding biased results. The participants completed the survey on a voluntary basis and were recruited from the social networks and university groups of the authors' universities in both the United States and North Africa. Our survey was completed by 17 participants from diverse backgrounds. With each survey containing 5 test cases, a cumulative 425 evaluations of test cases was reported. The results revealed that the largest group of participants was students at 41.2%, followed by researchers and software developers at 23.5%, and ML engineers who constituted 11.2% of the participants. Over 64% of our participants had at least 1 year of experience in ML, and over 47% of them had at least 1 year of experience in Software Testing. This overall experience makes them adequate for the evaluation of ML test cases. Surprisingly, the majority of the participants have never used an AI tool to generate test cases before. 
The few who did mentioned that they have used ChatGPT or Testsigma Throughout our survey, we asked participants to evaluate each test case individually on a scale of 1 For correctness, readability, documentation, helpfulness, and preference distribution scores, the highest is best. For the rank chosen, the lowest is the best. to 5, considering two criteria: Correctness Additionally, we asked our participants to imagine that they needed to write a test case for the target function, and then to rank each 5 test cases associated with the same ML project based on their helpfulness as a reference or starting point for writing a test case for the provided ML function. The averages of participants' scores for each criterion were calculated and summarized in Table where n is the number of the ranked elements and rank i is the rank assigned for the element i. Despite having some criteria that led to strong correlations, the reported results reveal that GPT-4 achieved the highest scores in Correctness, Documentation, and Helpfulness. On the other hand, LLaMA-2 (70B) Also with very closely matched scores, we find human-generated tests and GPT-3.5-generated tests. Even though human-generated tests slightly outperformed the GPT-3.5 model in terms of Correctness, Readability, and Helpfulness ratings, their scores are still very close. This might confirm the idea first presented in Section 5.2: GPT-3.5 and human tests are similar, with a small but noticeable difference (as suggested by their different scores) that is not captured by embedding similarity. At last, retrieved tests attained the lowest scores, resulting in a fifth-place ranking. This suggests that participants found all generative models to appear more helpful than the actual test functions that had been written to test similar ML functions. As a final question in our survey, we inquired about participants' preferred system overall. Our results revealed that the majority of our participants at 41,2% preferred GPT-4-generated tests, followed by 35,3% opting for LLaMA-2-generated tests, while the rest split up between human-generated and GPT-3.5-generated tests, with no preference for retrieved tests. Individuals with over one year of experience in ML and software testing preferred tests generated by humans and LLaMA-2 (70B) more often than others. This suggests that there may be something lacking in tests generated by GPT-4, which is only apparent to those with more experience. While this trend is interesting, it should be taken with caution due to the limited sample size. To confirm this pattern, additional data is required, making it a potential area for future work. In summary, the GPT-4 and LLaMA-2 (70B) models excel in generating apparently correct, readable, and helpful tests. Given that a majority of participants indicated that they haven't used AI tools for test generation previously, this suggests they might benefit from using them for such tasks. In this work, we employed state-of-the-art NLP techniques to generate effective representations for ML source and test code. We developed a heuristic method to build a good-quality dataset of ML function-to-test mappings, forming the basis for generating these representations. We have studied these representations through visualization by leveraging a couple of dimensionality reduction methods, and we have successfully captured some patterns, that we later confirmed. 
Our findings revealed an interesting insight: the CodeBERT model struggled to capture test case semantics compared to other recent GPT embeddings. We also explored the practicality of these representations for retrieving an ML test case given an ML method. Surprisingly, even state-of-the-art NLP models faced challenges in this task. We also assessed the performance of LLMs in automatically generating test cases, which revealed that some of these models outperformed human-generated tests in terms of helpfulness. It's important to acknowledge the potential weaknesses in our original dataset. Firstly, it is important to acknowledge that the quality of the collected tests may vary, as not all developers write equally comprehensive or effective tests. This variability in test quality introduces a degree of uncertainty in the dataset. Additionally, the dataset consists of projects of varying sizes. As a result, some projects are larger than others, providing a bigger pool of tests for extraction. This discrepancy in project sizes could potentially impact the representation and diversity of the dataset. Furthermore, it is worth noting that a subset of tests in the dataset may be minimal, such as those with the content def test(): pass. These minimal tests lack substantial functionality and may not contribute significantly to the overall depth of the dataset. It is also essential to acknowledge the limitations inherent in our dataset's size, which does not cover a variety of languages and was selected to increase the precision of paired functions and tests rather than to maximize coverage. Lastly, it is important to acknowledge that while the dataset primarily focuses on ML tests, it is challenging to definitively determine if all tests exclusively pertain to ML functionalities rather than general software testing. Due to the inherent complexity and interplay between ML and software testing, there may be instances where tests encompass aspects beyond pure ML functionalities. Also, for our retrieval task, and while the proximity-based approach yielded promising results, the NN-based approach might still have room for improvement potentially through refining the neural network architecture or optimization techniques. Further, a retrieval augmented generation (RAG) approach might be useful in order to gain the benefits from both the retrieval and generationbased approaches. Recognizing the limitations inherent in our survey findings is also important. To begin, participants didn't have the opportunity to execute the provided code within the survey and didn't have access to the whole repository, compelling them to rely on their intuition and expertise only for evaluating the various systems. Moreover, it is crucial to acknowledge that the survey exclusively measures the perceived correctness of the tests. Actual execution of the tests to determine their functional accuracy could provide a more robust evaluation. Additionally, while the survey's participant count is relatively modest, it remains representative. However, it's worth noting that outcomes might exhibit variation with a larger sample size. Despite those limitations, the results remain interesting and undeniably pave the way for future research perspectives. Using LLMs to generate ML test cases presents some ethical concerns that demand careful consideration. Firstly, there is the risk of unintentional leakage of sensitive information from the training data into the generated test cases, potentially compromising privacy and confidentiality. 
Moreover, the lack of transparency in LLMs makes it challenging to understand how these test cases are formulated, raising concerns about accountability and the potential for bias amplification. Over-reliance on the automation capabilities of LLMs in the testing process may lead to the displacement of human testers, impacting job security and employment opportunities. Additionally, there is a risk of intellectual property violation when generated test cases closely resemble proprietary data or test scenarios. Another concern involves the potential for erroneous test cases: LLM-generated tests may contain inaccuracies, ambiguities, or flaws that, if not rigorously reviewed and validated, could lead to unreliable ML models that fail to perform as expected. We urge ML test case developers to use LLMs with caution and scrutiny, even though the generated tests appear promising. Verifying the generated tests remains an important step in the software development process.
Early Detection of Sexual Predators in Chats
An important risk that children face today is online grooming, where a so-called sexual predator establishes an emotional connection with a minor online with the objective of sexual abuse. Prior work has sought to automatically identify grooming chats, but only after an incidence has already happened in the context of legal prosecution. In this work, we instead investigate this problem from the point of view of prevention. We define and study the task of early sexual predator detection (eSPD) in chats, where the goal is to analyze a running chat from its beginning and predict grooming attempts as early and as accurately as possible. We survey existing datasets and their limitations regarding eSPD, and create a new dataset called PANC for more realistic evaluations. We present strong baselines built on BERT that also reach state-of-the-art results for conventional SPD. Finally, we consider coping with limited computational resources, as real-life applications require eSPD on mobile devices.
Online grooming denotes the process where a socalled sexual predator establishes an emotional connection with a minor online to systematically solicit and exploit them for sexual purposes The problem of detecting whether or not a child is being groomed by a predator is called sexual predator detection We believe that it is also important to study approaches that may prevent online grooming -as early as possible, i.e., during an ongoing chat. Ideally, the grooming process should be disrupted before it succeeds to protect children from harm. This task is non-trivial as the content of grooming chats changes over time: chats often start with the exchange of personal information and building of trust, a phase in which they are difficult to detect. In a second stage, predators further develop trust with their victims in a cycle of entrapment. They try to desensitize their victims to sexual topics, isolate them from others, and arrange meetings An example of arranging a meeting is shown in Figure
We introduce the task of early sexual predator • We introduce the problem of eSPD and formally define it. • We survey available datasets, analyze their limitations, and build a new combined dataset called PANC as a best-effort for evaluating eSPD. • We propose a task setup to evaluate eSPD, focusing on the trade-off between earliness and accuracy. • We present strong baselines for eSPD using a two-tier approach. Our method (1) analyzes sliding windows of messages from an ongoing chat using BERT and ( We evaluate three different BERT language models, two of which work on mobile. • We compare our models to previous research in conventional (i.e. "non-early") SPD settings and find that two of them outperform the current state of the art. • We provide an extensive discussion of the limitations of our models and the available data. We see our work as an important step to encourage more research into eSPD. To this end, we make our experimental setup, our baseline models, scripts for corpus processing, and the visualization tool for inspecting analyzed chats (used to generate Figure Due to privacy and legal reasons, grooming chats are extremely difficult to obtain. We introduce the (few) known corpora of this kind and discuss their limitations, motivating the assembly of the PANC dataset we discuss in Section 3. The main source of grooming chats used in SPD literature is the Perverted Justice Foundation (PJ) To our knowledge, the only work using real grooming chats is The PAN Lab at the 2012 CLEF conference introduced a shared task on sexual predator identification PAN12 has several limitations. All grooming chats stem from decoy operations and are not with actual victims, and the non-grooming chats are not with decoys. real. Most problematic for eSPD is the separation into relatively short, unordered segments, thus completely blurring the true timeline of a chat. This makes the data unsuitable for eSPD since we aim to detect predators as early as possible in potentially long-running chats. predator segments The length of real chats is potentially unbounded and keeps increasing, so regarding real chats as infinite is handy. We analyze chats after each new message, thus considering only finite prefixes for classification. Definition 3 (Prefix). Let C = (m 1 , . . . , m l , . . . ) be a chat. We call C(l) := (m 1 , . . . , m l ) the prefix of C with length l. Finally, we define eSPD as follows. Definition 4 (eSPD). Let X Test be a dataset of finite chats. For C = (m 1 , . . . , m n ) ∈ X Test and l = 1, . . . , n increasing over time, an eSPD system decides for each l whether a warning for C should be raised or not by classifying C(l). It stops as soon as a warning is raised, classifying C as grooming. If no warning is raised for all l = 1, . . . , n, it classifies C as non-grooming. Finally, eSPD is the problem of classifying all C ∈ X Test as early and accurately as possible. Note that this definition deliberately states that an eSPD system never classifies a chat as nongrooming as long as there are messages left (or the chat did not end, in a real-life setting), as it cannot know the future after the current prefix C(l). In summary, we find that existing datasets suffer from limitations that make them difficult to use for training and evaluating eSPD. The commonly used datasets PAN12 and VTPAN only contain short, disjointed, and unordered chat segments. For eSPD, however, one needs to detect grooming in a continuous message stream, which is ordered and theoretically unbounded in length. 
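As a brief aside, Definition 4 amounts to a prefix-by-prefix decision loop, which the following minimal sketch illustrates. The `classify_prefix` callable is a placeholder for any prefix classifier (for instance the two-tier model introduced later) and is not part of the original formulation.

```python
from typing import Callable, List, Optional


def run_espd(chat: List[str],
             classify_prefix: Callable[[List[str]], bool]) -> Optional[int]:
    """Prefix-by-prefix eSPD decision loop (cf. Definition 4).

    After each new message, the prefix C(l) is classified; as soon as a
    warning is raised the system stops and the chat counts as grooming.
    Returns the prefix length l at which the warning was raised, or None
    if no warning is raised for any prefix (non-grooming).
    """
    for l in range(1, len(chat) + 1):
        prefix = chat[:l]                 # C(l) = (m_1, ..., m_l)
        if classify_prefix(prefix):       # raise a warning
            return l
    return None                           # chat never flagged
```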
Classifying segments only, we have no information about how early in the complete chat grooming is detected. Moreover, evaluating earliness within single segments would not be interesting as it is not interpretable and because they are so short. While C C 2 does have full chat logs, it does not contain any negative samples. Our analysis thus motivates the assembly of the new PANC dataset as explained in the next section. In this section, we propose an evaluation setup for eSPD. We give a formal definition of the task followed by suitable evaluation metrics. Finally, we discuss how we use and combine existing SPD datasets to create PANC for the evaluation of eSPD. We interpret eSPD as an early risk detection problem In eSPD, there are two desiderata between which a trade-off exists: (a) Raising alerts as early as possible, and (b) raising alerts as accurately as possible. Raising warnings early is good for (a), but hampers (b) as less data is available. Waiting longer with warning hurts (a), but most likely improves (b), as later decisions are based on more messages. Accuracy metrics are most prominent in related work on detecting sexual predators (Pastor We call the number of messages that have been exchanged before a warning is raised the warning latency. We use latency-weighted F 1 where p determines how quickly the penalty should increase as latency increases. A warning after the first message receives 0 penalty and for increasing warning latency, the penalty approaches 1. Now assume an eSPD system to produce a list latencies of warning latencies for all chats C ∈ X Test where (1) C is positive, and (2) the system raises a warning for C. We define the overall speed of correct warnings as This metric is more interpretable than just using the mean or median warning latency, as it depends on the problem and the dataset at hand how good a median warning latency actually is. Finally, the latency-weighted F 1 is given by F latency := F 1 • speed. We generally consider an eSPD system A better than an eSPD system B when it reaches, for a given dataset, a higher F latency ; comparisons focusing more on speed or more on accuracy or searching for pareto-optimal solutions are also possible. Note that we, following Evaluating an eSPD system needs a corpus of chats, where each entire chat is annotated as grooming or not. Note that we do not require this annotation To address these issues, we assembled PANC, an evaluation dataset for eSPD, by carefully combining selected parts from PAN12 and from C C 2. The process is illustrated in Figure Discussion. We consider PANC to be the first corpus suitable for realistic eSPD evaluations. Yet it still has limitations: First, the negative chats are not full-length chats but only segments. While this does not impact our earliness evaluation, it prevents the computation of true eSPD accuracy. Our proposed workaround is to replace chat accuracy with segment accuracy, although we do not know how well the latter approximates the former as we therein classify short segments which can stem from anywhere in a chat. An alternative would be to use a difference source for the negative chats; however, we decided on those from PAN12 as they also include "hard negative" cases (i.e. sexual conversations between consenting adults), which we believe gives more realism to our evaluation. Another limitation is that PANC only contains chats between exactly two authors, so our systems are not applicable in group chats. 
However, grooming is very rare in group chats as predators depend on their actions staying unnoticed. We present a straightforward eSPD approach to demonstrate the validity of our task setup and to establish baselines for future works. It consists of two tiers of classification: (1) A local tier (Tier 1) that moves a sliding window over the messages of a chat and classifies them, and (2) a global tier (Tier 2) that decides after each window prediction whether to raise a warning or not based on the sequence of recent window predictions. The purpose of this architecture is to balance earliness and accuracy and especially to prevent single suspicious windows from triggering warnings. For Tier 1, we use a standard approach in which we add a linear classifier to a pre-trained transformer model and fine-tune the entire architecture. It takes as input all messages in a given window and outputs a binary prediction. We evaluated different BERT models: B ERT large , B ERT base We use a simple approach for the problem of detecting a chat as grooming based on Tier-1 clas- sification results over a series of windows. After every window classification, we consider the count of positively classified windows within the last 10 windows. If this value exceeds a pre-defined threshold called skepticism s ∈ {1, . . . , 10}, the chat is classified as grooming. Hyperparameters. The only hyperparameter of Tier-2 is thus skepticism which controls the earliness/accuracy tradeoff. We evaluate our baseline approach in our eSPD task setup using the proposed metrics for warning earliness, accuracy, and F latency . We compare three different eSPD systems: S BERT-large , S BERT-base , and S MobileBERT , which use the respective transformer models as described above as the Tier-1 classifier. We use a window size of 50 and a skepticism of 5; an evaluation of the impact of the skepticism parameter can also be found below. We fine-tune each of our BERT models on PANC and VTPAN. As the results of fine-tuning BERT models often vary heavily based on the random seed used An overview of evaluation results for our three model variants is given in Table Best baseline approach. As Table In Figure To get a better understanding of the accuracy of our proposed baseline approach, we also employ it in a conventional SPD setting. This allows us to compare against the state-of-the-art approaches by New state of the art on SPD. Figure We discuss several issues that must be considered before planning to apply an algorithm like the ones presented in this work in practice. A critical question is how representative PANC is of real grooming chats. Due to the lack of publicly available datasets, we could not test our models on complete negative chats. This has implications: We had to resort to measuring accuracy at the segment level, and we cannot provide concrete estimates on warning accuracy for such chats. However, we consider our results on negative segments to be promising. Our Tier-1 classifiers are trained on segments of a chat, created by a specific partitioning of the sequence of messages. However, during eSPD we apply them to windows of the last 50 messages, which may exhibit different properties than the predefined segments. For instance, as segments are separated by lengthy breaks in the conversation, they often begin with greetings -which is not the case for our windows. Such differences may confuse our models and lead to sequences of wrong window classifications, an effect we counteract through the Tier-2 classifier. 
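To make the two-tier baseline and its evaluation concrete, the sketch below combines the sliding-window decision rule with a latency-weighted F1 computation. The `tier1` callable stands in for the fine-tuned BERT window classifier, and classifying one window per incoming message is an assumption about the update frequency. The penalty function and the definition of speed as one minus the median penalty follow common practice in early risk detection; they are assumptions here, since the exact formulas are not reproduced in the text above.

```python
import math
from collections import deque
from statistics import median
from typing import Callable, Iterable, List, Optional


def two_tier_espd(messages: Iterable[str],
                  tier1: Callable[[List[str]], int],
                  window_size: int = 50,
                  history: int = 10,
                  skepticism: int = 5) -> Optional[int]:
    """Tier 1 labels a sliding window of the most recent messages
    (1 = suspicious); Tier 2 raises a warning once the number of positive
    labels among the last `history` windows exceeds `skepticism`.
    Returns the warning latency in messages, or None if no warning is raised."""
    seen: List[str] = []
    recent: deque = deque(maxlen=history)
    for i, msg in enumerate(messages, start=1):
        seen.append(msg)
        recent.append(tier1(seen[-window_size:]))   # classify current window
        if sum(recent) > skepticism:                # Tier-2 decision rule
            return i
    return None


def latency_penalty(latency: int, p: float = 0.0078) -> float:
    """0 for a warning after the first message, approaching 1 as the warning
    latency grows; the value of p is illustrative only."""
    return -1.0 + 2.0 / (1.0 + math.exp(-p * (latency - 1)))


def latency_weighted_f1(f1: float, latencies: List[int], p: float = 0.0078) -> float:
    """F_latency = F1 * speed, with speed derived from the median penalty over
    the warning latencies of correctly flagged grooming chats (assumed form)."""
    speed = 1.0 - median([latency_penalty(l, p) for l in latencies])
    return f1 * speed
```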
While we consider only chat messages as information to detect grooming attempts, real-world applications might also have additional data available. For instance, in social media, users are often required to state their age when they create their profile. Such data could be very helpful for eSPD. However, we caution that profile information may not be reliable as it is typically not verified and therefore easy to fake -and it is common for predators to use fake information. Online grooming is a real and pressing problem faced by any chat system open to children. Accordingly, social media sites and games often use automated grooming detection systems Microsoft uses a similar approach for XBOX Live and Skype chat Because of these reasons, there is a need for eSPD systems even on mobile devices. In academia, eSPD so far has seen comparably little research despite its high societal importance, probably due to the difficulties of obtaining appropriate datasets. Early text classification. To our knowledge, We defined the problem of early sexual predator detection (eSPD) in online chats and proposed an evaluation setup for this task. To this end, we assembled the PANC dataset, which, albeit having clear limitations, in our mind is the currently best effort possible with the data available. We also showed that a baseline built on current BERT-based language models achieves strong results on this dataset, and beats previous methods in related settings. Notably, results are only modestly impacted for models that can run on mobile devices. We discussed open issues in our data and evaluation setup that must be studied carefully in future work before eSPD systems could go live (and expand on this discussion in Appendix D). We hope that making our task setup accessible to the research community will encourage more research into the highly important topic of early sexual predator detection. Early sexual predator detection is a highly sensitive topic which calls for a proper discussion of potential implications of such research, the datasets being used, and the readiness of eSPD models. There are potentially high stakes for any subject whose chats are analyzed by eSPD systems. Any application of eSPD in running chat systems would incur interaction with vulnerable populations (minors) which must be firmly protected. False-negative, as well as false-positive predictions, may have severe implications for the falsely alleged chat partner or the erroneously unprotected child, respectively. Online grooming is forbidden by law in many countries, as are the establishment of sexual relationships of any kind to children. In many countries, including Germany, already obtaining logs of chat content with sexual content involving children is forbidden, which makes acquisition or usage of real data impossible outside criminal investigations. At the same time, online grooming does happen now, and in many instances, making research into ways to prevent or at least diminish it important. Datasets. For this study, we did not create any new data or perform any experiments with human beings. According to European regulations, such research does not require an ethics vote from an institutional review board. Instead, we performed specific filtering and combination of data from the two datasets PAN12 and ChatCoder2 (CC2), which are available on request to their authors, and have been extensively used in the literature. 
The creators of PAN12 anonymized the data by removing usernames and email addresses to avoid the identification of users. This makes PAN12 compatible with European regulations that permit the exchange of carefully anonymized data. The CC2 chats stem from PJ and are with offenders who were prosecuted in court and adult decoys posing as children. Thus, they contain no conversations with minors or victims, which makes CC2 compatible with the above-mentioned regulations against possession and usage of any real chat logs involving sexual content with children. Readiness of eSPD models. Real-world applications already use automatic systems to support detection of grooming in chats
Semantic Frame Induction with Deep Metric Learning
Recent studies have demonstrated the usefulness of contextualized word embeddings in unsupervised semantic frame induction. However, they have also revealed that generic contextualized embeddings are not always consistent with human intuitions about semantic frames, which causes unsatisfactory performance for frame induction based on contextualized embeddings. In this paper, we address supervised semantic frame induction, which assumes the existence of frame-annotated data for a subset of predicates in a corpus and aims to build a frame induction model that leverages the annotated data. We propose a model that uses deep metric learning to fine-tune a contextualized embedding model, and we apply the finetuned contextualized embeddings to perform semantic frame induction. Our experiments on FrameNet show that fine-tuning with deep metric learning considerably improves the clustering evaluation scores, namely, the B-CUBED F-SCORE and PURITY F-SCORE, by about 8 points or more. We also demonstrate that our approach is effective even when the number of training instances is small.
Semantic frames are knowledge resources that reflect human intuitions about various concepts such as situations and events. One of the most representative semantic frame resources is FrameNet
Example sentence FILLING (1) She covered her mouth with her hand. (2) I filled a notebook with my name. (3) You can embed graphs in your worksheet. (4) He parked the car at the hotel. (5) Volunteers removed grass from the marsh. (6) They'd drained the drop from the teapot. TOPIC (7) Each database will cover a specific topic. (8) Chapter 8 treats the educational advantages. Table (1) listed in Table Recent studies Hence, in this study, we tackle supervised semantic frame induction, which assumes the existence of annotated data for certain predicates, to induce semantic frames that adequately reflect human intuition about the frames. We propose methods that use deep metric learning to fine-tune the contextual word embedding model so that instances of verbs that evoke the same frame are placed close together and other instances are placed farther apart in the semantic space. Figure For automatic construction of semantic frame resources, studies on grouping predicates according to the semantic frames they evoke can be divided into two groups: those that work on semantic frame identification, in which predicates are classified into predefined frames; and those that work on semantic frame induction, in which predicates are grouped according to the frames that they evoke, which are typically not given in advance. Semantic frame identification is often treated as a subtask of frame semantic parsing Semantic frame induction is the task of grouping predicates in texts according to the frames they evoke. Instead of frames being given in advance, each grouping of given predicates is considered a frame. As with semantic frame identification, methods using contextualized embedding have become mainstream. 3 Supervised Semantic Frame Induction The task of supervised semantic frame induction assumes the existence of frame-annotated data for a subset of a corpus's predicates, and it aims to build a frame induction model that leverages the annotated data. Clustering-based methods are generally used for semantic frame induction, and this is also true for supervised semantic frame induction, where the annotated data is used to learn the distance metric for clustering. In this study, the predicates that are used for training the metric and for testing do not overlap. Note that, because different predicates may evoke the same frame, instances in the test data include predicates that evoke frames that are present in the training data. For the simplest baseline, we use a one-step clustering-based method with contextualized embedding. The clustering method is group-average clustering based on the Euclidean distance. We also leverage the masked word embeddings and twostep clustering proposed by Two-step clustering performs clustering for each frame-evoking word with the same lemma, For supervised semantic frame induction, we finetune contextualized word embedding models by applying deep metric learning Distance-based Approach This is a classical deep metric learning approach, and the models typically use multiple encoders to train the distance between a pair of instances. In this approach, we use two losses, a contrastive loss and a triplet loss, to build frame induction models. The contrastive loss where x i denotes an embedding of an instance belonging to the i-th class, m denotes a margin, and D denotes a distance function, which is generally the squared Euclidean distance. 
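The contrastive loss formula itself is not reproduced above; the following is a sketch of the standard squared-Euclidean formulation with a margin on negative pairs, which appears to match the description (the margin value is a placeholder, as the paper tunes it on the development set).

```python
import torch
import torch.nn.functional as F


def contrastive_loss(x1: torch.Tensor, x2: torch.Tensor,
                     same_frame: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Standard contrastive loss over a batch of embedding pairs.

    x1, x2:      (batch, dim) contextualized embeddings of the paired instances
    same_frame:  (batch,) 1.0 if the pair evokes the same frame, else 0.0
    """
    d2 = (x1 - x2).pow(2).sum(dim=-1)                      # squared Euclidean distance D
    d = d2.clamp(min=1e-12).sqrt()
    pos = same_frame * d2                                  # pull same-frame pairs together
    neg = (1.0 - same_frame) * F.relu(margin - d).pow(2)   # push different-frame pairs apart
    return (pos + neg).mean()
```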
The triplet loss (Weinberger and Saul, 2009) is used for training such that, for a triplet of instances, the distance between the anchor and negative instances, which are from different classes, is more than a certain margin greater than the distance between the anchor and positive instances, which are from the same class. The loss is defined as follows: where x a , x p , and x n denote embeddings of the anchor, positive, and negative instances, respectively, and m and D are the same as in Equation ( We create pairs for each instance in the training set by randomly selecting instances of predicates that evoke the same frame as positives and instances of predicates that evoke different frames as negatives. The margin to keep the negatives away is determined by the development set. Classification-based Approach This is an approach that has recently become the standard for face recognition. It basically uses a network that has an encoder to obtain instance embeddings and a linear layer for multiclass classification. This is superior to the distance-based approach in that it does not require a sampling algorithm and saves memory because it uses only a single encoder. The loss function is based on the softmax loss: where x i , w i , and b i denote an embedding of the instance, the linear layer's weight, and a bias term, respectively, for the i-th class, and n denotes the number of classes. Many losses used in face recognition have been adjusted by introducing different margins for the softmax loss , where θ i is the angle between w i and x i . ArcFace Zhang et al. ( where s denotes the automatically tuned scale. While the softmax and AdaCos losses do not require a hyperparameter search, ArcFace requires hyperparameters for the margin and feature scale. Here, we explore only the margin because To evaluate the usefulness of fine-tuning with deep metric learning, we experimented with supervised semantic frame induction by comparing previous non-fine-tuned models to various fine-tuned models ranging from typical to evolved ones. By varying the number of training instances, we also verified that our models were effective even for training a small number of instances. Dataset The dataset in our experiments was created by extracting example sentences in which the frame-evoking word was a verb from the FrameNet 1.7 dataset. Comparison Methods We used BERT Evaluation Metrics For evaluation metrics, we used the PURITY (PU), the INVERSE PURITY (IPU), and their harmonic mean, the F-SCORE (PIF) Table From Table The balance between BCP and BCR in the clustering evaluation metric depends on the final number of frame clusters, #C in Table We found that fine-tuned methods outperformed previous unsupervised methods when the number of training instances was around 30,000. However, the annotation cost of building a resource like FrameNet is high, so the fewer instances used for training, the easier it is to build other language resources and apply them to other tasks. Thus, we experimented with varying the number of training instances. Specifically, for each LU in the training set, the maximum number of instances was varied among 1, 2, 5, 10, and all instances. The resulting average numbers of training instances for the three sets were 1, Table It is not easy to analyze the properties of an embedding in clustering evaluation because the performance depends on the clustering method and the number of clusters. 
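For reference, the triplet loss and an ArcFace-style classification loss described above can be sketched as follows. The margin and scale values are illustrative only, and the squared Euclidean distance mirrors the choice of D in the contrastive loss; the paper's actual hyperparameters are tuned on the development set.

```python
import torch
import torch.nn.functional as F


def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """The anchor-negative distance must exceed the anchor-positive distance
    by at least `margin`; any shortfall is penalized."""
    d_ap = (anchor - positive).pow(2).sum(dim=-1)
    d_an = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_ap - d_an + margin).mean()


def arcface_logits(embeddings: torch.Tensor, weights: torch.Tensor,
                   labels: torch.Tensor, margin: float = 0.5,
                   scale: float = 64.0) -> torch.Tensor:
    """ArcFace-style logits: an additive angular margin is applied to the
    target-class angle before scaling. `weights` is the (num_frames, dim)
    weight of the classification layer; pass the result to cross-entropy."""
    cos = F.normalize(embeddings, dim=-1) @ F.normalize(weights, dim=-1).t()
    theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    target = F.one_hot(labels, num_classes=weights.size(0)).bool()
    cos_m = torch.cos(torch.where(target, theta + margin, theta))
    return scale * cos_m


# Usage sketch: loss = F.cross_entropy(arcface_logits(x, W, y), y)
```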
To better understand the fine-tuned embeddings, we performed a similarity ranking evaluation and visualized the embeddings. We evaluated the models by ranking instances according to their embedding similarity. Specifically, we took one verb instance as a query instance; then, we computed the cosine similarity of the embeddings between the query instance and the remaining verb instances and evaluated the similarity rankings of the instances in descending order. We used v w+m with the same weight α that was used for the one-step clustering in Section 4. We chose recall as the metric to evaluate the instance distribution. This metric computes the average matching rate between true instances, which are instances of the same frame as the query instance, and predicted instances, which are obtained by extracting the same number of top-ranked instances as the number of true instances. For example, Set 1 of Table We performed the similarity ranking evaluation in three settings with respect to the search space of the ranked instances: ALL, which included all instances, SAME, which included only instances of the same verb as the query, and DIFF, which included only instances of different verbs as the query. Table It is important to further examine whether the improved performance might have resulted only from the frames included in the training set. That is, we need to verify that the embedding of an instance of an untrained frame could be associated with a correct frame. To investigate this, we aggregated the scores separately for cases in which the frames of the query instance were included in the training set (OVERLAP) and for cases in which they were not (NON-OVERLAP). Table in the NON-OVERLAP case were only evoked by a few verbs, making it relatively easy to obtain higher ranking of instances of the same frame as the query. To intuitively understand the embeddings given by the Vanilla model and two fine-tuned models, we visualized them by t-SNE. Figure In the Vanilla model, the instances for v word tended to be grouped by frame but were not sufficiently grouped into clusters. For example, the instances of the SELF_MOTION frame were divided into two large groups, while those of the REMOV-ING frame were scattered. The instances for v mask were somewhat more scattered than those for v word . In addition, v w+m tended to group instances of the same frame. In the AdaCos and Triplet models, the instances for v word were grouped much better for each frame than those for non-fine-tuned v word . The results also confirmed that instances of frames with similar meanings, such as the PLACING and FILLING frames, were both identifiable and close. However, fine-tuned v word formed many lumps of instances. This suggests that deep metric learning incorporates too much of a verb's surface information. On the other hand, fine-tuned v mask was somewhat better than non-fine-tuned v mask , but not as good as fine-tuned v word . As deep metric learning may require the surface information about a verb to be induced, so fine-tuned v mask may not work well. The instances in fine-tuned v w+m were better grouped than those for fine-tuned v word , because instances of the same frame were more grouped. We worked on the supervised semantic frame induction, and we proposed a model that uses deep metric learning to fine-tune a contextualized embedding model and applied the fine-tuned contextualized embeddings to perform semantic frame induction. 
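The similarity ranking evaluation described above can be sketched as follows for the ALL setting; the SAME and DIFF settings simply restrict the candidate pool to instances of the same or of different verbs. Function names and the exact handling of ties are illustrative.

```python
import numpy as np


def similarity_ranking_recall(embeddings: np.ndarray, frames: list) -> float:
    """For each query instance, rank all other instances by cosine similarity
    and measure what fraction of the top-|true| instances share the query's
    frame, where |true| is the number of same-frame instances; scores are
    averaged over queries."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    recalls = []
    for i, frame in enumerate(frames):
        true = {j for j in range(len(frames)) if j != i and frames[j] == frame}
        if not true:
            continue                           # frame has no other instance
        order = np.argsort(-sims[i])
        order = order[order != i]              # exclude the query itself
        top = set(order[:len(true)].tolist())
        recalls.append(len(top & true) / len(true))
    return float(np.mean(recalls)) if recalls else 0.0
```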
In our experiments, we showed that fine-tuned BERT models with the triplet, ArcFace, and Ada-Cos losses are quite promising for semantic frame induction, as the human intuition in developing semantic frames such as those in FrameNet can be well captured by deep metric learning. In particular, the fine-tuned BERT model with the triplet loss performed considerably better than vanilla BERT even when the number of training instances was small; accordingly, the fine-tuned model is expected to have a wide range of applications. We also found that the one-step clustering can be a good choice in addition to two-step clustering when performing fine-tuning. The ultimate goal of this study is to automatically construct semantic frame knowledge from large text corpora. This goal requires not only grouping the verbs according to the frames that they evoke but also grouping their arguments according to the frame element roles that they fill. Our proposed fine-tuned contextualized word embedding with deep metric learning could be effective for clustering arguments as it is for clustering verbs. We would like to explore how to achieve this goal. In this study, we only conducted experiments with English FrameNet, so it is unclear how useful this method will be for other corpora and multilingual resources. However, since our method does not depend on the properties of the specific corpus and language, it is quite possible that fine-tuning would improve the scores in other datasets. In addition, as our method requires supervised data from a semantic frame knowledge resource, some annotation will be necessary when applying the method to other languages that lack such a resource. Tables 8 and 9 list our experimental results for semantic frame induction when using v word , v w+m , and v mask in one-step and two-step clustering, respectively. The results show that v w+m tended to perform better than v word and v mask , thus demonstraiting the usefulness of linear completion. This tendency was noticeable for two-step clustering but more limited for one-step clustering. Regarding the results for v word and v mask , the fine-tuning was effective for v word , as the scores improved considerably, but the effectiveness was limited for v mask . This was probably because the embedding of the special token "[MASK]," which was the source of the contextualized word embedding, was shared by all instances. In Figure Figure
MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization
State-of-the-art summarization systems can generate highly fluent summaries. These summaries, however, may contain factual inconsistencies and/or information not present in the source. Hence, an important component of assessing the quality of summaries is to determine whether there is information consistency between the source and the summary. Existing approaches are typically based on lexical matching or representation-based methods. In this work, we introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared. We propose a Multiple-choice Question Answering and Generation framework, MQAG, which approximates the information consistency by computing the expected statistical distance between summary and source answer distributions over automatically generated multiple-choice questions. This approach exploits multiple-choice answer probabilities, as predicted answer distributions can be compared. We conduct experiments on four summary evaluation datasets: QAG-CNNDM/XSum, XSum-Hallucination, Podcast Assessment, and SummEval. Experiments show that MQAG, using models trained on SQuAD or RACE, outperforms existing evaluation methods on the majority of tasks. 1
The objective of summary evaluation is to quantify the quality of summaries, either on a relative or an absolute scale. Accurate and reliable automatic summary evaluation systems are useful to researchers, as they provide an easy and cheap way to compare new summarization models to existing ones. Although current summarization systems have improved dramatically in the last decade, and are capable of generating highly fluent outputs (Lewis et al., 2020; Zhang et al., 2020a; Brown 1 Code and model weights are available at Existing methods that measure information consistency generally perform lexical matching, either directly such as ROUGE In this work, a measure of consistency between the source and summary is defined from an information-theoretic perspective. We propose a Multiple-choice Question Answering and Generation framework, MQAG, where instead of comparing text-based answer spans, multiple-choice questions are generated and the resulting answer distributions from the source and summary are compared. The main contributions of this paper are: • We provide an alternative and novel question answering-based approach for assessing information consistency. Our approach can represent the answers via probability distributions instead of lexical or embeddings. • We show that our approach, MQAG, achieves state-of-the-art performance on four out of six summary evaluation tasks.
Standard summary evaluation metrics such as ROUGE Textual overlap scores n-gram based metrics, including BLEU where T x and T y are relation triples extracted from the source and the summary, respectively. Simulated data, such as real or fake summaries created by pre-defined transformations, have been used to train classifiers to detect inconsistent summaries When applied to assess summaries, the context is the source document and the hypothesis is the summary. The probability of being the entail class is then used as the consistency score, Score = P (entail|x, y) Span-based Question Answering (SpanQAG) A question-answering approach consists of a question-generation model and an answering model. Given automatically generated questions, the first answer is derived from the source and the second answer is derived from the evaluated summary, and then the two answers are compared. For example, Nevertheless, existing QA methods are spanbased where the answering system extracts answer spans before two answer spans are compared. Due to the nature of span-based answers, answer verification (i.e. answer comparison) is typically through exact matching, token F1, BERTScore, or a learned metric 3 Multiple-choice Question Answering and Generation (MQAG) Since current summarization systems generate highly fluent summaries, this work focuses on assessing whether summaries contain the same information as that of the source, or whether it is contradictory. One way to view information would be to consider the set of questions that are answerable given a certain passage. If a summary is consistent with the source, then one would expect the set of answerable questions by the summary to overlap with those of the source and yield similar answers. Though span-based QA approaches are similarly motivated, existing span-based frameworks use text similarity measures, either in the form of lexical or representation space. In contrast, we attempt to measure information using multiple-choice questions, which allows for a more abstract understanding of information and enables convenient use of standard information-theoretic measures. Let x = source, y = summary, q = question, and o = options associated with the question q. We define information inconsistency as, where {q (i) , o (i) } is sampled from P G (q, o|y), the question-option generation model, P A (o (i) |q (i) , x) and P A (o (i) |q (i) , y) are the option distributions given the source and summary respectively, and D is a statistical distance such as KL-divergence. Based on the information inconsistency score in Equation We refer to Equation 3 as the MQAG-Sum score as the questions are generated from the summary. Furthermore, it is possible to generate questions, {q, o} using the source x instead of the summary y, {q (i) , o (i) } is sampled from P G (q, o|x). We will refer to this variant as the MQAG-Src score. MQAG-Src is expected to measure the amount of source information present in the summary, i.e. the coverage of the summary, while MQAG-Sum is expected to measure the consistency of the summary with respect to the source. To account for consistency and coverage, we also consider a simple combination, Given two probability distributions over options o (e.g. one conditioned on source x, and the other conditioned on summary y), a statistical distance D measures the distance between the probability distributions. 
There are multiple distances, which can be used, and in this work, we consider some of the main distances and investigate their properties as well as their empirical performance in our MQAG framework as follows, • KL-Divergence: • One-Best (i.e. argmax matching): where o x = arg max o P A (o|q, x) and o y = arg max o P A (o|q, y). D OB simply determines whether the two answers match or not. • Total Variation: • Hellinger: All of the considered methods compare the summary y against the source document x without the ground-truth summary, and we implement these methods as described in Section 2 using code/repository from the relevant previous works. We use the ROUGE-1 (F1) score in the rouge-score Python package. based on an open scheme, and we use the implementation in FactSumm BERTScore. We use DeBERTa-base Question Generation (G1, G2) The multiple-choice question generation is implemented in two stages. P G (q, o|y) = P G2 (o \a |q, a, y)P G1 (q, a|y) (5) where o = {a, o \a } denotes all options/choices. We set the number of options to four. Both G1 and G2 are sequence-to-sequence T5-large models The answering stage contains one model A, which is Longformer-large Because not all generated questions are of high quality, we consider filtering out low-quality questions through question-context answerability measures where H(.) is base-2 entropy, so N y (q, o) ranges from 1.0 to the number of options, e.g. 4.0. When q is generated from y but N y (q, o) is high, this question q should be deemed unanswerable as it is not answerable even when using the same context. As a result, we use N y (q, o) as an answerability criterion to reject questions which have N y (q, o) higher than a threshold denoted by N τ y . 5 Experimental Results In this subsection, we carry out experiments to find the best configuration of MQAG, including the analysis of statistical distances, variants of MQAG, and answerability. We build two MQAG variants: MQAG SQuAD and MQAG RACE , which differ in the training data of the question+answer generator G1, while the distractor generator G2 and answering system A are both trained on RACE. In Table It can be seen that in both configurations, KLdivergence yields lower correlations than other distances, and on average total variation slightly outperforms Hellinger and one-best distances. Hence, total variation will be used as the main distance. The next observation is that MQAG SQuAD , despite generating more extractive questions, achieves higher correlations than MQAG RACE on most tasks except on Podcast and SummEval. Here, we compare three variants of MQAG scores. Our results in Table In Figure It can be seen that as we filter out high-entropy questions, there is an upward trend in performance across all tasks. In addition, as shown in the figure, setting N τ y at 2.0 seems to be a reasonable answerability threshold. At this threshold, N τ y = 2.0, out of 50 automatically generated questions, about 36 questions are kept for MQAG SQuAD and about 30 questions are kept for MQAG RACE . The number of remaining questions is similar across all datasets as shown in Table The baseline and MQAG results are shown in Table 5. The observation is that MQAG achieves a higher correlation than the best SpanQAG on 5 out of 6 tasks. When compared to all existing baselines, MQAG achieves state-of-the-art performance on 4 out of 6 tasks. 
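For concreteness, the statistical distances listed above and the answerability filter can be sketched as below. The direction of the KL-divergence, the absence of smoothing beyond a small epsilon, and the averaging used to approximate the expectation are assumptions, as the exact formulas are not reproduced in the text.

```python
import numpy as np


def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """D_TV(p, q) = 0.5 * sum_o |p(o) - q(o)| over the answer options."""
    return 0.5 * float(np.abs(p - q).sum())


def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum()))


def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q); which distribution comes first is an assumption."""
    return float((p * np.log((p + eps) / (q + eps))).sum())


def one_best(p: np.ndarray, q: np.ndarray) -> float:
    """1.0 if the argmax answers disagree, 0.0 if they match."""
    return float(np.argmax(p) != np.argmax(q))


def effective_num_options(p: np.ndarray, eps: float = 1e-12) -> float:
    """N_y(q, o) = 2^H(p) with base-2 entropy H; ranges from 1.0 to the number
    of options. Questions whose value exceeds the threshold N_tau_y (2.0 in
    the experiments) are rejected as unanswerable."""
    h = -float((p * np.log2(p + eps)).sum())
    return 2.0 ** h


def mqag_sum_score(p_source: list, p_summary: list, distance=total_variation) -> float:
    """Average the chosen distance between the source- and summary-conditioned
    answer distributions over the retained questions (a Monte Carlo estimate
    of the expectation defining the inconsistency score)."""
    return float(np.mean([distance(np.asarray(p), np.asarray(q))
                          for p, q in zip(p_source, p_summary)]))
```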
To investigate the impact of the abstractiveness of summaries on the performance, Our best performing MQAG configuration consists of (i) generation stage G generates questions from summary y (i.e. MQAG-Sum), (ii) statistical distance is total variation, (iii) the answerability threshold N τ y is set to 2.0. Underline denotes where MQAG outperforms the best SpanQAG system, which is 5 out of 6 tasks. When compared to all baselines, MQAG achieves the highest PCC on 4 out of 6 tasks. The results of all MQAG configurations are provided in Table we split QAG-XSum and XSum-H datasets 5 into two portions of the same size by abstractiveness as measured by the longest sequence in the summary that exists in the source per the summary length (i.e. ROUGE-L precision of summary y using source x as the reference). The results in Table 6 Ablation Studies We analyse the impact of the number of generated questions on the performance of MQAG. The mean and standard deviation are presented in Figure QAGS We investigate model choices by swapping to less capable models, e.g. T5-large → T5-base for generation, and Longformer(4096) → RoBERTa(512) Given the impressive results of large language models (LLMs) across natural language generation tasks, we investigate the performance of LLMs in a zero-shot fashion instead of using fine-tuned T5 for multiple-choice question generation. Specifically, we use OpenAI GPT-3 We found that GPT-3 generated 50 questions as specified in the prompt around 26% of the examples and the remaining only have 20 questions. The majority of questions (more than 95%) have 4 op-tions, while the remaining have 2 options. In Table 7, the results show that zero-shot GPT-3 performs worse than our fine-tuned T5 systems in both multiple-choice question generation tasks. This illustrates that there is some sensitivity due to the quality of generated questions, and using our finetuned T5 is a better option than zero-shot GPT-3. This work proposes MQAG -a novel scheme for assessing information consistency between source and summary based on the distance between multiple-choice answer distributions instead of textbased answer spans in existing question-answering methods. Our experiments demonstrate the potential of this alternative approach which outperforms existing techniques on various datasets. The realization of the framework exploits current multiplechoice question generation and answering systems. Its performance is expected to increase as backbone systems improve, for example, the diversity of questions generated and the selection of options. Also, the framework is highly interpretable, allowing more insight into summary assessment. Domain. Our approach is designed to assess the information content, so it may not work well with other aspects of summary evaluation such as fluency or coherency. Our analysis is based on the systems trained on RACE, which is collected from English examinations in China. Hence, the generated questions and answer distributions could be biased towards the style of the examinations. Efficiency. Given the realization of the MQAG framework where two generators G1 and G2 are adopted, the MQAG framework can be slow when using old infrastructure, for example, it takes around 3 seconds per question on one NVIDIA P100 GPU. When applying the answerability mechanism, the threshold N τ y is set to 2.0.
Diversity Enhanced Narrative Question Generation for StoryBooks
Question generation (QG) from a given context can enhance comprehension, engagement, assessment, and overall efficacy in learning or conversational environments. Despite recent advancements in QG, the challenge of enhancing or measuring the diversity of generated questions often remains unaddressed. In this paper, we introduce a multi-question generation model (mQG), which is capable of generating multiple, diverse, and answerable questions by focusing on context and questions. To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model, classifying the questions as answerable or not. We train and evaluate mQG on the FairytaleQA dataset, a well-structured QA dataset based on storybooks, with narrative questions. We further apply a zero-shot adaptation on the TellMeWhy and SQuAD1.1 datasets. mQG shows promising results across various evaluation metrics, among strong baselines. 1
Question generation (QG), focusing on the questions derived from specific text passages or documents, plays an integral role in a wide array of domains. It improves question answering (QA) systems The importance of generating and evaluating multiple questions becomes evident when we examine the creation process of QA datasets One significant application of generating diverse and multiple questions is education. It has been observed that children can develop better reading comprehension skills at an early age by creating narrative questions themselves and being asked comprehension-related questions about storybooks Recently, some researchers have attempted to generate multiple narrative questions. For educational applications, To address the above challenges, we introduce a multi-question generation model (mQG) that generates diverse and contextually relevant questions by referencing questions from the same context. mQG is trained with maximum question similarity loss L M QS , which is designed to make the representation of reference questions and the representation of a target question similar. Moreover, mQG employs a recursive generation framework, where previously generated questions are recursively fed back into the model as mQG is trained to output different questions from reference questions. Same as our two baselines, mQG is trained and evaluated on the FairytaleQA dataset, which focuses on narrative comprehension of storybooks. This dataset is designed to provide high-quality narrative QA pairs for students from kindergarten to eighth grade (ages 4 to 14), and labeled questions as explicit or implicit. We adopt Self-BLEU The main contributions of this paper are summarized as follows. • We expand the scope of the question generation task by generating a comprehensive set of questions, regardless of our knowledge of the answers, and subsequently categorize them into answerable and non-answerable questions. • We introduce mQG, a novel question genera-tion model that is trained using the maximum question similarity loss L M QS and employs a recursive referencing process for generating a wide array of questions while preserving semantic correctness. • We introduce an answerability evaluation model capable of classifying questions as implicit, explicit, or unanswerable. 2 Related Work
Based on given contents, question generation aims to generate natural language questions, where the generated questions are able to be addressed with the given contents. After neural approaches took over a large proportion in QG In natural language generation (NLG), generating outputs that are not only correct but also diverse is essential. In the decoding aspect, diversity has been researched in areas such as top-k sampling In this section, we formalize the multi-question generation task and introduce our mQG. We first formulate our task and then explain how our model's training process incorporates a maximum question similarity loss L M QS . Finally, we provide a detailed outline of our recursive generation framework. The QG task in this paper aims to generate each question using a given context, question type, and the history of questions generated from the same context with the same question type. We use seven wh-words (what, when, where, which, who, why, how) as question types. Mathematically, given the context C, question type QT , and history of generated questions H i = (GQ 1 , GQ 2 , ..., GQ i-1 ), this task can be defined as generating a question, ĜQ, where: For the training process, we extract wh-words from each question by applying part-of-speech tagging with the Spacy mQG is built upon BART As shown in Figure Given a set of reference questions sentencelevel representation as Q = {Q 1 , ..., Q m } and a sentence-level representation of the target question as T Q, the maximum question similarity loss L M QS is computed as follows: where s(Q i , T Q) is a cosine similarity calculation between representations. By optimizing the model parameters to maximize the sentence-level similarity between these different representations, we guide mQG to generate diverse questions within the range of semantic correctness. This is achieved by ensuring that all the representations, which are the ground truth questions, are semantically correct. In doing so, we maintain a balance between diversity and accuracy in the generated questions. The overall training objective L is defined as L CE refers to the cross-entropy loss from a target question. As cross-entropy loss is calculated at the token level, the use of cross-entropy loss enhances mQG to generate syntactically correct questions. Figure TellMeWhy TellMeWhy contains a mixture of explicit and implicit questions. Approximately 28.82% of questions in the dataset are implicit. We evaluate with 1,134 sections and 10,689 questions from the test split. SQuAD1.1 In the experiments, we compare mQG with four baselines; an end-to-end model initialized with BART-large, and methods proposed in As the FairytaleQA dataset consists of multiple questions in one context, we concat all questions and train the BART-large model to generate questions based on each context. To match the number of generated questions, we set the maximal target length to 280 tokens which roughly matches the number of generated questions setting of mQG. We construct this baseline following the framework in QAG. This baseline follows a question-answer generation architecture by EQG. EQG model In evaluating question generation, both the quality and diversity of the generated questions are critical components. Thus, we evaluate each aspect with separate automatic evaluation metrics. 
We use Rouge-L score In order to evaluate whether the generated questions correspond to the context, we leverage SQuAD2.0 dataset Table General baselines (E2E and CB) that generate multiple questions in one iteration show significant underperformance in the Rouge-L F1 score and in the number of generated questions, compared to strong baselines (QAG and EQG), and the mQG. This indicates that to generate multiple questions, a specific model is needed. Across all evaluation metrics, mQG consistently outperforms the baselines. We evaluate the diversity and quality of generated questions on the FairytaleQA dataset with human judges. We hire five annotators, proficient in English as their first foreign language, to further evaluate the diversity and quality of the generated questions. We follow the human evaluation procedure described by In the question diversity study, we randomly sample 5 books from the original test set; and for each book, we randomly sample 8 sections, totaling 40 sections. For each section, we randomly sample three questions as a question set from each model, and provide only the question sets for annotation. For each question set, the annotators rank the three models on a scale of 1 (highest) to 3 (lowest) based on three dimensions of diversity: type-whether the three selected questions have different question types; syntax-whether the three selected questions use different syntax; and content-whether the three selected questions need to be addressed with diverse answers. As shown in Table In the question quality study, we again randomly sample 5 books from the original test set. For each book, we select a random sample of 8 sections. Each section contains four questions, each randomly sampled from three models and ground-truth, totaling 160 questions. Two dimensions are rated from 1 (worst) to 5 (best): appropriateness-whether the question is semantically correct; answerability-whether the question can be addressed by a given section. As shown in Table We conduct a zero-shot evaluation on two distinct datasets, to test mQG more in various real-world scenarios, where contexts and desired questions can differ. Zero-shot evaluation is essential for Table assessing model performance as it illuminates the model's ability to generalize beyond the specific examples it was trained on. In zero-shot evaluation, we compare mQG with two strong baselines, EQG and QAG. Initially, we examine the performance on the Tellmewhy dataset in Table Table Through these two different settings, we see promising results of mQG. It shows the adaptability of mQG to diverse question styles and domains, further validating the robustness and utility of mQG. Given that mQG can be set with the number of questions to generate, we conduct an experiment on various settings of question number per section per question type to generate. In Figure As discussed in section 5.2, mQG aims to increase diversity within questions while maintaining semantic correctness. mQG w/o L M QS refers to the mQG model only trained with L CE . For mQG w/o L M QS and reference questions, we give only question type and context as input while training, and no recursive framework is used in inference. Table In this work, we extend the scope of answerunaware question generation to generate multiple diverse questions. We propose a novel framework that applies a maximum question similarity loss during training to promote question diversity, followed by a recursive generation process for further refinement. 
Additionally, an evaluation model is introduced to verify the answerability of the generated questions. Recognizing the essential role of narrative questions in education, we train and evaluate mQG accordingly. Comprehensive experiments validate the efficacy of mQG across a variety of datasets, highlighting its potential utility in environments that demand diverse narrative questions. Limitations mQG framework utilizes a recursive feedback mechanism for generating questions during the inference stage. However, the quality of these generated questions remains uncertain. If the quality of previously generated questions is poor, this may adversely impact the quality of subsequent questions produced by mQG. Moreover, the quantity of questions that can be generated is limited by a maximum token threshold. Another limitation is the potential risk of misclassification by the evaluation model, which could lead to the categorization of unanswerable questions as answerable. Despite our efforts to mitigate this risk, the evaluation model is still at a level of uncertainty in accurately classifying the generated questions. Even with the fact that reliability scores can be low in NLP tasks, in the quality human evaluation, the reliability scores are relatively low. This can lead to uncertainty in the results. Moreover, in addition to the main results, we compare the performance of mQG between different backbone models and decoding methods. In Table To determine how MQS loss affects training, we conduct experiments with the mQG model using different settings for the weighting factor β. The overall training objective L is defined as In FairytaleQA results are the mean value of 3 crossvalidation results. Rouge-L (alt) denotes one-to-one match calculation. Diff denotes the difference between Rouge-L (ori) and Rouge-L (alt). of implementing MQS loss to enhance diversity within the bounds of semantic correctness. As mentioned in Section 4.3, we calculate the Rouge-L score only to find the highest score for each ground-truth question. This calculation method may lead to the one-to-many matching problem. To determine if the problem has occurred, we compare the results with another Rouge-L calculation Rouge-L (alt). This calculation excludes previously matched generated questions, allowing for only one-to-one matches. In Table For the mQG model, we use the MQS loss of the validation set as the selecting criteria. For the mQG models without MQS loss, we use MLE loss as the selecting criteria.
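As a concrete reference for the maximum question similarity loss and the weighted training objective discussed above, a minimal sketch is given below. The exact form of L_MQS is not reproduced in the text; the version here minimizes the mean of (1 - cosine similarity) between the target question representation and the reference question representations, i.e. it maximizes their similarity, and the default β is only a placeholder.

```python
import torch
import torch.nn.functional as F


def mqs_loss(ref_questions: torch.Tensor, target_question: torch.Tensor) -> torch.Tensor:
    """Maximum question similarity loss sketch.

    ref_questions:    (m, dim) sentence-level representations of the reference
                      questions from the same context and question type
    target_question:  (dim,) sentence-level representation of the target question
    Minimizing the mean of (1 - cosine similarity) pushes the target question
    representation towards the (semantically correct) reference questions.
    """
    sims = F.cosine_similarity(ref_questions, target_question.unsqueeze(0), dim=-1)
    return (1.0 - sims).mean()


def total_loss(ce_loss: torch.Tensor, ref_questions: torch.Tensor,
               target_question: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Overall objective L = L_CE + beta * L_MQS; beta is the weighting factor
    varied in the ablation, and its default value here is an assumption."""
    return ce_loss + beta * mqs_loss(ref_questions, target_question)
```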
945
2,326
945
Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining
Existing models of multilingual sentence embeddings require large parallel data resources which are not available for low-resource languages. We propose a novel unsupervised method to derive multilingual sentence embeddings relying only on monolingual data. We first produce a synthetic parallel corpus using unsupervised machine translation, and use it to fine-tune a pretrained cross-lingual masked language model (XLM) to derive the multilingual sentence representations. The quality of the representations is evaluated on two parallel corpus mining tasks with improvements of up to 22 F1 points over vanilla XLM. In addition, we observe that a single synthetic bilingual corpus is able to improve results for other language pairs.
Parallel corpora constitute an essential training data resource for machine translation as well as other cross-lingual NLP tasks. However, large parallel corpora are only available for a handful of language pairs while the rest relies on semi-supervised or unsupervised methods for training. Since monolingual data are generally more abundant, parallel sentence mining from non-parallel corpora provides another opportunity for low-resource language pairs. An effective approach to parallel data mining is based on multilingual sentence embeddings We propose a method to further align representations from such models into the cross-lingual space and use them to derive sentence embeddings. Our approach is completely unsupervised and is applicable even for very distant language pairs. The proposed method outperforms previous unsupervised approaches on the BUCC 2018 The paper is organized as follows. Section 2 gives an overview of related work; Section 3 introduces the proposed method; Section 4 describes the experiments and reports the results. Section 5 concludes.
Related research comprises supervised methods to model multilingual sentence embeddings and unsupervised methods to model multilingual word embeddings which can be aggregated into sentences. Furthermore, our approach is closely related to the recent research in cross-lingual language model (LM) pretraining. Supervised multilingual sentence embeddings. The state-of-the-art performance in parallel data mining is achieved by LASER Unsupervised multilingual word embeddings. Cross-lingual embeddings of words can be obtained by post-hoc alignment of monolingual word embeddings Cross-lingual LM pretraining. We propose a method to enhance the cross-lingual ability of a pretrained multilingual model by fine-tuning it on a small synthetic parallel corpus. The parallel corpus is obtained via unsupervised machine translation (MT) so the method remains unsupervised. In this section, we describe the pretrained model (Section 3.1), the fine-tuning objective (Section 3.2) and the extraction of sentence embeddings (Section 3.3). We provide details on the unsupervised MT system in Section 3.4. The starting point for our experiments is a crosslingual language model (XLM) When parallel data is available, it can be leveraged in training of the multilingual language model using a translation language model loss (TLM) (Lample and We use this objective to fine-tune the pretrained model on a small synthetic parallel data set obtained via unsupervised MT for one language pair, aiming to improve the overall cross-lingual alignment of the internal representations of the model. In our experiments, we also compare the performance to fine-tuning on small authentic parallel corpora. Pretrained language models produce contextual representations capturing the semantic and syntactic properties of word (subword) tokens in their variable context Aggregating subword embeddings to fixedlength sentence representations necessarily leads to an information loss. We compose sentence embeddings from subword representations by simple element-wise averaging. Even though meanpooling is a naive approach to subword aggregation, it is often used for its simplicity Our unsupervised MT model follows the approach of In this section, we empirically evaluate the quality of our cross-lingual sentence embeddings and compare it with state-of-the-art supervised methods and unsupervised baselines. We evaluate the proposed method on the task of parallel corpus mining and parallel sentence matching. We fine-tune two different models using English-German and Czech-German synthetic parallel data. The XLM model was pretrained on the Wikipedia corpus of 100 languages Monolingual training data for the unsupervised MT models was obtained from NewsCrawl 2007-2008 (5M sentences per language). The text was cleaned and tokenized using standard Moses To generate synthetic data for fine-tuning, we train two unsupervised MT models (Czech-German, English-German) using the same method and parameters as in The small synthetic parallel corpora obtained in the first step are used to fine-tune the pretrained XLM-100 model using the TLM objective. We measure the quality of induced cross-lingual embeddings from different layers on the task of parallel sentence matching described in Section 4.5 and observe the best results at the 12th layer after fine-tuning for one epoch with a batch size of 8 sentences and all other pretraining parameters intact. The development accuracy decreases with fine-tuning on a larger data set. 
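The sentence-embedding extraction described above, element-wise mean pooling of subword representations from a chosen XLM layer, can be sketched with HuggingFace transformers. The checkpoint identifier and the tokenizer call are assumptions of this sketch; after TLM fine-tuning, the fine-tuned model directory would be loaded instead.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# "xlm-mlm-100-1280" is the public XLM-100 checkpoint; replace with the
# TLM-fine-tuned model directory to reproduce the proposed embeddings.
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-100-1280")
model = AutoModel.from_pretrained("xlm-mlm-100-1280", output_hidden_states=True)
model.eval()

@torch.no_grad()
def embed(sentences, layer: int = 12) -> torch.Tensor:
    """Mean-pool the contextual token representations of one layer (index 0 is
    the embedding layer, so index 12 is the 12th transformer layer)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).hidden_states[layer]            # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1).float()    # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # average over real tokens only

def cosine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    a = torch.nn.functional.normalize(a, dim=-1)
    b = torch.nn.functional.normalize(b, dim=-1)
    return a @ b.T
```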
We assess our method against two unsupervised baselines to separately measure the fine-tuning effect on the XLM model and to compare our results to another possible unsupervised approach based on post-hoc alignment of word embeddings. Vanilla XLM. Contextualized token representations are extracted from the 12th layer of the original XLM-100 Word Mapping. We use Word2Vec embeddings with 300 dimensions pretrained on NewsCrawl and map them into the cross-lingual space using the unsupervised version of VecMap We measure the performance of our method on the BUCC shared task of parallel corpus mining where the system is expected to search two comparable non-aligned corpora and identify pairs of parallel sentences. We evaluate on two data sets -the original BUCC 2018 corpus created by inserting parallel sentences into monolingual texts extracted from Wikipedia In order to score all candidate sentence pairs, we use the margin-based approach of Artetxe and Schwenk (2019a) which was proved to eliminate the hubness problem of embedding spaces and yield superior results When comparing our method to related work, it must be noted that the XLM model was pretrained on Wikipedia and therefore has seen the monolingual BUCC sentences during training. This could result in an advantage over other systems, as the model could exploit the fact that it has seen the non-parallel part of the comparable corpus during training. However, since both the proposed method an the vanilla XLM baseline suffer from this, their results remain comparable. We also report results on the News test set which is free from such potential bias (Table The results reveal that TLM fine-tuning brings a substantial improvement over the initial pretrained model trained only using the MLM objective (vanilla XLM). In terms of the F1 score, the gain across four BUCC language pairs is 14.0-22.3 points. Even though the fine-tuning focused on a single language pair (English-German), the improvement is notable for all evaluated language pairs. The largest margin of 21.6 points is observed for the English-Chinese mining task. We observe that using a small parallel data set of authentic translation pairs instead of synthetic ones does not have a significant effect. The weak results of the word mapping baseline can be partially attributed to the superiority of contextualized embeddings for representation of sentences over static ones. Furthermore, word mapping relies on the questionable assumption of isomorphic embedding spaces which weakens its performance especially for distant languages. In our proposed model, it is possible that joint training of contextualized representations induces an embedding space with more convenient geometric properties which makes it more robust to language diversity. Although the performance of our model generally lags far behind the supervised LASER benchmark, it is valuable because of its fully unsupervised nature and it works even for distant languages such as Chinese-Czech or English-Kazakh. To assess the effect of proposed fine-tuning on other language pairs not covered by BUCC, we evaluate our embeddings on the task of parallel sentence matching (PSM). The task entails searching a pool of shuffled parallel sentences to recover correct translation pairs. Cosine similarity is used for the nearest neighbor search. We first evaluate the pairwise matching accuracy on a newstest multi-way parallel data set of 3k sentences in 6 languages. 5 We use newstest2012 for development and newstest2013 for testing. 
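The margin-based scoring used to rank candidate pairs can be sketched as follows. This is the standard "ratio" margin formulation of Artetxe and Schwenk (2019a), written in NumPy as an illustration; the exact neighbourhood size and thresholding used in the experiments are not assumed here.

```python
import numpy as np

def margin_scores(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 4) -> np.ndarray:
    """Ratio margin for every (source, target) pair:
    score(x, y) = cos(x, y) / (mean cos of x to its k NNs / 2
                             + mean cos of y to its k NNs / 2)."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                                      # (n_src, n_tgt) cosine matrix
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)    # avg sim of each source to its k NNs
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)    # avg sim of each target to its k NNs
    denom = knn_src[:, None] / 2 + knn_tgt[None, :] / 2
    return sim / denom

# Mining keeps pairs whose margin score exceeds a threshold tuned on development data.
```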
The results in Table Since the greatest appeal of parallel corpus mining is to enhance the resources for low-resource languages, we also measure the PSM accuracy on the Tatoeba The results are clearly sensitive to the amount of monolingual sentences in the Wikipedia corpus used for XLM pretraining and the matching accuracy of very low-resource languages is significantly lower than we observed for high-resource languages. However, the benefits of fine-tuning are substantial (around 20 percentage points) and for some languages the results even reach the supervised baseline (e.g. Kazakh, Georgian, Nepali). It seems that explicitly aligning one language pair during fine-tuning propagates through the shared parameters and improves the overall representation alignment, making the contextualized embeddings more language agnostic. The propagation effect could also positively influence the ability of cross-lingual transfer within the model in downstream tasks. A verification of this is left to future work. Table We derive sentence embeddings from all layers of the model and show PSM results on the development set averaged over all language pairs in Figure We proposed a completely unsupervised method to train multilingual sentence embeddings which can be used for building a parallel corpus with no previous translation knowledge. We show that fine-tuning an unsupervised multilingual model with a translation objective using as little as 20k synthetic translation pairs can significantly enhance the cross-lingual alignment of its representations. Since the synthetic translations were obtained from an unsupervised MT system, the entire procedure requires no authentic parallel sentences for training. Our sentence embeddings yield significantly better results on the tasks of parallel data mining and parallel sentence matching than our unsupervised baselines. Interestingly, targeting only one language pair during the fine-tuning phase suffices to propagate the alignment improvement to unrelated languages. It is therefore not necessary to build a working MT system for every language pair we wish to mine. The average F1 margin across four language pairs on the BUCC task is ∼17 points over the original XLM model and ∼7 on the News dataset where only one of the evaluated language pairs was seen during fine-tuning. The gain in accuracy in parallel sentence matching across 8 language pairs is 7.2% absolute, lagging only 7.1% absolute behind supervised methods. For the future we would like to apply our model on other cross-lingual NLP tasks such as XNLI or cross-lingual semantic textual similarity.
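Parallel sentence matching itself reduces to a nearest-neighbour search under cosine similarity; a minimal sketch of the pairwise matching accuracy used above (an illustration of the setup, not the evaluation code):

```python
import numpy as np

def psm_accuracy(src_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """src_emb[i] and tgt_emb[i] embed the i-th sentence pair. A pair counts as
    matched when source i retrieves target i as its cosine nearest neighbour."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    nearest = (src @ tgt.T).argmax(axis=1)
    return float((nearest == np.arange(len(src))).mean())
```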
734
1,072
734
Bias in Opinion Summarisation from Pre-training to Adaptation: A Case Study in Political Bias
Opinion summarisation aims to condense the salient information and opinions presented in documents such as product reviews, discussion forums, and social media texts into short summaries that enable users to effectively understand the opinions therein. Generating biased summaries risks swaying public opinion. Previous studies have examined bias in opinion summarisation using extractive models, but little research has paid attention to abstractive summarisation models. In this study, using political bias as a case study, we first establish a methodology to quantify bias in abstractive models, then trace it from the pre-trained models to the task of summarising social media opinions using different models and adaptation methods. We find that most models exhibit intrinsic bias. Using a social media text summarisation dataset and contrasting various adaptation methods, we find that tuning a smaller number of parameters results in less bias than standard fine-tuning; however, the diversity of topics in the training data used for fine-tuning remains critical.
Opinion summarisation aims to condense the opinions presented in the source documents into a summary so that readers can effectively comprehend the opinions in the source documents using input data such as product reviews A summarisation model's output will reflect any biases inherited from the training data. Pre-trained language models (PLMs) were exposed to a variety of data that may contain societal bias, which inevitably perpetuates social stereotypes in models Prior studies have focused on studying bias in opinion summarisation using extractive models by comparing how contents are extracted and if they are representing opinions from different social groups in the source documents equally or proportionally In this study, we use the following definition of fairness: the generated summary must give exposure to the opinions of different social groups equally or proportionally w.r.t. the input documents; more information on this can be found in Section 3. To address the aforementioned issues, this paper introduces a method using a classifier to identify opinions and a fairness metric to measure bias using abstractive summarisation models to summarise text with opinions, using political bias as the case study. We further investigate var-ious adaptation methods and the bias introduced, using our method for evaluating bias in abstractive summarisation. This can be used in conjunction with other performance evaluations to identify models that have good performance while keeping bias to the minimum. We find that different models and their variants express intrinsic bias, and fine-tuning these pretrained models to summarise social media text amplified the bias. In addition, we find that adaptation methods play an important role. We find that tuning a smaller number of parameters using methods such as adapter tuning
Opinion summarisation is a task to summarise user opinions expressed in different online media, such as product reviews, social media conversations, and online discussion forums. There are two primary types of models: extractive -selecting salient sentences from input documents Existing studies of bias in opinion summarisation have focused on the perspective of using social attributes of social media users and examining whether the generated summary reflects these groups fairly by selecting text produced by different social groups equally or proportionally using different social attributes such as gender, race and political stance Prior work has paid attention to bias in language models. Extensive research has focused on social biases such as gender, race and other social attributes Figure Given a collection of tweets, T , defined as T = {t 0 , t 1 , t 2 , ..., t N }. Each tweet t i has a groundtruth label y i ∈ Y for its political stance, where Y = {y 0 , y 1 , y 2 , ..., y N } represents the label set (left or right-leaning). Given a set of input tweets, a model would generate a summary S where each summary consists of a list of sentences defined as S = {s 0 , s 1 , s 2 , ..., s L }. Each generated sentence would be classified as left or right-leaning using the trained classification model discussed in Section 4.1. Given the set of input tweets T , the proportion of left and right-leaning documents can be represented as P T L and P T R respectively. For the generated summary S, the proportion of left and right-leaning sentences can be represented as P SL and P SR respectively. For a model to be considered unbiasedly representing opinions in the provided source documents, it should generate a summary that reflects similar proportions of opinions in the input documents, i.e. P T L = P SL and P T R = P SR , or P T L /P T R = P SL /P SR . In our study, we focus on evaluating the model's output w.r.t. the input proportions only. We are considering two different input scenarios, namely equal input and skewed input. The intuition behind and the details of different input proportions in summarising social media text are below: • Equal Input In the case of equal input, the input documents contain the same proportion of opinions from different social groups. For a model to be considered fair, it should give exposure to opinions from different social groups equally in the generated summary, i.e., if both P T L and P T R are 0.5, the generated summary should reflect this by having both P SL and P SR equal to 0.5. • Skewed Input It is not always practical to have equal distribution in the input documents; instead, they are often proportionally different among different groups. For example, existing studies have shown political parties tweet at different frequencies We evaluate fairness in models based on the idea that the generated summary should give exposure to opinions representing different social groups w.r.t. the input only. More details on the metric we are adapting using these notions for evaluation can be found in Section 4.2. Note that our notion of fairness can be broadly applicable to the summarisation of different types of opinions in other genres, such as positive or negative opinions on specific issues. We formulate our problems in three steps. We first use a classification model to determine whether the sentences in the generated summary represent opinions from left or right-leaning groups. 
Then, using the metric to assess whether a model contains left or right-leaning bias and quantifying the severity by comparing the generated summaries w.r.t. the input documents. The overall process of measuring bias is visualised in Figure We use a RoBERTa Detail of the training process can be found in Section A.6. The average accuracy and macro F1 scores of the model are 0.9162 and 0.9031 respectively. The majority of the input documents contained only a single sentence. We, therefore, treat each sentence in the generated summaries as a tweet and apply the classifier to retrieve opinions in the generated summary. Model generated summaries often consist of compound sentences that contain opposing opinions due to their abstract nature. To overcome this issue, we first use ChatGPT (we use Ope-nAI's ChatGPT API (gpt-3.5-turbo-0301) for our experiments) to split these compound summaries into sentences containing only a single opinion by prompting "Split the following sentences into simple propositions without introducing new information, do it sentence by sentence: \n\n Sentences:". Then apply the classifier to each of these sentences. Note that the summarisation dataset provided by Calculating the proportion of left and right-leaning in the input tweets and summary provides a set of opinion distributions in both the source documents and the summary. To answer the question of whether the generated summary exposes opinions in the input documents equally or proportionally, a similarity measure over pairs of such distributions is required. Even though we can compare two distributions using any distributional divergence, there are some intricacies in the differences between the two distributions that we would like to capture. In particular, which side is a biased model more likely to give exposure to? This means that divergence measures such as the Kullback-Liebler or 1-Wasserstein distance are insufficient as they are deemed not directional. We thus turn to a fairness notion called statistical parity that is used to evaluate fairness in machine learning models and decision-making procedures In our experiments, we report the average Second-order SPD as the overall fairness measurement for each model with different input proportions. We use existing state-of-the-art abstractive summarisation models with different architectures and variants in our study. Including encoder-decoder models BART • BART • T5 • GPT-2 Different from extractive summarisation models, abstractive summarisation models generate the summary to cover key information in the input documents by rephrasing. To achieve this, a certain level of model tuning is required, and different adaptation methods can be applied. We are using the following adaptation methods on all models mentioned in Section 4.3: • Standard fine-tune the models mentioned in Section 4.3 are further trained on a dataset to adapt to the specific task, during this step, the model's parameters are all updated to better adapt to the task at hand. • Adapter tuning instead of updating all parameters in a model, adapter tuning introduces adapter layers in the original model and only updates parameters in these layers • Prefix-tuning (Li and Liang, 2021) is an additive method where the beginning of the input (prefix), is connected to a series of continuous vectors that are specific to the task at hand. In every layer of the model, the hidden states are appended with the prefix parameters; upon tuning, only the prefix parameters will be updated. 
The tokens of the input sequence can still attend to the prefix as virtual tokens. Our implementation of prefix-tuning is using the PEFT library from HuggingFace. • Last decoder layer tuning we freeze all pretrained parameters for the models stated in Section 4.3 with the exception of the final decoder layer. This would only update the final layer of the decoder, leaving the other layers of the model unchanged. 5 Results and Discussion In this study, we use the tweet summarisation dataset provided by Bilal et al. ( (2022) we limit the abstract summarisation models word limit to the generated summary within [90%, 110%] of the gold standard length. We trained the models using the provided training set: 80% for training and 20% for evaluation, with a batch size of 16, for 10 epochs with early stopping, with a learning rate resulting in the lowest validation loss. Then evaluated on the provided test set. To test whether a model has political bias when summarising social media text, we use the political partition of the dataset provided by Recall that we use different input proportions to examine model fairness, we generate the testing dataset as follows: for equal input, we select 50% of tweets from both political stances. We have two scenarios for skewed input: one with more left-leaning tweets (where 75% of the inputs are left-leaning and 25% are right-leaning) and one with more right-leaning tweets (where 25% of the inputs are left-leaning and 75% are right-leaning). For each scenario, we create 100 test inputs with 20 tweets each to ensure it is within maximum input length limit. The purpose of this is to determine if the model can fairly represent both sides given an equal input; in the case of skewed inputs, whether the model can reflect the stances proportionally. A fair model should generate summaries exposing opinions from different social groups w.r.t. the opinion proportions presented in the source documents only. In summary, we first adapt models to summarise social media, test model performance using data provided by Bilal et al. ( representing a particular social group, we are using the term intrinsic bias to denote political bias in social media text summarisation in pre-trained models. We measure the intrinsic bias by looking at the bias expressed when applying models in a zero-shot setting. The result of intrinsic bias can be found in Table 1. A fair model should have a close to zero absolute value of Second-order SPD; negative values indicate including more left-leaning information than it should, and positive values indicate including more right-leaning information than the model should. A model should achieve a close to zero reading for all three input proportions to indicate complete fairness by reflecting political stances w.r.t. the input only. The Second-order SPD (SPD 2nd ) is reported for measuring the fairness of models using different input proportions (equal, more left-leaning, and more right-leaning), and calculated by averaging across test instances. We find that most models can fairly represent the input political stances when the provided inputs are balanced or contain more left-leaning information. However, when providing more right-leaning input, all models failed to expose opinions proportionally in the generated summaries. 
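The fairness computation can be illustrated with a short sketch. The exact definition of Second-order SPD is not reproduced above, so the formula below (the summary-level parity difference minus the parity difference implied by the input proportions) is an assumption that is merely consistent with the stated sign convention (negative values mean over-exposing left-leaning opinions, positive values right-leaning ones); treat it as an illustration, not the paper's implementation.

```python
def proportions(labels):
    """labels are 'L' / 'R' predictions from the political-stance classifier."""
    left = sum(1 for x in labels if x == "L") / len(labels)
    return left, 1.0 - left

def second_order_spd(input_labels, summary_labels) -> float:
    """> 0: the summary over-exposes right-leaning opinions relative to the input;
       < 0: it over-exposes left-leaning opinions; 0: proportional exposure."""
    p_tl, p_tr = proportions(input_labels)    # input tweets
    p_sl, p_sr = proportions(summary_labels)  # single-opinion sentences of the summary
    observed_spd = p_sr - p_sl                # parity difference in the summary
    expected_spd = p_tr - p_tl                # parity difference in the input
    return observed_spd - expected_spd

# Reported values are averaged over the 100 test inputs for each input-proportion
# setting (50/50, 75/25, 25/75).
```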
Overall, models are better at exposing left-leaning opinions than right-leaning opinions, indicating models are expressing left-leaning bias, which is consistent with the zero-shot findings of Different adaptation methods are available other than standard fine-tuning to adapt language models to a specialised task, and it has been shown that tuning a smaller set of parameters can result in more robust performance than standard fine-tuning Overall, models become more left-leaning using different adaptation methods; this is witnessed by the shift of Second-order SPD for equal and more right-leaning inputs, where they have higher absolute negative values, indicating models generate summaries that expose opinions representing the left more than the right. The overall distribution of bias across various models remains similar and mainly reflects intrinsic bias. The dataset provided by In this study, we examine evaluating fairness using abstractive summarisation models to summarise social media opinions, where fair models should generate summaries expose opinions from different social groups w.r.t. the provided input only. In the case of political discussion, we find that most PLMs present intrinsic bias by giving fair exposure to opinions from the left-leaning group but not the right-leaning group. We further investigate different adaptation methods and how they affect fairness. The result shows that models adapting to the task of summarising social media text increase bias in general; however, tuning a smaller number of parameters have relatively lower bias. We further investigate tuning models by individual topic, where we find the benefit of bias reduction diminishes when tuning a smaller number of parameters, which suggests the importance of diverse datasets being presented when tuning a smaller number of parameters. Future work may explore the relationship between exposing models to diverse topics and bias. Our study sheds light on understanding bias and the effect of different adaptation methods on bias in abstractive summarisation models, particularly when summarising text with opinions. In this study, we examine bias in summarising social media text using PLMs and different adaptation methods. We focus on a single type of biaspolitical bias, due to the limited dataset available. We understand and respect the intricacies of political ideologies and recognise that they go beyond a simple binary classification. However, within the confines of our current data, categorising along the left-right spectrum provides a practical and necessary approximation for analysis. We hope that future research with more diverse datasets will allow for a more nuanced exploration of political leanings. However, the framework of this study is applicable to different social biases in summarising social media text. Furthermore, due to the inability to update model parameters with different adaptation strategies in close-sourced LLMs, we focus on open-sourced language models in our work. Having stated that, the methodology for evaluating fairness using LLMs to summarise social media text is still applicable for researchers who have access to these models. This study followed ethical principles and guidelines. The authors of this paper by no means suggest that language models are intentionally biased. We highly encourage readers to investigate and evaluate the findings for themselves. 
Overall, the goal of our research is to promote awareness of bias in summarising social media text, since it is critical to understand what is summarised and whether it represents actual public opinion. Our work contributes to understanding the biases of summarisation models when summarising social media text, which is crucial for their ethical use. We evaluate summarisation quality on the provided test set split by topic. In the in-topic setting, models are tested on the same topic they are trained on, i.e., training on COVID-19 and testing on COVID-19. In the cross-topic setting, models are tested on a different topic, i.e., training on COVID-19 and testing on elections. We measure model performance using the ROUGE score. In the in-topic setting, for the COVID-19 partition, adapter tuning has the overall best performance, obtaining the highest ROUGE scores; for elections, standard fine-tuning obtains the best overall ROUGE scores. When applied in a cross-topic setting, most models show a significant performance drop, except for those fine-tuned using adapter tuning. This suggests that adapter tuning is the most robust method for summarising social media text. To verify the necessity of using Second-order SPD to measure bias, we conducted paired t-tests on the Observed SPD and Expected SPD across various input proportions, models, and adaptation methods. The results, presented in Table , indicate that using SPD alone is not sufficient to capture the change in representation. Open-source packages: we utilise different open-source scientific artifacts in this work, including ROUGE . We declare that the use of all models, datasets, and scientific artifacts in this paper aligns with their intended use.
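The paired t-test comparison described above can be reproduced with SciPy; the per-instance SPD values below are placeholders standing in for the values collected over the 100 test inputs of a given model, adaptation method, and input proportion.

```python
from scipy import stats

# Placeholder per-instance SPD values (one entry per test input).
observed_spd = [0.12, -0.05, 0.30, 0.08, -0.10]
expected_spd = [0.00, 0.00, 0.50, 0.00, -0.50]

t_stat, p_value = stats.ttest_rel(observed_spd, expected_spd)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A significant difference means the summaries' parity systematically departs from
# the parity implied by the inputs, i.e. plain SPD alone does not capture the shift.
```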
1,091
1,840
1,091
A Holistic Approach to Reference-Free Evaluation of Machine Translation
Traditional machine translation evaluation relies on references written by humans. Reference-free evaluation removes the constraint of labor-intensive annotations, can pivot easily to new domains, and is more scalable. In this paper, we propose a reference-free evaluation approach that characterizes evaluation along two aspects: (1) fluency: how well the candidate translation conforms to normal human language usage; (2) faithfulness: how well the candidate translation reflects the source data. We further split faithfulness into word-level and sentence-level components. Extensive experiments spanning the WMT18/19/21 Metrics segment-level daRR and MQM datasets demonstrate that our proposed reference-free approach, ReFreeEval, outperforms SOTA reference-free metrics such as YiSi-2, SentSim and BERTScore-MKD in most language directions. The code can be found at ReFreeEval Repo 1 .
Machine translation evaluation has conventionally relied on reference, where outputs are compared against translations written by humans. This is in contrast to the reference-free manner in which translation quality is directly assessed with the source text. Reference-free evaluation The history of reference-free evaluation for MT can trace back to "QE as a Metric" track of ˚Equal contribution. : Corresponding author. 1 More challenging but worthwhile, we focus on dispensing with references as well as human scores. Nevertheless, embedding-based methods are limited to token-level semantic similarity while neglecting sentence-level faithfulness In addition, current reference-free evaluation methods rarely take fluency into account. For the unfluent candidates whose content is roughly consistent with the source, the embedding-based metrics can hardly discriminate and provide accurate evaluation scores In this work, we propose a holistic approach (i.e., ReFreeEval) to enhance the evaluation model in aspects of fluency and faithfulness, meanwhile on both word and sentence levels. With regard to fluency, we pose a data augmentation method and train a fluency discrimination module. For word-level faithfulness, we adopt a self-guided contrastive word-alignment method. For sentencelevel faithfulness, we execute knowledge distillation with SBERT
Reference-free evaluation of MT can be characterized as two aspects: (1) fluency: how well it conforms to normal human language usage; and (2) faithfulness: how well the translated text reflects the source data. We assess faithfulness at different granularity: word level and sentence level. Figure We explore a data augmentation method to perturb the fluency of target sentences with noise which is difficult to be identified. Then we train a fluency discrimination module with contrastive learning A complex or compound sentence 3 has two or more clauses and relative clauses that are joined together with conjunctions or punctuation. As logical relations exist between these clauses, we manipulate and permute the clauses separated by punctuation, instead of words. In this way, the meaning is preserved inside the clauses, meanwhile, the sentence is often unfluent and unnatural. Similar to complex and compound sentences, for a simple sentence with only one clause 4 , we randomly split it into two fragments and permute the two fragments. Compared to permutation on the token level, clauselevel permutation has less influence on sentence fluency and semantic change. The clause-based 3 We denote a source and target sentence in parallel data as x and y. Perturbed samples augmented from y are ŷ1 , ŷ2 , ..., ŷk . A reliable metric has the ability to give the original fluent target y a higher evaluation score than those k perturbed unfluent samples. As for the score, we adopt the same calculation measure as BERTScore but replace the pre-trained monolingual model In order to discriminate fluent sentences from perturbed ones according to these scores, we treat the original target and its corresponding perturbed samples as opposite and assign them 1/0 hard labels. The cross-lingual model which produces XBERTScore is trained to classify target-side sentences with a cross-entropy loss function. The objective function on N training samples is as follows: e swpx,yq `řk i"1 e swpx,ŷ i q (1) As for word-level faithfulness, each word in the source sentence should have a corresponding crosslingual representation in the target sentence and each word in the target sentence should be an accurate translation of its source word. This motivates us to do word-alignment training to enhance wordlevel evaluation. This module shares similar architecture with sentence-level fluency where word embeddings are derived from 9th layer of XLM-Roberta-Base. We take the same steps as A ij " 1 means x i and y j are aligned. Based on this objective, we adopt a self-guided contrastive cross-lingual word-alignment method. By contrast, we not only pull semantic aligned words to have closer contextual representations but also push unrelated words away The source token and target token are deemed to be unrelated if their similarity value is low. In our method, these unmatched pairs constitute negative samples and are pushed away. Moreover, we set threshold c 2 to further restrict the negative samples. The unmatched pairs whose similarity value is lower than c 2 are discarded from negatives as this unmatched relation can be easily distinguished by the model. In this way, we can control the difficulty of negative samples and only preserve those indistinguishable ones (hard negatives) to train the model. B " pS xy ą c 2 q ˚pS T yx ą c 2 q (3) B ij " 1 means x i and y j are aligned or a part of hard negatives, which are preserved to train. 
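The clause-level perturbation and the fluency discrimination objective described above can be sketched as follows. The loss is a reconstruction of Equation (1) as a softmax cross-entropy that pushes the original target's score s_w(x, y) above the scores of its k perturbed variants; the scoring function itself (an XBERTScore-style cross-lingual score) is left abstract, and the punctuation set used to segment clauses is an assumption of this sketch.

```python
import random
import re
import torch
import torch.nn.functional as F

def clause_permute(sentence: str) -> str:
    """Perturb fluency by shuffling clauses (or the two halves of a single clause)
    while leaving the inside of each clause untouched."""
    clauses = [c.strip() for c in re.split(r"[,，;；:：]", sentence) if c.strip()]
    if len(clauses) >= 2:                 # complex / compound sentence
        random.shuffle(clauses)
    else:                                 # simple sentence: split into two fragments and swap
        tokens = sentence.split()
        if len(tokens) < 2:
            return sentence
        cut = random.randint(1, len(tokens) - 1)
        clauses = [" ".join(tokens[cut:]), " ".join(tokens[:cut])]
    return ", ".join(clauses)

def fluency_loss(score_orig: torch.Tensor, scores_perturbed: torch.Tensor) -> torch.Tensor:
    """score_orig: (B,) scores s_w(x, y); scores_perturbed: (B, k) scores s_w(x, y_hat_i).
    Cross-entropy with the original fluent target as the positive class (index 0)."""
    logits = torch.cat([score_orig.unsqueeze(1), scores_perturbed], dim=1)   # (B, 1 + k)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```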
In Figure Finally, based on two dimensions of source and target, the positive and negative samples mentioned above, we construct a self-guided contrastive learning objective function on the word level as follows: L word " L x `Ly (6) The main idea is to improve sentence-level faithfulness evaluation. Concretely, we distill sentencelevel semantic meaning from SBERT into the wordlevel shared model. We use SBERT to extract semantically meaningful sentence embeddings. Sentence semantic similarity between x and y is calculated with cosinesimilarity between sentence embeddings x and y: The semantic similarity reflects the sentencelevel faithfulness from target to source. Then we can obtain sentence-level faithfulness scores s s px, yq, s s px, ŷ1 q, ..., s s px, ŷk q. We use KLdivergence as the objective function to reduce the discrepancy between sentence-level and word-level similarity: L f a " ÿ x,y 1 PYx s s px, y 1 q log s s px, y 1 q s w px, y 1 q (8) In this distillation module, SBERT plays a role of a teacher. Sentence-level semantic knowledge is distilled into the word-level shared model through these sentence-level faithfulness scores. In this way, evaluation is no longer limited to word level but incorporated sentence semantics. On the other hand, SBERT plays a role as a corrector. It is unreasonable that a disturbed sample with slightly changed semantics is considered to be completely contrary to the original sentence. We correct the binary classification and convert the 0/1 discrete value in the fluency discrimination module to continuous variables. For sentence-level training, we combine fluency with faithfulness. This joint architecture is motivated by α is a hyper-parameter to control the weight that the sentence-level faithfulness module accounts for. 3 Experiment Datasets We train and evaluate on four language pairs: EnglishØChinese and EnglishØGerman. For training, we use the datasets following Awesome-Align Baselines For reference-based metrics, we choose sentBLEU For WMT21 segment-level evaluation, conventional Kendall-tau statistic is used to measure the correlation between our scores and MQM scores. The main results are displayed in Table We propose a reference-free evaluation approach ReFreeEval that comprehensively considers three aspects: aspect. ReFreeEval, combining the above three modules, achieves a higher correlation with human judgments, outperforming current SOTA referencefree metrics like YiSi-2, SentSim and BERTScore-MKD in most language directions. In this section, we discuss some limitations of our method and future work based on the limitations. First, the enhancement of the word-level module is not as strong as the remedy of the sentence-level module. Our word-level module solely achieves improvement compared with XBERTScore but doesn't improve as much as the sentence-level module. The main reason is that the XBERTScore framework lacks sentence-level semantic knowledge. Besides, our word-level self-guided contrastive method doesn't resort to external information and only consolidates the alignment already existing in the pre-trained language model. Second, ReFreeEval performs comparably with baseline models on language pairs involving German. We guess it is due to the evaluation of QE. In the future, we'll further explore valuable external information on word level. And we'll try to explore discrepancies among language pairs to optimize the results. 
In addition, our simple but effective data augmentation method -clause per- mutation doesn't rely on rules or toolkits, which is an initial attempt at modeling fluency. It could benefit from further refinement such as languagespecific knowledge, syntactic and semantic parsing to recognize clauses. We'll conduct an in-depth investigation into further work. would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. We would like to express our sincere gratitude to Hui Huang for guidance before this research. We are also grateful to Chunyou Li, Yu Xiang and Yu Zhang for their assistance during internship. Sentence DA XBERTScore ReFreeEval SRC 但也有顾客认为,网站退款服务不是百分之百 完美。 REF Nonetheless, some customers felt that website refund services are not perfect. But there are also customers who believe the site refund service is not 100 per cent perfect. 1.1059 0.8993 0.9249 But also some customers believe that website refunds money the service is not 100% perfect. -1.5038 0.9031 0.8680 Table metrics are restricted to surface form and neglect semantic meaning. Instead, embedding-based metrics adopt word embedding to explore word-level semantic meaning. WMDo COMET leverages contextual word embeddings of the source sentence, MT hypothesis, and reference (or human post-edition) extracted from pretrained cross-lingual models. The embeddings are combined and fed into a feed-forward network. It's a quality estimation system and is trained with human assessments(DA, HTER, MQM). As reference is costly to be collected in practice, reference-free metrics attract more attention. Recent studies have explored evaluating translation quality only based on the source text. YiSi-2 calculates similarities between crosslingual word embeddings for aligned source and candidate translation words and outputs an Fmeasure statistic as the metric score. OpenKiWi-XLMR As reference-based BERTScore has achieved outstanding performance, many recent referencefree evaluation methods build on BERTScore. XBERTScore (Leiter, 2021) adopts the crosslingual pre-trained language model to evaluate only based on source sentence without reference. SentSim From Table Following the data setting of awesome-align We compare our clause permutation with tokenlevel data augmentation methods shuffling and repetition. The results are displayed in Table For the fluency module alone, our clause-based augmentation method performs much better than the others, which suggests that our method provides more proper and valuable fluency information than others. As for sentence-level faithfulness, we compare the variation of sentence semantic similarity in Table Based on the linguistic definition of clauses, our clause permutation approach can effectively incorporate perturbation to continuity and smoothness, which constitute the essence of fluency. This approach is simple and intuitive, making it a suitable choice for the preliminary step for more in-depth investigations about realistic perturbations. For sentence-level training, we adjust the hyperparameter α to balance fluency and faithfulness. A small α means the sentence-level training mainly focuses on classification, which may neglect the semantic meaning of perturbed samples as we explained in section2.3. While a large α weakens the effect of hard classification labels, the soft similarity is also not enough for sentence-level training. 
From Table Samples in Word-Level Faithfulness We experiment with different settings of threshold c 2 in word-level faithfulness to observe the influence of the difficulty of negative samples. A small c 2 reduces the difficulty of contrastive learning. This setting includes negative samples whose unmatched relations can be easily distinguished. While a large c 2 restricts the negative samples extremely, which may lose some useful information. The results in Table Our experimental results above are based on the model training in a single run with random seed 42. In this section, we implement the statistical significance test following
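The sentence-level faithfulness distillation and the alpha-weighted joint objective discussed above can be sketched as follows. SBERT similarities serve as the teacher signal, and a KL term pulls the word-level scores toward them; normalising the score vectors into distributions and the specific SBERT checkpoint name are assumptions of this sketch, as is the default value of alpha.

```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # placeholder checkpoint

def sbert_similarity(source: str, targets) -> torch.Tensor:
    """Sentence-level faithfulness scores s_s(x, y') for the original target and
    its perturbed variants (the teacher signal)."""
    emb = sbert.encode([source] + list(targets), convert_to_tensor=True,
                       normalize_embeddings=True)
    return emb[1:] @ emb[0]                      # cosine(x, y') for each candidate y'

def faithfulness_loss(word_scores: torch.Tensor, sent_scores: torch.Tensor) -> torch.Tensor:
    """KL divergence between the sentence-level (teacher) and word-level (student)
    score distributions over {y, y_hat_1, ..., y_hat_k}."""
    teacher = F.softmax(sent_scores, dim=-1)
    student_log = F.log_softmax(word_scores, dim=-1)
    return F.kl_div(student_log, teacher, reduction="sum")

def sentence_level_loss(fluency_ce: torch.Tensor, word_scores: torch.Tensor,
                        sent_scores: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Joint objective: fluency classification plus alpha-weighted faithfulness."""
    return fluency_ce + alpha * faithfulness_loss(word_scores, sent_scores)
```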
884
1,357
884
FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge
Detecting factual errors in textual information, whether generated by large language models (LLMs) or curated by humans, is crucial for making informed decisions. LLMs' inability to attribute their claims to external knowledge and their tendency to hallucinate make it difficult to rely on their responses. Humans, too, are prone to factual errors in their writing. Since manual detection and correction of factual errors is labor-intensive, developing an automatic approach can greatly reduce human effort. We present FLEEK, a prototype tool that automatically extracts factual claims from text, gathers evidence from external knowledge sources, evaluates the factuality of each claim, and suggests revisions for identified errors using the collected evidence. An initial empirical evaluation on fact error detection (77-85% F1) shows the potential of FLEEK. A video demo of FLEEK can be found at
While textual information offers a convenient and efficient means of communication, it is critical to acknowledge its potential for misuse or unintended consequences. False or misleading information spreads easily over online platforms Previous works In this work, we present FLEEK (FactuaL Error detection and correction with Evidence Retrieved from external Knowledge), an intelligent and model-agnostic tool designed to support end users (e.g. human graders) in fact verification and correction. Our tool features an intuitive and userfriendly interface, capable of automatically identifying potential verifiable facts from input text. It generated questions for each fact and queries both curated knowledge graphs and the open web to collect evidence. Our tool then verifies the correctness of the facts using the gathered evidence, and suggests revisions to the original text. Our verification process is naturally interpretable since the extracted facts, generated questions, and retrieved evidence all directly reflect which infor-mation units contribute to the verification process. For the example mentioned above, FLEEK would highlight verifiable facts with different colors indicating their factuality levels (see Figure
Figure Taylor Swift is 30 years old In this work, we define a fact as a unit of information that (1) describes a certain entity or (2) captures the relation between two entities (3) describes an event. Each fact consists of a subject, a predicate, and at least one object. We use the semi-structured triple format to represent such a fact. Our goal is to break a sentence into a set of triples such that each triple represents a verifiable piece of information. This way, we can provide more fine-grained verification details for each sentence. To exhaustively extract facts, we consider two triple formats: Flat Triple: For binary predicates, i.e., predicates with one object, we represent the fact in the form of (Subject; Predicate; Object). For example, the triple representation of the fact "Taylor Swift is 30 years old." is (Taylor Swift; age; 30 years old). Extended Triple Ilyas et al. ( To extract these triples, we came up with five challenging human demonstrations such that, for an input sentence, they include different combinations of flat and extended triples. We prompt two instructable LLMs to obtain such triples. More details on LLMs utilized for this task, along with an in-depth analysis of the errors they generate, is provided in section 4. Taylor Swift is 33 years old (born in The generated questions will be sent to two retrieval systems: a knowledge graph (KG)-based system and a web-based system. Knowledge Graph-based: We send the question generated for each triple t to our KG question answering (KGQA) system and collect the retrieved short answers. The answer and can either be a single value (e.g., birth date, birthplace) or a list (e.g., profession, spouses). The ensuing entailment decision is derived differently for these two forms of answers (more details in Section 2.1.4). Web-based: Similarly, we also submit the same question(s) to our web search engine (Web Search). We then take the top-k (e.g., 5) web passages returned for each question and combine them to create a consolidated set of answers. Additionally, Web Search is able to highlight the short answer a for each retrieved passage p. The final retrieval list from Web Search is in the format [(p 1 , a 1 ), (p 2 , a 2 ), ..., (p k , a k )]. Given an input passage p, we split it into a set of sentences {s 1 , ...s i }. We then verify each sentence using the sequential pipeline described below. Given the output of the Fact Extraction component T , the task of Question Generation is to generate questions for each t ∈ T such that the answer to the question is the Object part of t. In this way, various answers retrieved from different sources can be used to verify each triple t. Depending on the format of triple t (flat or extended), we introduce two different question generation paradigms. Type-aware Question Generation (TQGen). Consider the triple In addition to generating a precise type-aware question, we need to provide context for extended triples so that the retrieved evidence corresponds to the exact situation that requires verification. Consider the extended triples mentioned earlier, (Taylor Swift; moved; move_ID; place; Nashville) Given the triple representation of a fact t, the set of KG answers A kg = {a 1 , a 2 , ...}, and the set of Web answers A w = {(p 1 , a 1 ), (p 2 , a 2 ), ..}, the task is to decide whether t is supported by the set of retrieved evidence. This involves two steps: Step 1 -Verify against KG answers. 
Based on our observation, when the evidence retrieved from the KG is a singular value, the expected answer to the question is most likely to also be a single value (e.g. city of birth). Therefore when |A kg | = 1, we classify the fact as "Strongly Supported" if it is entailed by the answer, and "questionable" otherwise. However, if the KG answer is a list, we classify each answer in A kg as either "supporting" or "not supporting" based on whether it entails the fact. In this case, due to the limited coverage of facts in KG Step 2 -Verify against Web answers. In case the KG answer is empty or a list, web answers will be also used to make a decision. We classify the answers in A w as either "supporting" or "not supporting" evidence. Finally, the fact is labeled as "Likely Supported" if our system finds at least one "supporting" evidence and "Questionable" otherwise. In what follows, we describe how perform evidence classification. Triple Entailment. For every triple t, we have a set of retrieved answers A = A kg ∪ {a i |a i ∈ A w }. Our task is to classify each answer as either "supporting" or "not supporting". To this end, we construct an evidence triple t e by replacing the object part of the triple with the short answer retrieved. Therefore, for each a ∈ A and triple t = (S; P ; O), the corresponding evidence triple is t e = (S; P ; a). If the claim triple t = (S; P ; P id; P _attr; O) is extended, the corresponding evidence triple is t e = (S; P ; P id; P _attr; a). The claim and its corresponding evidence triple are then used to form a prompt and fed to LLM to make a final decision. The Fact Revision module aims to correct a questionable fact triple stated in an input sentence into its corrected version while preserving everything else stated in the sentence. More specifically, let s be a sentence containing a questionable triple t src , i.e., s |= t src (i.e., s entails fact t src ). Let the evidence triple formed by the verification process outlined above be t dest . The Fact Revision model will thus rewrite s into s ′ such that s ′ ̸ |= t src ∧ s ′ |= t dest , and s ′ |= t i where t i ̸ = t src is any triple entailed by s. Following is an example (the objects of the triples are in bold): s = "Taylor Swift is 30 years old." t src =(Taylor Swift; age; 30) t dest =(Taylor Swift; age; 33) s ′ = "Taylor Swift is 33 years old." In our implementation, we prompt LLMs with one demonstration to obtain satisfactory results. The frontend of FLEEK is built using Angular 3 and Bootstrap UI 4 , which allows for creating dynamic, interactive, and visually appealing user interface. The backend of FLEEK is handled by Django 5 , a Python-based server-side framework that facilitates the integration with ML-based libraries. The entry point to the system is the two views, LLM and Playground, shown at Figure Previous benchmarks on fact verification Our system has two use cases. The first one is to verify the responses generated by LLMs (in this case, GPT-3). To evaluate our system's performance, we selected 50 questions from WikiQA (Wikipedia open-domain Question Answering) test set (1) identify the facts within the response, (2) label each fact as "Strongly Supported", "Likely Supported", or "Questionable", (3) accompany each fact with an evidence set, particularly the questionable facts. We call this dataset Bench LLM . Each instance in the Bench LLM contains the annotated GPT-3-generated response. The second use case is to verify an arbitrary input text. 
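The evidence-triple construction and the two-step verification logic described above can be condensed into a small sketch. The entailment call is left abstract (in FLEEK it is an LLM prompt with a demonstration), and the handling of KG answer lists, along with the data structures and label strings, is a simplified reading of the description rather than the tool's implementation.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str  # for extended triples, obj holds the attribute's value

def evidence_triple(claim: Triple, answer: str) -> Triple:
    """Replace the object of the claim triple with a retrieved short answer."""
    return Triple(claim.subject, claim.predicate, answer)

def verify(claim: Triple, kg_answers, web_answers, entails) -> str:
    """entails(claim_triple, evidence_triple) -> bool is the LLM-based entailment check."""
    # Step 1: a single KG answer is treated as authoritative.
    if len(kg_answers) == 1:
        supported = entails(claim, evidence_triple(claim, kg_answers[0]))
        return "Strongly Supported" if supported else "Questionable"
    # Step 2: KG answer is empty or a list -> also consult the web answers.
    candidates = list(kg_answers) + list(web_answers)
    if any(entails(claim, evidence_triple(claim, a)) for a in candidates):
        return "Likely Supported"
    return "Questionable"
```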
To create evaluation data that suits this task, we target the introduction section of Wikipedia pages. To partially perturb sentences and create incorrect facts, we sample 50 random sentences with at least one hyperlink. Then, we retrieve the hyperlink's corresponding entity from Wikidata All FLEEK's components that facilitate fact verification and correction use few-shot prompting with a large language model. Any model that can learn from in-context demonstrations can be used to instantiate FLEEK. We choose one open-source model, Vicuna (33 billion parameters), and one closed source model, GPT-3 (175 billion parameters), to create two instances of our tool. We call the instance with Vicuna as its large language model FLEEK V icuna and the instance that utilizes GPT-3 as its large language model FLEEK GP T -3 . We evaluate both instances in the following section. Consider the set of system-generated spans S = {s 1 , ..., s n } and ground truth spans G = {g 1 , ..., g m }. We measure the number of textual spans that are correctly identified, labeled, and attributed to the valid supporting evidence as ov. Then, we calculate verification system's precision as ov |S| , recall as ov |G| , and the F1 score. Table We also measure the accuracy of revisions proposed by the fact correction component. Both systems have on-par performance with an average accuracy of 72.7%. However, our investigation shows that 54.1% of incorrect revisions are a result of errors in previous components propagated through the system. Thus, Fact Correction's average precision, given the correct verification results, is 87.5%. Note that although our initial results show great promise, both evaluation datasets are small (50 sentences) and come from the same data source (Wikpedia). One ongoing work is to create a larger benchmark (with different levels of difficulty from more diverse sources) for a more extensive and reliable evaluation of our system. Error Analysis. We randomly select 30 examples where FLEEK GP T -3 made erroneous decisions and investigate the types of errors each of its components made (Figure We presented FLEEK, an innovative solution geared towards assisting users in verifying the accuracy and factuality of textual claims. We aim to keep improving the FLEEK so that it can be a handy tool for various stakeholders. As part of our future work, we intend to do more comprehensive evaluations of FLEEK, including testing it with various LLMs and over a comprehensive benchmark. Limitation. First, our current system depends on the initial set of responses generated by LLMs to perform the tasks. Nevertheless, we can prompt each component multiple times and employ methods such as majority voting to enhance the accuracy of each task. Second, experiments presented are based on small-scale datasets. We plan to expand both datasets as part of our future endeavors. Finally, both datasets are manually annotated by one annotator. We plan to hire more annotators and refine the annotation process so as to provide a more comprehensive evaluation of our method.
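The span-level precision and recall described above are straightforward to compute; a minimal sketch in which the overlap count ov is simplified to a set intersection over hashable (span, label, evidence) tuples:

```python
def span_prf1(system_spans: set, gold_spans: set):
    """Each element is a (text_span, label, evidence_id) tuple; ov counts spans that
    are correctly identified, labelled, and attributed to valid supporting evidence."""
    ov = len(system_spans & gold_spans)
    precision = ov / len(system_spans) if system_spans else 0.0
    recall = ov / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```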
894
1,231
894
PaperMage: A Unified Toolkit for Processing, Representing, and Manipulating Visually-Rich Scientific Documents
Despite growing interest in applying natural language processing (NLP) and computer vision (CV) models to the scholarly domain, scientific documents remain challenging to work with. They're often in difficult-to-use PDF formats, and the ecosystem of models to process them is fragmented and incomplete. We introduce papermage, an opensource Python toolkit for analyzing and processing visually-rich, structured scientific documents. papermage offers clean and intuitive abstractions for seamlessly representing and manipulating both textual and visual document elements. papermage achieves this by integrating disparate state-of-the-art NLP and CV models into a unified framework, and provides turnkey recipes for common scientific document processing use-cases. papermage has powered multiple research prototypes of AI applications over scientific documents, along with Semantic Scholar's large-scale production system for processing millions of PDFs.
Research papers and textbooks are central to the scientific enterprise, and there is increasing interest in developing new tools for extracting knowledge from these visually-rich documents. Recent research has explored, for example, AI-powered reading support for math symbol definitions However, this type of NLP research on scientific corpora is difficult because the documents come in difficult-to-use formats like PDF, Unlike more mature parsers ( §2.1), these downstream models are often research prototypes ( §2.2) that are limited to extracting only a subset of the structures needed for one's research (e.g., the same model may not provide both sentence splits and figure detection). As a result, users must write extensive custom code that strings pipelines of multiple models together. Research projects using models of different modalities (e.g., combining an imagebased formula detector with a text-based definition extractor) can require hundreds of lines of code. We introduce papermage, an open-source Python toolkit for processing scientific documents. Its contributions include (1) magelib, a library of primitives and methods for representing and manipulating visually-rich documents as multimodal constructs, (2) Predictors, a set of implementations that integrate different state-of-the-art scientific document analysis models into a unified interface, even if individual models are written in different frameworks or operate on different modalities, and (3) Recipes, which provide turn-key access to well-tested combinations of individual (often single-modality) modules to form sophisticated, extensible multimodal pipelines.
Processing visually-rich documents like scientific documents requires a joint understanding of both visual and textual information. In practice, this often requires combining different models into complex processing pipelines. For example, GRO-BID While aforementioned software tools use CRF or BiLSTM-based models, Transformer-based models have seen wide adoption among NLP researchers for their powerful processing capabilities. Recent years have seen the rise of layout-infused Transformers papermage's use case lies between that of turnkey software and a framework for supporting research. Similar to Transformers 3 Design of papermage papermage is three parts: (1) magelib, a library for intuitively representing and manipulating visuallyrich documents, (2) Predictors, implementations of models for analyzing scientific papers that unify disparate machine learning frameworks under a common interface, and (3) Recipes, combinations of Predictors that form multimodal pipelines. In this section, we use code snippets to show how our library's abstractions and syntax are tailored for the visually-rich document problem domain. Data Classes. magelib provides three base data classes for representing fundamental elements of visually-rich, structured documents: Document, Layers and Entities. First, a Document might minimally store text as a string of symbols: 1 >>> from papermage import Document 2 >>> doc . symbols 3 " Revolt : Collaborative Crowdsourcing ... " But visually-rich documents are more than a linearized string. For example, analyzing a scientific paper requires access to its visuospatial layout (e.g., pages, blocks, lines), logical structure (e.g., title, abstract, figures, tables, footnotes, sections), semantic units (e.g., paragraphs, sentences, tokens), and more (e.g., citations, terms). In practice, this means different parts of doc.symbols can correspond to different paragraphs, sentences, tokens, etc. in the Document, each with its own set of corresponding coordinates representing its visual position on a page. magelib represents structure using Layers that can be accessed as attributes of a Document (e.g., doc.sentences, doc.figures, doc.tokens) (Figure 1 >>> sentences = Layer ( entities =[ 2 Entity (...) , Entity (...) , ... See Figure Methods. magelib also provides a set of functions for building and interacting with data: augmenting a Document with additional Layers, traversing and spatially searching for matching Entities in one Layer, and cross-referencing between Layers (see Figure A Document that only contains doc.symbols can be augmented with additional Layers: 1 >>> paragraphs = Layer (...) 2 >>> sentences = Layer (...) 3 >>> tokens = Layer (...) 4 5 >>> doc . add ( paragraphs , sentences , tokens ) Adding Layers automatically grants users the ability to iterate through Entities and crossreference intersecting Entities across Layers: 1 >>> for paragraph in doc . paragraphs : ["Techniques", "for", "collecting", "labeled", "data", "perts", "for", "manual", "annotation", ...] Crowdsourcing provides a scalable and efficient way to construct labeled datasets for training machine learning systems. However, creating comprehensive label guidelines for crowdworkers is often prohibitive even for seemingly simple concepts. Incomplete or ambiguous label guidelines can then result in differing interpretations of concepts and inconsistent labels. 
Existing approaches for improving label quality, such as worker screening or detection of poor work, are ineffective for this problem and can lead to rejection of honest work and a missed opportunity to capture rich interpretations about data. We introduce Revolt, a collaborative approach that brings ideas from expert annotation workflows to crowd-based labeling. Revolt eliminates the burden of creating detailed label guidelines by harnessing crowd disagreements to identify ambiguous concepts and create rich structures (groups of semantically related items) for post-hoc label decisions. Experiments comparing Revolt to traditional crowdsourced labeling show that Revolt produces high quality labels without requiring label guidelines in turn for an increase in monetary cost. This up front cost, however, is mitigated by Revolt's ability to produce reusable structures that can accommodate a variety of label boundaries without requiring new data to be collected. Further comparisons of Revolt's collaborative and non-collaborative variants show that collabvoration reaches higher label accuracy with lower monetary cost. learned models that must be trained on representative datasets labeled according to target concepts (e.g., speech labeled by their intended commands, faces labeled in images, emails labeled as spam or not spam). Protocols and Utilities. To instantiate a Document, magelib provides protocols and utilities like Parsers and Rasterizers, which hook into off-the-shelf PDF processing tools: In this example, papermage runs PDF2TextParser (using pdfplumber) to extract the textual information from a PDF file. Then it runs PDF2ImageRasterizer (using pdf2image) to update the first Document with images of pages. In §3.1, we described how users create Layers by assembling collections of Entities. But how would they make Entities in the first place? For example, to identify multimodal structures in visually-rich documents, researchers might want to build complex pipelines that run and combine output from many different models (e.g., computer vision models for extracting figures, NLP models for classifying body text). papermage provides a unified interface, called Predictors, to ensure models produce Entities that are compatible with the Document. papermage includes several ready-to-use Predictors that leverage state-of-the-art models to extract specific document structures (Table Linguistic/ Semantic Segments doc into text units often used for downstream models. SentencePredictor wraps sciSpaCy BoxPredictor wraps models from LayoutParser SpanPredictor wraps Token Classifiers from Transformers As many practitioners depend on prompting a model through an API call, we implement APIPredictor which interfaces external APIs, such as GPT-3 We also implement SnippetRetrievalPredictor which wraps models like Contriever development of new ones from scratch. Similarly to the Transformers library, a Predictor's implementation is typically independent from its configuration, allowing users to customize each Predictor by tweaking hyperparameters or loading a different set of weights. Below, we showcase how a vision model and two text models (both neural and symbolic) can be applied in succession to a single Document. See Table 1 >>> import papermage as pm 2 >>> cv = pm . BoxPredictor (...) 3 >>> tables , figures = cv . predict ( doc ) 4 >>> doc . 
add ( tables , figures ) Predictors return a list of Entities, which can be group_by() to organize them based on predicted label value (e.g., tokens classified as "title" or "authors"). Finally, these predictions are passed to doc.annotate() to be added to Document. Finally, papermage provides predefined combinations of Predictors, called Recipes, for users seeking high-quality options for turn-key processing of visually-rich documents: 1 from papermage import CoreRecipe 2 recipe = CoreRecipe () 3 doc = recipe . run ( " ... pdf ") Recipes can also be flexibly modified to support development. For example, our current default combines the pdfplumber PDF parsing utility with the I-VILA 4 Vignette: Building an Attributed QA System for Scientific Papers How could researchers leverage papermage for their research? Here, we walk through a user scenario in which a researcher (Lucy) is prototyping an attributed QA system for science. System Design. Drawing inspiration from When presenting the answer to the user, the prototype also visually highlights the retrieved passages as supporting evidence to the generated answer. Fast iterations. Leveraging the bounding box data from papermage to visually highlight the retrieved passages, Lucy suspects the retrieval component is likely underperforming. She makes a simple edit from doc.sentences to doc.paragraphs and evaluates system performance under different input granularity. She also realizes the system often retrieves content outside the main body text. She restricts her traversal to filter out paragraphs that overlap with footnotes-[p.text for p in doc.paragraphs if len(p.footnotes) == 0]making clever use of the cross-referencing functionality to detect when a paragraph is actually coming from a footnote. This example demonstrates the versatility of the affordances provided by magelib. In this work, we've introduced papermage, an open-source Python toolkit for processing scientific documents. papermage was developed to supply high-quality data and reduce friction for research prototype development at Semantic Scholar. Today, it is being used in the production PDF processing pipeline to provide data for both the literature graph As a toolkit primarily designed to process scientific documents, there are two areas where papermage could cause harms or have unintended effects. Extraction of bibliographic information papermage could be used to parse author names, affiliation, emails from scientific documents. Like any software, this extraction can be noisy, leading to incorrect parsing and thus mis-attribution of manuscripts. Further, since papermage relies on static PDF documents, rather than metadata dynamically retrieved from publishers, users of papermage need consider how and when extracted names should no longer be associated with authors, a harmful practice called deadnaming Misrepresentation or fabrication of information in documents In §3, we discussed how papermage can be easily extended to support highlevel applications. Such applications might include question answering chatbots, or AI summarizers that perform information synthesis over one or more papermage documents. Such applications typically rely on generative models to produce their output, which might fabricate incorrect information or misstate claims. Developers should be vigilant when integrating papermage output into any downstream application, especially in systems that purport to represent information gathered from scientific publications.
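As a concrete illustration of the interfaces quoted above, the short sketch below parses a PDF with the turnkey recipe and keeps only body-text paragraphs, mirroring the footnote filter from the vignette. Only the names quoted earlier (CoreRecipe, run, doc.paragraphs, p.footnotes, p.text) are taken from the text; the input path is hypothetical, and the snippet is a sketch rather than papermage's documented API.

from papermage import CoreRecipe

recipe = CoreRecipe()
doc = recipe.run("paper.pdf")  # hypothetical input path

# Cross-referencing: drop any paragraph that overlaps a footnote Entity,
# as in the attributed-QA vignette above.
body_paragraphs = [p.text for p in doc.paragraphs if len(p.footnotes) == 0]

# These passages could then be chunked, embedded, and indexed by a retriever of choice.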
952
1,647
952
Improving Adversarial Text Generation by Modeling the Distant Future
Auto-regressive text generation models usually focus on local fluency, and may cause inconsistent semantic meaning in long text generation. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply. We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues. Specifically, we propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization. Extensive experiments demonstrate that the proposed method leads to improved performance.
Text generation is an important area of investigation within machine learning. Recent work has shown excellent performance on a number of tasks, by combining reinforcement learning (RL) and generative models. Example applications include image captioning For RL-based text generation, most existing works rely on a model-free framework, which has been criticized for its high variance and poor sample efficiency In this paper, we propose a model-based imitation-learning method to overcome the aforementioned issues in text-generation tasks. Our main idea is to employ an explicit guider network to model the generation environment in the feature space of sentence tokens, used to emit intermediate rewards by matching the predicted features from the guider network and features from generated sentences. The guider network is trained to encode global structural information of training sentences, and thus is useful to guide next-token prediction in the generative process. Within the proposed framework, to assist the guider network, we also develop a new type of self-attention mechanism to provide high-level planning-ahead information and maintain consistent semantic meaning. Our experimental results demonstrate the effectiveness of proposed methods.
Text Generation Model Text generation models learn to generate a sentence Y = (y 1 , . . . , y T ) of length T , possibly conditioned on some context X. Here each y t is a token from vocabulary A. Starting from the initial state s 0 , a recurrent neural network (RNN) produces a sequence of states (s 1 , . . . , s T ) given an input sentence-feature representation (e(y 1 ), . . . , e(y T )), where e(•) denotes a word embedding function mapping a token to its ddimensional feature representation. The states are recursively updated with a function known as the cell: s t = h(s t-1 , e(y t )). One typically assigns the following probability to an observation y at location t: p(y|Y <t ) = [softmax(g(s t ))] y . Together (g, h) specifies a probabilistic model π, i.e., log π(Y ) = t log p(y t |Y <t ). (1) To train the model π, one typically uses maximum likelihood estimation (MLE), via minimizing the cross-entropy loss, i.e., J MLE (π) = -E[log π(Y )]. In order to generate sentence Y s from a (trained) model, one iteratively applies the following operations: y s t+1 ∼ Multi(1, softmax(g(s t ))), (2) s t = h(s t-1 , e(y s t )) , where Multi(1, •) denotes one draw from a multinomial distribution. Model-Based Imitation Learning Text generation can be considered as an RL problem with a large number of discrete actions, deterministic transitions, and deterministic terminal rewards. It can be formulated as a Markov decision process (MDP) M = S, A, P, r, γ , where S is the state space, A is the action space, P is the deterministic environment dynamics, r(s, y) is a reward function, and γ ∈ (0, 1) is the discrete-time discount factor. The policy π φ , parameterized by φ, maps each state s ∈ S to a probability distribution over A. The objective is to maximize the expected reward: In model-based imitation learning The model is illustrated in Figure The guider network, implemented as an RNN with LSTM units, is adopted to model environment dynamics to assist text generation. The idea is to train a guider network such that its predicted sentence features at each time step are used to assist next-word generation and construct intermediate rewards, which in turn are used to optimize the sentence generator. Denote the guider network as G ψ (s G t-1 , f t ), with parameters ψ and input arguments (s G t-1 , f t ) at time t, to explicitly write out the dependency on the guider network latent state s G t-1 from the previous time step. 
Here f_t is the input to the LSTM guider, which represents the feature of the current generated sentence extracted by an encoder network. Specifically, let the current generated sentence be Y_{1...t} (encouraged to be the same as parts of a training sentence in training), with f_t calculated as the encoding of Y_{1...t}. The initial state of the guider network is the encoded feature of a true input sentence produced by the same convolutional neural network (CNN), i.e., s^G_0 = Enc(X), where Enc(·) denotes the encoder transformation, implemented with a CNN. Text Generation with Planning We first explain how one uses the guider network to guide next-word generation for the generator (the LSTM decoder in Figure ). Guider Network Training Given a sequence of feature representations (f_1, f_2, ..., f_T) for a training sentence, we seek to update the guider network such that it is able to predict f_{t+c} given f_t, where c > 0 is the number of steps that are looked ahead. We implement this by forcing the predicted feature, G_ψ(s^G_t, f_t), to match the sentence feature f_{t+c}, where D_cos(·, ·) denotes the cosine similarity used as the matching measure. As in many RL-based text-generation methods, such as SeqGAN, the generator is optimized with rewards. We below describe how to use the proposed guider network to define intermediate rewards, leading to a definition of the feature-matching reward. Feature-Matching Rewards We first define an intermediate reward for generating a particular word. The idea is to match the ground-truth features from the CNN encoder in Figure  with the features predicted by the guider network. Intuitively, f_t − f_{t−i} measures the difference between the generated sentences in feature space; the reward is high if it matches the predicted feature transition from the guider network. At the last step of text generation, i.e., t = T, the corresponding reward measures the quality of the whole generated sentence, thus it is called a final reward. The final reward is defined differently from the intermediate reward, discussed below for both the unconditional- and conditional-generation cases. Note that a token generated at time t will influence not only the rewards received at that time but also the rewards at subsequent time steps. Thus we propose to define the cumulative reward, Σ_{i=t}^{T} γ^i r^g_i with γ a discount factor, as a feature-matching reward. Intuitively, this encourages the generator to focus on achieving higher long-term rewards.
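The sketch below illustrates one way the feature-matching reward could be computed, assuming the per-step reward is the cosine similarity between the observed feature transition f_t − f_{t−i} and the transition predicted by the guider, and that the cumulative reward follows the discounted sum given above. The exact functional form in the paper may differ; the inputs are assumed to be 1-D PyTorch feature vectors.

import torch.nn.functional as F

def feature_matching_reward(f_t, f_prev, f_pred):
    # Cosine similarity between the observed transition (f_t - f_prev) and the
    # guider-predicted transition (f_pred - f_prev).
    return F.cosine_similarity(f_t - f_prev, f_pred - f_prev, dim=0)

def cumulative_reward(step_rewards, t, gamma=0.95):
    # Discounted sum from step t to T, mirroring sum_{i=t}^{T} gamma^i * r_i above.
    return sum(gamma ** i * r for i, r in enumerate(step_rewards[t:], start=t))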
Finally, in order to apply policy gradient to update the generator, we combine the featurematching reward with the problem-specific final reward, to form a Q-value reward specified below. Similar to SeqGAN, the final reward is defined as the output of a discriminator, evaluating the quality of the whole generated sentence, i.e., the smaller the output, the less likely the generation is a true sentence. As a result, we combine the adversarial reward r f ∈ [0, 1] by the discriminator where p(y t |s t-1 ; φ, ϕ) is the probability of generating y t given s t-1 in the generator. Algorithm 1 describes the proposed model-based imitation learning framework for text generation. Model-based or Model-free Text generation seeks to generate the next word (action) given the current (sub-)sentence (state). The generator is considered as an agent that learns a policy to predict the next word given its current state. In previous work As illustrated in Figure For the generator, we put an adversarial regularizer on the encoded latent s 0 (X) and penalize it if it contains the sentiment information, by maximizing the entropy, i.e., max l p(l| s 0 (X)) log p(l| s 0 (X)), where p is a pre-trained classifier. Intuitively, the generator gives candidate words represented by O t , while the guider makes a choice implicitly by w t based on the sentiment information. The sentiment information is contained in w t , while the content of the original sentence is represented by O t . To achieve styletransfer, one feeds the original sentence X with the target style label l to get the transferred sentence Y with style l. Following previous work We first review related works that combine RL and GAN for text generation. As one of the most rep-resentative models in this direction, SeqGAN RL techniques can also be used in other ways for text generation generation. More details of GMGAN are provided in Appendix D. We use the COCO Image Captions Dataset, in which most sentences have a length of about 10 words. Since we consider unconditional text generation, only image captions are used as the training data. After preprocessing, we use 120,000 random sample sentences as the training set, and 10,000 as the test set. The BLEU scores with different methods are listed in Table Long Text Generation: EMNLP2017 WMT Following Human Evaluation Simply relying on the above metrics is not sufficient to evaluate the proposed method We require all the workers to be native English speakers, with approval rate higher than 90% and at least 100 assignments completed. We randomly sample 100 sentences generated by each model. Ten native English speakers on Amazon Mechanical Turk are asked to rate each sentence. The average human rating scores are shown in Table We conduct ablation studies on long text generation to investigate the improvements brought by each part of our proposed method. We first test the benefits of using the guider network. Among the methods compared, Guider is the standard MLE model with the guider network. We further compare RL training with i) only final rewards , ii) only feature-matching rewards, and iii) combining both rewards, namely GMGAN. The results are shown in Table Res152-SCST: a group of zebras standing in a eld . Res152-GMST: a herd of zebras standing in a eld of grass . Tag-SCST: a zebra and a zebra drinking water from a eld of grass . Tag-GMST: a group of zebras drinking water in the eld of grass . Res152-SCST: a group of people walking down a skateboard . 
Res152-GMST: a group of people standing on a street with a skateboard.
Tag-SCST: a woman walking down a street with a skateboard.
Tag-GMST: a black and white photo of a man riding a skateboard.
Res152-SCST: a baby sitting next to a baby giraffe.
Res152-GMST: a little baby sitting next to a baby holding a teddy bear.
Tag-SCST: a black and white photo of a woman holding a teddy bear.
Tag-GMST: a black and white photo of a man and a woman holding a teddy bear.
The training loop computes evaluation scores based on references, computes Q^s_t via (6), and updates π_φ with policy gradient via (8), repeating until GMST converges.
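Putting the pieces together, a rough sketch of the generator update suggested by the training loop above might look as follows. The rollout, guider, and discriminator interfaces are placeholders rather than the paper's implementation, the reward mixing is simplified, and feature_matching_reward is the sketch given earlier; log-probabilities are assumed to be differentiable tensors so that a REINFORCE-style gradient can be taken.

def generator_update(generator, guider, discriminator, optimizer, gamma=0.95):
    # Sample a sentence and keep per-step log-probabilities and sentence features.
    tokens, log_probs, feats = generator.rollout()           # hypothetical sampling API
    rewards = []
    for t in range(1, len(feats)):
        f_pred = guider.predict(feats[t - 1])                 # predicted future feature
        rewards.append(feature_matching_reward(feats[t], feats[t - 1], f_pred))
    rewards.append(discriminator(tokens))                     # final (adversarial) reward

    loss = 0.0
    for t, log_p in enumerate(log_probs):
        q_t = sum(gamma ** i * r for i, r in enumerate(rewards[t:], start=t))
        loss = loss - log_p * q_t                             # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()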
700
1,257
700
Handling Japanese Homophone Errors in Revision Support System for Japanese Texts; REVISE
Japanese texts frequently suffer from the homophone errors caused by the KANA-KANJI conversion needed to input the text. It is critical, therefore, for Japanese revision support systems to detect and to correct homophone errors. This paper proposes a method for detecting and correcting Japanese homophone errors in compound nouns. This method can not only detect Japanese homophone errors in compound nouns, but also can find the correct candidates for the detected errors automatically. Finding the correct candidates is one superiority of this method over existing methods. The basic idea of this method is that a compound noun component places some restrictions on the semantic categories of the adjoining words. The method accurately determines that a homophone is misused in a compound noun if one or both of its neighbors is not a member of the semantic set defined by the homophone. Also, the method successfully indicates the correct candidates for the detected homophone errors.
We have been using morphological analysis to develop REVISE, a revision support system that corrects Japanese input errors Most Japanese texts are made with Japanese word processors. As Japanese texts consist of phonograms, KANA, and ideograms, KANJI, lapanese word processors always use KANA-KANJI conversion in which KANA sequences (i.e. readings) input through the key board are converted into KANA-KANJI sequences. Therefore, Japanese texts suffer from homophone errors caused by erroneous KANA-KANJI conversion. A homophone error occurs when a KANA sequence is converted into the wrong word which has the same KANA sequence (i.e. the same reading). Therefore, detecting and correcting homophone errors is an important topic. Previous research into detecting homophone errors with revision supportsystems used two approaches; (a) using correct-wrong word pairs This paper describes a method for detecting and correcting homophone errors in compound nouns used in REVISE. The idea underlying this method is that a compound noun component semantically restricts the semantic categories of adjoining words. Using semantic categories reduces dictionary size; moreover, this method needs no syntactic information such as case frames. Mso described are the experimental results made to certify the validity of this method.
Key terms used in this paper are defined as follows:
• Japanese compound noun; A noun that consists of several nouns, none of which have JOSHI (i.e. Japanese postpositions).
• Homophone; A word that sounds the same as another but has different spelling (i.e. KANJI sequence) and meaning.
• Homophone error; An error that occurs when a KANA sequence is converted into the wrong word which has the same KANA sequence (i.e. the same reading) as the correct one.
• Semantic category; A class for dividing nouns into concepts according to their meaning. For example, two different nouns that both denote natural objects belong to the same semantic category [nature].
It is necessary to use semantic information, such as the semantic restriction between words in a sentence, to handle homophone errors. We note that it is difficult, if not impossible, to handle all homophone errors uniformly. For example, within a compound noun, the semantic restriction is mainly seen between adjacent words, whereas the case frame semantic restriction encompasses the whole sentence. Therefore, the discussion of this paper focuses on the detection and correction of homophone errors in compound nouns.
4 A method for handling homophone errors
A compound noun that includes only one homophone, h_i, is represented as w_p h_i w_n, where w_p and w_n are words that have no homophones. The set of words with the same reading as h_i is H = {h_1, h_2, ..., h_i, ..., h_m}. PS_i is the set of semantic categories that can appear immediately before homophone h_i, and NS_i is the set of semantic categories that can appear immediately after h_i. Here, we assume that the semantic restrictions of the words in set H are mutually exclusive, that is, for every i, j:
PS_i ∩ PS_j = NS_i ∩ NS_j = ∅, i ≠ j, i, j = 1, 2, ..., m. (1)
In the compound noun w_p h_i w_n, when h_i is the correct homophone, the semantic categories of w_p and w_n satisfy the semantic restrictions of h_i, i.e.,
the semantic category of w_p ∈ PS_i and the semantic category of w_n ∈ NS_i. (2)
On the other hand, when h_i is the wrong homophone, the semantic categories of w_p and w_n do not satisfy the semantic restriction for h_i; i.e., from (1) and (2),
the semantic category of w_p ∉ PS_i or the semantic category of w_n ∉ NS_i. (3)
The correct homophone in a compound noun should satisfy the semantic restrictions established by its adjoining words. The semantic category of the word adjoining a homophone error should be included in the sets of semantic categories that can appear immediately before/after the correct homophone. Namely, the correct candidates for a detected homophone error are those words that satisfy formula (2) and that have the same KANA sequence (i.e. the same reading) as the error. When the semantic category sets of homophones partially overlap and the category of the adjoining word falls into the overlap region, the homophone is detected as erroneous even if it is correct, as described above in 4.2. In this case, the detected homophone itself is also indicated as one of the correct candidates if it satisfies formula (2). Indicating only candidates which satisfy formula (2) leads to a shortened correction process because the correct homophone will be included in the candidates.
The semantic restriction dictionary describes which semantic categories can adjoin, either before or after, each homophone (see Figure ). Each dictionary record contains the following fields:
• homophone reading: the semantic restriction dictionary is retrieved by the homophone reading in the error correction process, to find the correct candidates for the detected homophone error.
• KANJI homophone spelling: the dictionary is retrieved by the KANJI homophone spelling in the error detection process, to determine whether the homophone is misused in the compound noun or not.
• preceding/following flag: information on whether the semantic restrictions in this record apply to the preceding or the following word.
• semantic restrictions: the set of semantic categories that can adjoin the homophone. Semantic categories which are included in the sets of two or more homophones are marked to show insufficient semantic discrimination.
Ways of using the semantic restriction dictionary in both processes, error detection and error correction, are described using examples in the next section. Figure  shows an example of detecting a homophone error in a compound noun that contains the homophone glossed as "chemistry"; one of its components has the homophonic alternatives glossed as "machine" and "chance", while the other component has no homophonic word. Consider also an example that exhibits insufficient semantic discrimination, shown in figure . The semantic category [act] of the word preceding the homophone is an element of the prior-neighbor set {[body], [tool], [act]} of one homophonic word, but [act] can also appear before another homophonic word with the same reading, so the homophone is detected as an error and the correction process is invoked. The semantic restriction dictionary is then accessed using the reading of the detected homophone, and the sets of semantic categories that may precede each of its homophonic words are obtained. The category [act] is an element of the set for the first homophonic word but is not included in the set {[dominate], [duty], [transaction]} for the second. According to formulae (2) and (3), only the homophonic words whose sets contain [act] are indicated as correct candidates (the original homophone, which is in fact correct in this example, is among them). Although the correct homophone is detected as an error, the fact that the correct (original) homophone remains among the candidates shortens the correction process.
The validity of this method was confirmed with experiments in detecting and correcting homophone errors. We assumed that the input compound nouns were already segmented into component words and that their readings and semantic categories were already added. Compound nouns including all the homophones in table 1 were collected from newspaper articles over a 90-day period, and the semantic restriction dictionary was made based on the semantic restrictions between the homophones and the adjoining words in compound nouns. Generally speaking, the performance of an error detection method can be measured by two indices: the detection rate indicates the percentage of errors correctly determined, and the misdetection rate indicates the percentage of correct words that are erroneously detected as errors. The detection rate is defined as:
Detection rate = (number of errors detected) / (actual number of wrong compounds in the sample).
The misdetection rate is defined as:
Misdetection rate = (number of homophones misdetected) / (actual number of correct compounds in the sample).
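A minimal sketch of the detection and correction rules described in this section is given below, assuming the semantic restriction dictionary maps each homophone spelling to its (PS, NS) category sets and each reading to the set of homophonic spellings. The entries and category names are illustrative placeholders, not the dictionary used in REVISE.

# spelling -> (PS: categories allowed immediately before, NS: allowed immediately after)
restrictions = {
    "homophone_A": ({"body", "tool", "act"}, {"product"}),
    "homophone_B": ({"dominate", "duty", "transaction"}, {"product"}),
}
# reading -> spellings that share it
homophones = {"reading_1": ["homophone_A", "homophone_B"]}

def is_detected_as_error(spelling, prev_cat, next_cat):
    ps, ns = restrictions[spelling]
    return prev_cat not in ps or next_cat not in ns          # formula (3)

def correction_candidates(reading, prev_cat, next_cat):
    # All words with the same reading that satisfy formula (2).
    return [h for h in homophones[reading]
            if prev_cat in restrictions[h][0] and next_cat in restrictions[h][1]]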
The experimental results are shown in table . We analyzed the experimental results and determined that misdetection is caused by two factors: (a) an imperfect semantic restriction dictionary, and (b) semantic categories that belong to the restriction sets of two or more words having the same reading. The number of compound nouns used to make the semantic restriction dictionary was different for each word reading. When the number of compound nouns used to construct the dictionary is large enough, misdetection caused by factor (a) will be minimized. Factor (b) can be offset by optimizing the semantic category system to improve semantic discrimination. This problem will be researched in the future. This paper has described a method for detecting and correcting Japanese homophone errors in compound nouns used in a revision support system for Japanese texts; REVISE. The underlying concept of this method is that a compound noun component can restrict the set of semantic categories of words that can adjoin the component. The method accurately determines that a homophone is misused in a compound noun if one or both of its neighbors is not a member of the set defined by the homophone. Also, the method successfully indicates the correct candidates for the detected homophone errors automatically. Experiments indicate that the detection rate is over 95% and that the misdetection rate is less than 30%. These results confirm the validity of this method in detecting and correcting Japanese homophone errors in compound nouns.
988
1,320
988
INTERFAIR: Debiasing with Natural Language Feedback for Fair Interpretable Predictions
Debiasing methods in NLP models traditionally focus on isolating information related to a sensitive attribute (e.g. gender or race). We instead argue that a favorable debiasing method should use sensitive information 'fairly,' with explanations, rather than blindly eliminating it. This fair balance is often subjective and can be challenging to achieve algorithmically. We explore two interactive setups with a frozen predictive model and show that users able to provide feedback can achieve a better and fairer balance between task performance and bias mitigation. In one setup, users, by interacting with test examples, further decreased bias in the explanations (5-8%) while maintaining the same prediction accuracy. In the other setup, human feedback was able to disentangle associated bias and predictive information from the input leading to superior bias mitigation and improved task performance (4-5%) simultaneously.
Debiasing human written text is an important scientific and social problem that has been investigated by several recent works However, a user can potentially further tune the model's belief on the bias, leading to a correct prediction while minimally using biased information. While interactive NLP models recently focused on model debugging In this paper, we propose INTERFAIR, a modular interactive framework that (1) enables users to provide natural language feedback at test time to balance between task performance and bias mitigation, (2) provides explanations of how a particular input token contributes to the task performance and exposing bias, and finally (3) achieves better performance than a trained model on full-text input when augmented with feedback obtained via interactions.
An interpretable debiasing algorithm produces a rationale along with a prediction of the original task to expose the amount of bias or sensitive information used. Precisely, a rationale is the minimal and sufficient part of the input responsible for the prediction. For text input, let the predictive input tokens for the task output be called task rationales and tokens revealing sensitive information be called bias rationales. Since the model solely uses the rationales to predict the task output, these rationales are highly faithful According to We highlight that even an algorithmically debiased model can have failure modes and one potential option is to fix the problem at the inference time. We argue that human users are better at fixing the failure cases that a model is unable to learn from the training data. We also assume that the model parameters remain frozen during the fixing process, and users only interact with the final prediction and its associated hidden model states. We start with a frozen model that is algorithmically debiased and allow users to interact and edit its rationale at the inference time towards lower bias. Since rationales are tied to task prediction, the user should edit them without lowering the task performance. Primarily, the users are encouraged to find better low-bias replacements for tokens highly important for both task performance and revealing bias. To this end, we hypothesize a system, INTERFAIR, to achieve a fair balance between task performance and bias. For the scope of this paper, we use classification as the predictive task and text only as the input modality. For the base model, we use an LSTM classification model, trained using the procedure described in During operation, the user queries with a text input for the classification task (e.g., predicting the profession from a biography) and a known bias variable (e.g., gender). After querying, the user receives the prediction, rationales (with importance scores) for the task prediction, and the bias variable. Since the goal is to potentially disentangle the bias from the predictive task, we restrict users to directly modify the bias rationales only. A change in the bias rationales will trigger a change in the task rationales and, finally, in the prediction. Since rationales are in natural language (tokens), we enable users to interact in natural language (NL). INTER-FAIR converts the NL feedback to be actionable for the model to update its rationales. Rationales are presented to the users with importance scores for each input token (see Figure The simplest form of feedback is to provide feedback on the bias importance of a certain input token by indicating whether they would be high or low. However, we expect users to have linguistic variations in their queries. To generalize the process of parsing the NL feedback to actionable feedback for all input tokens, we treat it as a sequence labeling task. Specifically, we build a parser that encodes the NL feedback, bias variable (e.g., gender), and the original task input and produces a sequence of High / Low / NA labels for the complete input token sequence. An example feedback and its parse are shown in Table After parsing the NL feedback, we use the parse labels to update the bias importance scores. First, we convert each parse label to a numeric equivalent using the following map (parse label → important score): High → 1; Low → 0; NA → unchanged. 
Then we use a linear combination to update the bias importance scores: bias_new = α · bias_old + (1 − α) · bias_user, with α a hyperparameter and bias_user the numeric equivalent of the user feedback. A change in bias importance scores should propagate to the task rationale. We explored two strategies to update the task rationale. • Heuristic: Following prior work, the task rationale is updated heuristically from the new bias importance scores. • Gradient: Since changes in bias rationale scores affect task rationale scores (and hence the task rationales), we can directly perturb the final hidden states h of the classification model that generate the task rationale scores for each token. We break our experiments into two parts: 1) developing the NL parser and 2) interactive debiasing with INTERFAIR. We use BiosBias (De-Arteaga et al., 2019), a dataset made from a large-scale user study of gender in various occupations. It contains short biographies labeled with gender and profession information, and a possible confluence exists between gender and the annotated profession labels. Using INTERFAIR, we would like to predict the profession from biographies without the influence of gender. For evaluation, we use accuracy for task performance (profession prediction) and an off-the-shelf gender detector to measure the bias in the task rationales (Bias F1). We perform a user study with 10 subjects who interact with INTERFAIR and optionally provide feedback under one of two objectives: 1) Constrained: minimize bias in task rationales without changing the task prediction, and 2) Unconstrained: minimize bias in task rationales as a priority, but update the task prediction if it seems wrong. The cohort was English-speaking and had an awareness of gender biases but did not have formal education in NLP/ML. The study included an initial training session with 10 instances from the BiosBias test set. Subsequently, participants engaged with 500 reserved examples designated for the interactive debiasing phase. The gender split of the subject pool was 1:1. To understand the change in model performance and bias, we consider two other debiasing models along with the base model. INTERFAIR without feedback balances task performance and bias very well. In the constrained setup, the user locks in the task performance (by design) but is able to decrease bias further at inference time just by perturbing model hidden states using NL feedback. In the unconstrained setup, users are able to modify bias rationales in a way that improves task performance while decreasing bias. Most importantly, even though 81% (the full-text performance) is the upper bound of accuracy for purely training-based frameworks, users achieve better task performance (by 4-5%) while keeping the bias in rationales minimal. In both setups, gradient-based changes in model states are superior to the heuristic strategy for modifying the final task rationales. Although the unconstrained setup can also confuse users and may lead to failure modes, the lowest bias F1 is achieved in the unconstrained setup; moreover, users were able to keep the bias as low as the INTERFAIR-base model in all interactive settings. Test-time improvement of task performance and bias with a frozen model indicates that 1) full-text-based training suffers from spurious correlations or noise that hamper task performance, and 2) interactive debiasing is superior to no feedback since it produces better-quality human feedback to refine task performance while eliminating bias.
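The label-to-score mapping and the linear-combination update described above can be sketched as follows; the variable names and the default α are illustrative, and the sequence-labeling parser that produces the High/Low/NA labels is not shown.

LABEL_VALUE = {"High": 1.0, "Low": 0.0}   # "NA" leaves the score unchanged

def update_bias_scores(bias_scores, parse_labels, alpha=0.5):
    updated = []
    for score, label in zip(bias_scores, parse_labels):
        if label == "NA":
            updated.append(score)
        else:
            updated.append(alpha * score + (1 - alpha) * LABEL_VALUE[label])
    return updated

# Example: the feedback marks the first token as low-bias and leaves the rest unchanged.
print(update_bias_scores([0.9, 0.2, 0.4], ["Low", "NA", "NA"]))  # [0.45, 0.2, 0.4]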
This test-time improvement can be seen as a proxy for data augmentation, leading to a superior disentanglement of original task performance and bias. Finally, since test-time interactions modify task rationales, we check their faithfulness using comprehensiveness and sufficiency scores, measured as defined in prior work. Feedback format. In our initial pilot study with a sample size of N=5 (subjects with no background in NLP/ML), we investigated two feedback formats: 1) allowing participants to perturb weights through three options (NA/High/Low), and 2) soliciting natural language feedback. While it may seem more efficient to offer feedback by engaging with individual tokens and selecting a perturbation option, participants expressed confusion regarding how altering the significance of each token would effectively mitigate bias. Conversely, participants found it more intuitive to provide natural language feedback such as "A person's name is unrelated to their profession." To understand whether this would change had our participants possessed a background in NLP/ML, we conducted a supplementary study involving another cohort of 5 participants, all of whom had completed at least one relevant course in NLP/ML. These participants encountered no difficulties in directly manipulating token importance using the NA/High/Low options, and the results revealed a trend comparable to that of the natural language feedback format. Beyond LSTMs. The LSTM-based base model benefits from the gradient update during interactive debiasing, but to extend this to models with no hidden-state access (e.g., GPT-3), we have to restrict ourselves to the heuristic-based approach. We investigate a modular pipeline that uses GPT-3 to extract both the task and bias rationales, followed by an LSTM-based predictor that predicts the task labels using only the task rationales. The rationale extractor and task predictor are not connected parametrically, another reason why we can only use heuristic-based methods to update the task rationales. The final accuracy and Bias F1 were not significantly different from what was achieved in our LSTM-based setup, despite the GPT-3-based INTERFAIR-base having significantly better performance (acc. 84.0). This suggests that the choice of the underlying base model may not be significant if the output can be fixed through interactive debiasing. In summary, INTERFAIR shows the possibility of user-centric systems where users can improve model performance by interacting with the model at test time. Test-time user feedback can yield better disentanglement than what is achieved algorithmically during training. Debiasing is a subjective task, and users can take greater agency to guide model predictions without affecting model parameters. However, INTERFAIR does not memorize previous feedback, at a cost to generalization; this could be addressed via memory-based interactions.
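The faithfulness scores mentioned above can be sketched with the common ERASER-style definitions: comprehensiveness measures how much the predicted class probability drops when the rationale tokens are removed, and sufficiency how much it drops when only the rationale tokens are kept. The exact formulation used for INTERFAIR may differ, and model_prob is a hypothetical function returning P(label | tokens).

def comprehensiveness(model_prob, tokens, rationale_idx, label):
    keep = set(rationale_idx)
    reduced = [t for i, t in enumerate(tokens) if i not in keep]
    return model_prob(tokens, label) - model_prob(reduced, label)

def sufficiency(model_prob, tokens, rationale_idx, label):
    keep = set(rationale_idx)
    only_rationale = [t for i, t in enumerate(tokens) if i in keep]
    return model_prob(tokens, label) - model_prob(only_rationale, label)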
926
793
926
Coarse-to-Fine Decoding for Neural Semantic Parsing
Semantic parsing aims at mapping natural language utterances into structured meaning representations. In this work, we propose a structure-aware neural architecture which decomposes the semantic parsing process into two stages. Given an input utterance, we first generate a rough sketch of its meaning, where low-level information (such as variable names and arguments) is glossed over. Then, we fill in missing details by taking into account the natural language input and the sketch itself. Experimental results on four datasets characteristic of different domains and meaning representations show that our approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders.
Semantic parsing maps natural language utterances onto machine interpretable meaning representations (e.g., executable queries or logical forms). The successful application of recurrent neural networks to a variety of NLP tasks In this work, we propose to decompose the decoding process into two stages. The first decoder focuses on predicting a rough sketch of the meaning representation, which omits low-level details, such as arguments and variable names. Example sketches for various meaning representations are shown in Table We argue that there are at least three advantages to the proposed approach. Firstly, the decomposition disentangles high-level from low-level semantic information, which enables the decoders to model meaning at different levels of granularity. As shown in Table Our framework is flexible and not restricted to specific tasks or any particular model. We conduct experiments on four datasets representative of various semantic parsing tasks ranging from logical form parsing, to code generation, and SQL query generation. We adapt our architecture to these tasks and present several ways to obtain sketches from their respective meaning representations. Experimental results show that our framework achieves competitive performance compared
Length Example GEO 7.6 13.7 6.9 x : which state has the most rivers running through it? y : (argmax $0 (state:t $0) (count $1 (and (river:t $1) (loc:t $1 $0)))) a : (argmax#1 state:t@1 (count#1 (and river:t@1 loc:t@2 ) ) ) ATIS 11.1 21.1 9.2 x : all flights from dallas before 10am y : (lambda $0 e (and (flight $0) (from $0 dallas:ci) (< (departure time $0) 1000:ti))) a : (lambda#2 (and flight@1 from@2 (< departure time@1 ? ) ) ) DJANGO 14.4 8.7 8.0 x : if length of bits is lesser than integer 3 or second element of bits is not equal to string 'as' , y : if len(bits) < 3 or bits Various models have been proposed over the years to learn semantic parsers from natural language expressions paired with their meaning representations More recently, neural sequence-to-sequence models have been applied to semantic parsing with promising results Our own work also aims to model the structure of meaning representations more faithfully. The flexibility of our approach enables us to easily apply sketches to different types of meaning representations, e.g., trees or other structured objects. Coarse-to-fine methods have been popular in the NLP literature, and are perhaps best known for syntactic parsing The idea of using sketches as intermediate representations has also been explored in the field of program synthesis Our goal is to learn semantic parsers from instances of natural language expressions paired with their structured meaning representations. We first generate the meaning sketch a for natural language input x. Then, a fine meaning decoder fills in the missing details (shown in red) of meaning representation y. The coarse structure a is used to guide and constrain the output decoding. denote a natural language expression, and y = y 1 • • • y |y| its meaning representation. We wish to estimate p (y|x), the conditional probability of meaning representation y given input x. We decompose p (y|x) into a twostage generation process: where a = a 1 • • • a |a| is an abstract sketch representing the meaning of y. We defer detailed description of how sketches are extracted to Section 4. Suffice it to say that the extraction amounts to stripping off arguments and variable names in logical forms, schema specific information in SQL queries, and substituting tokens with types in source code (see Table where In the following, we will explain how p (a|x) and p (y|x, a) are estimated. An encoder is used to encode the natural language input x into vector representations. Then, a decoder learns to compute p (a|x) and generate the sketch a conditioned on the encoding vectors. Input Encoder Every input word is mapped to a vector via is the vocabulary size, and o (x t ) a one-hot vector. We use a bi-directional recurrent neural network with long short-term memory units (LSTM, Hochreiter and Schmidhuber 1997) as the input encoder. The encoder recursively computes the hidden vectors at the t-th time step via: where [•, •] denotes vector concatenation, e t ∈ R n , and f LSTM is the LSTM function. Coarse Meaning Decoder The decoder's hidden vector at the t-th time step is computed by , where a t-1 ∈ R n is the embedding of the previously predicted token. The hidden states of the first time step in the decoder are initialized by the concatenated encoding vectors Additionally, we use an attention mechanism where Z t = |x| j=1 exp{d t • e j } is a normalization term. Then we compute p (a t |a <t , x) via: where Generation terminates once an end-of-sequence token "</s>" is emitted. 
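The overall decode procedure implied by the factorization above can be written schematically as a two-step search: first the sketch, then the full meaning representation conditioned on it. The decoder objects and their greedy_decode methods below are placeholders for the paper's BiLSTM encoder and LSTM decoders, not an actual implementation.

def coarse_to_fine_decode(utterance, coarse_decoder, fine_decoder):
    # Stage 1: a_hat = argmax_a p(a | x)
    sketch = coarse_decoder.greedy_decode(utterance)
    # Stage 2: y_hat = argmax_y p(y | x, a_hat); tokens already fixed by the
    # sketch are copied through, and only the missing details are predicted.
    meaning = fine_decoder.greedy_decode(utterance, sketch)
    return sketch, meaning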
Meaning representations are predicted by conditioning on the input x and the generated sketch a. The model uses the encoder-decoder architecture to compute p (y|x, a), and decorates the sketch a with details to generate the final output. Sketch Encoder As shown in Figure The final decoder is based on recurrent neural networks with an attention mechanism, and shares the input encoder described in Section 3.1. The decoder's hidden states {h t } |y| t=1 are computed via: where h 0 = [ -→ e |x| , ←e 1 ], and y t-1 is the embedding of the previously predicted token. Apart from using the embeddings of previous tokens, the decoder is also fed with {v k } |a| k=1 . If y t-1 is determined by a k in the sketch (i.e., there is a one-toone alignment between y t-1 and a k ), we use the corresponding token's vector v k as input to the next time step. The sketch constrains the decoding output. If the output token y t is already in the sketch, we force y t to conform to the sketch. In some cases, sketch tokens will indicate what information is missing (e.g., in Figure For the missing details, we use the hidden vector h t to compute p (y t |y <t , x, a), analogously to Equations ( The model's training objective is to maximize the log likelihood of the generated meaning representations given natural language expressions: max where D represents training pairs. At test time, the prediction for input x is obtained via â = arg max a p (a |x) and ŷ = arg max y p (y |x, â), where a and y represent coarse-and fine-grained meaning candidates. Because probabilities p (a|x) and p (y|x, a) are factorized as shown in Equations ( In order to show that our framework applies across domains and meaning representations, we developed models for three tasks, namely parsing natural language to logical form, to Python source code, and to SQL query. For each of these tasks we describe the datasets we used, how sketches were extracted, and specify model details over and above the architecture presented in Section 3. For our first task we used two benchmark datasets, namely GEO (880 language queries to a database of U.S. geography) and ATIS (5, 410 queries to a flight booking system). Examples are shown in Table Algorithm 1 shows the pseudocode used to extract sketches from λ-calculus-based meaning representations. We strip off arguments and variable names in logical forms, while keeping predicates, operators, and composition information. We use the symbol "@" to denote the number of missing arguments in a predicate. For example, we extract "from@2" from the expression "(from $0 dallas:ci)" which indicates that the predicate "from" has two arguments. We use "?" as a placeholder in cases where only partial argument information can be omitted. We also omit variable information defined by the lambda operator and quantifiers (e.g., exists, count, and argmax). We use the symbol "#" to denote the number of omitted tokens. For the example in Figure The meaning representations of these two datasets are highly compositional, which motivates us to utilize the hierarchical structure of λ-calculus. A similar idea is also explored in the tree decoders proposed in Parent Feeding Taking the meaning sketch "(and flight@1 from@2)" as an example, the parent of "from@2" is "(and". Let p t denote the parent of the t-th time step in the decoder. Compared with Equation (10), we use the vector d att t and the hidden state of its parent d pt to compute the prob-ability p (a t |a <t , x) via: where [•, •] denotes vector concatenation. 
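The interplay between the predicted sketch and the fine decoder can be pictured with a small illustrative loop. This is a deliberate simplification of the model above: it assumes a one-to-one alignment between sketch positions and output positions, and `is_placeholder` and `predict_detail` are hypothetical stand-ins for the test of whether a sketch token still has missing details and for one step of the fine decoder, respectively.

```python
def fine_decode_with_sketch(sketch_tokens, is_placeholder, predict_detail):
    """Illustrative sketch-constrained decoding (not the authors' code).

    sketch_tokens  : the predicted coarse sketch a_1 ... a_|a|
    is_placeholder : True if a sketch token marks missing low-level details
    predict_detail : stand-in for the fine decoder step that fills in one
                     missing token conditioned on x, the sketch, and the history
    """
    y = []
    for a_k in sketch_tokens:
        if not is_placeholder(a_k):
            # the sketch fully determines this position: force y_t to conform
            y.append(a_k)
        else:
            # missing detail: the fine decoder generates it from h_t
            y.append(predict_detail(a_k, y))
    return y
```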
The parent feeding is used for both decoding stages. Our second semantic parsing task used DJANGO DJANGO is a diverse dataset, spanning various real-world use cases and as a result models are often faced with out-of-vocabulary (OOV) tokens (e.g., variable names, and numbers) that are unseen during training. We handle OOV tokens with a copying mechanism Copying Mechanism Recall that we use a softmax classifier to predict the probability distribution p (y t |y <t , x, a) over the pre-defined vocabulary. We also learn a copying gate g t ∈ [0, 1] to decide whether y t should be copied from the input or generated from the vocabulary. We compute the modified output distribution via: where w g ∈ R n and b g ∈ R are parameters, and the indicator function 1 [yt / ∈Vy] is 1 only if y t is not in the target vocabulary V y ; the attention score s t,k (see Equation ( The WIKISQL WIKISQL queries follow the format "SELECT agg op agg col WHERE (cond col cond op cond) AND ...", which is a subset of the SQL syntax. SELECT identifies the column that is to be included in the results after applying the aggregation operator agg op 2 to column agg col. WHERE can have zero or multiple conditions, which means that column cond col must satisfy the constraints expressed by the operator cond op 3 and the condition value cond. Sketches for SQL queries are simply the (sorted) sequences of condition operators cond op in WHERE clauses. For example, in Table The generation of SQL queries differs from our previous semantic parsing tasks, in that the table schema serves as input in addition to natural language. We therefore modify our input encoder in order to render it table-aware, so to speak. Furthermore, due to the formulaic nature of the SQL query, we only use our decoder to generate the WHERE clause (with the help of sketches). The SELECT clause has a fixed number of slots (i.e., aggregation operator agg op and column agg col), which we straightforwardly predict with softmax classifiers (conditioned on the input). We briefly explain how these components are modeled below. as As shown in Figure analogously to Equations ( SELECT Clause We feed the question vector ẽ into a softmax classifier to obtain the aggregation operator agg op. If agg col is the k-th table column, its probability is computed via: WHERE Clause We first generate sketches whose details are subsequently decorated by the fine meaning decoder described in Section 3.2. As the number of sketches in the training set is small (35 in total), we model sketch generation as a classification problem. We treat each sketch a as a category, and use a softmax classifier to compute p (a|x): where W a ∈ R |Va|×n , b a ∈ R |Va| are parameters, and ẽ is the table-aware input representation defined in Equation ( Once the sketch is predicted, we know the condition operators and number of conditions in the WHERE clause which follows the format "WHERE (cond op cond col cond) AND ...". As shown in Figure Let {h t } |y| t=1 denote the LSTM hidden states of the fine meaning decoder, and the vectors obtained by the attention mechanism as in Equation ( Condition values are typically mentioned in the input questions. These values are often phrases with multiple tokens (e.g., Mikhail Snitko in Table where l L yt / r R yt represents the first/last copying index of cond yt is l/r, the probabilities are normalized to 1, and σ(•) is the scoring network defined in Equation ( We present results on the three semantic parsing tasks discussed in Section 4. 
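Before moving to the results, the copying mechanism described above can be sketched as a gate that mixes the vocabulary distribution with a copy distribution derived from the attention scores. The snippet below is a hedged illustration rather than the exact model: the paper attaches the copy probability to the specific out-of-vocabulary source token being produced, whereas this function simply returns the two gated mixture components, and all tensor names are invented here.

```python
import torch
import torch.nn.functional as F

def copy_augmented_distribution(h_t, vocab_logits, attn_scores, src_is_oov, w_g, b_g):
    """Illustrative copying gate.

    h_t          : (B, n)     decoder hidden state at step t
    vocab_logits : (B, |V_y|) scores over the target vocabulary
    attn_scores  : (B, |x|)   unnormalised attention scores s_{t,k}
    src_is_oov   : (B, |x|)   bool, True where the input token is not in V_y
    w_g, b_g     : parameters of the gate g_t = sigmoid(w_g . h_t + b_g)
    """
    g_t = torch.sigmoid(h_t @ w_g + b_g).unsqueeze(-1)   # (B, 1) copying gate
    p_vocab = F.softmax(vocab_logits, dim=-1)            # generate from V_y
    p_copy = F.softmax(attn_scores, dim=-1)              # copy from input positions
    p_copy = p_copy * src_is_oov.float()                 # only OOV tokens can be copied
    return (1.0 - g_t) * p_vocab, g_t * p_copy           # the two mixture components
```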
Our implementation and pretrained models are available at Preprocessing For GEO and ATIS, we used the preprocessed versions provided by Configuration Model hyperparameters were cross-validated on the training set for GEO, and were validated on the development split for the other datasets. Dimensions of hidden vectors and word embeddings were selected from {250, 300} and {150, 200, 250, 300}, respectively. The dropout rate was selected from {0.3, 0.5}. Label smoothing GEO ATIS ZC07 We compare our model (COARSE2FINE) against several previously published systems as well as various baselines. Specifically, we report results with a model which decodes meaning representations in one stage (ONESTAGE) without leveraging sketches. We also report the results of several ablation models, i.e., without a sketch encoder and without a table-aware input encoder. ). Again we observe that the sketch encoder is beneficial and that there is an 8.9 point difference in accuracy between COARSE2FINE and the oracle. Results on WIKISQL are shown in Table In this paper we presented a coarse-to-fine decoding framework for neural semantic parsing. We first generate meaning sketches which abstract away from low-level information such as arguments and variable names and then predict missing details in order to obtain full meaning representations. The proposed framework can be easily adapted to different domains and meaning representations. Experimental results show that coarseto-fine decoding improves performance across tasks. In the future, we would like to apply the framework in a weakly supervised setting, i.e., to learn semantic parsers from question-answer pairs and to explore alternative ways of defining meaning sketches.
Stage-wise Fine-tuning for Graph-to-Text Generation
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders. However, they fail to fully utilize the structure information of the input graph. In this paper, we aim to further improve the performance of the pre-trained language model by proposing a structured graph-to-text model with a two-step fine-tuning mechanism which first fine-tunes the model on Wikipedia before adapting to the graph-to-text generation. In addition to using the traditional token and position embeddings to encode the knowledge graph (KG), we propose a novel treelevel embedding method to capture the interdependency structures of the input graph. This new approach has significantly improved the performance of all text generation metrics for the English WebNLG 2017 dataset. 1
In the graph-to-text generation task, we explore the proposed stage-wise fine-tuning and structure-preserving embedding strategies on the WebNLG corpus.
Given an RDF graph with multiple relations our goal is to generate a text faithfully describing the input graph. We represent each relation with a triple (s i , r i , o i ) ∈ G for i ∈ {1, ..., n}, where s i , r i , and o i are natural language phrases that represent the subject, type, and object of the relation, respectively. We augment our model with addi- tional position embeddings to capture the structure of the KG. To feed the input for the large-scale Transformer-based PLM, we flatten the graph as a concatenation of linearized triple sequences: embeddings to enhance the flattened input of pretrained Transformer-based sequence-to-sequence models such as BART and TaPas • Triple Role ID takes 3 values for a specific triple (s i , r i , o i ): 1 for the subject s i , 2 for the relation r i , and 3 for the object o i . • Tree level ID calculates the distance (the number of relations) from the root which is the source vertex of the RDF graph. To get better domain adaptation ability We use the standard NLG evaluation metrics to report results: BLEU When selecting the best models, we also evaluate each model with PARENT Results with Wikipedia fine-tuning. The Wikipedia fine-tuning helps the model handle unseen relations such as "inOfficeWhileVicePresident", and "activeYearsStartYear" by stating "His vice president is Atiku Abubakar." and "started playing in 1995" respectively. It also combines relations with the same type together with correct order, e.g., given two death places of a person, the model generates: "died in Sidcup, London" instead of generating two sentences or placing the city name ahead of the area name. Results with positional embeddings. For the KG with multiple triples, additional positional embeddings help reduce the errors introduced by pro-noun ambiguity. For instance, for a KG which has "leaderName" relation to both country's leader and university's dean, position embeddings can distinguish these two relations by stating "Denmark's leader is Lars Løkke Rasmussen" instead of "its leader is Lars Løkke Rasmussen". The tree-level embeddings also help the model arrange multiple triples into one sentence, such as combining the city, the country, the affiliation, and the affiliation's headquarter of a university into a single sentence: "The School of Business and Social Sciences at the Aarhus University in Aarhus, Denmark is affiliated to the European University Association in Brussels". However, pre-trained language models also generate some errors as shown in Table The WebNLG task is similar to Wikibio generation Those models We propose a new two-step structured generation task for the graph-to-text generation task based on a two-step fine-tuning mechanism and novel treelevel position embeddings. In the future, we aim to address the remaining challenges and extend the framework for broader applications. This work is partially supported by Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute of Food and Agriculture, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied of the U.S. Government. The U.S. 
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
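To make the KG linearization and position-ID scheme described earlier in this paper concrete, here is a small illustrative helper that flattens a set of RDF triples into a token sequence and produces the two auxiliary ID sequences (triple-role IDs and tree-level IDs). It assumes that the tree level of a triple is the breadth-first distance from the root vertex to the triple's subject, which is one reading of the description; function and variable names are not from the paper.

```python
from collections import deque

def linearize_kg(triples, root):
    """Illustrative linearization of an RDF graph.

    triples : list of (subject, relation, object) phrases
    root    : the source vertex of the RDF graph

    Returns flat tokens plus two position sequences:
      - triple-role ids: 1 = subject, 2 = relation, 3 = object
      - tree-level ids : number of relations between the root and the triple's subject
    """
    # breadth-first distances from the root over subject -> object edges
    adj = {}
    for s, _, o in triples:
        adj.setdefault(s, []).append(o)
    depth, queue = {root: 0}, deque([root])
    while queue:
        v = queue.popleft()
        for o in adj.get(v, []):
            if o not in depth:
                depth[o] = depth[v] + 1
                queue.append(o)

    tokens, role_ids, level_ids = [], [], []
    for s, r, o in triples:
        level = depth.get(s, 0)
        for phrase, role in ((s, 1), (r, 2), (o, 3)):
            for tok in phrase.split():
                tokens.append(tok)
                role_ids.append(role)
                level_ids.append(level)
    return tokens, role_ids, level_ids
```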
RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL
Relational structures such as schema linking and schema encoding have been validated as a key component to qualitatively translating natural language into SQL queries. However, introducing these structural relations comes with prices: they often result in a specialized model structure, which largely prohibits using large pretrained models in text-to-SQL. To address this problem, we propose RASAT: a Transformer seq2seq architecture augmented with relation-aware self-attention that could leverage a variety of relational structures while inheriting the pretrained parameters from the T5 model effectively. Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario. Experimental results on three widely used text-to-SQL datasets, covering both singleturn and multi-turn scenarios, have shown that RASAT could achieve state-of-the-art results across all three benchmarks (75.5% EX on Spider, 52.6% IEX on SParC, and 37.4% IEX on
Text-to-SQL is the task that aims at translating natural language questions into SQL queries. Since it could significantly break down barriers for nonexpert users to interact with databases, it is among the most important semantic parsing tasks that are of practical importance Various types of relations have been introduced for this task since Although integrating various relational structures as well as using a tree-decoder have been shown to be vital to generating qualitative SQL queries and generalizing better towards unseen database schema, the dev of various specifically designed model architectures significantly deviate from the general sequential form, which has made it hard if one considers leveraging large pre-trained models for this task. Existing methods either use BERT output as the input embedding of the specifically designed model In another thread, pretrained seq2seq models just have unveiled their powerful potential for this task. Recent attempts by In this work, different from the more common approach of fine-tuning the original pretrained model or using prompt tuning, we propose to augment the self-attention modules in the encoder and introduce new parameters to the model while still being able to leverage the pre-trained weights. We call the proposed model RASAT 2 . Our model can incorporate almost all existing types of relations in the literature, including schema encoding, schema linking, syntactic dependency of the question, etc., into a unified relation representation. In addition to that, we also introduce coreference relations to our model for multi-turn text-to-SQL tasks. Experimental results show that RASAT could effectively leverage the advantage of T5. It achieves the stateof-art performance in question execution accuracy (EX/IEX) on both multi-turn (SParC and CoSQL) and single-turn (Spider) text-to-SQL benchmarks. On SParC, RASAT surpasses all previous methods in interaction execution accuracy (IEX) and improves state-of-the-art performance from 21.6% to 52.6%, 31% absolute improvements. On CoSQL, we improve state-of-the-art IEX performance from 8.4% to 37.4%, achieving 29% absolute improvements. Moreover, on Spider, we improve state-ofthe-art execution accuracy from 75.1% to 75.5%, achieving 0.4% absolute improvements.
Early works usually exploit a sketch-based slotfilling method that uses different modules to predict the corresponding part of SQL. These methods decompose the SQL generation task into several independent sketches and use different classifiers to predict corresponding part, such as SQLNet Faced with the multi-table and complex SQL setting, using graph structures to encode various complex relationships is a major trend in the text-to-SQL task. For example, Global-GNN For the conversational context-dependent textto-SQL task that includes multiple turns of interactions, such as SParC and CoSQL, the key challenge is how to take advantage of historical interaction context. Edit-SQL Recently, Given a natural language question Q and database schema S =< T , C >, our goal is to predict the SQL query Y. Here Q = {q i } |Q| i=1 is a sequence of natural language tokens, and the schema S consists of a series of tables i=1 . The content of database S is noted as V. For each table t i , the columns in this table is denoted as In the multi-turn setting, our notations adapt correspondingly. i.e., Q = {Q i } |Q| i=1 denotes a sequence of questions in the interaction, with Q i denoting each question. Also, the target to be predicted is a sequence of SQL queries, Y = {Y i } |Y| i=1 , with each Y i denoting the corresponding SQL query for the i-th question Q i . Generally, for each question, there is one corresponding SQL query, such that |Q| = |Y|. While predicting Y i , only the questions in the interaction history are available, i.e., {Q 1 , • • • , Q i }. Relation-aware self-attention where H is the number of heads, and are learnable weights. The r K ij , r V ij are two different relation embeddings used to represent the relation r between the i-th and j-th token. The overall structure of our RASAT model is shown in Figure The input to the encoder is a combination of question(s) Q, database schema S =< T , C > with the database name S, as well as database content mentions and necessary delimiters. We mostly follow (2) where t i is the table name, c ij is the j-th column name of the i-th table. The v ∈ V showing after column c 11 is the database content belonging to the column that has n-gram matches with the tokens in the question. As for delimiters, we use | to note the boundaries between Q, S, and different tables in the schema. Within each table, we use : to separate between table name and its columns. Between each column, , is used as the delimiter. As for the multi-turn scenario, we add the questions in the history at the start of the sequence and truncate the trailing tokens in the front of the sequence when the sequence length reaches 512. i.e., where | are the corresponding delimiters. Next, we add various types of relations as triplets, linking between tokens in the serialized input, which naturally turns the input sequence into a graph (Figure To fine-tune this model, we inherit all the parameters from T5 and randomly initialize the extra relation embeddings introduced by relation-aware self-attention. The overall increase of parameters is less than 0.01% (c.f. Appendix A). Equipped with relation-aware self-attention, we can incorporate various types of relations into the T5 model, as long as the relation can be presented as a triplet, with its head and tail being the tokens in the input sequence X. Formally, we present the triplet as where H, T are the head and tail items in the triplet, and r represents the relation. 
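The relation-aware self-attention used above augments the usual key and value vectors with relation embeddings r^K_ij and r^V_ij. The snippet below is a single-head, batch-free illustration of that computation, not the actual RASAT implementation; `rel_emb_k` and `rel_emb_v` are assumed to be embedding lookup tables over relation ids, and the projection matrices are passed in explicitly.

```python
import torch
import torch.nn.functional as F

def relation_aware_self_attention(x, rel_ids, rel_emb_k, rel_emb_v, w_q, w_k, w_v):
    """Single-head relation-aware self-attention (illustrative).

    x        : (L, d)   token representations of the serialized input
    rel_ids  : (L, L)   integer id of the relation between token i and token j
                        (a generic "no relation" id where no edge exists)
    rel_emb_k, rel_emb_v : nn.Embedding lookups giving r^K_ij and r^V_ij
    w_q, w_k, w_v        : (d, d_kv) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # each (L, d_kv)
    r_k = rel_emb_k(rel_ids)                          # (L, L, d_kv)
    r_v = rel_emb_v(rel_ids)                          # (L, L, d_kv)
    d_kv = q.size(-1)
    # e_ij = q_i . (k_j + r^K_ij) / sqrt(d_kv)
    scores = (q.unsqueeze(1) * (k.unsqueeze(0) + r_k)).sum(-1) / d_kv ** 0.5
    alpha = F.softmax(scores, dim=-1)                 # (L, L) attention weights
    # z_i = sum_j alpha_ij (v_j + r^V_ij)
    z = (alpha.unsqueeze(-1) * (v.unsqueeze(0) + r_v)).sum(dim=1)
    return z                                          # (L, d_kv)
```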
Given the input sequence X of length |X|, we assume that for each direction of a given pair of tokens, there only exists up to one relation. Thus, if we consider the tokens in X as vertices of a graph, it could have up to |X| 2 directed edges, with each edge corresponding to an entry in the adjacency matrix of the graph. In this paper, we call this graph, containing tokens from the whole input sequence as its vertices and the incorporated relations as its edges, as interaction graph. We assign two relation embeddings for each type of introduced relation. Thus the Transformer encoder comes with two trainable lookup tables storing relations embeddings to compute the key and value in the self-attention (c.f. Figure We reserve a set of generic relations for serving as mock relations for token pairs that do not have a specific edge. In total, we have used 51 different relations in the model (c.f. Appendix D). Apart from the mock generic relations, there are generally 5 types of relations, which are: schema encoding, schema linking, question dependency structure, coreference between questions, and database content mentions. Please refer to Table Schema Encoding. Schema encoding relations refer to the relation between schema items, i.e., H, T ∈ S. These relations describe the structure information in a database schema. For example, PRIMARY-KEY indicates which column is the primary key of a table, BELONGS-TO shows which table a column belongs to, and FORIGN-KEY connects the foreign key in one table, and the primary key in another table. Schema Linking. Schema linking relations refer to the relations between schema and question items, i.e., H ∈ S, T ∈ Q or vice versa. We follow the settings in RAT-SQL Question Dependency Structure. This type of relation refers to the edges of a dependency tree of the question, i.e., H, T ∈ Q. Unlike the previous two relation types, it is less explored in the literature on text-to-SQL. Since it reflects the grammatical structure of the question, we believe it should also be beneficial for the task. In our work, to control the total number of relations and avoid unnecessary overfitting, we do not discriminate between different dependency relations. Figure Coreference Between Questions. This type of relation is unique to the multi-turn scenario. In a dialog with multiple turns, it is important for the model to figure out the referent of the pronouns correctly. Figure Although pre-trained models like T5 are believed to have the capability to handle this implicitly, we still find that explicitly adding these links could significantly improve the model's performance. The various aforementioned types of relations are between types of items, with their H and T being either words or phrases. However, almost all pretrained models take input tokens at the subword level, resulting in a difference in the granularity between the relations and the input tokens. Previous works use an extra step to aggregate multiple subword tokens to obtain a single embedding for each item in the interaction graph, such as mean pooling, attentive pooling, or with BiLSTMs In this work, we adopt the other way: we propagate the relations into the subword level by cre- In this section, we will show our model's performance on three common text-to-SQL datasets: Spider Datasets Spider is a large-scale, multi-domain, and cross-database benchmark. SparC and CoSQL are multi-turn versions of Spider on which the dialogue state tracking is required. 
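Because the relations are defined between words, phrases, and schema items while T5 operates on subword tokens, they must be propagated down to the subword level. The helper below shows one straightforward way to build the token-pair relation-id matrix under that reading; the span bookkeeping and the generic "no relation" id are assumptions made here for illustration.

```python
def propagate_relations_to_subwords(relations, item_spans, seq_len, no_relation_id=0):
    """Illustrative construction of the subword-level relation matrix.

    relations  : list of (head_item, relation_id, tail_item) triplets between items
                 of the interaction graph (words, phrases, schema items)
    item_spans : dict mapping each item to its (start, end) subword span in the
                 serialized input after tokenization (end exclusive)
    seq_len    : number of subword tokens in the serialized input

    Every subword of the head item is linked to every subword of the tail item
    with the item-level relation id; all other pairs keep the generic relation.
    """
    rel_ids = [[no_relation_id] * seq_len for _ in range(seq_len)]
    for head, rel, tail in relations:
        h_start, h_end = item_spans[head]
        t_start, t_end = item_spans[tail]
        for i in range(h_start, h_end):
            for j in range(t_start, t_end):
                rel_ids[i][j] = rel
    return rel_ids
```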
All test data is hidden to ensure fairness, and we submit our model to the organizer of the challenge for evaluation. 75.3 80.5 70.9 75.5 Table coreference links. In total, 51 types of relations are used (c.f. Appendix D for a detailed list). For dependency parsing, stanza The batch size we used is 2048. We use Adafactor The results on SParC are shown in achieves state-of-the-art results on all four evaluation metrics. Compared with the previous state-of-the-art RAT-SQL-TC + GAP Among the models that can predict with values, the fine-tuned T5-3B model from UNIFIEDSKG Furthermore, on the official leaderboard of SParc which reports over test set, our proposed RASAT + PICARD brings the IEX from 21.6% to 52.6%, achieving 31% absolute improvements. Compared with SParC, CoSQL is labeled in a Wizard-of-Oz fashion, forming a more realistic and challenging testbed. Nevertheless, our proposed model could still achieve state-of-the-art results (Table By comparing to the previous state-of-the-art HIE-SQL + GraPPa For the same reason as on SParC, we mainly compare QEX/IEX performance on the dev set, and RASAT + PICARD surpasses all models that can predict executable SQLs (with values). Especially for IEX, our model surpasses the previous state-of-the-art from 26.2% to 39.6%, with 13.4% absolute improvement. Moreover, on the official leaderboard of CoSQL which reports over test set, RASAT + PICARD brings the IEX from 8.4% to 37.4%, with 29% absolute improvements. The results on the Spider is provided in Table Furthermore, we also evaluate our model on a more challenging Spider variant, Spider-Realistic In this subsection, we conduct a set of ablation studies to examine various aspects of the proposed model. Due to the limited availability of the test sets, all numbers in this subsection are reported on the dev set. Effect on SQL difficulty. One might conjecture that the introduced relations are only effective for more difficult, longer SQL query predictions, while for predicting short SQL queries, the original T5 model could handle equally well. Thus, we evaluate our model according to the difficulty of the examples, where the question/SQL pairs in the dev set are categorized into four subsets, i.e., easy, medium, hard, and extra hard, according to their level of difficulty. In Table Relation Types. We conducted additional experiments to analyze the relative contribution of different relation types. The experimental results on Spider is shown in Table In this work, we propose RASAT, a Relation-Aware Self-Attention-augmented T5 model for the textto-SQL generation. Compared with previous work, RASAT can introduce various structural relations into the sequential T5 model. Different from the more common approach of fine-tuning the origi-nal model or using prompt tuning, we propose to augment the self-attention modules in the encoder and introduce new parameters to the model while still being able to leverage the pre-trained weights. RASAT had achieved state-of-the-art performances, especially on execution accuracy, in the three most common text-to-SQL benchmarks. Our method consumes plenty of computational resources since we leverage the large T5-3B model. We train our models on 8 A100 GPUs (80G) for around 2 days. Our model truncates the source sequences to 512, this may lead to information loss when a sample has long input. We find that about 3% of training data in CoSQL will be affected. We only work with English since it has richer analytical tools and resources than other language. 
Compared with the original T5 model, only two embedding matrices are added to the encoder in our model, contributing 2 × µ × d_kv parameters. These embedding matrices are shared across all encoder layers and heads. Here µ = 51 is the total number of relation types and d_kv is the dimension of the key/value states in self-attention (64 in T5-small/base/large and 128 in T5-3B). The overall increase in parameters is therefore less than 0.01%. We also show the output difference between T5 and most AST-based models in the accompanying tables.
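As a quick sanity check of the quoted overhead, the added relation embeddings amount to only a few thousand extra scalars; the snippet below simply evaluates 2 × µ × d_kv with the values given in the text and compares it against an approximate T5-3B size.

```python
# Added relation-embedding parameters: two lookup tables of size mu x d_kv,
# shared across all encoder layers and heads (values taken from the text).
mu, d_kv = 51, 128                       # 51 relation types; d_kv = 128 for T5-3B
added = 2 * mu * d_kv                    # = 13,056 extra parameters
t5_3b = 3_000_000_000                    # approximate parameter count of T5-3B
print(f"{added} added parameters = {added / t5_3b:.6%} of T5-3B")  # well under 0.01%
```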
Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation
A major obstacle to the wide-spread adoption of neural retrieval models is that they require large supervised training sets to surpass traditional term-based techniques, which are constructed from raw corpora. In this paper, we propose an approach to zero-shot learning for passage retrieval that uses synthetic question generation to close this gap. The question generation system is trained on general domain data, but is applied to documents in the targeted domain. This allows us to create arbitrarily large, yet noisy, question-passage relevance pairs that are domain specific. Furthermore, when this is coupled with a simple hybrid termneural model, first-stage retrieval performance can be improved further. Empirically, we show that this is an effective strategy for building neural passage retrieval models in the absence of large training corpora. Depending on the domain, this technique can even approach the accuracy of supervised models.
Recent advances in neural retrieval have led to improvements on several document, passage, and knowledge-base benchmarks.
Retrieval Model Rescoring Model Figure Another consideration is that BM25 is often high quality The focus of the present work is methods for building neural models for first-stage passage retrieval for large collections of documents. While rescoring models are key components to any retrieval system, they are out of the scope of this study. Specifically, we study the zero-shot setting where there is no target-domain supervised training data The zero-shot setting is challenging as the most effective neural models have a large number of parameters, which makes them prone to overfitting. Thus, a key factor in training high quality neural models is the availability of large training sets. To address this, we propose two techniques to improve neural retrieval models in the zero-shot setting. First, we observe that general-domain questionpassage pairs can be acquired from community platforms Towards zero-shot neural retrieval with improved domain adaptability, we propose a data augmentation approach A second contribution is a simple hybrid model that interpolates a traditional term-based model -BM25 We compare a number of baselines including other data augmentation and domain transfer techniques. We show on three specialized domains (scientific literature, travel and tech forums) and one general domain that the question generation approach is effective, especially when considering the hybrid model. Finally, for passage retrieval in the scientific domain, we compare with a number of recent supervised models from the BioASQ challenge, including many with rescoring stages. Interestingly, the quality of the zero-shot hybrid model approaches supervised alternatives. Neural Retrieval The retrieval vs. rescorer distinction (Figure Model Transfer Previous work has attempted to alleviate reliance on large supervised training sets by pre-training deep retrieval models on weakly supervised data such as click-logs Question generation for data augmentation is a common tool, but has not been tested in the pure zero-shot setting nor for neural passage retrieval. Hybrid Models Combining neural and termbased models have been studied, most commonly via linearly interpolating scores in an approximate re-ranking stage In this work, we are specifically investigating the zero-shot scenario where there exists neither user issued questions nor domain specific data except the passage collection itself. We propose to address the Ubuntu Forums Passage: Every time I get a notification about and begin updating when they become available, the process is interrupted by an error message: error in foomatic-filters. Then I get "error in linux generic package" and a bunch of numbers. This is replaced before I can write it all down with "error in Linux package" Everything seems to go OK except I don't know if the updates are really being installed. I tried un-installing and re-installing foomatic-filters . . . Generated Question: How do I get rid of error in foomatic-filters? Biomedical Literature Passage: Electroencephalographic tracings of 50 patients who presented the classical features of Friedreich's ataxia were reviewed . . . Friedreich's ataxia is mainly a spinal disorder. Involvement of supraspinal and in particular brain stem or diencephalic structures may be more extensive in those patients who show electrographic abnormalities. This would require confirmation with comparative data based on pathological observations. 
Impaired function of brain stem inhibitory mechanism may be responsible for the slightly raised incidence of seizures in patients with Friedreich's ataxia and other cerebellar degenerations. Generated Question: What is the significance of Friedreich's ataxia? Table training data scarcity issue by generating synthetic questions To ensure data quality, we further filter the data by only keeping question-answer pairs that were positively rated by at least one user on these sites. In total, the final dataset contains 2 millions pairs, and the average length of questions and answers are 12 tokens and 155 tokens respectively. This dataset is general domain in that it contains questionanswer pairs from a wide variety of topics. Our question generator is an encoder-decoder with Transformer Our approach is robust to domain shift as the generator is trained to create questions based on a given text. As a result, generated questions stay close to the source passage material. Real examples are shown in Table In this section we describe our architecture for training a first-stage neural passage retriever. Our retrieval model belongs to the family of relevancebased dense retrieval 6 that encodes pairs of items in dense subspaces In this work, both query and document encoders are based on BERT We encode P as (CLS, p 1 , . . . , p m , SEP). For some datasets, a passage contains both a title T = (t 1 , ..., t l ) and content C = (c 1 , ..., c o ), in which case we encode the passage as (CLS, t 1 , ..., t l , SEP, c 1 , ..., c o , SEP). These sequences are fed to the BERT encoder. Let h CLS ∈ R N be the final representation of the "CLS" token. Passage encodings p are computed by applying a linear projection, i.e., p = W * h CLS , where W is a N × N weight matrix (thus N = 768), which preserves the original size of h CLS . This has been shown to perform better than down-projecting to a lower dimensional vector We encode Q as (CLS, q 1 , q 2 , ..., q n , SEP) which is then fed to the BERT encoder. Similarly, 6 A.k.a. two-tower, dual encoder or dense retrieval. a linear projection on the corresponding "CLS" token, using the same weight matrix W, is applied to generate q. Following previous work For training, we adopt softmax cross-entropy loss. Formally, given an instance {q, p + , p - 1 , ..., p - k } which comprises one query q, one relevant passage p + and k non-relevant passages p - i . The objective is to minimize the negative log-likelihood: log(e q,q + + k i=1 e q,q - i ) -q, q + This loss function is a special case of ListNet loss For the set {p - 1 , ..., p - k }, we use in-batch negatives. Given a batch of (query, relevant-passage) pairs, negative passages for a query are passages from different pairs in the batch. In-batch negatives has been widely adopted as it enables efficient training via computation sharing Since the relevance-based model encodes questions and passages independently, we run the encoder over every passage in a collection offline to create a distributed lookup-table as a backend. 
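A minimal PyTorch sketch of the dual (two-tower) encoder and the in-batch softmax cross-entropy objective described above is given below. The `text_encoder` argument is a placeholder for a BERT-style model that returns the final "CLS" vector; the class and function names are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceDualEncoder(nn.Module):
    """Sketch of the relevance-based (two-tower) retrieval model: a BERT-style
    encoder produces the "CLS" vector h_CLS, and the same N x N projection W
    is applied to both the query and the passage tower."""

    def __init__(self, text_encoder, hidden_size=768):
        super().__init__()
        self.text_encoder = text_encoder                              # returns (B, N) CLS vectors
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)   # W: N x N

    def encode(self, token_ids):
        return self.proj(self.text_encoder(token_ids))                # q (or p) = W * h_CLS

def in_batch_softmax_loss(q, p):
    """Softmax cross-entropy with in-batch negatives: passage j is the positive
    for query j, and every other passage in the batch acts as a negative."""
    scores = q @ p.t()                                   # (B, B) dot-product scores
    labels = torch.arange(q.size(0), device=q.device)    # positives lie on the diagonal
    return F.cross_entropy(scores, labels)
```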
At inference, we run the question encoder online and then perform nearest neighbor search to find relevant passages, as illustrated in the bottom half of Figure Traditional term-based methods like BM25 For a query Q and a passage P , BM25 is computed as the following similarity score, , where k/b are BM25 hyperparameters, IDF is the term's inverse document frequency from the corpus, cnt is the term's frequency in a passage, n/m are the number of tokens in Q/P , and m avg is the collection's average passage length. Like most TF-IDF models, this can be written as a vector space model. Specifically, let q bm25 ∈ [0, 1] |V | be a sparse binary encoding of a query of dimension |V |, where V is the term vocabulary. Specifically this vector is 1 at position i if v i ∈ Q, here v i is the i-th entry in V . Furthermore, let p bm25 ∈ R |V | be a sparse real-valued vector where, We can see that, BM25(Q, P ) = q bm25 , p bm25 As BM25 score can be written as vector dotproduct, this gives rise to a simple hybrid model, sim(q hyb , p hyb ) = q hyb , p hyb = [λq bm25 , q nn ], [p bm25 , p nn ] = λ q bm25 , p bm25 + q nn , p nn , where q hyb and p hyb are the hybrid encodings that concatenate the BM25 (q bm25 /p bm25 ) and the neural encodings (q nn /p nn , from Sec 4.1); and λ is a interpolation hyperparameter that trades-off the relative weight of BM25 versus neural models. Thus, we can implement BM25 and our hybrid model as nearest neighbor search with hybrid sparse-dense vector dot-product We outline data and experimental details. The Appendix has further information to aid replicability. BioASQ Biomedical questions from Task B Phase A of BioASQ Forum Threads from two online user forum domains: Ubuntu technical help and TripAdvisor topics for New York City NaturalQuestions Aggregated queries issued to Google Search Dataset statistics are listed in Appendix A. BM25 Term-matching systems such as BM25 ICT The Inverse Cloze Task (ICT) Ngram QA The dataset mined from community questionanswer forums (Sec. 3) itself can be used directly to train a neural retrieval model since it comes of the form query and relevant text (passage) pair. This data is naturally occurring and not systematically noisy, which is an advantage. However, the data is not domain-targeted, in that it comes from general knowledge questions. We call models trained on this dataset as QA. Applying a model trained on general domain data to a specific domain with no adaptation is a strong baseline QGen The QGen retrieval model trained on the domain-targeted synthetic question-passage pairs described in Section 3. While this model can contain noise from the generator, it is domain-targeted. QGenHyb This is identical to QGen, but instead of using the pure neural model, we train the hybrid model in Section 4.4 setting λ = 1.0 for all models to avoid any domain-targeted tuning. We train the term and neural components independently, combing them only at inference. All ICT, NGram, QA and QGen models are trained using the neural architecture from Section 4. For BioASQ experiments, question and passage encoders are initialized with BioBERT base v-1.1 We can categorize the neural zero-shot models along two dimensions extractive vs. transfer. ICT and Ngram are extractive, in that they extract exact substrings from a passage to create synthetic questions for model training. Note that extractive models are also unsupervised, since they do not rely on general domain resources. 
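The hybrid retrieval trick above reduces to vector concatenation, so that a single dot product over the concatenated vectors recovers λ·BM25 plus the neural score. A small hedged illustration follows, using dense tensors for simplicity even though the BM25 component is sparse in practice.

```python
import torch

def hybrid_vectors(q_bm25, q_nn, p_bm25, p_nn, lam=1.0):
    """Concatenate term-based and neural encodings so that
    <q_hyb, p_hyb> = lam * <q_bm25, p_bm25> + <q_nn, p_nn>."""
    q_hyb = torch.cat([lam * q_bm25, q_nn], dim=-1)
    p_hyb = torch.cat([p_bm25, p_nn], dim=-1)
    return q_hyb, p_hyb

def hybrid_score(q_hyb, p_hyb):
    return (q_hyb * p_hyb).sum(-1)   # = lam * BM25(Q, P) + <q_nn, p_nn>
```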
QA is a direct cross-domain transfer model, in that we train the model on data from one domain (or general domain) and directly apply it to the target domain for retrieval. QGen models are in-direct cross-domain transfer models, in that we use the out-of-domain data to generate resources for model training. The nature of each zero-shot neural system requires different generated training sets. For ICT, we follow For QGen models, each passage is truncated to 512 sentence tokens and feed to the question generation system. We also run the question generator on individual sentences from each passage to promote questions that focus on different aspects of the same document. We select at most 5 salient sentences from a passage, where sentence saliency is the max term IDF value in a sentence. The size of the generated training set for each baseline is shown in Table Our main results are shown in Table Accuracy of pure neural models are shown in the upper group of Table Performance of term-based models and hybrid models are shown in Table For NaturalQuestions since there is a single relevant passage annotation, we report Precision@1 and Mean reciprocal rank (MRR) One question we can ask is how close to the state-of-the-art in supervised passage retrieval are these zero-shot models. To test this we looked at BioASQ 8 dataset and compare to the topparticipant systems. In order to make our results comparable to participant systems, we return only 10 passages per question (as per shared-task guidelines) and use the official BioASQ 8 evaluation software. Table A natural question is whether improved firststage model plus supervised rescoring is additive. The last two lines of the table takes the twobest first-stage retrieval models and adds a simple BERT-based cross-attention rescorer As noted earlier, on BioASQ, BM25 is a very strong baseline. This makes the BM25/QGenHyb zero-shot models highly likely to be competitive. When we look at NaturalQuestions, where BM25 is significantly worse than neural models, we see that the gap between zero-shot and supervised widens substantially. The last row of Table Since our approach allows us to generate queries on every passage of the target corpus, one question is that whether retrieval system trained this way simply memorizes the target corpus or it also generalize on unseen passages. Furthermore, from an efficiency standpoint, how many synthetic training examples are required to achieve maximum performance. To answer these questions, we uniformly sample a subset of documents and then generate synthetic queries only on that subset. Results on BIOASQ 7 are shown in Figure Another interesting question is how important is the quality of the question generator relative to retrieval performance. Below we measured gen- eration quality (via Rouge-based metrics We study methods for neural zero-shot passage retrieval and find that domain targeted synthetic question generation coupled with hybrid termneural first-stage retrieval models consistently outperforms alternatives. Furthermore, for at least one domain, approaches supervised quality. 
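The per-passage sentence selection used when generating QGen training data (at most five sentences, ranked by the maximum term IDF in each sentence) can be sketched as follows; the whitespace tokenization and the IDF lookup are simplifying assumptions.

```python
def select_salient_sentences(passage_sentences, idf, max_sentences=5):
    """Pick up to `max_sentences` sentences to feed to the question generator,
    scoring each sentence by the maximum IDF of its terms (illustrative)."""
    def saliency(sentence):
        terms = sentence.lower().split()
        return max((idf.get(t, 0.0) for t in terms), default=0.0)

    ranked = sorted(passage_sentences, key=saliency, reverse=True)
    return ranked[:max_sentences]
```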
While out of the scope of this study, future work includes further testing the efficacy of these first-stage models in a full end-to-end system (evaluated briefly in Section 6.1), as well as for pre-training supervised models Our question generation follows the same implementation of For ICT task, we follow For zero-shot neural retrieval model training, we uniformly sample of a subset of 5K (question, document) pairs from the training data as a noisy development set. Instead of finding the best hyperparameter values, we use this subset to find the largest batch size and learning rate that lead the training to converge For BM25, the only two hyperparameters are k and b. We set these to k = 1.2 and b = 0.75 as advised by For the hybrid model QGenHyb, the only hyperparameter is λ. We set this to 1.0 without any tuning, since this represented an equal trade-off between the two models and we wanted to keep the systems zero-shot. However, we did try experimentations. For BioASQ 8b and Forum Ubuntu, values near 1.0 were actually optimal. For BioASQ
Coherence-based Modeling of Clinical Concepts Inferred from Heterogeneous Clinical Notes for ICU Patient Risk Stratification
In hospitals, critical care patients are often susceptible to various complications that adversely affect their morbidity and mortality. Digitized patient data from Electronic Health Records (EHRs) can be utilized to facilitate risk stratification accurately and provide prioritized care. Existing clinical decision support systems are heavily reliant on the structured nature of the EHRs. However, the valuable patient-specific data contained in unstructured clinical notes are often manually transcribed into EHRs. The prolific use of extensive medical jargon, heterogeneity, sparsity, rawness, inconsistent abbreviations, and complex structure of the clinical notes poses significant challenges, and also results in a loss of information during the manual conversion process. In this work, we employ two coherence-based topic modeling approaches to model the free-text in the unstructured clinical nursing notes and capture its semantic textual features with the emphasis on human interpretability. Furthermore, we present FarSight, a long-term aggregation mechanism intended to detect the onset of disease with the earliest recorded symptoms and infections. We utilize the predictive capabilities of deep neural models for the clinical task of risk stratification through ICD-9 code group prediction. Our experimental validation on MIMIC-III (v1.4) database underlined the efficacy of FarSight with coherence-based topic modeling, in extracting discriminative clinical features from the unstructured nursing notes. The proposed approach achieved a superior predictive performance when benchmarked against the structured EHR data based state-of-the-art model, with an improvement of 11.50% in AUPRC and 1.16% in AUROC.
Until recently, the healthcare industry had an inclination towards conservative approaches for the treatment and diagnosis of patients, resulting in less patient-centric and imprecise assessments Pat is 83 yo F w/PMHx for CLL and hypotens, who was admited for an elective total hip arthroplasty for persistent hip pain. NGT to low cont suct. Family here to visit. Pat initially sustained a right hip fracture after a fall in [ ** 2137 ** ], and had an ORIF performed at the time. Gave med for pain. Has had right hip pain ever since, and also has AVN of the right femoral head. She came in today for elective tot hip repl. In the OR today, patient had an estimated 1600cc EBL, and received 6u pRBC. I/Os were 7200cc in (3.7L LR, 1.5L pRBCs). Figure Structured medical data in the form of Electronic Health Records (EHRs) contain numerical assessments (e.g., lab results) and are amenable to standard statistical analysis The voluminosity of nursing notes can be observed from the heavy-tailed distribution of the MIMIC-III nursing notes across various patients (see Figure With the availability of large de-identified healthcare databases such as MIMIC-III 2 In this paper, we discuss an approach to model the rich patient-specific information in the unstructured clinical nursing notes, to aid in the risk stratification as an ICD-9 code group prediction task. ICD-9 codes are a taxonomy of diagnostic codes used for cost-effectiveness analysis, epidemiology studies, and designing health-care policies. Accurate ICD-9 code group prediction not only promotes better ICD-9 code determination, but also facilitates more reliable risk stratification by reporting on the severity, symptoms, and the use of resources across code groups, thus aiding disease-specific staging systems. In our work, two coherence-based topic modeling approaches, Coherence-based Latent Dirichlet Allocation (C-LDA) and Coherence-based Nonnegative Matrix Factorization (C-NMF) are employed to capture the semantic relationships between the textual features of the clinical notes and derive optimal data representations with a higher guarantee on human interpretability. We employ Far-Sight to aggregate the documented patient data in a way intended to detect the onset of the disease with the earliest recorded symptoms. Furthermore, we benchmark the performance of our proposed topic models using two neural architectures, including Multi-Layer Perceptron (MLP) and Attention-based Long Short Term Memory (A-LSTM). Additionally, we perform a sensitivity analysis to assess the statistical significance of the obtained results. The remainder of this paper is structured as follows: Section 2 describes the MIMIC-III database, the preprocessing steps, and the topic modeling approaches employed to obtain the optimal data representations from the raw clinical nursing notes. The deep neural architectures employed in the clinical task of ICD-9 code group prediction along with the discussion of the experimental results of our benchmarking are presented in Section 3. Finally, Section 4 summarizes this paper with highlights on future research possibilities.
In this section, we discuss in detail, the Natural Language Processing (NLP) pipeline designed to facilitate multi-label ICD-9 code group prediction, and the same is depicted in Figure MIMIC-III (v1.4) is a publicly available large healthcare database with comprehensive medical data of over 40, 000 ICU patients. The healthcare database contains 223, 556 nursing notes extracted from 2, 083, 180 note events (noteevents table), corresponding to 7, 704 distinct patients (diagnoses icd table). Two selection criteria were employed in the cohort selection. Firstly, only those records corresponding to the patients older than 15 (adults) were retained using the patient's age at the time of admission to the ICU (extracted from admissions and patients tables). Secondly, only the first admission of a patient to the hospital was considered. Both these steps were followed in accordance with the existing literature The data extracted from the MIMIC-III database contained erroneous patient entries due to several factors, including missing values, duplicate or incorrect records, outliers, and noise. The erroneous entries were filtered out using the iserror attribute of the noteevents table. Then, duplicate patient records were identified and deduplicated. The resultant dataset comprised of nursing notes corresponding to 6, 532 patients, and the data in these records were aggregated using the proposed Far-Sight technique. It is crucial to detect the onset of the disease with the earliest detected symptoms, to provide preventive care and reduce the mortality and morbidity of complications. We propose FarSight, which is designed to aggregate the patient data using a future lookup on all the detected diseases in the later medical records concerning that patient. Let P be the set of all patients, and let a patient p have a sequence of N clinical notes, i mapped to an ICD-9 code I (p) i indexed in the order from the oldest to the most recent. Now, Far-Sight aggregates the ICD-9 codes across the nursing notes of a patient using a future lookup, resulting in , where . Ultimately, we aim at learning a function F to estimate the probability of classifying a given nursing note η (p) j into a set of diagnostic code groups: F(S (p) ) ≈ Pr(I (p) | η p j ). Instead of aggregating several patient records, FarSight only aggregates the ICD-9 codes across a particular patient's nursing notes to facilitate risk stratification at the initial stages of the disease with the earliest recorded symptoms and infections. Let the set of all nursing notes be S = {S (p) } P p=1 . Each nursing note η j constitutes a variable length of words from a large vocabulary V, making S very complex. Thus, a transformation (T ) of the unstructured clinical text to a machine-processable form (T : ) is vital to the efficacy and performance of the underlying deep neural architectures. Topic modeling aims at finding a set of topics Determining the optimal number of LDA or NMF clusters is a challenging task. To address this issue, we utilize the Topic Coherence (TC) or semantic coherence where t i , t j ∈ T . The coherence score comes from external data, i.e., the data not used during training (we employed the full set of English Wikipedia articles), and is intended to regularize the topic models. 
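The FarSight aggregation described above can be pictured as a backward pass over a patient's chronologically ordered notes, so that each note inherits the ICD-9 codes attached to all later notes as well as its own. This is one reading of the "future lookup" description, with illustrative names; the final comment shows the intended behaviour on a toy sequence.

```python
def farsight_aggregate(note_codes):
    """Illustrative FarSight aggregation: each nursing note is labelled with the
    union of the ICD-9 codes attached to it and to all *later* notes of the same
    patient, so early notes carry the eventually-diagnosed codes.

    note_codes : list of sets of ICD-9 codes, ordered from oldest to most recent.
    """
    aggregated, future = [], set()
    for codes in reversed(note_codes):   # walk from the most recent note backwards
        future |= set(codes)
        aggregated.append(future.copy())
    aggregated.reverse()                 # restore oldest-to-newest order
    return aggregated

# farsight_aggregate([{"428"}, {"401"}, {"584"}])
# -> [{"428", "401", "584"}, {"401", "584"}, {"584"}]
```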
The NPMI similarity score is an extension of the pointwise mutual information score, and is used in finding associations and collocations between the words (2) PMI(t i , t j ) = log 2 Pr(t i , t j ) Pr(t i )Pr(t j ) (3) The individual confirmation measures obtained for all topics (T i s) are averaged to obtain the final coherence score. The number of topics for both LDA and NMF models was determined to be 100, by computing the coherence score of several topic models obtained by varying the number of topics. The LDA and NMF matrices were built on a bag-of-words representation of the clinical notes. For the ease of interpretation, a heat map presenting the correlations between top terms' membership in top five C-LDA clusters is presented in Figure From Figure ICD-9 codes are a taxonomy of diagnostic codes typically used by healthcare professionals and insurers when discussing medical conditions. This study only focuses on category-level (group) predictions, owing to the high granularity of the diagnostic codes. Each code group comprises a set of similar diseases, and most of the health conditions can be categorized into a unique group. This study focuses on the risk stratification as a multi-label problem, where each nursing note is mapped to multiple ICD-9 code groups. The ICD-9 codes for a given admission are mapped into 19 distinct code groups We used two deep neural architectures, Multilayer Perceptron (MLP) and Attention-based LSTM (A-LSTM), for the multi-label ICD-9 code group prediction task. The deep models were trained to minimize a binary cross-entropy loss function using an Adam optimizer, with a batch size of 128, for eight epochs. The MLP is a feed-forward artificial neural network consisting of multiple layers of neurons (nodes) interacting using weighted connections. MLP offers several advantages including adaptive learning, fault tolerance, parallelism, and generalizability. The output of a neuron in every layer serves as an input to the subsequent layer. A neuron in the current layer (l) with the input I (l) is activated in the following layer (l + 1) as g (l) (W (l) • I (l) + b (l) ), where g (l) is a non-linear activation such as Rectified Linear Unit (ReLU), tanh, or logistic sigmoid, and b (l) and W (l) are the bias and weight matrix at layer l. MLP uses backpropagation to determine the gradient of the loss function needed to learn an optimal set of weights and biases needed to minimize a loss function. This study employs an MLP network with one hidden layer of 75 nodes, activated using a ReLU function, and one output layer of 19 nodes, activated using a sigmoid function. The LSTM effectively captures the long-term dependencies and overcomes the gradient vanishing problem which is crucial in the accurate risk stratification using unstructured nursing notes. LSTMs introduce an adaptive gating mechanism to determine the extent to which the LSTM memory units must retain the previous state (c t-1 ) and memorize the features in the current state (c t ). Typically, four gates composite an LSTM network including the input gate i, the forget gate f , the output gate o, and the candidate value g for the cell state. The precise form of an LSTM update at a layer l and time step t is computed as: where denotes element-wise multiplication, h t is the output at a time step t, and W (l) is a [4n×2n] weight matrix at layer l. 
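The LSTM update referred to at the end of the previous paragraph (whose equation block did not survive extraction) is the standard formulation with input, forget, output, and candidate gates acting on [x_t; h_{t-1}] through a [4n × 2n] weight matrix. For reference, a plain PyTorch rendering of one step of that standard update is given below; it is the textbook form rather than code from the paper.

```python
import torch

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of the standard LSTM update with gates i, f, o and candidate g.

    W : (4n, 2n) weight matrix acting on [x_t; h_prev];  b : (4n,) bias.
    """
    n = h_prev.size(-1)
    gates = torch.cat([x_t, h_prev], dim=-1) @ W.t() + b       # (B, 4n)
    i, f, o, g = gates.split(n, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    c_t = f * c_prev + i * g        # retain part of the previous state, add new content
    h_t = o * torch.tanh(c_t)       # output at time step t
    return h_t, c_t
```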
Attentive neural models have been successfully applied to several NLP tasks including sentence summarization, text entailment, and reading comprehension To experimentally validate the proposed approach, we performed an exhaustive benchmarking on the clinical nursing notes obtained from the MIMIC-III database. The experiments were performed using a server running Ubuntu OS with 56 cores of Intel Xeon processors, 128 GB RAM, 3 TB hard drive, and two NVIDIA Tesla M40 GPUs. A significant challenge arose due to the manifold nature of diseases, as each patient record was assigned a set of ICD-9 code groups. This study employs a pair-wise comparison of the actual and predicted code group sets. Five standard evaluation metrics including Accuracy (ACC), F1 score, MCC score, Area Under the Precision-Recall Curve (AUPRC), and Area Under the ROC Curve (AU-ROC) were employed to evaluate the performance of the proposed coherence-based modeling approaches, classified using MLP and A-LSTM. Ten-fold cross-validation was performed to assess the predictability of the proposed models. Table AUPRC varies with the change in the ratio of the target classes in the data and hence is more informative than AUROC while evaluating imbalanced data The experimental results in Table To understand the distribution of the underlying data, we employed the Kolmogorov-Smirnov test for normality Although the proposed approach effectively stratifies the patients' risk and the associated complications, it can be enhanced further, which calls for further research on this topic. First, the proposed approach only models the unstructured nursing text and neglects the structured EHR information (e.g., lab results), which can potentially be utilized to facilitate robust patient profiling. Second, the modeling presented in this study does not account for real-time clinical data. In the future, we intend on exploring the techniques for modeling structured EHR data along with the data modeled from the unstructured clinical nursing notes. We also aim at validating our model on realtime clinical data to enhance its predictability and adaptability, thus focusing on the need for timeaware, dependable architectures in real-world hospital scenarios.
Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign
In this paper we report on the complexity of persuasion technique annotation in the context of a large multilingual annotation campaign involving 6 languages and approximately 40 annotators. We highlight the techniques that appear to be difficult for humans to annotate and elaborate on our findings on the causes of this phenomenon. We introduce Holistic IAA, a new word embedding-based annotator agreement metric and we report on various experiments using this metric and its correlation with the traditional Inter Annotator Agreement (IAA) metrics. However, given somewhat limited and loose interaction between annotators, i.e., only a few annotators annotate the same document subsets, we try to devise a way to assess the coherence of the entire dataset and strive to find a good proxy for IAA between annotators tasked to annotate different documents and in different languages, for which classical IAA metrics can not be applied.
In the recent years we have observed an emergence of automated tools for facilitating online media analysis for better understanding of the presented narratives around certain topics across countries, and to identify manipulative, deceptive and propagandistic content. Developing such tools requires annotated data of high quality. We report on the complexity of annotating such manipulative devices, i.e., persuasion techniques, in the context of a large annotation campaign involving 6 languages and approximately 40 annotators, whose details are described in • share some lessons learned from this large multi-lingual annotation campaign that might be beneficial for other researchers planing similar tasks, • present a detailed analysis of the disagreements between annotators and potential causes thereof and try to measure the complexity of the annotation task, and • propose a new concept of measuring Inter-Annotator Agreement (IAA) in a multilingual set-up, to overcome the limitations of the classical IAA metrics in such scenario. We first highlight the techniques that appear to be difficult for humans to annotate using the classical Cohen's κ Classical IAA measures impose certain limitations. First, they only capture the coherence of the annotations in texts written in the same language. Secondly, considering annotations done for a single language, there were many annotators, but annotating totally different subsets of documents. The classical IAA metrics are computed using a tiny fraction of the whole dataset: the one where the annotators annotated the same articles, despite the fact that the exact same text could be annotated in different articles by different annotators. Finally, the classical IAA measures only capture agreement at the time of the annotation, but do not tell us anything about the coherence and quality of the final curated dataset. In order to overcome the aforementioned limitations, we introduce Holistic IAA, a new multilingual word embedding-based IAA metric and we report on various experiments using it and its correlation with the traditional IAA metrics. However, given somewhat limited and loose interaction between annotators, i.e., only a few annotators annotate the same document subsets, we try to devise a way to assess the coherence of the entire dataset and strive to find a good proxy for IAA between annotators tasked to annotate different documents and in different languages. We present our preliminary results on this research problem with an ultimate goal of establishing a mechanism that allows to compare all annotators no matter which document they annotated, and to detect diverging annotations across languages. Our contributions can be summarized as follows: (i) we measure how confusing were the persuasion technique labels for different groups of annotators; (ii) we assess the coherence of the dataset using standard IAA measures; (iii) we introduce a new mutlilingual pancorpus IAA measure based on semantic similarity; (iv) we exploit this new measure on the raw and curated annotations of the annotators, and compare the resulting ranking of annotators to the one obtained by standard IAA measurements; (v) we comment on the self-coherence of the annotators using the new measure, as well as of the dataset language-wise. This paper focuses primarily on the annotation agreement and complexity, whereas the description of the resulting dataset is kept to the minimum necessary for understanding the content. 
For further details, please refer to the separate dataset description. The paper is organized as follows. Section 2 reports on related work. Section 3 introduces the persuasion technique taxonomy and describes the annotation process. Section 4 reports on the annotation coherence computed using traditional IAA metrics and highlights the hard-to-annotate techniques. Section 5 introduces a new word embedding-based annotator agreement metric and reports on various experiments using it and on its correlation with the traditional IAA metrics. We end with concluding remarks in Section 6.
Persuasion detection in text is related to work on propaganda detection. The work in the latter area initially focused on document-level analysis and predictions, e.g., In parallel, other efforts focused on the detection of specific persuasion techniques. Various related shared tasks on the detection of persuasion techniques were organized recently, and various taxonomies were introduced Related work on IAA which explores going beyond the limitation of standard measures was reported in The taxonomy used in our annotation endeavour is an extension of the taxonomy introduced in Da San Our annotation task consisted of annotating persuasion techniques in a corpus consisting of circa 1600 news articles revolving around various globally discussed topics in six languages: English, French, German, Italian, Polish, and Russian, using the taxonomy introduced earlier. A balanced mix of mainstream media and "alternative" media sources that could potentially spread mis/disinformation were considered for the sake of creating the dataset. Furthermore, sources with different political orientation were covered as well. The pool of annotators consisted of circa 40 persons, all native or near-native speakers of the language they annotated. Most of the annotators were either media analysts or researchers and experts in (computational) linguistics, where approximately 80% of the annotators had prior experience in performing linguistic annotations of news-like texts. A thorough training was provided to all annotators which consisted of: (a) reading a 60-page annotation guidelines Annotations were curated in two steps. In the first step (document-level curation) the independent annotations were jointly discussed by the annotators and a curator, where the latter was a more experienced annotator, whose role was to facilitate making a decision about the final annotations, including: (a) merging the complementary annotations (tagged only by one annotator), and (b) resolving the identified potential label conflicts. In the second step (corpus-level curation) a global consistency analysis was carried out. The rationale behind this second step was to identify inconsistencies that are difficult to spot using single-document annotation view and do comparison at corpus level, e.g., comparing whether identical or near-identical text snippets were tagged with the same or a similar label (which should be intuitively the case in most situations). The global consistency analysis sketched above proved to be essential to ensure the high quality of the annotations. The annotation resulted in annotation of approx. 1600 documents with ca. 37K text spans annotated. The dataset is highly imbalanced. The class distribution and some statistics are provided in Annex B We measured the Inter-Annotator Agreement (IAA) using Krippendorff's α, achieving a value of 0.342. This is lower than the recommended threshold of 0.667, but we should note that this value represents the agreement level before curation, and as such, it is more representative of the curation difficulty rather than of the quality of the final consolidated annotations. We used the IAA during the campaign to allocate curation roles and to remove low-performing annotators. We further studied the IAA by ranking the annotators by their performance with respect to the ground truth on the subset of documents they annotated. 
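For readers who wish to reproduce an agreement value of this kind, the sketch below computes Krippendorff's α with the open-source krippendorff Python package; the choice of package is an assumption (the paper does not specify its tooling), and the small reliability matrix is purely illustrative.

```python
# Hypothetical sketch: Krippendorff's alpha over a small reliability matrix.
# Rows = annotators, columns = annotation units, np.nan = unit not annotated.
import numpy as np
import krippendorff  # assumed: the "krippendorff" PyPI package

reliability = np.array([
    [1,      2, 2, np.nan, 1, 3],
    [1,      2, 3, 2,      1, 3],
    [np.nan, 2, 2, 2,      1, 1],
])
alpha = krippendorff.alpha(reliability_data=reliability,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")
```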
We split then the annotators into two groups: top and low based on subjective assessment by the curators after the end of the curation campaign, this assessment was then further confirmed numerically (see Annex E for details). Their respective average α were 0.415 and 0.250. Finally, we considered the α of the group of the curators, in order to make an approximate estimation of the coherence of the curated dataset, as we expect these curators to consistently curate the data with at least the same coherence they had when annotating documents. There are only two such curators, whose α is of 0.588, which is lower but close to the recommended value. In Figure One can see that Loaded Language (MW:LL) is the single label that is most confused with any other label, and the Name Calling (AR:NCL) is the label with which it co-occurs most, and indeed, these two labels have a very similar definition. The same applies to the pair Casting Doubt (AR:D) and Questioning the Reputation (AR:QCR). In order to study which persuasion techniques are more difficult to annotate we again divided the annotators in 3 groups: all which contains all the annotators, top which contains half of the annotators whose performance are the highest as measured by their average Cohen's κ agreement, and low which contains the rest of the annotators. For each of these groups, and for each of the persuasion techniques, we measured how annotators in a given group tend to disagree with each otherirrespective of the actual ground truth. More precisely, we compute for each pair of annotators and for all their overlapping annotations the percentage of disagreeing annotations for a given label divided by the total number of annotations between them with that label. Here, annotations of two annotators are considered overlapping if one is at most 10% longer or shorter than the other one, taking into account the exact position of the annotations in the text. We report these numbers in Table In order to interpret the results, it is also important to take into account that the 2 sub-groups, namely, top and low, also do interact with each other. We consider the following indicator of complexity: for each of the group if the disagreement is above a given threshold c that we fixed for illustration purpose at 0.25 in the table, the corresponding values are boldfaced. We also divide the techniques in the table (column 'difficulty') into four general annotation complexity classes based on the overall disagreement: very easy (all ≤ .1, in light green), easy (all ≤ .25, in green), moderate (all ≤ .4, in orange), and difficult (all > .4, in red). Additionally, we consider the following indicator: if top > all or if top > low (the techniques for which this applies are marked with an asterisk in the table ). One can see that a high low value does not necessarily mean that the label is actually hard, for instance, the label False Dilemma is very well understood by the top group. High low value and low top value denotes a label whose understanding is not straightforward but does not pose special learning problem, in such case improving annotations for this label requires simply insisting on more basic training. On the contrary, when the top value is higher than the others (techniques marked with an asterisk), it means that at least one of the groups agrees more with the other group than top group with itself, meaning that there is an inconsistent understanding of the label within the group. 
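The per-label disagreement statistic used above can be sketched as follows. This is one possible interpretation under stated assumptions: two span annotations are treated as overlapping when they occupy roughly the same position and differ in length by at most 10%, and the disagreement for a label is the fraction of such overlapping pairs in which only one of the two annotators used that label. The span format and the example annotations are made up for illustration, not the campaign's actual data format.

```python
# Hypothetical sketch of the pairwise per-label disagreement computation.
from collections import defaultdict

def overlapping(span_a, span_b, tol=0.10):
    """span = (start, end); require positional overlap and <=10% length difference."""
    (s1, e1), (s2, e2) = span_a, span_b
    len1, len2 = e1 - s1, e2 - s2
    if max(s1, s2) >= min(e1, e2):           # no positional overlap at all
        return False
    return abs(len1 - len2) <= tol * max(len1, len2)

def per_label_disagreement(ann_a, ann_b):
    """ann_* : list of ((start, end), label) annotations on the same document."""
    totals, disagreements = defaultdict(int), defaultdict(int)
    for span_a, lab_a in ann_a:
        for span_b, lab_b in ann_b:
            if overlapping(span_a, span_b):
                for lab in {lab_a, lab_b}:
                    totals[lab] += 1
                    if lab_a != lab_b:
                        disagreements[lab] += 1
    return {lab: disagreements[lab] / totals[lab] for lab in totals}

ann_a = [((0, 15), "MW:LL"), ((40, 60), "AR:D")]
ann_b = [((0, 14), "AR:NCL"), ((41, 60), "AR:D")]
print(per_label_disagreement(ann_a, ann_b))
```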
This could indicate a difficult label requiring additional clarification to be made to all annotators, or a potential inconsistency in the label definition. This is, for instance, the case for the label Repetition, which is indeed inconsistent as it includes two very different definitions of repetition. The overall picture of the annotation complexity classes resembles to the per-label performances of classifier systems reported in The class Doubt has one of the best reported F 1 scores, however, it has a difficult annotation complexity, the reason being that it is one of the most confused classes, as it is often a subpart of other techniques. Some hard labels remain a challenge even for top annotators, and as such selecting 'reliable' annotators solely based on their overall IAA might not be sufficient to ensure the best quality of annotations, it is also important to identify for which labels additional training might be necessary. Quantifying the annotation complexity of an annotation campaign in such a way gives an understanding of the difficulty of the task, and allows to identify incoherent understanding of the guidelines early on, and gives a more refined understanding of the quality of the annotations than considering IAA measures alone. On top of the findings on annotation complexity we additionally summarize here our findings on the sources of disagreements and annotation complexity from the continuous meetings with annotators and curators: • disregarding small nuances in the definition of Loaded Language and Name Calling we noticed that disagreements and annotation or non-annotation of some instances were due to subjective perception linked to cultural differences, which was apparent when comparing annotations across languages, • some annotators had problems with the Justification techniques, including, in particular, Appeal to Popularity, Appeal to Values, Appeal to Authority due to not understanding upfront that one subjective opinions on what is considered a value or an authority does not play a role for definition of these techniques, and not considering the role of negation, e.g., not understanding that making a reference to something not being popular falls per definition under Appeal to Popularity too, • many annotators, who probably did not read the guidelines thoroughly, literally interpreted some persuasion technique definitions, e.g., in the context of Simplification techniques, instead of detecting certain logic patterns in text (see Annex A for definitions), the annotators literally interpreted the word 'simplification' and reasoned based on the base of whether the presentation of the information is too simplistic and certain facts were downplayed or exaggerated, which is actually linked to a different technique, i.e., Exaggeration-Minimisation, • some of the media analysts who served as annotators were often using background knowledge (professional bias) to make decisions whether some text fragments are instances of persuasion techniques, which was strictly prohibited by the guidelines; this was mainly related to Simplifications and Distractions, • some of the annotators, in particular, media analysts were making a direct link of persuasion technique labeling with fact verification, which was not in line with the guidelines. 
To sum up, for the major fraction of persuasion techniques the disagreements resulted not from subjective perceptions of the annotators, but mainly from not sticking strictly to the definitions provided in the 60-page guidelines and/or from professional background bias that led to misinterpretation of the persuasion technique definitions.

5 Embedding-based IAA Assessment

We introduce a new measure, Holistic IAA, which allows us to compare an annotator with any other annotator, even if they did not annotate a single document in common and annotated documents in different languages. The metric exploits the property of multilingual aligned sentence embeddings, which assign similar vector representations to sentences in different languages with the same meaning, as well as to different sentences with a similar meaning in a given language.

Formally, we introduce the holistic agreement between two annotators as o_{θl,θs}(a1, a2), where a_i is the function that maps an input text to the label assigned by annotator a_i, θl is a threshold on the length ratio of any two strings, and θs is a threshold on the similarity measure sim_M, defined for an embedding model M as the cosine similarity between the embedding vectors of the input strings (we denote the measure with o for the first letter of the word "holos" in Greek). We define the set of Comparable Text Pairs (CTP) between two sets of texts X and Y as:

CTP_{X,Y}^{θl,θs} = {(x, y) ∈ X × Y : sim_M(x, y) ≥ θs and min(|x|, |y|) / max(|x|, |y|) ≥ θl}

Using this definition and defining S(a_i) as the function returning all the text spans annotated by annotator a_i, we define the Holistic IAA for two annotators as:

o_{θl,θs}(a1, a2) = |{(x, y) ∈ CTP_{S(a1),S(a2)}^{θl,θs} : a1(x) = a2(y)}| / |CTP_{S(a1),S(a2)}^{θl,θs}|

Extending from single annotators to groups of annotators A and B, obtained by pooling the spans annotated within each group, yields the more generic formulation.

In a first step, the embedding of each text span annotated by each annotator is computed and stored in a vector database, together with the following metadata: the document id, the annotator and the label. We use FAISS for the vector database, without quantization and with cosine distance. The accompanying figure illustrates the cross-lingual retrieval quality with two example queries: the Russian query "недопустимым" (unacceptable) retrieves "insupportable" (fr), "insostenibile" (it) and "Inacceptable" (fr), while the French query "tout simplement, un mensonge" (simply a lie) retrieves "È tutta una menzogna" (it), "jawne kłamstwo" (pl), "alles wieder eine große Lüge" (de) and "оголтелое вранье" (ru), all semantically equivalent spans in other languages.

In order to validate the approach, we perform a rank correlation analysis between the rankings computed by standard IAA techniques and those produced by our approach, using Kendall's Tau rank correlation coefficient. The raw annotations allow us to compute pairwise IAA with Cohen's κ between annotators who have annotated the exact same documents. For each annotator, we consider the ranking of the annotators they can be compared to and which have at least 10 annotations in common.
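A minimal sketch of the Holistic IAA computation is given below, under stated assumptions: embed is any function mapping a text span to a fixed-size vector (a real multilingual sentence encoder in the paper, a toy character-count embedding in the example), and a flat inner-product FAISS index over L2-normalised vectors is used so that the inner product equals cosine similarity. The threshold values and the example spans are illustrative only.

```python
# Sketch of Holistic IAA over two annotators' span annotations (assumptions noted above).
import numpy as np
import faiss

def holistic_iaa(spans_a, spans_b, embed, theta_s=0.9, theta_l=0.75):
    """spans_* : list of (text, label) pairs annotated by annotator a / b."""
    emb_a = np.asarray([embed(t) for t, _ in spans_a], dtype="float32")
    emb_b = np.asarray([embed(t) for t, _ in spans_b], dtype="float32")
    faiss.normalize_L2(emb_a)
    faiss.normalize_L2(emb_b)
    index = faiss.IndexFlatIP(emb_b.shape[1])
    index.add(emb_b)
    sims, ids = index.search(emb_a, emb_b.shape[0])      # exhaustive search

    agree = total = 0
    for i, (text_a, lab_a) in enumerate(spans_a):
        for sim, j in zip(sims[i], ids[i]):
            text_b, lab_b = spans_b[j]
            ratio = min(len(text_a), len(text_b)) / max(len(text_a), len(text_b))
            if sim >= theta_s and ratio >= theta_l:       # comparable text pair (CTP)
                total += 1
                agree += int(lab_a == lab_b)
    return agree / total if total else float("nan")

# Toy character-count embedding, a stand-in for a real multilingual encoder.
def embed(text):
    v = np.zeros(64)
    for ch in text.lower():
        v[hash(ch) % 64] += 1.0
    return v

spans_ann1 = [("special operation", "MW:LL")]
spans_ann2 = [("special operation", "AR:NCL"), ("an enormous lie", "MW:LL")]
# Identical texts with different labels count as a disagreement.
print(holistic_iaa(spans_ann1, spans_ann2, embed))
```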
Given the raw annotations dataset, we compute the Holistic IAA o value, and for each annotator we rank all the other annotators to which it can be compared to, as measured by the average level of agreement on labels for semantically similar text spans. We compare the ranking of most 'similar' annotators for each annotator computed using Cohen's κ with the ranking computed using Holistic IAA on the same subset of annotators. We consider 3 rankings: strict Cohen's κ; same ranking is done on the same set of documents and annotators as the one used to compute Cohen's κ; diff ranking is done on the same pair of annotators, but strictly on documents that were not jointly annotated by them. We perform a simple grid search over the hyper-parameters θ s and θ l . In Table Optimal parameters are too conservative and as such the CTP set was too small in order to compare all annotators or groups of annotators, and a such prevented from further studying the properties of Holistic IAA. This proves that the Holistic IAA can be used as a proxy for the pan-document pan-annotators agreement for some specific set of parameters, however, without the possibility to precisely link its value to other standard IAA measures, and with the caveat that the correlation is positive yet not perfect. As such, Holistic IAA can be used mainly to comment on the qualitative difference in agreement between different subsets of annotations. Table We performed an error analysis of the confusions found using Holistic IAA: using the 33k+ confusions found by the approach over the dataset, for each pair of labels we evaluate up to 5 alleged confusions and graded the similarity between the corresponding texts on a 3-tier scale. Two texts are considered: identical if the meaning is so close that minor nuance in text would not alter the label chosen (e.g. "opération spéciale" (fr) and "Spezialoperation" (de) both meaning "special operation"); close if the meaning is substantially different, but semantically close enough making the label debatable and worthy to be flagged to a curator for review, for instance one text could be more generic than the other one (e.g. "finì molto male" (it) = "it ended badly" and "durement mise à mal" (fr) = "badly impacted"); unrelated if the meaning is unrelated -even if the texts contain the same elements. A total of 502 data points were annotated. Note that only texts containing at least one space were considered. In Table We can also see the difficulty of setting cutoff boundaries as the range of minimum and maximum semantic distance is overlapping between all the 3 classes, and with close and identical having almost the same mean boundaries. We can nevertheless observe that the mean value of close is 0.75, making it a reasonable candidate for θ l . These results show that about half of the annotations flagged by the system were indeed of interest to the curators. However, as such, the results are highly dependent on the model used. Future work will require to identify embeddings with a larger margin between the classes in order to make the work of the curators more efficient. Table In order to further evidentiate the behavior of Holistsic IAA, we use it to quantify the impact of the corpus-level curation step. This step was performed per-language after the usual documentlevel curation step was accomplished. The data was sorted per-label and the master curators looked at the overall coherence of the annotated text-span label pairs, the context of the spans was also provided. 
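The rank-correlation check described above can be illustrated as follows: for one annotator, the ranking of peer annotators induced by Cohen's κ is compared with the ranking induced by Holistic IAA using Kendall's Tau. The score dictionaries are hypothetical placeholders, not values from the campaign.

```python
# Sketch of the Kendall's Tau rank correlation between two annotator rankings.
from scipy.stats import kendalltau

kappa_scores    = {"ann2": 0.61, "ann3": 0.48, "ann4": 0.35, "ann5": 0.52}
holistic_scores = {"ann2": 0.55, "ann3": 0.42, "ann4": 0.40, "ann5": 0.50}

peers = sorted(kappa_scores)                   # fixed order of peer annotators
tau, p_value = kendalltau([kappa_scores[p] for p in peers],
                          [holistic_scores[p] for p in peers])
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```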
This step lead to several corrections and is understood to have boosted the overall coherence of the dataset, and should be reflected with a higher o value for the corpus. In Table In Table In Figure Overall, when considering the corpus we observe a quality increase as measured by the o value. Knowing the dataset coherence computed using standard IAA measures in a monolingual setting, and comparing it with values computed using Holistic IAA, we extrapolate from it the coherence of the entire multilingual dataset. Only two curators have jointly annotated the same set of documents while acting as annotators before the curation phase and taking on the curator role, as such we can compute the Krippendorff's α between them, which is 0.588, a little under the recommended value. The o value between them on the same data is 0.420. A group of 3 "master curators" covered all the languages and curated most of the dataset. Their average o value on the raw annotations is of 0.565. This higher value illustrates the fact that the coherence of the annotations in the final dataset is higher than when measured on the raw annotations. We now consider only the curated dataset. In Figure However, most of the inter-language o values are much lower than the intra-language values. We believe this to be due to 2 factors: 1) each curation was performed per-language, ignoring the others, thereby increasing the self coherence of each language; 2) as in the case of the diff vs. strict in Figure In Table We reported on the complexity of annotating persuasion techniques in a large-scale multilingual annotation campaign. We introduced the Holistic IAA paradigm, a new measure to serve as a proxy of the estimation of inter-annotator agreement and actual corpus coherence in settings that are fundamentally outside the scope of usual IAA measures. We demonstrate that annotator ranking computed using this new measure is positive and can highly correlates with ranking computed using Cohen's Kappa in some settings. Using it, we can observe the beneficial impact of the second step of our 2step curation phase, and also identify similarity and divergence between annotators for some subsets of labels. The experiment conducted in this study supports what was informally remarked regarding the estimation of the performance of the annotators and increased our confidence in the coherence of the final corpus. We believe that using Holistic IAA as part of the monitoring of multilingual or monolingual large-scale annotation campaigns could help to spot problems by flagging potential incoherence in the labels of semantically similar sentences at an early stage. In future work we envisage exploration of thresholds for finer interpretation and exploring the use of other semantic similarity models. Distribution Representativeness Although the underlying corpus of annotated news articles covers a wide range of topics as well as media from all sides of the political spectrum it should neither be seen as representative nor balanced in any specific way w.r.t. media in any country. Consequently, the distribution of the annotated persuasion techniques might, in principle, not be fully representative as well. Biases Given that human data annotation involves some degree of subjectivity we created a comprehensive 60-page annotation guidelines document to clarify important cases during the annotation process. 
Nevertheless, some degree of intrinsic subjectivity might have impacted the techniques picked up by the annotators during the annotation, and impacted so the distribution thereof in the final dataset. Furthermore, although the taxonomy used in this annotation campaign covers most of the 'popular' techniques used in the media, we identified some persuasive attempts which could not have been matched with any of the techniques in the existing taxonomy, and were tagged as OTHER (less than 3% of all annotations) and were not considered in the reported work, which once again poses a certain limitation with respect to the representativeness of persuasion technique types used in the media. Methodology Soundness Our results are limited to certain extent, in particular, the introduced IAA metric should be considered as a proof of concept since certain approximations and simplifications were made and parameters were chosen, e.g., the choice for cutoff of maximal retrieved similar sentences, the length ratio to select sentence to be compared is constrained, and the choice of similarity metrics for computing semantic similarity that exploits a specific sentence embeddings model. Different settings and choices could yield different results. Disregarding of these shortcomings, the new metric helped to circumvent the limited scope and utility of classical IAA in such a large-scale multilingual campaign. We believe that the proposed methodology presented in this paper is too some extent generic, and would be of great interest to the community. The approach considers only the text of the annotation, as such their context is ignored. This limitation is mitigated in case the annotation guidelines do not specify that the span of annotation must contain all necessary information to unambiguously determine the label, which is the case in the campaign whose data was used to illustrate our approach. Biases The news articles for the creation of the underlying dataset were sampled in such a way in order to have a balanced representation with respect to different points of view and type of media. We also strived to engage a mix of annotators with different backgrounds, i.e., both media analysts and computational linguists. Furthermore, the annotators were explicitly instructed not take their personal feeling about the particular topic and to objectively focus on identifying whether specific persuasion techniques were used. Disregarding the aforementioned efforts, the distribution of the various persuasion techniques annotated might not perfectly reflect the broader spectrum of the media landscape in the target languages, which should be taken into account in exploiting the related statistical information for any kind of analysis, etc. Analogously, the findings and statistics related to the annotation complexity are linked to the specific pool of annotators engaged in the campaign, and, consequently, they should be considered as approximative. Intended Use and Misuse Potential The reported work focuses solely on sharing experience with the research community on annotating persuasion techniques in news articles in a large campaign, analysis of the difficulty of annotating such techniques, and ways of measuring annotation agreement and consistency across languages. The reported work is not linked to a release of the underlying annotated dataset, which is a subject of different publication and related ethical considerations. 
The two-tier persuasion technique taxonomy has 6 coarse-grained categories: Attack on reputation: The argument does not address the topic, but rather targets the participant (personality, experience, deeds) in order to question and/or to undermine their credibility. The object of the argumentation can also refer to a group of individuals, an organization, an object, or an activity. Justification: The argument is made of two parts, a statement and an explanation or an appeal, where the latter is used to justify and/or to support the statement. Simplification: The argument excessively simplifies a problem, usually regarding the cause, the consequence or the existence of choices. Distraction: The argument takes focus away from the main topic or argument to distract the reader. Call: The text is not an argument, but an encouragement to act or to think in a particular way. Manipulative wording: the text is not an argument, but uses specific language, which contains words or phrases that are either non-neutral, confusing, exaggerating, loaded, etc., in order to impact the reader emotionally. They are further subdivided into 23 fine-grained persuasion techniques. The full list of the finegrained techniques is presented in 3, whereas some examples of text snippets representing various persuasion techniques are provided in Figure In Figure This section provides an excerpt from the annotation guidelines • if one has doubts whether a given text fragment contains a persuasion technique then do not annotate it, (conservative approach) • select the minimal amount of text Name Calling or Labelling [AR:NCL]: a form of argument in which loaded labels are directed at an individual, group, object or activity, typically in an insulting or demeaning way, but also using labels the target audience finds desirable. Guilt by Association [AR:GA]: attacking the opponent or an activity by associating it with a another group, activity or concept that has sharp negative connotations for the target audience. Casting Doubt [AR:D]: questioning the character or personal attributes of someone or something in order to question their general credibility or quality. Appeal to Hypocrisy [AR:AH]: the target of the technique is attacked on its reputation by charging them with hypocrisy/inconsistency. Questioning the Reputation [AR:QR]: the target is attacked by making strong negative claims about it, focusing specially on undermining its character and moral stature rather than relying on an argument about the topic. Flag Waiving [J:FW]: justifying an idea by exhaling the pride of a group or highlighting the benefits for that specific group. Appeal to Authority [J:AA]: a weight is given to an argument, an idea or information by simply stating that a particular entity considered as an authority is the source of the information. Appeal to Popularity [J:AP]: a weight is given to an argument or idea by justifying it on the basis that allegedly "everybody" (or the large majority) agrees with it or "nobody" disagrees with it. Appeal to Values [J:AV]: a weight is given to an idea by linking it to values seen by the target audience as positive. Appeal to Fear, Prejudice [J:AF]: promotes or rejects an idea through the repulsion or fear of the audience towards this idea. Strawman [D:SM]: consists in making an impression of refuting an argument of the opponent's proposition, whereas the real subject of the argument was not addressed or refuted, but instead replaced with a false one. 
Red Herring [D:RH]: consists in diverting the attention of the audience from the main topic being discussed, by introducing another topic, which is irrelevant. Whataboutism [D:W]: a technique that attempts to discredit an opponent's position by charging them with hypocrisy without directly disproving their argument. Causal Oversimplification [S:CaO]: assuming a single cause or reason when there are actually multiple causes for an issue. False Dilemma or No Choice [S:FDNC]: a logical fallacy that presents only two options or sides when there are many options or sides. In extreme, the author tells the audience exactly what actions to take, eliminating any other possible choices. Consequential Oversimplification [S:CoO]: is an assertion one is making of some "first" event/action leading to a domino-like chain of events that have some significant negative (positive) effects and consequences that appear to be ludicrous or unwarranted or with each step in the chain more and more improbable. Appeal to Authority: Since the Pope said that this aspect of the doctrine is true we should add it to the creed. Appeal to Popularity: Because everyone else goes away to college, it must be the right thing to do. Appeal to Values: It's standard practice to pay men more than women so we'll continue adhering to the same standards this company has always followed. Appeal to Fear, Prejudice: It is a great disservice to the Church to maintain the pretense that there is nothing problematical about Amoris laetitia. A moral catastrophe is self-evidently underway and it is not possible honestly to deny its cause. Strawman: Referring to your claim that providing medicare for all citizens would be costly and a danger to the free market, I infer that you don't care if people die from not having healthcare, so we are not going to support your endeavour. Red Herring: Lately, there has been a lot of criticism regarding the quality of our product. We've decided to have a new sale in response, so you can buy more at a lower cost!. Whataboutism: A nation deflects criticism of its recent human rights violations by pointing to the history of slavery in the United States. Causal Oversimplification: School violence has gone up and academic performance has gone down since video games featuring violence were introduced. Therefore, video games with violence should be banned, resulting in school improvement. There is no alternative to Pfizer Covid-19 vaccine. Either one takes it or one dies. Consequential Oversimplification: If we begin to restrict freedom of speech, this will encourage the government to infringe upon other fundamental rights, and eventually this will result in a totalitarian state where citizens have little to no control of their lives and decisions they make Slogans: "Immigrants welcome, racist not! Conversation Killer: I'm not so naive or simplistic to believe we can eliminate wars. You can't change human nature. Appeal to Time: This is no time to engage in the luxury of cooling off or to take the tranquilizing drug of gradualism. Now is the time to make real the promises of democracy. Now is the time to rise from the dark and desolate valley of segregation to the sunlit path of racial justice. Loaded Language: They keep feeding these people with trash. They should stop. 
• avoid personal bias (i.e., opinion and emotions) on the topic being discussed as this has nothing to do with the annotation of persuasion techniques, • do not exploit external knowledge to decide whether given text fragment should be tagged as a persuasion technique, • do not confuse persuasion technique detection with fact checking. A given text fragment might contain a claim which is known to be true, but that does not imply there are no persuasion techniques to annotate in this particular text fragment, • often, authors use irony (not being explicitly part of the taxonomy), which in most cases serves a purpose to persuade the reader, most frequently to attack the reputation of someone or something. In such cases the respective persuasion technique type should be used, or other if the use of irony does not fall under any persuasion technique type in the taxonomy, • in case of quotations or reporting of what a given person said the annotation of the persuasion techniques within the boundaries of that quotation should be done from the perspective of that person who is making some statement or claim (point of reference) and not from the author perspective. For each persuasion technique we have also specified what text fragment should be annotated in the document. The general rule is to annotate the minimum amount of text that can be considered as a trigger to spot the technique, even if it requires an understanding of the context that spans over more than one of the preceding sentences. Sometimes, the to-be-annotated text fragment might go beyond the boundaries of one single sentence. In the following we briefly summarize the rules for all the techniques. Name Calling or Labelling: The noun phrase, the adjective that constitutes the label and/or the name. If quotation marks are used, they should be included in the annotation as well. Guilt by Association: The part of text that refers to an entity and a mention of someone else (considered evil/negative) doing the same or similar thing that is considered negative. The mention of the activity of the target entity might be implicit. Casting Doubt: Only the text fragment that questions the credibility and the object whose credibility is being questioned. There is no need to include the full context. Appeal to Hypocrisy: The text phrase embracing a certain activity, and another one which is used as an argument to accuse the former as being a hypocrite. Questioning the Reputation: Only the text fragments that refer to something negative being mentioned about the person/group/object. Flag Waving: The part of the text that refers to patriotism or other group related values, and the conclusion/action it is supposed to support if it is present in the text. Appeal to Authority: The part of the text that refers to the authority (and potentially some of his/her statement/opinion/action), and the conclusion it supports, in case the latter is present in the text. Appeal to Popularity: The part of the text that refers to something that a majority does or seems to be widely supported and/or is popular together with the conclusion it is supposed to support. Appeal to Values: The part of the text that refers to values, and include the conclusion it is supposed to support, in case the latter is included explicitly in the text. or a false conclusion drawn therefrom should be annotated, although, often not all parts of the pattern above are explicitly mentioned in the text. 
False Dilemma or No Choice: The minimal text fragment that matches one of the following logical patterns should be annotated: (a) Black & White Fallacy: There are only two alternatives A and B to a given problem/task. It cannot be A. Therefore, the only solution is B (since A is not an option). The only solution to a given problem/task is A. although, often not all parts of the pattern above are explicitly mentioned in the text. The entire text fragment that matches the above logical pattern should be annotated: if A will happen then B, C, D, ... will happen where: -A is something one is trying to reject (support) -B, C, D are perceived as some potential negative (positive) consequences happening if A happens. The slogan only (no need to annotate the conclusion it supports), and in case it is surrounded Conversation Killer: A minimal text span that triggers ending the conversation, discussion, etc. Appeal to Time: A minimal text span referring to the argument of time that calls for some action. Both the call and the action should be annotated. Loaded Language: Only the phrase containing loaded words, the context in which they appear should not be annotated. As a general rule one should consider to tag longer text fragment if and only if each of the words adds more emotional 'load' to the text fragment. The minimal text fragment that introduces confusion: it could be a word, but also a longer piece of text that requires to be read in order to understand the confusion it causes. Exaggeration or Minimisation: The text fragment that provides the description that downplays or exaggerates the object of criticism. The latter should be included in the annotated text as well. Repetition: All text fragments that repeat the same message or information that was introduced earlier. The first occurrence of the message/information is to be annotated as well. If it is not clear what exactly to annotate then the entire sentence should be annotated. Furthermore, it is important to emphasize that a repetition of something per se is not always a persuasion technique, but could sometimes be used only to refer to a topic/issue being discussed. In Figure
SPANNER: Named Entity Re-/Recognition as Span Prediction
Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction. Despite its preliminary effectiveness, the span prediction model's architectural bias has not been fully understood. In this paper, we first investigate the strengths and weaknesses when the span prediction model is used for named entity recognition compared with the sequence labeling framework and how to further improve it, which motivates us to make complementary advantages of systems based on different paradigms. We then reveal that span prediction, simultaneously, can serve as a system combiner to re-recognize named entities from different systems' outputs. We experimentally implement 154 systems on 11 datasets, covering three languages, comprehensive results show the effectiveness of span prediction models that both serve as base NER systems and system combiners. We make all code and datasets available:
The rapid evolution of neural architectures However, despite the success of span predictionbased systems, as a relatively newly-explored framework, the understanding of its architectural bias has not been fully understood so far. For example, what are the complementary advantages compared with SEQLAB frameworks and how to make full use of them? Motivated by this, in this paper, we make two scientific contributions. We first investigate what strengths and weaknesses are when NER is conceptualized as a span prediction task. To achieve this goal, we perform a fine-grained evaluation of SPANNER systems against SEQLAB systems and find there are clear complementary advantages between these two frameworks. For example, SEQLAB-based models are better at dealing with those entities that are long and with low label consistency. By contrast, SPANNER systems do better in sentences with more Out-of-Vocabulary (OOV) words and entities with medium length ( §3.3). Secondly, we reveal the unique advantage brought by the architectural bias of the span prediction framework: it can not only be used as a base system for named entity recognition but also serve as a meta-system to combine multiple NER systems' outputs. In other words, the span prediction model play two roles showing in Fig. 1. Most of the existing NER combiners rely on heavy feature engineering and external knowledge Experimentally, we first implement 154 systems on 11 datasets, on which we comprehensively evaluate the effectiveness of our proposed span prediction-based system combiner. Empirical results show its superior performance against several typical ensemble learning algorithms. Lastly, we make an engineering contribution that benefits from the practicality of our proposed methods. Specifically, we developed an online demo system based on our proposed method, and integrate it into the NER Leaderboard, which is very convenient for researchers to find the complementarities among different combinations of systems, and search for a new state-of-the-art system.
NER is frequently formulated as a sequence labeling (SEQLAB) problem To make a comprehensive evaluation, in this paper, we use multiple NER datasets that cover different domains and languages. CoNLL-2003 2 (Sang and De Meulder, 2003) covers two different languages: English and German. Here, we only consider the English (EN) dataset collected from the Reuters Corpus. CoNLL-2002 3 (Sang, 2002) contains annotated corpus in Dutch (NL) collected from De Morgen news, and Spanish (ES) collected from Spanish EFE News Agency. We evaluate both languages. OntoNotes 5.0 WNUT-2016 Although this is not the first work that formulates NER as a span prediction problem Overall, the span prediction-based framework for NER consists of three major modules: token representation layer, span representation layer, and span prediction layer. Given a sentence X = {x 1 , • • • , x n } with n tokens, the token representation h i is as follows: where EMB(•) is the pre-trained embeddings, such as non-contextualized embeddings GloVe First, we enumerate all the possible m spans and then re-assign a label y ∈ Y for each span s. For example, for sentence: "London 1 is 2 beautiful 3 ", the possible span's (start, end) indices are {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}, and the labels of these spans are all "O" except (1, 1) (London) is "LOC". We use b i and e i to denote the start-and end-index of the span s i , respectively, and 1 ≤ b i ≤ e i ≤ n. Then each span can be represented as The vectorial representation of each span could be calculated based on the following parts: Boundary embedding: span representation is calculated by the concatenation of the start and end tokens' representations Span length embedding: we additionally featurize each span representation by introducing its length embedding z l i , which can be obtained by a learnable look-up table. The final representation of each span s i can be obtained as: The span representations s i are fed into a softmax function to get the probability w.r.t label y. where score(•) is a function that measures the compatibility between a specified label and a span: where s i denotes the span representation and y k is a learnable representation of the class k. Heuristic Decoding Regarding the flat NER task without nested entities, we present a heuristic decoding method to avoid the prediction of overlapped spans. Specifically, for those overlapped spans, we keep the span with the highest prediction probability and drop the others. Setup To explore how different mechanisms influence the performance of span prediction models, We design four specific model variants (i) generic SPANNER: only using boundary embedding (ii) boundary embedding + span length embedding, (iii) boundary embedding + heuristic decoding, (iv) heuristic decoding + (ii). Tab. 4 shows results of our SPANNER against six baseline combiner methods on CoNLL-2003 and OntoNotes5.0-BN under a nuanced view. We can observe that: (1) Overall, our proposed SPANNER outperforms all other competitors significantly (p-value < 0.05) on most of the combination cases include the one ("all") that most previous works have explored. (2) As more base systems are introduced in descending order, the combined performance will be improved gradually. The combination performance will decrease with the reduction of the best single system, which holds for all the combiners. (3) The best performance is always achieved on the combination case with more models, instead of the one with a small number of top-scoring base models. 
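The span enumeration, boundary-plus-length span representation, and label compatibility scoring described above can be sketched in PyTorch as follows. The hidden size, maximum span length, length-embedding dimension, and the token encoder feeding h are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal PyTorch sketch of span representation and scoring (SPANNER-style).
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    def __init__(self, hidden: int, n_labels: int, max_len: int = 10, len_dim: int = 25):
        super().__init__()
        self.len_emb = nn.Embedding(max_len + 1, len_dim)              # span length embedding
        self.label_emb = nn.Parameter(torch.randn(n_labels, 2 * hidden + len_dim))

    def forward(self, h: torch.Tensor, max_span: int = 10):
        """h: (seq_len, hidden) token representations of one sentence."""
        n = h.size(0)
        spans, reps = [], []
        for b in range(n):
            for e in range(b, min(n, b + max_span)):
                length = torch.tensor(e - b + 1).clamp(max=self.len_emb.num_embeddings - 1)
                # boundary embedding (start, end tokens) + length embedding
                reps.append(torch.cat([h[b], h[e], self.len_emb(length)]))
                spans.append((b, e))
        reps = torch.stack(reps)                                       # (n_spans, 2*hidden+len_dim)
        scores = reps @ self.label_emb.t()                             # compatibility with each label
        return spans, torch.log_softmax(scores, dim=-1)

h = torch.randn(5, 16)   # e.g. BERT/LSTM outputs for a 5-token sentence (placeholder)
spans, log_probs = SpanScorer(hidden=16, n_labels=5)(h)
```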
This suggests that introducing more base models with diverse structures will provide richer complementary information. The holistic results in Tab. 1 make it hard for us to interpret the relative advantages of NER systems with different structural biases. To address this problem, we follow the interpretable evaluation idea Analysis As shown in Tab. 2, the green area indicates SEQLAB performs better while the red area implies the span model is better. We observe that: (1) The generic SPANNER shows clear complementary advantages with SEQLAB-based systems. Specifically, almost all SEQLAB-based models outperform generic SPANNER when (i) entities are long and with lower label consistency (ii) sentences are long and with fewer OOV words. By contrast, SPANNER is better at dealing with entities locating on sentences with more OOV words and entities with medium length. (2) By introducing heuristic decoding and span length features, SPANNERs do slightly better in long sentences and long entities, but are still underperforming on entities with lower label consistency. The complementary advantages presented by SEQLABs and SPANNERs motivate us to search for an effective framework to utilize them. The development of ensemble learning for NER systems, so far, lags behind the architectural evolution of the NER task. Based on our evidence from §3.3, we propose a new ensemble learning framework for NER systems. The basic idea is that each span prediction NER (SPANNER) system itself can also conceptualize as a system combiner to re-recognize named entities from different systems' outputs. Specifically, Fig. where score(•) is defined as Eq.4. Then the final prediction label is: Table Regarding SEQLAB-base systems, following (3) sentence-level encoders: LSTM We extensively explore six system combination methods as competitors, which involves supervised and unsupervised fashions. Voting, as an unsupervised method, has been commonly used in existing works: Majority voting (VM): All the individual classifiers are combined into a final system based on the majority voting. Weighted voting base on overall F1-score (VOF1): The taggers are combined according to 9 We view BERT as the subword-sensitive representation because we get the representation of each subword. the weights, which is the overall F1-score on the testing set. Weighted voting base on class F1-score (VCF1): Also weighted voting, the weights are the categories' F1-score. Stacking (a.k.a, Stacked Generalization) is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy (Ting and Notably, compared to our model, these methods are computationally expensive since they require external training samples for system combiners, which is achieved by (i) collecting training data by performing five-fold cross-validation Setup Most previous works on system combination only consider one combination case where all base systems are put together. In this setting, we aim to explore more fine-grained combination cases. Specifically, we first sort systems based on their performance in a descending order to get a list m. We refer to m[i : k] as one combination case, dubbed combined interval, which represents systems whose ranks are between i and k. In practice, we consider 23 combination cases showing in Tab. 4. 
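For reference, the voting baselines listed above reduce to a simple weighted tally over the base systems' proposed labels, as in the sketch below: majority voting uses unit weights, while VOF1-style voting weights each system by its overall F1 score. System names, labels, and F1 values are made up for the example.

```python
# Illustrative sketch of majority voting and F1-weighted voting over one candidate span.
from collections import defaultdict

def weighted_vote(predictions: dict, weights: dict = None) -> str:
    """predictions: {system: label}; weights: {system: weight}; None means majority voting."""
    tally = defaultdict(float)
    for system, label in predictions.items():
        tally[label] += 1.0 if weights is None else weights[system]
    return max(tally, key=tally.get)

preds   = {"sq0": "LOC", "sq1": "ORG", "spanner": "LOC"}
f1score = {"sq0": 0.91, "sq1": 0.89, "spanner": 0.92}
print(weighted_vote(preds))            # majority voting       -> LOC
print(weighted_vote(preds, f1score))   # F1-weighted voting    -> LOC
```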
To examine whether the SPANNER is significantly better than the other baseline methods, we conduct a significance test with the Wilcoxon Signed-Rank Test.

Setup: To also explore the effectiveness of SPANNER on the other datasets, we calculate the average performance of each system combination method. In our setting, the meta-model and base systems are all neural models. There is a handful of works about system combination for NER.

Co-evolution of NLP systems and their combiners: Systems for NLP tasks (e.g., NER models) and their combiners (e.g., ensemble learning for NER) are developing in two parallel directions. This paper builds the connection between them and proposes a model that can be utilized as both a base NER system and a system combiner. Our work opens up a direction toward making the algorithms of NLP models and system combination co-evolve. The unified idea can be applied to other NLP tasks, and some traditional methods, such as reranking in syntactic parsing, can be re-visited; for example, constituency parsing could be formulated in a similar fashion.

CombinaBoard: It has become a trend to use a leaderboard (e.g., paperswithcode) to track current progress in a particular field, especially with the rapid emergence of a plethora of models. A leaderboard makes us pay more attention to, and even obsess over, state-of-the-art systems.

A check mark indicates that the embedding/structure is utilized in the current SEQLAB system. Tab. 8 illustrates the full model name and the detailed structure of the SEQLAB models. All the SEQLAB models use the CRF as the decoder. For example, the full model name of "sq0" is "CflairWglove lstmCrf", representing a sequence labeling model that uses Flair as the character-level embedding, GloVe as the word-level embedding, LSTM as the sentence-level encoder, and CRF as the decoder. For "sq3", the full model name is "CbertWnon lstmCrf", representing a sequence labeling model that uses BERT as the subword-level embedding, no word-level embedding, LSTM as the sentence-level encoder, and CRF as the decoder.
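As a concrete illustration of the Wilcoxon signed-rank test mentioned at the start of this section, the sketch below compares paired F1 scores of two combiners across combination cases with scipy; the score lists are illustrative placeholders, not the paper's reported numbers.

```python
# Sketch of a paired Wilcoxon signed-rank significance test over combination cases.
from scipy.stats import wilcoxon

spanner_f1  = [92.1, 91.8, 92.5, 91.0, 92.3, 91.7, 92.0, 91.5]   # hypothetical scores
baseline_f1 = [91.6, 91.5, 92.0, 90.8, 91.9, 91.2, 91.6, 91.1]
stat, p = wilcoxon(spanner_f1, baseline_f1)
print(f"Wilcoxon statistic = {stat}, p-value = {p:.4f}")
```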
Boosting Neural Machine Translation with Similar Translations
This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a comparable way a human translator employs fuzzy matches. In particular, we show how we can simply feed the neural model with information on both source and target sides of the fuzzy matches, we also extend the similarity to include semantically related translations retrieved using distributed sentence representations. We show that translations based on fuzzy matching provide the model with "copy" information while translations based on embedding similarities tend to extend the translation "context". Results indicate that the effect from both similar sentences are adding up to further boost accuracy, are combining naturally with model fine-tuning and are providing dynamic adaptation for unseen translation pairs. Tests on multiple data sets and domains show consistent accuracy improvements. To foster research around these techniques, we also release an Open-Source toolkit with efficient and flexible fuzzy-match implementation.
For decades, the localization industry has been proposing Fuzzy Matching technology in CAT tools allowing the human translator to visualize one or several fuzzy matches from translation memory when translating a sentence leading to higher productivity and consistency With improving machine translation technology 1 and training of models on translation memories, machine translated output has been progressively introduced as a substitute for fuzzy matches when no sufficiently "good" fuzzy match is found and proved to also increase translator productivity given appropriate post-editing environment With Neural Machine Translation (NMT), the integration of Fuzzy Matching is less obvious since NMT does not keep nor build a database of aligned sequences and does not explicitly use n-gram language models for decoding. The only obvious and important use of translation memory is to use them to train an NMT model from scratch or to adapt a generic translation model to a specific domain (fine-tuning) In this work, we are pushing the concept further a) by proposing and evaluating new integration methods, b) by extending the notion of similarity and showing that fuzzy matches can be extended to embedding-based similarities, c) by analyzing how online fuzzy matching compares and combines with offline fine-tuning. Finally, our results also show that introducing similar sentence translation is helping NMT by providing sequences to copy (copy effect), but also providing additional context for the translation (context effect).
A translation memory (TM) is a database that stores translated segments composed of a source and its corresponding translations. It is mostly used to match up previous translations to new content that is similar to content translated in the past. Assuming that we translated the following English sentence into French: [How long does the flight last?] ↝ [Combien de temps dure le vol?]. Both the English sentence and the corresponding French translation are saved to the TM. This way, if the same sentence appears in a future document (an exact match) the TM will suggest to reuse the translation that has just been saved. In addition to exact matches, TMs are also useful with fuzzy matches. These are useful when a new sentence is similar to a previously translated sentence, but not identical. For example, when translating the input sentence: [How long does a cold last?], the TM may also suggest to reuse the previous translation since only two replacements (a cold by the flight) are needed to achieve a correct translation. TMs are used to reduce translation effort and to increase consistency over time. More formally, we consider a TM as a set of K sentence pairs {(s k , t k ) ∶ k = 1, . . . , K} where s k and t k are mutual translations. A TM must be conveniently stored so as to allow fast access to the pair (s k , t k ) that shows the highest similarity between s k and any given new sentence. Many methods to compute sentence similarity have been explored, mainly falling into two broad categories: lexical matches (i.e. fuzzy match) and distributional semantics. The former relies on the number of overlaps between the sentences taken into account. The latter counts on the generalisation power of neural networks when building vector representations. Next, we describe the similarity measures employed in this work. Fuzzy Matching Fuzzy matching is a lexicalised matching method aimed to identify non-exact matches of a given sentence. We define the fuzzy matching score F M (s i , s j ) between two sentences s i and s j as: where ED(s i , s j ) is the Edit Distance between s i and s j , and |s| is the length of s. Many variants have been proposed to compute the edit distance, generally performed on normalized sentences (ignoring for instance case, number, punctuation, space or inline tags differences that are typically handled at a later stage). Also, IDF and stemming techniques are used to give more weight on significant words or less weight on morphological variants Since we did not find an efficient TM fuzzy match library, we implemented an efficient and parameterizable algorithm in C++ based on suffixarray where S(s) denotes the set of n-grams in sentence s, max(q) returns the longest n-gram in the set q and |r| is the length of the n-gram r. For Ngram matching retrieval we also use our in-house open-sourced toolkit. The current research on sentence similarity measures has made tremendous advances thanks to distributed word representations computed by neural nets. In this work, we use sent2vec We define the similarity score EM (s i , s j ) between sentences s i and s j via cosine similarity of their distributed representations h i and h j : where ||h|| denotes the magnitude of vector h. To implement fast retrieval between the input vector representation and the corresponding vector of sentences in the TM we use the faiss 5 toolkit Given an input sentence s, retrieving TM matches consists of identifying the TM entry (s k , t k ) for which s k shows the highest matching score. 
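The fuzzy-match score FM defined above is one minus the token-level edit distance normalised by the length of the longer sentence. The paper's implementation is an efficient, parameterizable C++ algorithm based on suffix arrays; the pure-Python dynamic-programming version below is only an illustrative sketch of the score itself.

```python
# Illustrative sketch of FM(s_i, s_j) = 1 - ED(s_i, s_j) / max(|s_i|, |s_j|).
def edit_distance(a, b):
    """Token-level Levenshtein distance with a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ta != tb))
    return dp[-1]

def fuzzy_match(s_i, s_j):
    a, b = s_i.split(), s_j.split()
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

print(fuzzy_match("How long does the flight last ?",
                  "How long does a cold last ?"))   # 2 edits over 7 tokens -> ~0.71
```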
However, with the exception of perfect matches, not all words in s k or s are present in the match. Considering the example in Section 2, the words the flight and a cold are not related to each other, from that follows that the TM target words le vol are irrelevant for the task at hand. In this section we faiss discuss an algorithm capable of identifying the set of target words T ∈ t k that are related to words of the input sentence s. Thus, we define the set T as: where A is the set of word alignments between words in s k and t k and S is the LCS (Longest Common Subsequence) set of words in s k and s. The LCS is computed as a by-product of the edit distance S is found as a sub-product of computing fuzzy or n-gram matches. Word alignments are performed by fast align We retrieve fuzzy, n-gram and sentence embedding matches as detailed in the previous section. We explore various ways to integrate matches in the NMT workflow. We follow the work by Figure + As a variant of FM * , we now mark target words which are not related to the input sentence in an attempt to help the network identify those target words that need to be copied in the hypothesis. However, we use an additional input stream (also called factors) to let the network access to the entire target sentence. Tokens used by this additional stream are: S for source words; R for unrelated target words and T for related target words. + In addition to fuzzy matches, we also consider arbitrary large n-gram matching. Thus, we use the same format as for FM + but considering the highest scored n-gram match as computed by N M (s i , s j ). + Finally, we also retrieve the most similar TM sentences as computed by EM (s i , s j ). In this case, marking the words that are not related to the input sentence is not necessary since similar sentences retrieved following EM score do not necessarily present any lexical overlap. Note from the example in 3 Experimental Framework We used the following corpora in this work Our NMT model follows the state-of-the-art Transformer base architecture We perform fuzzy matching, ignoring exact matches, and keep the single best match if F M (s i , s j ) ≥ 0.6 with no approximation. Similarly, the largest N -gram match is used for each test sentence with a threshold N M (s i , s j ) ≥ 5. A similarity threshold EM (s i , s j ) ≥ 0.8 is also employed when retrieving similar sentences using distributed representations. The faiss search toolkit is used through python API with exact FlatIP index. Building and retrieval times for each algorithm on a 2M sentences translation memory (Europarl corpus) are provided in Table We compare our baseline model, without augmenting input sentences, to different augmentation formats and retrieval methods. Our base model is built using the concatenation of all the original corpora. All other models extend the original corpora with sentences retrieved following various retrieval methods. It is worth to notice that extended bitexts share the target side with the original data. In this experiment, all corpora are used to build the models while matches of a given domain are retrieved from the training data of this domain. Models are built using the original source and target training data (base), and after augmenting the source sentence as detailed in Section 2. Table Best scores are obtained by models using augmented inputs except for corpora not suited for translation memory usage: News, TED for which we observe no gains correlated to low matching rates. 
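The selection of related target words T defined at the start of this section can be sketched as follows. It assumes word alignments are given as (source position, target position) index pairs, e.g. read from fast_align output, and recovers S as the source-side positions of a longest common subsequence between s_k and s; the toy alignment in the usage example is invented for illustration only.

def lcs_positions(s_k, s):
    # Positions in s_k covered by a longest common subsequence of s_k and s.
    n, m = len(s_k), len(s)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = dp[i][j] + 1 if s_k[i] == s[j] else max(dp[i][j + 1], dp[i + 1][j])
    pos, i, j = [], n, m
    while i > 0 and j > 0:
        if s_k[i - 1] == s[j - 1]:
            pos.append(i - 1); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return set(pos)

def related_target_positions(s_k, t_k, s, alignments):
    # T = { j : (i, j) in the alignment set A and source position i is matched by the LCS }.
    matched_src = lcs_positions(s_k, s)
    return {j for (i, j) in alignments if i in matched_src}

# Running example from Section 2; the alignment pairs below are hand-made for illustration.
s_k = "How long does the flight last ?".split()
t_k = "Combien de temps dure le vol ?".split()
s   = "How long does a cold last ?".split()
align = {(0, 0), (1, 1), (1, 2), (2, 3), (5, 3), (3, 4), (4, 5), (6, 6)}
print(sorted(related_target_positions(s_k, t_k, s, align)))
# -> [0, 1, 2, 3, 6]: "Combien de temps dure ?" are related, while "le vol" is left unrelated,
# matching the discussion above.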
For the other corpora, large gains are achieved when evaluating test sentences with matches (up to +19 BLEU on GNOME corpus), while a very limited decrease in performance is observed for sentences that do not contain matches. This slight decrease is likely to come from the fact that we kept the corpus size and number of iterations identical while giving harder training tasks. Results are totally on par with the findings of All types of matching indicate their suitability showing accuracy gains. In particular for fuzzy matching, which seems to be the best for our task. Among the different techniques used to insert fuzzy matching, FM + obtains the best results, validating Target fuzzy matches To evaluate if the fuzzy match quality is really the primary criterion for the observed improvements, we consider FM # T where the fuzzy matches are rescored (on the training set only) with the edit distance between the reference translation and the target side of the fuzzy match. By doing so, we reduce the fuzzy match average F M source score by about 2%, but increase target edit distance from 61% to 69%. The effect can be seen in Table Unseen matches Note that in the previous experiments, matches were built over domain corpora that are already used to train the model. This is a common use case: the same translation memory used to train the system will be used in run time, but now we evaluate the ability of our model in a different context where a test set is to be translated for which we have a new TM that has never been seen when learning the original model. This use case to typical translation task where new entries will be added continuously to the TM and shall be used instantly for translation of following sentences. Hence, we only use EPPS, News, TED and Wiki data to build two models: the first employs only the original source and target sentences (base) the second learns to use fuzzy matches (FM + ). Table Combining matching algorithms Next, we evaluate the ability of our NMT models to combine different matching algorithms. First, we use ⊖(M 1 , M 2 , ...) to denote the augmentation of an input sentence that considers first the match specified by M 1 , if no match applies for the input sentence then it considers using the match specified by M 2 , and so on. Note that at most one match is used. Sentences for which no match is found are kept without augmentation. Similar to Table Copy Vs. Context We observe that models allowing for augmented input sentences effectively learn to output the target words used as augmented translations. Table We compute for each word added in the input sentence as T (part of a lexicalised match), R (not in the match) and E (from an embedding how often they appear in the translated sentence. Results show that T words increase their usage rate by more than 10% compared to the corresponding base models. Considering R words, models incorporating fuzzy matches increase their usage rate compared to base models, albeit with lower rates than for T words. Furthermore, the number of R words output by FM + is clearly lower than those output by FM # , demonstrating the effect of marking unrelated matching words. Thus, we can confirm the copy behaviour of the networks with lexicalised matches. Words marked as E (embedding matches) increase their usage rates when compared to base models but are far from the rates of T words. We hypothesize that these sentences are not copied by the translation model, rather they are used to further contextualise translations. 
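The priority-ordered combination denoted ⊖(M1, M2, ...) can be sketched as a simple fallback over matchers: the first matcher that fires augments the input sentence, and sentences without any match are kept unaugmented. The matcher callables, separator token and dummy retrieval below are assumptions for illustration, not the paper's exact input format.

def combine_matchers(sentence, matchers, separator=" ||| "):
    # Use the first matcher (e.g. FM+, then NM+, then EM+) that returns a match;
    # at most one match is used, and unmatched sentences are left as-is.
    for match_fn in matchers:
        augmentation = match_fn(sentence)
        if augmentation is not None:
            return sentence + separator + augmentation
    return sentence

# Dummy matchers standing in for FM+, NM+ and EM+ retrieval with their thresholds.
fm = lambda s: "Combien de temps dure le vol ?" if "last" in s else None
nm = lambda s: None
em = lambda s: None
print(combine_matchers("How long does a cold last ?", [fm, nm, em]))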
Our work stems on the technique proposed by Similar to our work, Our approach combines source and target words within a same sentence -the same type of approach has also been proposed by Last, we can also compare the extra-tokens appended in augmented sentences as "side constraints" activating different translation paths on the same spirit than the work done by This paper explores augmentation methods for boosting Neural Machine Translation performance by using similar translations. Based on "neural fuzzy repair" technique, we introduce tighter integration of fuzzy matches informing neural network of source and target and propose extension to similar translations retrieved We use the next set of hyper-parameters: size of word embedding: 512; size of hidden layers: 512; size of inner feed forward layer: 2, 048; number of heads: 8; number of layers: 6; batch size: 4, 096 tokens. Note that when using factors (FM + , NM + and EM + ) the final word embedding is built after concatenation of the word embedding (508 cells) and the additional factor embedding (4 cells). We use the lazy Adam optimiser. We set warmup steps to 4, 000 and update learning rate for every 8 iterations. Models are optimised during 300K iterations. Fine-tuning is performed continuing Adam with the same learning rate decay schedule until convergence on the validation set. All models are trained using a single NVIDIA P100 GPU. We limit the target sentence length to 100 tokens.The source sentence is limited to 100, 200 and 300 tokens depending on the number of sentences used to augment the input sentence. We use a joint vocabulary of 32K for both source and target sides. In inference we use a beam size of 5. For evaluation, we report BLEU scores computed by multi-bleu.perl. The table Matched Sentence 0.86 (i) supply of gas to power producers (CCGTs [10]); (a) Gas supply to power producers (CCGTs) 0.87 The Commission shall provide the chairman and the secretariat for these working parties. The Commission shall provide secretariat services for the Forum, the Bureau and the working parties. 0.93 Admission to a course of training as a pharmacist shall be contingent upon possession of a diploma or certificate giving access, in a Member State, to the studies in question, at universities or higher institutes of a level recognised as equivalent. Admission to basic dental training presupposes possession of a diploma or certificate giving access, for the studies in question, to universities or higher institutes of a level recognised as equivalent, in a Member State.
1,066
1,533
1,066
EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts
For the low-resource Bangla language, existing work on detecting emotions in textual data suffers from limited dataset size and poor cross-domain adaptability. In our paper, we propose a manually annotated dataset of 22,698 Bangla public comments from social media sites covering 12 different domains, such as Personal, Politics, and Health, labeled for 6 fine-grained emotion categories of the Junto Emotion Wheel. We invest effort in the data preparation to 1) preserve the linguistic richness and 2) challenge any classification model. Our experiments to develop a benchmark classification system show that hand-crafted features provide superior performance, with random baselines even outperforming neural networks and pre-trained language models.
Identifying emotions has helped find solutions to numerous problems for English text, such as retrieving emotion from suicide notes. Bangla is the sixth most spoken language globally and is the native language of Bangladesh.
[B] এইরকম েশা-অফ হাজার বার েদখেত চাই। Few datasets have been made public for detecting emotion in a low-resourced Bangla language In this paper, we aim to create a multi-label emotion dataset of noisy textual data collected from social media on various topics. We use the Junto emotion wheel • We propose EmoNoBa dataset, which comprises 22,698 multi-label Emotion on Noisy Bangla text. These texts are public comments on 12 different topics from 3 different social media platforms. Table • We establish baselines by experimenting on linguistic features, recurrent neural networks, and pre-trained language models. We also shed light on various aspects of the problem throughout our analysis. • We publicly release our dataset and model to foster research in this direction. Data Collection We set the following primary objectives before creating the dataset so that these objectives increase the generalization capabilities: Samples should contribute to making the dataset 1) domain independent and 2) less repetitive. We start by collecting user comments from YouTube, Facebook and Twitter on 12 most popular topics of Prothom Alo Objective Given a predefined set of emotions -Junto-6 basic emotions, the goal is to identify all emotions conveyed in a piece of text. Annotation We use five annotators for each instance. Emotion(s) voted by atleast three annotators were considered the final labels. Instances that could not be finalized this way were sent to authors for the final tag. We will refer to the former instances as genInst and the latter as excInst. We also kept the system fully anonymous for the authenticity of the annotations For genInst: For excInst: where T i is the set of the emotions selected by this annotator for instance i, O i is the set of the emotions selected by atleast two other annotators for instance i, A i is the set of the emotions selected by the authors for instance i, and I is the set of instances. We set the following criterion when choosing annotators. Annotators must be 1) well educated to understand the instances despite grammatical and spelling errors, and 2) active social media users to understand the context. Before selecting an emotion, we instructed them first to identify their child emotions from the Junto emotion wheel for better coherence. As such, 80 undergraduate students annotated 5 to 5,000 instances each, with 74 of them attaining AnnoAccu of 60% or more. Table In total, we have 22,698 instances in the final dataset. The average length of the instance is 1.36 ± 0.82 sentences, and the average length of the sentence is 11.70 ± 10.70 words. Moreover, 77.28% of our instances source from Youtube, and 15.3% contain multiple emotions. Figure We performed per-multi-label stratified split to create training (90%) and testing (10%) sets. Test set received precedence on excInst. In the cases of overflows, leftover instances were inserted into the training set and vice versa (Table In this section, we present the methods we used to develop a benchmark model for EmoNoBa. We extract word (1-4) and character (1-5) n-grams from the instances as these lexical representations have shown strong performance in different classification tasks. 
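The feature-based baseline (TF-IDF-weighted word and character n-grams fed to a linear SVM, as detailed next) can be sketched with scikit-learn, which the paper itself uses, roughly as follows; the one-vs-rest wrapper for the multi-label setting, the placeholder comment texts and the particular n-gram ranges shown are implementation assumptions rather than the paper's exact configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Placeholder texts and labels; real instances are Bangla social-media comments
# annotated with subsets of the Junto-6 emotions.
texts = ["placeholder comment one", "placeholder comment two"]
labels = [["joy", "surprise"], ["sadness"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 4))),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(1, 3))),
])
model = Pipeline([
    ("tfidf", features),
    ("svm", OneVsRestClassifier(LinearSVC())),
])
model.fit(texts, Y)
print(mlb.inverse_transform(model.predict(["another placeholder comment"])))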
Then we vectorize each instance with the TF-IDF weighted scores and train on linear SVM Due to the capability of capturing sequential information from both directions in texts, we use Bi-LSTM Due to the recent success of BERT We implement our experimental framework using Scikit-learn We report our experimental results on the test set in Table Among the word n-gram, unigram achieves the best result by at least 12%. Combining the word grams yields better results but fails to surpass the standalone unigram model. On the other hand, the less showing of character n-grams verdicts that the task does not rely much on the character level information as with the increase of n-grams induces better results. Integrating all word 1-4 grams with character 1-3 grams provides the best result of 42.81 F1. Similar result was achieved in Arabic and Spanish languages in SemEval 2018 E-c task Findings Notice that both the negative emotions (anger, sadness, fear) and the positive emotions (love, joy, surprise) provides best results on subword or phrase level information. Table In this paper, we present EmoNoBa, a dataset for fine-grained emotion detection on Bangla text collected from comment sections of social media platforms on 12 different domains. We found that hand-crafted features performed comprehensively better than neural models. As the future work, we will exploit the findings identified in this work while incorporating contextual understanding.
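Returning to the annotation-quality measure AnnoAccu from the data construction section: the exact formulas for genInst and excInst did not survive in this copy, but given the definitions of T_i, O_i and A_i, one plausible reading is the fraction of instances on which an annotator's selected emotions overlap the reference set (O_i for genInst, A_i for excInst). The sketch below implements that reading and should be treated as a hypothetical reconstruction, not the paper's definition.

def anno_accu(T, references):
    # T: dict instance_id -> set of emotions chosen by one annotator.
    # references: dict instance_id -> reference set (O_i for genInst, A_i for excInst).
    instances = [i for i in T if i in references]
    if not instances:
        return 0.0
    correct = sum(1 for i in instances if T[i] & references[i])
    return correct / len(instances)

# Toy, hypothetical labels.
T = {1: {"joy"}, 2: {"anger", "sadness"}, 3: {"fear"}}
O = {1: {"joy", "love"}, 2: {"sadness"}, 3: {"surprise"}}
print(round(anno_accu(T, O), 2))  # overlaps on 2 of 3 instances -> 0.67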
717
300
717
What's in a Name? Entity Type Variation across Two Biomedical Subdomains
There are lexical, syntactic, semantic and discourse variations amongst the languages used in various biomedical subdomains. It is important to recognise such differences and understand that biomedical tools that work well on some subdomains may not work as well on others. We report here on the semantic variations that occur in the sublanguages of two biomedical subdomains, i.e. cell biology and pharmacology, at the level of named entity information. By building a classifier using ratios of named entities as features, we show that named entity information can discriminate between documents from each subdomain. More specifically, our classifier can distinguish between documents belonging to each subdomain, achieving an F-score of 91.1%.
Biomedical information extraction efforts in the past decade have focussed on fundamental tasks needed to create intelligent systems capable of improving search engine results and easing the work of biologists. More specifically, researchers have concentrated mainly on named entity recognition and on mapping the recognised entities to concepts in curated databases. Many of the tools currently used for biomedical language processing were trained and evaluated on such popular corpora, most of which consist of documents from the molecular biology subdomain. However, previous studies (discussed in Section 2) have established that different biomedical sublanguages exhibit linguistic variations. It follows that tools which were developed and evaluated on corpora derived from one subdomain might not always perform as well on corpora from other subdomains. Understanding these linguistic variations is essential to the process of adapting natural language processing tools to new domains. In this paper, we highlight the variations between biomedical sublanguages by focussing on the different types of named entities (NEs) that are relevant to them. We show that the frequencies of different named entity types vary enough to allow a classifier for scientific subdomains to be built based upon them. The study is performed on open access journal articles present in the UK PubMed Central database.
Harris (1968) introduced a formalisation of the notion of sublanguage, which was defined as a subset of general language. According to this theory, it is possible to process specialised languages, since they have a structure that can be expressed in a computable form. More recently, several works on the study of biomedical languages substantiated his theory. For instance, Other studies have investigated the differences between general and biomedical languages by focussing on specific linguistic aspects, such as verb-argument relations and pronominal anaphora. For instance, Taking a different angle, Nguyen and Kim (2008) examined the differences in the use of pronouns by studying general domains (MUC and ACE) and one biomedical domain (GENIA). They observed that compared to the MUC and ACE corpora, the GENIA corpus has significantly more occurrences of neutral and third-person pronouns, whilst first and second person pronouns are non-existent. Our work is most similar to that of In contrast, we examine the differences between biomedical sublanguages at the semantic level, using only named entities. Furthermore, we choose to perform our analysis only on two subdomains (i.e. cell biology and pharmacology), and try to classify these by using supervised machine learning algorithms. We designed an experiment in which various machine learning algorithms are trained and tested on data obtained from open access journal articles. Firstly, a corpus of articles was created (Section 3.1), after which the documents were automatically annotated with named entities (Section 3.2). We then extracted a number features relevant to the named entities present in the corpus (Section 3.3). Our corpus was created by first searching the NLM Catalog 3 for journals whose Broad Subject Term attributes contain only cell biology or pharmacology, and then narrowing down the results to those which are in English and available via PubMed Central. Also, since we are concentrating on full-text documents, we retained only those journals that are available within the PubMed Open Access subset 4 . According to this procedure, we obtained a final list of two journals for cell biology and six for pharmacology. Using the PMC IDs of all articles published in the selected journals, we retrieved documents from UK PubMed Central. This database was chosen as our source as the documents it contains are already tagged with named entity information. A total of 360 articles was retrieved for each category, i.e. cell biology and pharmacology. The retrieved documents were encoded in XML format. Several unusable fragments were removed before converting them to plain text. Examples of such fragments are article metadata (authors, their affiliations, publishing history, etc.), tables, figures and references. Table Cell To extract named entities from the corpus, we used a simple method that augments the named entities present in the UKPMC articles with the output of two named entity recognition tools Named entities in the UKPMC database were identified using NeMine The Open-Source Chemistry Analysis Routines (OSCAR) software In total, 20 different classes of entities were considered in this study. However, due to the combination of several NERs, some NE types are identified by more than one NER. Furthermore, some of the NE types are more general and cover other more specific types, which are also annotated by one or mroe of the tools. This can lead to double annotation. 
For instance, the Gene|Protein type is more general than both Gene and Protein, whereas the Chemical molecule type is a hypernym of Gene, Protein, Drug and Metabolite. In the case of multiple annotations over the same span of text, we removed the more general labels, so that each NE has only one label. Contradictory cases, where two NERs label one NE with completely different tags, were not found. After augmenting the existing NEs by running the two NER tools on the corpus, the outputs were combined to give a single "silver" annotation list. This operation was performed by computing the mathematical union of the three individual annotation sets, as shown in Equation Table Using the corpus described previously, we created a training set for supervised machine learning algorithms. Every document in the corpus was transformed into a vector consisting of 20 features. Each of these features corresponds to an entity type in Table where n type represents the number of NEs of a certain type in a document and N represents the total number of NEs in that document. Furthermore, each vector was labelled with the subdomain to which the respective document belongs (i.e., cell biology or pharmacology). Weka The baseline that has been used is ZeroR, a simple algorithm that classifies all instances as pertaining to the majority class. Since our classes have equal numbers of instances, the F-score of ZeroR is 50%. The previously described features were used as input to various supervised machine learning algorithms; results and error analysis are provided in Section 4.1 and Section 4.2, respectively. As can be seen from We also employed AdaBoost in conjunction with the previously mentioned four classifiers, and the results are given in Table Unsurprisingly, Protein is the feature with the most discriminatory power, considering it has the highest count and it occurs almost three times more often in the cell biology class than in the pharmacology class. Chemical molecules follow closely, again due to a high count and large difference between the classes. Due to their high scores obtained from the attribute evaluators, we ran the experiment again considering only these two features. The Random Forest classifier achieved an F-score of 80% using these parameters. At the other end of the scale, there are five features which have very little influence in discriminating between the two classes. The corresponding named entity types have the lowest occurrence counts in the corpora, with the exception of Organ. When running Random Forest with these five features only, an F-score of 50.5% is obtained. This result is very close to the baseline, surpassing it by only a small fraction. As can be seen in As previously mentioned, the two features that achieved the highest information gain are the ratios for the Protein and Chemical molecule types. Accordingly, only these two features were considered in this error analysis. We firstly examined the features of the cell biology documents which were incorrectly classified as pharmacology papers. It was noticeable that the majority of the misclassified documents in this case have a small percentage of Proteins (less than 0.35) and/or a large percentage of Chemical molecules (greater than 0.58). To confirm this observation, a sample of documents was accessed via the PubMed Central page which provides links to identified entities such as compounds, substances, genes and proteins. 
For instance, the misclassified cell biology paper with PMCID 2755470 was found to have no proteins, whilst the one with PMCID 2679709 has quite a large number of substances (chemical molecules). We also analysed the features of papers in the pharmacology subdomain which were misclassified as cell biology documents. In contrast to the first type of misclassification, these documents have a large percentage of Proteins and/or small percentage of Chemical molecules. For example, the pharmacology paper with PMCID 2817930 contains many protein instances, whilst the one with PMCID 2680808 has no mentions of chemical molecules. We have shown that with the help of named entity identification, classifiers can be built that are able to distinguish between papers belonging to different biomedical subdomains. The Random Forest algorithm is able to discriminate between cell biology and pharmacology open-access fulltext articles with an F-score of 91%. This result supports the hypothesis that sublanguages used in different biomedical domains exhibit significant semantic variations. Such variations should therefore be considered when adapting automated tools developed for a particular subdomain to new subdomains. One possible future direction is to analyse multiple medical subdomains, such as neurology, virology and critical care. This could enable the measurement of the distance between various subdomains with respect to specific named entity types. Furthermore, a comparison of the method described above with those using bag-of-words or other non-semantic features could further enforce the importance of named entities in document classification and sublanguage identification.
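As a companion to the feature construction described earlier (each document represented by the ratio n_type / N for each of the 20 named entity types, then classified with, among others, Random Forest), here is a minimal sketch using scikit-learn's Random Forest as a stand-in for the Weka setup used in the paper; the entity counts and the reduced type list below are toy values for illustration only.

from collections import Counter
from sklearn.ensemble import RandomForestClassifier

NE_TYPES = ["Protein", "Gene", "Drug", "Metabolite", "Chemical molecule", "Organ"]  # subset of the 20 types

def ne_ratio_vector(ne_labels):
    # ne_labels: list of NE type labels found in one document; returns n_type / N per type.
    counts = Counter(ne_labels)
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in NE_TYPES]

docs = [
    (["Protein"] * 30 + ["Gene"] * 10 + ["Chemical molecule"] * 5, "cell_biology"),
    (["Drug"] * 20 + ["Chemical molecule"] * 25 + ["Protein"] * 5, "pharmacology"),
    (["Protein"] * 25 + ["Metabolite"] * 5 + ["Gene"] * 8, "cell_biology"),
    (["Chemical molecule"] * 30 + ["Drug"] * 15 + ["Organ"] * 3, "pharmacology"),
]
X = [ne_ratio_vector(nes) for nes, _ in docs]
y = [label for _, label in docs]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([ne_ratio_vector(["Protein"] * 40 + ["Chemical molecule"] * 6)]))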
748
1,368
748
Optimizing Retrieval-augmented Reader Models via Token Elimination
Fusion-in-Decoder (FiD) is an effective retrieval-augmented language model applied across a variety of open-domain tasks, such as question answering, fact checking, etc. In FiD, supporting passages are first retrieved and then processed using a generative model (Reader), which can cause a significant bottleneck in decoding time, particularly with long outputs. In this work, we analyze the contribution and necessity of all the retrieved passages to the performance of reader models, and propose eliminating some of the retrieved information, at the token level, that might not contribute essential information to the answer generation process. We demonstrate that our method can reduce run-time by up to 62.2%, with only a 2% reduction in performance, and in some cases, even improve the performance results. 1
The task of Open-Domain Question Answering (ODQA) The retriever-reader architecture has been widely-used and adopted for ODQA tasks of the reading process. In order to assess ODQA methods, There has been rapid and remarkable progress in retriever-reader systems for solving ODQA tasks using a generative approach Previous works have attempted to mitigate these bottlenecks, either by limiting the input to the reader or by directly optimizing it in a variety of methods. In this work, we tackle the heavy cross-attention computation in the decoder by introducing Token Filtering, a method that removes redundant tokens from input passages during the decoding stage, by dynamically computing their salience during generation. Using Token Filtering eliminates uninformative tokens from the cross-attention matrix, and prevents them from being utilized during answer generation, directly contributing to the reduction of the overall generation time. To further boost efficiency and reduce latency, we combine our Token Filtering approach with dynamic decoder layer skipping introduced by Overall, our contributions are as follows: • We analyze the performance vs. efficiency trade-off of the FiD model, in terms of latency, FLOPs and the salience of the input information within the reader model, during long-form generation. • We propose a novel approach for improving the efficiency of FiD, with a combined approach of Token Filtering and decoder layer reduction, which removes tokens and irrelevant layers during the generation process of every token for long-form answers. • We show that models utilizing our approach can save up to 62.2% on the MS MARCO dataset, 54.9% on NQ, and 40.9% on ELI5, in terms of the generation time, while incurring a drop of no more than 2% in performance. • Without computational restrictions, our method reaches state-of-the-art performance in KILT's ELI5 task.
In a retriever-reader system, the reader, which is typically a language model, receives a query along with a collection of passages, where each passage often consists of a title and a context. Additionally, we are provided with the ground truth, which can be an expected answer or a gold passage that is most relevant to the query. Since our main focus is on generative models, we employ the widelyused Fusion-in-Decoder (FiD) model The decoder module then cross-attends to the large number of concatenated input representations and assimilates the information from the different passages to generate an answer. At each decoding step, the decoder computes the attention scores based on the precomputed input tokens' representations which serve as the query for the multi-headed attention operation, concurrently taking into account the current decoded sequence. There are multiple parts in a retriever-reader setup that have a direct effect on the end-to-end latency. One of them is potentially reducing the number of passages provided to the reader model. Naturally, the FiD latency could be reduced if we provide less input passages to the reader. However, it is unclear how much time is utilized by each of its sub-modules. de Jong et al. ( Thus, we undertake an additional analysis, to comprehend how the time (latency) is distributed between the FiD encoder and the decoder modules, depending on the number of input passages and the amount of generated tokens. Our findings are illustrated in Figure Overall, in the specific case of long-answer tasks such as LFQA, we can conclude that the decoder serves as the primary source of latency and computational load during inference. This finding is further supported by similar works An additional bottleneck affecting the efficiency of FiD is the extended sequence created by concatenating input passages, which the decoder focuses on during generation. Assuming the reader is supplied with an excessive amount of passages, our objective is to assess the importance of the input token representations. Essentially, our primary research question pertains to filtering out uninformative tokens that have no impact on answer generation, without compromising performance. Inspired by previous works that have assessed the relevance of input to decoders, we focus on the cross-attention scores. These scores have been recently demonstrated to serve as a metric of importance for the input token representations, particularly in relation to their impact on the accuracy of the answer In order to investigate the utility of crossattention scores as a meaningful indicator, we aim to verify their ability to focus on the important information within the input text. To accomplish this, we include the gold passage in a list of 100 retrieved passages (given a specific question). To simplify the analysis, we position the gold passage at rank 1, as the input matrix of the decoder's crossattention matrix does not inherently incorporate any notion of order. In order to examine the input token scores throughout the entire generation process, we calculate the average cross-attention scores for each decoder layer and at every generated token index. With the aim of identifying and filtering out irrelevant tokens, we select the top p% of tokens with the highest cross-attention scores and compute the proportion of the tokens that originate from the gold passage. Figure We observe that the decoder's initial layers (2nd and 3rd) exhibit the greatest proportion of tokens derived from the gold passage. 
This implies that these layers should be employed for calculating the input relevance scores. Additionally, we have noticed that, in most layers, the ratio reaches its peak around the 20 th generated token and subsequently Answer Length (in tokens) % from Chosen Tokens (b) The distribution of passages over the chosen tokens, in the 2 nd layer. Figure The percentage of tokens that were chosen from each passage (1-100). The gold passage (labeled as 1) is colored red. declines during the generation process. This indicates that it is most advantageous to utilize the cross-attention scores in the early stages of generation. Next, we proceed to examine the extent to which the model attends to tokens from the gold passage compared to other less informative tokens. The findings of this analysis are presented in Figure In summary, we have demonstrated that the cross-attention scores possess the capability to prioritize the most pertinent information in the input, making them a reliable mechanism for selecting informative input tokens. Moreover, we have identified the optimal layers and ranges of generated token indices to generate these scores, ensuring the selection of the most appropriate input tokens. For a comprehensive examination of the cross-attention patterns, we encourage readers to refer to Appendix A for further details. Following our analysis in Section 3.2, we turn to implementing a method for filtering out the redundant information during the decoding stage. We aim to find a subset of the input tokens that is the most relevant for generating the correct answer. As pointed out previously, we can utilize the crossattention scores computed between the generated tokens and the passages as basic signal for filtering out irrelevant tokens, similarly to Thus, we suggest a token filtering approach, using the cross-attention scores computed at a predetermined layer and generated token index during inference. At that point, for each input token, we compute the average attention scores over all attention heads, similarly to where t is the generated token index, l is the layer index, h is the number of attention heads, and A i t,l represents the cross-attention scores at index t, layer l and head i. We perform a descending argsort operation on the scores above, and take the top p% from the sorted input token indices. Hence, we denote T as the total number of input tokens from all the passages, and T ′ as the amount of tokens we keep after filtering, which is p% from T : where [i] indicates accessing the vector at index i. Finally, we keep only the tokens chosen in T op t,l from the cross-attention past key-values states K past , V past : (3) where A[B] selects the elements from A whose indices appear in B. These new past key-value states are the only ones used for generating all subsequent tokens. Since the filtering percentage, token index and layer can effect the quality of this mechanism, as inspected in Section 3.2, we obtain the optimal values for them by performing a hyperparameterlike search over their possible values, which is described in Section 5.4. We produce new past key and value representations for the input sequence (across all the decoder layers), containing only the selected tokens, resulting in a more compact tensor to attend to. 
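A minimal PyTorch-style sketch of the Token Filtering step just described: average the cross-attention scores over heads at the chosen layer and generation step, keep the top p% of input tokens, and prune the cached cross-attention keys and values so that all subsequent decoding steps attend only to the kept tokens. The tensor layouts (heads first, input tokens on the second axis) are assumptions, not the exact FiD internals.

import torch

def token_filter(attn, k_past, v_past, keep_ratio=0.4):
    # attn:  (num_heads, num_input_tokens)  cross-attention at step t, layer l
    # k_past, v_past: (num_heads, num_input_tokens, head_dim) cached cross-attention states
    scores = attn.mean(dim=0)                        # S_t,l = (1/h) * sum_i A^i_t,l
    k = max(1, int(keep_ratio * scores.numel()))     # T' = p% of T
    top_idx = torch.argsort(scores, descending=True)[:k]
    top_idx, _ = torch.sort(top_idx)                 # keep original token order
    return k_past[:, top_idx, :], v_past[:, top_idx, :], top_idx

# Toy usage.
h, T, d = 8, 12, 16
attn = torch.rand(h, T)
k_past, v_past = torch.rand(h, T, d), torch.rand(h, T, d)
k_new, v_new, kept = token_filter(attn, k_past, v_past, keep_ratio=0.25)
print(k_new.shape, kept.tolist())  # e.g. torch.Size([8, 3, 16]) and the indices of the kept tokens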
We name the filtering mechanism Token Filtering, with an overview of the approach presented in Figure Note that we remove the irrelevant tokens from the keys and values of the encoder output sequence during inference time only once, hence reducing their dynamic dimension during computation for all the subsequent tokens. For additional details about the cross-attention scoring computation we refer the reader to Appendix B. Our experiments are conducted on commonly used datasets for LFQA. ELI5 MS NaturalQuestions (NQ) For all datasets, we use the validation as the test set and a subset of the training set for validation, as done by We specify the hyperparameters used for training on the various datasets in Table FiD We base our models on the FiD generative reader CALM While our Token Filtering approach primarily focuses on eliminating redundant input tokens, it does not decrease the number of decoder layers responsible for processing them. To tackle this concern, we also incorporate a recent effective early exiting method for the decoder module Retrieval We first create an index for retrieval over a Wikipedia dump 5 , comprised of multiple passages. For all the evaluated datasets, we retrieve 100 passages for each question from the index, using a combination of dense and sparse passage rankers. We refer the reader to Appendix C for more details regarding the retrieval process. Hardware We used 8 24GB NVIDIA RTX3090 for training base-sized models, and 8 40GB A100 GPUs for training large-sized models. For inference and latency measurements we used a single accelerator. Inference setup Throughout our latency measurements, we used a batch size of 1 and averaged the latency over all queries. Decoding is done using beam-search with 4 beams, and similarly as We use KILT's implementation of ROUGE-L and F1 for performance measurements 6 Experimental Results In Figure For the base-sized models (Figures 5a, 5b, 5c), we can observe all methods improve upon the baseline model, each one in a different aspect. For CALM, the method is able to reach lower latency values, due to skipping the redundant layer computations. In the case of Token Filtering, it is also able to preserve and at times improve the performance of the model overall, while the latency improvement remains limited, since it is still computing the remaining tokens across all decoder layers. The performance improvement is presumably due to the redundant tokens being removed early on during the generation process, hence allowing the model to better attend to the salient information in the input. When combining both methods, the performance enhancement of the Token Filtering and the latency reduction of CALM produce a better curve than either method alone. In addition, we showcase the drop in 2% performance per dataset, showing that our method is able to reduce the latency significantly more than the regular FiD, with the best reduction reached on the MS MARCO dataset for FiD-Base, saving 62.2% of the latency. In the NQ dataset however, for both the base-sized and large-sized models, while the CALM method does achieve proper latency reduction, the Token Filtering does not effect the results significantly. 
Since we focus on real-world scenarios, we showcase the trade-off with the actual latency, instead of measurements such as FLOPS (MACs), as done by previous works (de To asses the performance of our Combined method further, we choose the best performing hyperparameter setting for the FiD-Base model, and report the test set results for each dataset, with Table Open-Domain Question Answering Many previous works utilized setups which are based on the retriever and the language model reader components achieve substantially better results Encoder-Decoder Efficiency Due to computation bottlenecks in generative models, particularly in encoder-decoder models In particular, the decoder model utilizes many redundant and heavy cross-attention operations, which can be removed or replaced with simpler alternatives Since encoder-decoder models perform compute heavy operations in multiple layers, previous works have proposed stopping the layer propagation dynamically by assessing the model's confidence for prediction at a certain layer The input to encoder models tends to become increasingly large, especially in ODQA settings with many input passages. Since not all spans of infor-mation are relevant to produce the correct answer, previous works propose eliminating the irrelevant tokens from the input to the encoder, by identifying the salient information during inference time We analyze the precise performance vs. efficiency trade-off of the FiD's encoder and decoder in long-form settings, with an analysis of the crossattention operation in the decoder model. We show that the decoder has more impact on the latency, particularly for long outputs, and that the decoder attends to more salient information early on during generation. Hence, our proposed approach for efficiency reduction, namely a combined approach of Token Filtering and CALM, removes irrelevant layers and tokens during the generation process, for every token produced. Our approach achieves a significant reduction in resources (up to 62.2%), while not sacrificing more than 2% of the performance, and is the current state-of-the-art on the ELI5 KILT leaderboard. Future work can further develop a more dynamic method for choosing the most relevant tokens from the input, instead of using predetermined hyperparameters, and train the cross-attention patterns to better attend to the salient information during generation. Regarding the retriever, as mentioned in Section 5.2, we did not experiment with a vast array of retrievers, due to the scope of the work being on the reader model. Regarding the models for comparison, we primarily focused on the performance of the FiD model versus our own approach, while testing them on various datasets. Hence, we did not perform extensive reproductions of other methods, such other encoder-decoder models, but instead report their original results as they were published. We believe that our results can be generalized to other architectures as well. In our hyperparameter search, we chose a subspace of all the possible values each parameter has, due to a limited amount of computation available. Our approximation of the space covers the main areas of relevance for our purposes. In this section, we continue our discussion from section 3.2, regarding the analysis of the crossattention scores. In Figure In Figure In addition to the methods introduced in 4, the computation of the cross-attention scores can be further altered in a few key areas, which we tackle as well. Value Normalization. 
As mentioned in , where [n] is the n th row, in this case the n th token, and v n = ||V [n]|| 2 is the norm of the n th row (token) in V . Hence, we apply this normalization to the attention scoring operation. Mean over all decoder layers. Instead of taking the representation of the current decoder layer only, we instead take the average over every layer before the current one. Thus, we compute the attention scores S t,l for the input tokens as follows: From our preliminary analysis, this mean operation does not effect the quality of the filtering method, and hence is not applied. Since our method primarily focuses on the reader model, we have implemented a generalized approach for creating ranked passage lists. Our document corpus is taken from a Wikipedia Dump, which has been split into 100-word-long passages, as done in For the CALM, we utilize beam search for long sequence generation. In the beam-search setting, we use n b beams, there which causes the issue of how to allow some tokens to cease computation at a certain level, while allowing the others to continue computation. For the scope of our work, we apply the hard assumption that the confidence value is the lowest one from all beams, hence exiting only if all the tokens in the beams have satisfied the exiting condition. Formally, given confidence scores cl = (c 1 l , c 2 l , ..., c n b l ) at layer l, the confidence value used at the layer will thus be c l = min j∈[1,n b ] c j l . We note that while For the Token Filtering, since we are discussing mainly Encoder-Decoder architectures, we apply the filtering by removing the redundant tokens from the past key and value states for the cross-attention. In addition, we also remove said tokens from the encoder hidden states, encoder attention mask, and the encoder-decoder position bias. Len. Chosen ELI5 150 MS MARCO 50 NQ 50 We observe that the trends of the FLOPS (MACs) and the Input Psg. are very similar, since the passages effect the encoder the most, and the encoder has the most impact on FLOPS (MACs) (as stated by de Jong et al. (
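The value-normalization variant described above re-weights each input token's averaged cross-attention score by the L2 norm of its value vector, so that tokens whose values carry little magnitude contribute less to the selection. A small sketch under the same shape assumptions as the previous one; this is an illustrative reading, and averaging the value norm over heads is our own choice.

import torch

def value_normalized_scores(attn, v_past):
    # attn: (num_heads, num_input_tokens); v_past: (num_heads, num_input_tokens, head_dim)
    mean_attn = attn.mean(dim=0)                      # average attention over heads
    value_norm = v_past.norm(dim=-1).mean(dim=0)      # v_n = ||V[n]||_2, averaged over heads
    return mean_attn * value_norm

attn = torch.rand(8, 12)
v_past = torch.rand(8, 12, 16)
print(value_normalized_scores(attn, v_past).shape)    # torch.Size([12]), one score per input token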
813
1,894
813
Zero-Shot Entity Linking by Reading Entity Descriptions
We present the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. The goal is to enable robust transfer to highly specialized domains, and so no metadata or alias tables are assumed. In this setting, entities are only identified by text descriptions, and models must rely strictly on language understanding to resolve the new entities. First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities. Second, we propose a simple and effective adaptive pre-training strategy, which we term domain-adaptive pre-training (DAP), to address the domain shift problem associated with linking unseen entities in a new domain. We present experiments on a new dataset that we construct for this task and show that DAP improves over strong pre-training baselines, including BERT. The data and code are available at https://github.com/lajanugen/zeshel.
Entity linking systems have achieved high performance in settings where a large set of disambiguated mentions of entities in a target entity dictionary is available for training. Such systems typically use powerful resources such as a high-coverage alias table, structured data, and linking frequency statistics. For example, While most prior works focus on linking to general entity databases, it is often desirable to link to specialized entity dictionaries such as legal cases, company project descriptions, the set of characters in a novel, or a terminology glossary. Unfortunately, labeled data are not readily available and are often expensive to obtain for these specialized entity dictionaries. Therefore, we need to develop entity linking systems that can generalize to unseen specialized entities. Without frequency statistics and meta-data, the task becomes substantially more challenging. Some prior works have pointed out the importance of building entity linking systems that can generalize to unseen entity sets In this work, we propose a new zero-shot entity linking task, and construct a new dataset for it. Zero-shot entity linking poses two challenges for entity linking models. First, without the availability of powerful alias tables or frequency priors, models must read entity descriptions and reason about the correspondence with the mention in context. We show that a strong reading comprehension model is crucial. Second, since labeled mentions for test entities are not available, models must adapt to new mention contexts and entity descriptions. We focus on both of these challenges. The contributions of this paper are as follows: • We propose a new zero-shot entity linking task that aims to challenge the generalization ability of entity linking systems with minimal assumptions. We construct a dataset for this task, which will be made publicly available. • We build a strong baseline by using state-of-theart reading comprehension models. We show that attention between mention in context and entity descriptions, which has not been used in prior entity linking work, is critical for this task. • We propose a simple yet novel adaptation strategy called domain-adaptive pre-training (DAP) and show that it can further improve entity linking performance.
We first review standard entity linking task definitions and discuss assumptions made by prior systems. We then define the zero-shot entity linking task and discuss its relationship to prior work. Entity linking (EL) is the task of grounding entity mentions by linking them to entries in a given database or dictionary of entities. Formally, given a mention m and its context, an entity linking system links m to the corresponding entity in an entity set E = {e i } i=1,...,K , where K is the number of entities. The standard definition of EL Single entity set This assumes that there is a single comprehensive set of entities E shared between training and test examples. An alias table contains entity candidates for a given mention string and limits the possibilities to a relatively small set. Such tables are often compiled from a labeled training set and domain-specific heuristics. Frequency statistics Many systems use frequency statistics obtained from a large labeled corpus to estimate entity popularity and the probability of a mention string linking to an entity. These statistics are very powerful when available. Structured data Some systems assume access to structured data such as relationship tuples (e.g., (Barack Obama, Spouse, Michelle Obama)) or a type hierarchy to aid disambiguation. The main motivation for this task is to expand the scope of entity linking systems and make them generalizable to unseen entity sets for which none of the powerful resources listed above are readily available. Therefore, we drop the above assumptions and make one weak assumption: the existence of an entity dictionary E = {(e i , d i )} i=1,..,K , where d i is a text description of entity e i . Our goal is to build entity linking systems that can generalize to new domains and entity dictionaries, which we term worlds. We define a world as W = (M W , U W , E W ), where M W and U W are distributions over mentions and documents from the world, respectively, and E W is an entity dictionary associated with W. are defined as mention spans in documents from U W . We assume the availability of labelled men- We additionally assume that samples from the document distribution U Wtgt and the entity descriptions E Wtgt are available for training. These samples can be used for unsupervised adaptation to the target world. During training, mention boundaries for mentions in W tgt are not available. At test time, mention boundaries are provided as input. We summarize the relationship between the newly introduced zero-shot entity linking task and prior EL task definitions in Table Standard EL While there are numerous differences between EL datasets Cross-Domain EL Recent work has also generalized to a cross-domain setting, linking entity mentions in different types of text, such as blogposts and news articles to the Wikipedia KB, while only using labeled mentions in Wikipedia for training (e.g., Linking to Any Work on word sense disambiguation based on dictionary definitions of words is related as well We construct a new dataset to study the zeroshot entity linking problem using documents from Wikia. We use data from 16 Wikias, and use 8 of them for training and 4 each for validation and testing. To construct data for training and evaluation, we first extract a large number of mentions from the Wikias. Many of these mentions can be easily linked by string matching between mention string and the title of entity documents. 
These mentions are downsampled during dataset construction, and occupy a small percentage (5%) of the final dataset. While not completely representative of the natural distribution of mentions, this data construction method follows recent work that focuses on evaluating performance on the challenging aspects of the entity linking problem (e.g., We categorize the mentions based on token overlap between mentions and the corresponding entity title as follows. High Overlap: title is identical to mention text, Multiple Categories: title is mention text followed by a disambiguation phrase (e.g., mention string: 'Batman', title: 'Batman (Lego)'), Ambiguous substring: mention is a substring of title (e.g., mention string: 'Agent', title: 'The Agent'). All other mentions are categorized as Low Overlap. These mentions respectively constitute approximately 5%, 28%, 8% and 59% of the mentions in the dataset. Table Table We adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more expensive but powerful candidate ranking stage. Without alias tables for standard entity linking, a natural substitute is to use an IR approach for candidate generation. We use BM25, a variant of TF-IDF to measure similarity between mention string and candidate documents. Since comparing two texts-a mention in context and a candidate entity description-is a task similar to reading comprehension and natural language inference tasks, we use an architecture based on a deep Transformer As in BERT ). Mention words are signaled by a special embedding vector that is added to the mention word embeddings. The Transformer encoder produces a vector representation h m,e of the input pair, which is the output of the last hidden layer at the special pooling token Note that prior neural approaches for entity linking have not explored such architectures with deep cross-attention. To assess the value of this departure from prior work, we implement the following two variants: (i) Pool-Transformer: a siamese-like network which uses two deep Transformers to separately derive single-vector repre-sentations of the mention in context, h m , and the candidate entity, h e ; they take as input the mention in context and entity description respectively, together with special tokens indicating the boundaries of the texts: ([CLS] m [SEP]) and ([CLS] e [SEP]), and output the last hidden layer encoding at the special start token. The scoring function is h m h e . Single vector representations for the two components have been used in many prior works, e.g., In the experiments section, we also compare to re-implementations of We focus on using unsupervised pre-training to ensure that downstream models are robust to target domain data. There exist two general strategies for pre-training: (1) task-adaptive pre-training, and (2) open-corpus pre-training. We describe these below, and also propose a new strategy: domainadaptive pre-training (DAP), which is complementary to the two existing approaches. Task-adaptive pre-training Intuitively, the target-domain distribution is likely to be partially captured by pre-training if the open corpus is sufficiently large and diverse. Indeed, open-corpus pre-training has been shown to benefit out-of-domain performance far more than indomain performance In addition to pre-training stages from other approaches, we propose to insert a penultimate domain adaptive pre-training (DAP) stage, where the model is pre-trained only on the target-domain data. 
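The pre-training strategies above can be composed into a pipeline: open-corpus pre-training, then source+target task-adaptive pre-training, then target-only DAP, followed by fine-tuning on source-world labeled mentions (as spelled out next). A toy sketch of that composition, where the stage routines are stubs that only record what would run and are not real training calls:

def pretrain_mlm(model_log, stage_name):
    # Placeholder for one masked-LM pre-training stage over the given corpus.
    return model_log + ["masked-LM pre-training on " + stage_name]

def finetune_linking(model_log, labeled_name):
    # Placeholder for the final supervised fine-tuning stage.
    return model_log + ["fine-tuning on " + labeled_name]

def pretraining_pipeline(stages, labeled="source-world labeled mentions"):
    model_log = []
    for stage in stages:
        model_log = pretrain_mlm(model_log, stage)
    return finetune_linking(model_log, labeled)

# Open corpus, then source + target, then target-only DAP, then fine-tune:
for step in pretraining_pipeline(["U_WB (Wikipedia + BookCorpus)", "U_src+tgt", "U_tgt"]):
    print(step)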
As usual, DAP is followed by a final fine-tuning stage on the source-domain labeled data. The intuition for DAP is that representational capacity is limited, so models should prioritize the quality of target domain representations above all else. We introduce notation to describe various ways in which pre-training stages can be composed. • U src denotes text segments from the union of source world document distributions U W 1 src . . . U W n src . • U tgt denotes text segments from the document distribution of a target world W tgt . • U src+tgt denotes randomly interleaved text segments from both U src and U tgt . • U WB denotes text segments from open corpora, which in our experiments are Wikipedia and the BookCorpus datasets used in BERT. We can chain together a series of pre-training stages. For example, U WB → U src+tgt → U tgt indicates that the model is first pre-trained on the open corpus, then pre-trained on the combined source and target domains, then pre-trained on only the target domain, and finally fine-tuned on the source-domain labeled data. 7 We show that chaining together different pre-training strategies provides additive gains. Pre-training We use the BERT-Base model architecture in all our experiments. The Masked LM objective Resources fine-tuning on the Entity-Linking task, we use a small learning rate of 2e-5, following the recommendations from Evaluation We define the normalized entitylinking performance as the performance evaluated on the subset of test instances for which the gold entity is among the top-k candidates retrieved during candidate generation. The unnormalized performance is computed on the entire test set. Our IR-based candidate generation has a top-64 recall of 76% and 68% on the validation and test sets, respectively. The unnormalized performance is thus upper-bounded by these numbers. Strengthening the candidate generation stage improves the unnormalized performance, but this is outside the scope of our work. Average performance across a set of worlds is computed by macro-averaging. Performance is defined as the accuracy of the single-best identified entity (top-1 accuracy). We first examine some baselines for zero-shot entity linking in Table When using the Full-Transformer model, pretraining is necessary to achieve reasonable performance. We present results for models pre-trained on different subsets of our task corpus (U src , U tgt , U src+tgt ) as well as pre-training on an external large corpus (U WB ). We observe that the choice of data used for pre-training is important. In Table To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held out mentions from the training worlds. 5-point drop in performance. Entities from new worlds (which are by definition unseen and are mentioned in out-of-domain text) prove to be the most difficult. Due to the shift in both the language distribution and entity sets, we observe a 11-point drop in performance. This large generalization gap demonstrates the importance of adaptation to new worlds. Our experiments demonstrate that DAP improves on three state-of-the-art pre-training strategies: • U src+tgt : task-adaptive pre-training, which combines source and target data for pretraining • U WB : open-corpus pre-training, which uses Wikipedia and the BookCorpus for pre-training (We use a pre-trained BERT model • U WB → U src+tgt : the previous two strategies chained together. 
While no prior work has applied this approach to domain adaptation, a similar approach for task adaptation was proposed by Pre-training EL Accuracy N. Acc. U. Acc. UWB To further analyze the results of DAP, we plot the relationships between the accuracy of Masked LM (MLM accuracy) on target unlabeled data and the final target normalized accuracy (after finetuning on the source labeled data) in Figure Table To analyze the mistakes made by the model, we compare EL accuracy across different mention categories in Table We discussed prior entity linking task definitions and compared them to our task in section 2. Here, we briefly overview related entity linking models and unsupervised domain adaptation methods. Entity linking models Entity linking given mention boundaries as input can be broken into the tasks of candidate generation and candidate ranking. When frequency information or alias tables are unavailable, prior work has used measures of similarity of the mention string to entity names for candidate generation Unsupervised domain adaptation There is a large body of work on methods for unsupervised domain adaptation, where a labeled training set is available for a source domain and unlabeled data is available for the target domain. The majority of work in this direction assume that training and test examples consist of (x, y) pairs, where y is in a fixed shared label set Y. This assumption holds for classification and sequence labeling, but not for zero-shot entity linking, since the source and target domains have disjoint labels. Most state-of-the-art methods learn non-linear shared representations of source and target domain instances, through denoising training objectives Adversarial training methods We introduce a new task for zero-shot entity linking, and construct a multi-world dataset for it. The dataset can be used as a shared benchmark for entity linking research focused on specialized domains where labeled mentions are not available, and entities are defined through descriptions alone. A strong baseline is proposed by combining powerful neural reading comprehension with domainadaptive pre-training. Future variations of the task could incorporate NIL recognition and mention detection (instead of mention boundaries being provided). The candidate generation phase leaves significant room for improvement. We also expect models that jointly resolve mentions in a document would perform better than resolving them in isolation. In tables 8, 9, 10, 11 we show some example mentions and model predictions. For each instance, the examples show the correct gold entity and the top-5 predictions from the model. Examples show 32 token contexts centered around mentions and the first 32 tokens of candidate entity documents.
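The two-stage pipeline above leaves candidate generation to BM25 over entity description documents. The snippet below is a minimal sketch of that retrieval stage; the corpus layout, the parameter values (k1, b), and the function name are illustrative assumptions rather than the authors' released implementation.

```python
import math
from collections import Counter

def bm25_topk(mention, entity_docs, k=64, k1=1.5, b=0.75):
    """Rank entity description documents against a mention string with BM25.

    mention:     the mention string (query)
    entity_docs: dict mapping entity title -> description text
    Returns the top-k entity titles by BM25 score.
    """
    tokenized = {title: doc.lower().split() for title, doc in entity_docs.items()}
    n_docs = len(tokenized)
    avg_len = sum(len(toks) for toks in tokenized.values()) / n_docs

    # Document frequency of each term across the entity descriptions.
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))

    query = mention.lower().split()
    scores = {}
    for title, toks in tokenized.items():
        tf = Counter(toks)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            # Okapi BM25 idf with +1 smoothing to keep scores non-negative.
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avg_len)
            score += idf * tf[term] * (k1 + 1) / denom
        scores[title] = score

    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Mentions whose gold entity does not appear in the returned top-64 list are the ones excluded when the normalized accuracy described above is computed.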
976
2,287
976
End-to-End Single-Channel Speaker-Turn Aware Conversational Speech Translation
Conventional speech-to-text translation (ST) systems are trained on single-speaker utterances, and they may not generalize to real-life scenarios where the audio contains conversations by multiple speakers. In this paper, we tackle single-channel multi-speaker conversational ST with an end-to-end and multi-task training model, named Speaker-Turn Aware Conversational Speech Translation, that combines automatic speech recognition, speech translation and speaker turn detection using special tokens in a serialized labeling format. We run experiments on the Fisher-CALLHOME corpus, which we adapted by merging the two single-speaker channels into one multi-speaker channel, thus representing the more realistic and challenging scenario with multi-speaker turns and cross-talk. Experimental results across single- and multi-speaker conditions and against conventional ST systems show that our model outperforms the reference systems on the multi-speaker condition, while attaining comparable performance on the single-speaker condition. We release scripts for data processing and model training. (* Work conducted during an internship at Amazon.)
Speech translation (ST) has seen wide adoption in commercial products and the research community 2017, inter alia) have recently gained increasing interest and popularity thanks to their simple architecture, less error propagation Despite significant recent advances in E2E-ST In this work, we tackle the more challenging task of multi-speaker conversational ST. We refer to it as multi-turn & multi-speaker (MT-MS), as opposed to single-turn, which most ST systems implicitly assume. This is illustrated in Figure 1. We introduce the task of multi-turn & multispeaker ST, including cross-talks and speakerturns, that expands the realm of ST which has been limited to single-speaker utterances. 2. We propose an end-to-end model
Joint ST & ASR Modeling Recent works in ST have leveraged ASR training data to improve translation quality. In principle, joint ASR and ST modeling Conversational Speech Translation Work on conversational ST In this work, we report results on the Fisher-CALLHOME corpus Speaker-Turn and Cross-Talk in ASR Speakerturns and cross-talks have been explored in the ASR field and commonly termed, multi-talker ASR. 3 Speaker-Turn Aware Conversational Speech Translation (STAC-ST) This section describes our end-to-end multi-task learning model for multi-turn multi-speaker conversational ST. Figure STAC-ST has a standard front-end module. First, frame-level 80-dimensional filterbank features are extracted from the audio Where, F is the feature dimension, T is the number of speech frames, N is the number of text tokens, and V is the vocabulary. During training of STAC-ST, we concatenate independent datasets D ASR = (X, Y ASR ) and D ST = (X, Y ST ), for ASR & ST, respectively. Samples of training minibatches are jointly drawn from D ASR and D ST . A key component of the model is the serialized multi-task labeling framework based on special tokens. As shown in Figure The first two tokens are language tokens that define the task for either ST (when At inference time, both language tokens are preset to specify the desired task. [TURN] and [XT] specify the auxiliary tasks of detecting speaker-turn changes and cross-talks, which are critical for MT-MS speech processing and more aligned to acoustic tasks. Note that crosstalks always occur during speaker-turn changes, so [XT] always follows We concatenate transcripts or translations sequentially, inserting [TURN] and [XT] tokens when needed. If utterances u t and u t+1 overlap in time, we append the targets of utterance u t+1 after utter-ance u t . The order of utterances is determined by their start time. A demonstration of such serialization is shown below: [XT] WORD2 WORD3 ... STAC-ST jointly models ASR and ST by balancing CTC L CT C and L N LL are computed by appending linear layers with dimension V on top of the encoder and decoder, respectively. Figure This section introduces the datasets and metrics we used for evaluation, as well as architecture and training details of STAC-ST. We use the Fisher and CALLHOME corpora which respectively comprises 186 hr and 20 hr of audio and transcripts of telephone conversations in Spanish. 3 The Spanish-to-English translations are available from Segmentation. Each conversation on Fisher-CALLHOME occurred between two speakers with multiple turns over two channels (one speaker per channel). For MT-MS ST experiments, we merge the two channels into one, which creates natural speaker changes and cross-talks as illustrated in Figure To build models with manageable size and computation, following Fisher-CALLHOME has limited training data size, so we explore additional corpora to improve our model and to evaluate its generalization ability. We also use the official CoVoST 2 and CV corpora are composed of single-turn pre-segmented utterances. To generate data consistent with our MT-MS segmentation, we randomly concatenate audio utterances and yield segments of up to 30 seconds. Note that these synthetic MT-MS segments contain no silences and cross-talks, but still have speaker-turn changes (labeled by [TURN]). 
We report case-insensitive BLEU using Sacre-BLEU We experiment with three model sizes, S(mall), M(edium), and L(arge), with increasing dimension We train for 100k steps the S-size models and 200k steps the M-and L-size models. We use AdamW Our experimental results document three properties of the STAC-ST model: We explored various training data configurations for multi-task learning (see Table Joint training of single-turn and multi-turn tasks is beneficial. Adding multi-turn ST data for training gives marginal improvements (Row-1 vs. Row-0); this suggests that simply adding limited multi-turn data will not suffice for the MT-MS cases. When either single-turn or multi-turn Multi-turn ASR data helps multi-turn ST. In our training data, there are more labeled single-turn ST data and multi-turn ASR data than multi-turn ST data. We tested a zero-shot setting where, for the multi-turn condition is only covered by ASR training data (Row-5). Comparing to training with single-turn ST+ASR data only (Row-2), the resulting model brings 3-8 BLEU gains. We hypothesize that, as the encoder is target-language-agnostic, the acoustic representations and the turn detection capacity learned from multi-turn ASR data does partially transfer to the ST task. Multi-turn ST does not seem to help multiturn ASR. This can be seen by comparing WER scores in Row-2 and Row-6. We hypothesize that the non-monotonicity of the multi-turn ST task disrupts multi-turn ASR performance ). However, this can be fixed by adding back multi-turn ASR data (Row-4). Note that we use the Row-4 data configuration for the rest of the paper. The STAC-ST multi-task learning framework also encodes speaker-turn and cross-talk information with task tokens [TURN] and [XT]. We run experiments to study how these task labels impact on ASR and ST performance in MT-MS setting and how they even enable speaker change detection. Modeling speaker-turn and cross-talk detection helps multi-speaker ST and ASR. We run experiments by ablating the two task tokens. Evaluation results in Table Modeling speaker-turn and cross-talk detection enables the model to perform speaker change detection. The CTC loss helps the encoder to align input audio to text tokens per acoustic frame, including the two task tokens. We trace speakerturns and cross-talks in the timeline by ( To compute these metrics, we first prepare Rich Transcription Time Marked (RTTM) files for each test set from the time-aligned CTC [TURN] spikes. We compared performance of two STAC-ST models (S and L) against a reference system, the speaker segmentation pipeline of the popular PyAnnote toolkit We run extensive benchmarks to compare STAC-ST with related work in various settings, including (1) different audio segmentation strategies, (2) model size, and (3) evaluation on single-turn ST. A common practice for translating long-form audio files is to first segment them into smaller chunks based on voice activity detection (VAD). We compare our MT-MS segmentation approach with two popular VAD-based audio segmenters, i.e., WebRTC As shown in Figure Given the lack of prior work on MT-MS ST, we compare STAC-ST against a strong multi-task model, i.e., Whisper Results in Table To position STAC-ST against previous work on ST, we also run experiments under the conventional single-turn ST condition. 
These experiments enable us to (1) see how our end-to-end multi-task learning approach performs on a specific input condition, and (2) compare STAC-ST against four previous models trained and evaluated on the same task. To allow for comparing results across singleturn and MS-MT conditions, we also report performance with three Whisper systems. Results of these experiments are reported in Table In this work, we present STAC-ST, an end-to-end system designed for single-channel multi-turn & multi-speaker speech translation that uses a multitask training framework to leverage both ASR and ST datasets. We demonstrate that STAC-ST generalizes to both standard pre-segmented ST benchmarks and multi-turn conversational ST, the latter being a more challenging scenario. STAC-ST is also shown to learn the task of speaker change detection, which helps multi-speaker ST and ASR. We investigate different aspects of STAC-ST, including the impact of model and data size, automatic segmentation for long-form conversational ST, zero-shot multi-turn & multi-speaker ST with-out specific training data. Overall, this work sheds light on future work towards more robust conversational ST systems that can handle speaker-turns and cross-talks. 1. Our primary test sets, Fisher and CALL-HOME, have narrowly one translation direction (Spanish→English). The only other public conversational ST dataset we are aware of is MSLT All speech datasets we use have anonymous speakers. We do not have any access to nor try to create any PII (Personal Identifiable Information) of speakers, and our model neither identifies speakers nor uses speaker embeddings. In this section, we evaluate different CTC weights for joint ASR & ST training under the STAC-ST framework. We show in Figure We list complete main results on Fisher-CALLHOME corpora for all the official subsets. Multi-Turn Segments. Table Single-Turn Segments. For the sake of completeness, we also report the performance of STAC-ST on each subset of Fisher-CALLHOME with the default utterance segmentation (single-turn). In MT-MS data, each segment contains different degree of overlaps. We calculate the overlap ratio for each segment in Fisher and CALLHOME, group the segment-level overlap ratios into 4 bins, and report BLEU scores for each bin in In Table We provide compete ablation results of adding [TURN] & [XT] task tokens on all the official development and test sets of Fisher-CALLHOME, as listed in Table With WebRTC, audio is split when 90% of consecutive frames do not include speech. We set the frame length to 30 ms and the aggressiveness parameter to 1 as in form an additional pre-processing step to minimize the domain mismatch between SHAS and Fisher-CALLHOME. (2) We modify each audio file by masking with 0 all the regions in the signal where there is no speech activity, i.e., setting all the non-speech activity regions to silence. (3) We then use the masked long-form audio files with SHAS. This step decreases the false alarms rate that can be produced by SHAS on noisy segments or between contiguous utterances where there are close-talks. Close-talks are areas where two utterances are too close and the segmentation tools might not generalize well. In order to keep comparable the experimental and evaluation setup, we perform the same pre-processing step when using WebRTC. Besides SHAS (Figure Traditional speech translation datasets are composed of single-turn pre-segmented utterances. Following Section 5.3.3, we also run experiments on the CoVoST 2 test set. 
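The WebRTC-based pre-processing described above (30 ms frames, aggressiveness 1, non-speech regions set to zero) can be sketched with the py-webrtcvad package as follows; the audio format (16 kHz, 16-bit mono PCM) and the function name are assumptions, and the surrounding pipeline is simplified.

```python
import numpy as np
import webrtcvad

def mask_non_speech(waveform, sample_rate=16000, frame_ms=30, aggressiveness=1):
    """Zero out non-speech regions of a mono int16 waveform.

    Runs WebRTC VAD with 30 ms frames and aggressiveness 1, then sets every
    frame judged as non-speech to silence before the audio is passed on to
    the long-form segmenter.
    """
    vad = webrtcvad.Vad(aggressiveness)
    frame_len = int(sample_rate * frame_ms / 1000)  # samples per frame
    masked = waveform.copy()

    for start in range(0, len(waveform) - frame_len + 1, frame_len):
        frame = waveform[start:start + frame_len]
        # webrtcvad expects 16-bit little-endian PCM bytes.
        if not vad.is_speech(frame.tobytes(), sample_rate):
            masked[start:start + frame_len] = 0
    return masked

# Example call on one second of (silent) audio.
audio = np.zeros(16000, dtype=np.int16)
masked = mask_non_speech(audio)
```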
13 In the following Table The results show that (1) our multilingual large model outperforms Whisper and XLS-R multilingual models with comparable sizes, even though Whisper and XLS-R where trained on data two orders of magnitude larger: 680k hours for Whisper, 436k hours for XLS-R, and 3k hours for STAC-ST L multilingual; (2) our models with smaller sizes sometimes outperform larger Whisper mod-
1,146
728
1,146
Optimal Transport for Unsupervised Hallucination Detection in Neural Machine Translation
Neural machine translation (NMT) has become the de facto standard in real-world machine translation applications. However, NMT models can unpredictably produce severely pathological translations, known as hallucinations, that seriously undermine user trust. It thus becomes crucial to implement effective preventive strategies to guarantee their proper functioning. In this paper, we address the problem of hallucination detection in NMT by following a simple intuition: as hallucinations are detached from the source content, they exhibit cross-attention patterns that are statistically different from those of good-quality translations. We frame this problem with an optimal transport formulation and propose a fully unsupervised, plug-in detector that can be used with any attention-based NMT model. Experimental results show that our detector not only outperforms all previous model-based detectors, but is also competitive with detectors that employ external models trained on millions of samples for related tasks such as quality estimation and cross-lingual sentence similarity.
Neural machine translation (NMT) has achieved tremendous success In this work, we focus on leveraging the crossattention mechanism to develop a novel hallucination detector. This mechanism is responsible for selecting and combining the information contained in the source sequence that is relevant to retain during translation. Therefore, as hallucinations are translations whose content is detached from the source sequence, it is no surprise that connections between anomalous attention patterns and hallucinations have been drawn before in the literature Rather than aiming to find particular patterns, we go back to the main definition of hallucinations and draw the following hypothesis: as hallucinationscontrary to good translations-are not supported by the source content, they may exhibit cross-attention patterns that are statistically different from those found in good quality translations. Based on this hypothesis, we approach the problem of hallucination detection as a problem of anomaly detection with an optimal transport (OT) formulation Our key contributions are: • We propose an OT-inspired fully unsupervised hallucination detector that can be plugged into any attention-based NMT model; • We find that the idea that attention maps for hallucinations are anomalous in light of a reference data distribution makes for an effective hallucination detector; • We show that our detector not only outperforms all previous model-based detectors, but is also competitive with external detectors that employ auxiliary models that have been trained on millions of samples.
A NMT model M defines a probability distribution p θ (y|x) over an output space of hypotheses Y conditioned on a source sequence x contained in an input space X . In this work, we focus on models parameterized by an encoder-decoder transformer model The first-order Wasserstein distance between two arbitrary probability distributions µ ∈ △ n and ν ∈ △ m is defined as where c : is the set of all joint probability distributions whose marginals are µ, ν. The Wasserstein distance arises from the method of optimal transport (OT) A notable example is the Wasserstein-1 distance, W 1 , also known as Earth Mover's Distance (EMD), obtained for c(u, v) = ∥u-v∥ 1 . The name follows from the simple intuition: if the distributions are interpreted as "two piles of mass" that can be moved around, the EMD represents the minimum amount of "work" required to transform one pile into the other, where the work is defined as the amount of mass moved multiplied by the distance it is moved. Although OT has been explored for robustness Hallucinations are translations that lie at the extreme end of NMT pathologies In this work, we depart from artificial settings, and focus on studying hallucinations that are naturally produced by the NMT model. To that end, we follow the taxonomy introduced in On-the-fly hallucination detectors are systems that can detect hallucinations without access to reference translations. These detectors are particularly relevant as they can be deployed in online applications where references are not readily available. Previous work on on-the-fly detection of hallucinations in NMT has primarily focused on two categories of detectors: external detectors and modelbased detectors. External detectors employ auxiliary models trained for related tasks such as quality estimation (QE) and cross-lingual embedding similarity. On the other hand, model-based detectors only require access to the NMT model that generates the translations, and work by leveraging relevant internal features such as model confidence and cross-attention. These detectors are attractive due to their flexibility and low memory footprint, as they can very easily be plugged in on a vast range of NMT models without the need for additional training data or computing infrastructure. We will focus specifically on model-based detectors that require obtaining internal features from a model M. Building a hallucination detector generally consists of finding a scoring function s M : X → R and a threshold τ ∈ R to build a binary rule g M : X → {0, 1}. For a given test sample x ∈ X , (2) If s M is an anomaly score, g M (x) = 0 implies that the model M generates a 'normal' translation for the source sequence x, and g M (x) = 1 implies that M generates a 'hallucination' instead. Anomalous cross-attention maps have been connected to the hallucinatory mode in several works In this scenario, we only rely on the generated translation and its source mass distribution to decide whether the translation is a hallucination or not. Concretely, for a given test sample x ∈ X : 1. We first obtain the source mass attention distribution π M (x) ∈ △ |x| ; 2. We then compute an anomaly score, s wtu (x), by measuring the Wasserstein distance between π M (x) and a reference distribution u: Choice of reference translation. A natural choice for u is the uniform distribution, u = 1 n • 1, where 1 is a vector of ones of size n. In the context of our problem, a uniform source mass distribution means that all source tokens are equally attended. Choice of cost function. 
We consider the 0/1 cost function, c(i, j) = 1[i ̸ = j], as it guarantees that the cost of transporting a unit mass from any token i to any token j ̸ = i is constant. For this distance function, the problem in Equation 1 has the following closed-form solution This is a well-known result in optimal transport: the Wasserstein distance under the 0/1 cost function is equivalent to the total variation distance between the two distributions. On this metric space, the Wasserstein distance depends solely on the probability mass that is transported to transform π M (x) to u. Importantly, this formulation ignores the starting locations and destinations of that probability mass as the cost of transporting a unit mass from any token i to any token j ̸ = i is constant. Interpretation of Wass-to-Unif. Attention maps for which the source attention mass is highly concentrated on a very sparse set of tokens (regardless of their location in the source sentence) can be very predictive of hallucinations In this scenario, instead of using a single reference distribution, we use a set of reference source mass distributions, R x , obtained with the same model. By doing so, we can evaluate how anomalous a given translation is compared to a model data-driven distribution, rather than relying on an arbitrary choice of reference distribution. First, we use a held-out dataset D held that contains samples for which the model M generates good quality translations according to an automatic evaluation metric (in this work, we use COMET Then, for a given test sample x ∈ X , we apply the procedure illustrated in Figure 4. We obtain the anomaly score s wtd (x) by averaging the bottom-k distances in W x : where S is the set containing the k smallest elements of W x . Interpretation of Wass-to-Data. Hallucinations, unlike good translations, are not fully supported by the source content. Wass-to-Data evaluates how anomalous a translation is by comparing the source attention mass distribution of that translation to those of good translations. The higher the Wassto-Data score, the more anomalous the source attention mass distribution of that translation is in comparison to those of good translations, and the more likely it is to be an hallucination. Relation to Wass-to-Unif. The Wasserstein-1 distance (see Section 2.2) between two distributions is equivalent to the ℓ 1 -norm of the difference between their cumulative distribution functions With this scoring function, we aim at combining Wass-to-Unif and Wass-to-Data into a single detector. To do so, we propose using a two-stage process that exploits the computational benefits of Wass-to-Unif over Wass-to-Data. for a predefined scalar threshold τ wtu . To set that threshold, we compute W wtu = {s wtu (x) : x ∈ D held } and set τ wtu = P K , i.e τ wtu is the K th percentile of W wtu with K ∈ ]98, 100[ (in line with hallucinatory rates reported in 5 Experimental Setup We follow the setup in We compare our methods to the two best performing model-based methods in Guerreiro et al. (2022). Attn-ign-SRC. This method consists of computing the proportion of source words with a total incoming attention mass lower than a threshold λ: This method was initially proposed in Seq-Logprob. We compute the length-normalised sequence log-probability of the translation: We provide a comparison to detectors that exploit state-of-the-art models in related tasks, as it helps monitor the development of model-based detectors. CometKiwi. We compute sentence-level quality scores with CometKiwi LaBSE. 
We leverage LaBSE We report the Area Under the Receiver Operating Characteristic curve (AUROC) and the False Positive Rate at 90% True Positive Rate (FPR@90TPR) to evaluate the performance of different detectors. We use WMT18 DE-EN data samples from the heldout set used in 6 Results We start by analyzing the performance of our proposed detectors on a real world on-the-fly detection scenario. In this scenario, the detector must be able to flag hallucinations regardless of their specific type as those are unknown at the time of detection. Wass-Combo is the best model-based detector. other methods both in terms of AUROC and FPR. When compared to the previous best-performing model-based method (Seq-Logprob), Wass-Combo obtains boosts of approximately 4 and 10 points in AUROC and FPR, respectively. These performance boosts are further evidence that model-based features can be leveraged, in an unsupervised manner, to build effective detectors. Nevertheless, the high values of FPR suggest that there is still a significant performance margin to reduce in future research. The notion of data proximity is helpful to detect hallucinations. Table Our model-based method achieves comparable performance to external models. Table Translation quality assessments are less predictive than similarity of cross-lingual sentence representations. Table In this section, we present an analysis on the performance of different detectors for different types of hallucinations (see Section 2.3). We report both a quantitative analysis to understand whether a detector can distinguish a specific hallucination type from other translations (Table Table This analysis is particularly relevant to better understand how different detectors specialize in different types of hallucinations. In Appendix J, we show that the trends presented in this section hold for other mid-and low-resource language pairs. Fully detached hallucinations. Detecting fully detached hallucinations is remarkably easy for most detectors. Interestingly, Wass-to-Unif significantly outperforms Wass-to-Data on this type of hallucination. This highlights how combining both methods can be helpful. In fact, Wass-Combo performs similarly to Wass-to-Unif, and can very easily separate most fully detached hallucinations from other translations on a fixed-threshold scenario (Figure Strongly detached hallucinations. These are the hardest hallucinations to detect with our methods. Nevertheless, Wass-Combo performs competitively with the previous best-performing modelbased method for this type of hallucinations (Seq-Logprob). We hypothesize that the difficulty in detecting these hallucinations may be due to the varying level of detachment from the source sequence. Indeed, Figure Oscillatory hallucinations. Wass-to-Data and Wass-Combo significantly outperform all previous model-based detectors on detecting oscillatory hallucinations. This is relevance in the context of model-based detectors, as previous detectors notably struggle with detecting these hallucinations. Moreover, Wass-Combo also manages to outperform LaBSE with significant improvements in FPR. This hints that the repetition of words or phrases may not be enough to create sentence-level representations that are highly dissimilar from the non-oscillatory source sequence. In contrast, we find that CometKiwi appropriately penalizes oscillatory hallucinations, which aligns with observations made in We propose a novel plug-in model-based detector for hallucinations in NMT. 
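The AUROC and FPR@90TPR values reported throughout this evaluation can be computed from detector scores roughly as in the sketch below. Scikit-learn is mentioned in the appendix, but the exact computation and the toy labels and scores here are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(is_hallucination, scores, tpr_level=0.90):
    """AUROC and FPR at 90% TPR for a hallucination detector.

    is_hallucination: binary labels (1 = hallucination).
    scores: anomaly scores, higher = more likely to be a hallucination.
    """
    auroc = roc_auc_score(is_hallucination, scores)
    fpr, tpr, _ = roc_curve(is_hallucination, scores)
    # Smallest false-positive rate among operating points reaching the
    # required true-positive rate (tpr is non-decreasing along the curve).
    fpr_at_tpr = fpr[np.argmax(tpr >= tpr_level)]
    return auroc, fpr_at_tpr

# Toy example with three hallucinations and five other translations.
labels = np.array([0, 0, 0, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.3, 0.2, 0.9, 0.7, 0.4, 0.8, 0.2])
print(detection_metrics(labels, scores))
```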
Unlike previous attempts to build an attention-based detector, we do not rely on ad-hoc heuristics to detect hallucinations, and instead pose hallucination detection as an optimal transport problem: our detector aims to find translations whose source attention mass distribution is highly distant from those of good quality translations. Our empirical analysis shows that our detector outperforms all previous model-based detectors. Importantly, in contrast to these prior approaches, it is suitable for identifying oscillatory hallucinations, thus addressing an important gap in the field. We also show that our detector is competitive with external detectors that use state-of-the-art quality estimation or cross-lingual similarity models. Notably, this performance is achieved without the need for large models, or any data with quality annotations or parallel training data. Finally, thanks to its flexibility, our detector can be easily deployed in real-world scenarios, making it a valuable tool for practical applications. We highlight two main limitations of our work. Firstly, instead of focusing on more recent NMT models that use large pretrained language models as their backbone, our experiments were based on transformer base models. That is because we used the NMT models that produced the translations in the datasets we analyze, i.e, the models that actually hallucinate for the source sequences in the dataset. Nevertheless, research on hallucinations for larger NMT models makes for an exciting line of future work and would be valuable to assess the broad validity of our claims. Secondly, although our method does not require any training data or human annotations, it relies on access to a pre-existing database of source mass distributions. This can be easily obtained offline by running the model on monolingual data to obtain the distributions. Nevertheless, these datastores need not be costly in terms of memory. In fact, in Appendix J, we validate our detectors for datastores that contain less than 100k distributions. dataset Our detectors do not require access to a GPU machine. All our experiments have been ran on a machine with 2 physical Intel(R) Xeon(R) Gold 6348 @ 2.60GHz CPUs (total of 112 threads). Obtaining Wass-to-Unif scores for all the 3415 translations from the Guerreiro et al. ( We use scikit-learn F Tracing-back performance boosts to the construction of the reference set R x In Section 6.1 in the main text, we showed that evaluating how distant a given translation is compared to a data-driven reference distribution-rather than to an ad-hoc reference distribution-led to increased performance. Therefore, we will now analyze the construction of the reference set R x to obtain Wass-to-Data scores (step 2 in Figure Construction of R held . To construct R held , we first need to obtain the source attention mass distributions for each sample in D held . If D held is a parallel corpus, we can force-decode the reference translations to construct R held . As shown in Table 5, this construction produces results similar to using good-quality model-generated translations. Moreover, we also evaluate the scenario where R held is constructed with translations of any quality. Table Oscillatory Als Maß hierfür wird meist der sogenannte Pearl Index benutzt (so benannt nach einem Statistiker, der diese Berechnungsformel einführte). As a measure of this, the so-called Pearl Index is usually used (so named after a statistician who introduced this calculation formula). 
The term "Pearl Index" refers to the term "Pearl Index" (or "Pearl Index") used to refer to the term "Pearl Index" (or "Pearl Index"). improves performance, the gains are not substantial. This connects to findings by Guerreiro et al. (2022): hallucinations exhibit different properties from other translations, including other incorrect translations. We offer further evidence that properties of hallucinations-in this case, the source attention mass distributions-are not only different to those of good-quality translations but also to most other model-generated translations. Length-filtering the distributions in R held . The results in Table We perform ablations on Wass-to-Data and Wass-Combo for all relevant hyperparameters: the length- filtering parameter δ, the maximum cardinality of R, |R| max , the value of k to compute the Wass-to-Data scores (step 4 in Figure On length-filtering. The results in Table On the choice of |R| max . ments. This suggests that when comparing the source mass attention distribution of a test translation to other such distributions obtained for other translations (instead of the ad-hoc uniform distribution used for Wass-to-Unif scores), the information from the location of the source attention mass is helpful to obtain better scores. On the formulation of Wass-Combo. To combine the information from Wass-to-Unif and Wassto-Data, we could also perform a convex combination of the two scores: for a predefined scalar parameter λ. In Table Concurrently to our work, We show a qualitative analysis on the same fixedthreshold scenario described in Section 6.2 in Fig- Our detector is not able to detect fully detached hallucinations that come in the form of exact copies of the source sentence. For these pathological translations, the attention map is mostly diagonal and is thus not anomalous. Although these are severe errors, we argue that, in a real-world application, such translations can be easily detected with string matching heuristics. We also find that our detector Wass-Combo struggles with oscillatory hallucinations that come in the form of mild repetitions of 1-grams or 2- in Table In order to establish the broader validity of our model-based detectors, we present an analysis on their performance for other NMT models and on mid and low-resource language pairs. Overall, the detectors exhibit similar trends to those discussed in the main text (Section 6). The dataset from To obtain all model-based information required to build the detectors, we use the same Transformer models that generated the translations in the datasets in consideration. All details can be found in The trends in Section 6.1 hold for other language pairs. The results in Table The trends in Section 6.2 hold for other language pairs. In Section A, we remark that almost all NE-EN hallucinations are oscillatory, whereas almost all RO-EN hallucinations are fully detached. With that in mind, the results in Table cross-lingual embedding similarity models to filter low-quality translations.
1,085
1,584
1,085
When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion
Though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged, the development of context-aware NMT systems is hampered by several problems. Firstly, standard metrics are not sensitive to improvements in consistency in document-level translations. Secondly, previous work on context-aware NMT assumed that the sentence-aligned parallel data consisted of complete documents while in most practical scenarios such document-level data constitutes only a fraction of the available parallel data. To address the first issue, we perform a human study on an English-Russian subtitles dataset and identify deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We then create test sets targeting these phenomena. To address the second shortcoming, we consider a set-up in which a much larger amount of sentence-level data is available compared to that aligned at the document level. We introduce a model that is suitable for this scenario and demonstrate major gains over a context-agnostic baseline on our new benchmarks without sacrificing performance as measured with BLEU. 1
With the recent rapid progress of neural machine translation (NMT), translation mistakes and inconsistencies due to the lack of extra-sentential context are becoming more and more noticeable among otherwise adequate translations produced by standard context-agnostic NMT systems A context-agnostic NMT system would often produce plausible translations of isolated sentences, however, when put together in a document, these translations end up being inconsistent with each other. We investigate which linguistic phenomena cause the inconsistencies using the OpenSubtitles We show that by using a limited amount of document-level parallel data, we can already achieve substantial improvements on these benchmarks without negatively affecting performance as measured with BLEU. Our approach is inspired by the Deliberation Networks The key contributions are as follows: • we analyze which phenomena cause contextagnostic translations to be inconsistent with each other; • we create test sets specifically addressing the most frequent phenomena; • we consider a novel and realistic set-up where a much larger amount of sentencelevel data is available compared to that aligned at the document level; • we introduce a model suitable for this scenario, and demonstrate that it is effective on our new benchmarks without sacrificing performance as measured with BLEU.
We begin with a human study, in which we: 1. identify cases when good sentence-level translations are not good when placed in context of each other, 2. categorize these examples according to the phenomena leading to a discrepancy in translations of consecutive sentences. The test sets introduced in Section 3 will then target the most frequent phenomena. To find what makes good context-agnostic translations incorrect when placed in context of each other, we start with pairs of consecutive sentences. We gather data with context from the publicly available OpenSubtitles2018 corpus ( 2018) for English and Russian. We train a contextagnostic Transformer on 6m sentence pairs. Then we translate 2000 pairs of consecutive sentences using this model. For more details on model training and data preprocessing, see Section 5.3. Then we use human annotation to assess the adequacy of the translations without context and in the context of each other. The whole process is two-stage: 1. sentence-level evaluation: we ask if the translation of a given sentence is good, 2. evaluation in context: for pairs of consecutive good translations according to the first stage, we ask if the translations are good in context of each other. In the first stage, the annotators are instructed to mark as "good" translations which (i) are fluent sentences in the target language (in our case, Russian) (ii) can be reasonable translations of a source sentence in some context. For the second stage we only consider pairs of sentences with good sentence-level translations. The annotators are instructed to mark translations as bad in context of each other only if there is no other possible interpretation or extra additional context which could have made them appropriate. This was made to get more robust results, avoiding the influence of personal preferences of the annotators (for example, for using formal or informal speech), and excluding ambiguous cases that can only be resolved with additional context. The statistics of answers are provided in Table 1. We find that our annotators labelled 82% of sentence pairs as good translations. In 11% of cases, at least one translation was considered bad at the sentence level, and in another 7%, the sentences were considered individually good, but bad in context of each other. This indicates that in our setting, a substantial proportion of translation errors are only recognized as such in context. From the results of the human annotation, we take all instances of consecutive sentences with good translations which become incorrect when placed in the context of each other. For each, we identify the language phenomenon which caused a discrepancy. The results are provided in Table From Table From Table Ambiguity of the first type comes from the inability to predict the correct morphological form of some words. We manually gather examples with such structures in a source sentence and change the morphological inflection of the relevant target phrase to create contrastive translation. Specifically, we focus on noun phrases where the verb is elided, and the ambiguity lies in how the noun phrase is inflected. The second type we evaluate are verb phrase ellipses. Mostly these are sentences with an auxiliary verb "do" and omitted main verb. We manually gather such examples and replace the translation of the verb, which is only present on the target side, with other verbs with different meaning, but 4 Details are provided in the appendix. the same inflection. 
Verbs which are used to construct such contrastive translations are the top-10 lemmas of translations of the verb "do" which we get from the lexical table of Moses Lexical cohesion can be established for various types of phrases and can involve reiteration or other semantic relations. In the scope of the current work, we focus on the reiteration of entities, since these tend to be non-coincidental, and can be easily detected and transformed. We identify named entities with alternative translations into Russian, find passages where they are translated consistently, and create contrastive test examples by switching the translation of some instances of the named entity. For more details, please refer to the appendix. For the most frequent phenomena from the above analysis we create test sets for targeted evaluation. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in the considered aspect. All contrastive translations we use are correct plausible translations at a sentence level, and only context reveals the errors we introduce. All the test sets are guaranteed to have the necessary context in the provided sequence of 3 sentences. The system is asked to score each candidate example, and we compute the system accuracy as the proportion of times the true translation is preferred over the contrastive ones. Test set statistics are shown in Table Previous work on context-aware neural machine translation used data where all training instances have context. This setting limits the set of available training sets one can use: in a typical scenario, we have a lot of sentence-level parallel data and only a small fraction of document-level data. Since machine translation quality depends heavily on the amount of training data, training a contextaware model is counterproductive if this leads to ignoring the majority of available sentence-level data and sacrificing general quality. We will also show that a naive approach to combining sentencelevel and document-level data leads to a drop in performance. In this work, we argue that it is important to consider an asymmetric setting where the amount of available document-level data is much smaller than that of sentence-level data, and propose an approach specifically targeting this scenario. We introduce a two-pass framework: first, the sentence is translated with a context-agnostic model, and then this translation is refined using context of several previous sentences (context includes source sentences as well as their translations). We expect this architecture to be suitable in the proposed setting: the baseline context-agnostic model can be trained on a large amount of sentence-level Let D sent = {(x i , y i )} N i=1 denote the sentencelevel data with n paired sentences and D doc = {(x j , y j , c j )} M j=1 denote the document-level data, where (x j , y j ) is source and target sides of a sentence to be translated, c j are several preceding sentences along with their translations. Base model For the baseline context-agnostic model we use the original Transformerbase Context-aware decoder (CADec) The contextaware decoder is trained to correct translations given by the base model using contextual infor-mation. 
Namely, we maximize the following document-level log-likelihood: where y B j is sampled from P (y|x j , θ B ). CADec is composed of a stack of N = 6 identical layers and is similar to the decoder of the original Transformer. It has a masked self-attention layer and attention to encoder outputs, and additionally each layer has a block attending over the outputs of the base decoder (Figure At training time, we use reference translations as translations of the previous sentences. For the cur-rent sentence, we either sample a translation from the base model or use a corrupted version of the reference translation. We propose to stochastically mix objectives corresponding to these versions: where ỹj is a corrupted version of the reference translation and b j ∈ {0, 1} is drawn from Bernoulli distribution with parameter p, p = 0.5 in our experiments. Reference translations are corrupted by replacing 20% of their tokens with random tokens. We discuss the importance of the proposed training strategy, as well as the effect of varying the value of p, in Section 6.5. As input to CADec for the current sentence, we use the translation produced by the base model. Target sides of the previous sentences are produced by our two-stage approach for those sentences which have context and with the base model for those which do not. We use beam search with a beam of 4 for all models. We use the publicly available OpenSubtitles2018 corpus We evaluate in two different ways: using BLEU for general quality and the proposed contrastive test sets for consistency. We show that models indistinguishable with BLEU can be very different in terms of consistency. We randomly choose 500 out of 2000 examples from the lexical cohesion set and 500 out of 3000 from the deixis test set for validation and leave the rest for final testing. We compute BLEU on the development set as well as scores on lexical cohesion and deixis development sets. We use convergence in both metrics to decide when to stop training. The importance of using both criteria is discussed in Section 6.4. After the convergence, we average 5 checkpoints and report scores on the final test sets. We consider three baselines. baseline The context-agnostic baseline is Transformer-base trained on all sentence-level data. Recall that it is also used as the base model in our 2-stage approach. concat The first context-aware baseline is a simple concatenation model. It is trained on 6m sentence pairs, including 1.5m having 3 context sentences. For the concatenation baseline, we use a special token separating sentences (both on the source and target side). s-hier-to-2.tied This is the version of the model s-hier-to-2 introduced by BLEU scores for our model and the baselines are given in Table We observe that our model is no worse in BLEU than the baseline despite the second-pass model being trained only on a fraction of the data. In contrast, the concatenation baseline, trained on a mixture of data with and without context is about 1 BLEU below the context-agnostic baseline and our model when using all 3 context sentences. CADec's performance remains the same independently from the number of context sentences (1, 2 or 3) as measured with BLEU. s-hier-to-2.tied performs worst in terms of BLEU, but note that this is a shallow recurrent model, while others are Transformer-based. It also suffers from the asymmetric data setting, like the concatenation baseline. 
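The training-target construction for CADec described above can be sketched as follows: with probability p = 0.5 the first-pass input is a translation sampled from the base model, and otherwise it is the reference with 20% of its tokens replaced by random vocabulary tokens. The sampling callback and the toy vocabulary are stand-in assumptions, not the actual base-model decoding.

```python
import random

def corrupt_reference(reference_tokens, vocab, noise_ratio=0.2):
    """Replace a fraction of reference tokens with random vocabulary tokens."""
    corrupted = list(reference_tokens)
    n_replace = max(1, int(round(noise_ratio * len(corrupted))))
    for idx in random.sample(range(len(corrupted)), n_replace):
        corrupted[idx] = random.choice(vocab)
    return corrupted

def first_pass_input(reference_tokens, sample_from_base_model, vocab, p=0.5):
    """Stochastically choose CADec's first-pass translation of the current
    sentence: a base-model sample (prob. p) or a corrupted reference."""
    if random.random() < p:  # Bernoulli(p) draw, p = 0.5 in the paper
        return sample_from_base_model()
    return corrupt_reference(reference_tokens, vocab)

# Toy usage with a hypothetical base-model sampler.
vocab = ["the", "cat", "sat", "on", "mat", "a"]
reference = ["the", "cat", "sat", "on", "the", "mat"]
dummy_sampler = lambda: ["a", "cat", "sat", "on", "a", "mat"]
print(first_pass_input(reference, dummy_sampler, vocab))
```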
Scores on the deixis, cohesion and ellipsis test sets are provided in Tables Figure At training time, CADec uses either a translation sampled from the base model or a corrupted reference translation as the first-pass translation of the current sentence. The purpose of using a corrupted reference instead of just sampling is to teach CADec to rely on the base translation and not to change it much. In this section, we discuss the importance of the proposed training strategy. Results for different values of p are given in Table 9. All models have about the same BLEU, not statistically significantly different from the baseline, but they are quite different in terms of incorporating context. The denoising positively influences almost all tasks except for deixis, yielding the largest improvement on lexical cohesion. In concurrent work, Automatic evaluation of the discourse phenomena we consider is challenging. For lexical cohesion, We analyze which phenomena cause otherwise good context-agnostic translations to be inconsistent when placed in the context of each other. Our human study on an English-Russian dataset identifies deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We create test sets focusing specifically on the identified phenomena. We consider a novel and realistic set-up where a much larger amount of sentence-level data is available compared to that aligned at the document level and introduce a model suitable for this scenario. We show that our model effectively handles contextual phenomena without sacrificing general quality as measured with BLEU despite using only a small amount of document-level data, while a naive approach to combining sentence-level and document-level data leads to a drop in performance. We show that the proposed test sets allow us to distinguish models (even though identical in BLEU) in terms of their consistency. To build context-aware machine translation systems, such targeted test sets should prove useful, for validation, early stopping and for model selection. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533-542, Brussels, Belgium. Association for Computational Linguistics. In this section we describe the process of constructing the test suites. English second person pronoun "you" may have three different interpretations important when translating into Russian: the second person singular informal (T form), the second person singular formal (V form) and second person plural (there is no T-V distinction for the plural from of second person pronouns). Morphological forms for second person singular (V form) and second person plural pronoun are the same, that is why to automatically identify examples in the second person polite form, we look for morphological forms corresponding to second person plural pronouns. To derive morphological tags for Russian, we use publicly available pymorphy2 Below, all the steps performed to obtain the test suite are described in detail. For each sentence we try to automatically find indications of using T or V form. Presence of the following words and morphological forms are used as indication of usage of T/V forms: 1. second person singular or plural pronoun, 2. verb in a form corresponding to second person singular/plural pronoun, 3. verbs in imperative form, 4. possessive forms of second person pronouns. 
For 1-3 we used morphological tags predicted by pymorphy2, for 4th we used hand-crafted lists of forms of second person pronouns, because pymorphy2 fails to identify them. The first rule is needed as morphological forms for second person plural and second person singular V form pronouns and related verbs are the same, and there is no simple and reliable way to distinguish these two automatically. The second rule is to exclude cases where there is only one appropriate level of politeness according to the relation between the speaker and the listener. Such markers include "Mr.", "Mrs.", "officer", "your honour" and "sir". For the impolite form, these include terms denoting family relationship ("mom", "dad"), terms of endearment ("honey", "sweetie") and words like "dude" and "pal". To construct contrastive examples aiming to test the ability of a system to produce translations with consistent level of politeness, we have to produce an alternative translation by switching the formality of the reference translation. First, we do it automatically: 1. change the grammatical number of second person pronouns, verbs, imperative verbs, 2. change the grammatical number of possessive pronouns. For the first transformation we use pymorphy2, for the second use manual lists of possessive second person pronouns, because pymorphy2 can not change them automatically. We manually correct the translations from the previous step. Mistakes of the described automatic change of politeness happen because of: 1. ambiguity arising when imperative and indicative verb forms are the same, 2. inability of pymorphy2 to inflect the singular number to some verb forms (e.g., to inflect singular number to past tense verbs), 3. presence of related adjectives, which have to agree with the pronoun, 4. ambiguity arising when a plural form of a pronoun may have different singular forms. A.1.5 Human annotation: are both polite and impolite versions appropriate? After the four previous steps, we have text fragments of several consecutive sentences with consistent level of politeness. Each fragment uses second person singular pronouns, either T form or V form, without nominal markers indicating which of the forms is the only one appropriate. For each group we have both the original version, and the version with the switched formality. To control for appropriateness of both levels of politeness in the context of a whole text fragment we conduct a human annotation. Namely, humans are given both versions of the same text fragment corresponding to different levels of politeness, and asked if these versions are natural. The answers they can pick are the following: 1. both appropriate, 2. polite version is not appropriate, 3. impolite version is not appropriate, 4. both versions are bad. The annotators are not given any specific guidelines, and asked to answer according to their intuition as a native speaker of the language (Russian). There are a small number of examples where one of the versions is not appropriate and not equally natural as the other one: 4%. Cases where annotators claimed both versions to be bad come from mistakes in target translations: OpenSubtitles data is not perfect, and target sides contain translations which are not reasonable sentences in Russian. These account for 1.5% of all examples. We do not include these 5.5% of examples in the resulting test sets. The process of creating the lexical cohesion test set consists of several stages: 1. find passages where named entities are translated consistently, 2. 
extract alternative translations for these named entities from the lexical table of Moses We look for infrequent words that are translated consistently in a text fragment. Since the target language has rich morphology, to verify that translations are the same we have to use lemmas of the translations. More precisely, we 1. train Berkeley aligner on about 6.5m sentence pairs from both training and held-out data, 2. find lemmas of all words in the reference translations in the held-out data using pymorphy2, 3. find words in the source which are not in the 5000 most frequent words in our vocabulary whose translations have the same lemma. For the words under consideration, we find alternative translations which would be (i) equally appropriate in the context of the remaining sentence and text fragment (ii) possible for the model to produce. To address the first point, we focus on named entities, and we assume that all translations of a given named entity seen in the training data are appropriate. After that, more than 90% of examples are translations of named entities (incl. names of geographical objects). We manually filter the examples with named entities. From the two previous steps, we have examples with named entities in context and source sentences and several alternative translations for each named entity. Then we 1. construct alternative translations of each example by switching the translation of instances of the named entity; since the target language has rich morphology, we do it manually, 2. for each example, construct several test instances. For each version of the translation of a named entity, we use this translation in the context, and vary the translation of the entity in the current sentence to create one consistent, and one or more inconsistent (contrastive) translation. We use the publicly available OpenSubtitles2018 corpus We use the tokenization provided by the corpus and use multi-bleu.perl Sentences were encoded using byte-pair encoding We follow the setup of Transformer base model We use regularization as described in The optimizer we use is the same as in
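The lemma-based consistency check used when building the lexical cohesion test set can be illustrated with pymorphy2 as follows. The idea that two Russian word forms count as the same translation if they share a lemma (normal form) is taken from the steps above; the helper name and the example word forms are assumptions for illustration.

```python
import pymorphy2

morph = pymorphy2.MorphAnalyzer()

def same_lemma(word_a, word_b):
    """True if two Russian word forms share a lemma (normal form).

    Used to decide whether two occurrences of a translation are consistent
    despite different morphological inflections.
    """
    lemma_a = morph.parse(word_a)[0].normal_form
    lemma_b = morph.parse(word_b)[0].normal_form
    return lemma_a == lemma_b

# Different case forms of the same noun count as consistent translations.
print(same_lemma("Москва", "Москве"))     # True  (nominative vs. dative/locative)
print(same_lemma("Москва", "Петербург"))  # False
```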
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. However, these approaches are fundamentally bounded by the correctness of explanations, which themselves are often noisy and inconsistent. In this work, we develop MAIEUTIC PROMPTING, which aims to infer a correct answer to a question even from the unreliable generations of an LM. MAIEUTIC PROMPTING induces a tree of explanations abductively (e.g. X is true, because ...) and recursively, then frames the inference as a satisfiability problem over these explanations and their logical relations. We test MAIEUTIC PROMPTING for true/false QA on three challenging benchmarks that require complex commonsense reasoning. MAIEUTIC PROMPTING achieves up to 20% better accuracy than state-of-the-art prompting methods, and as a fully unsupervised approach, performs competitively with supervised models. We also show that MAIEUTIC PROMPTING improves robustness in inference while providing interpretable rationales.
Following the remarkable success of few-shot prompting over large language models (e.g.
Input Prompt Q: Captain Kirk is part of Star Wars? A: Captain Kirk is a character in Star Trek. Therefore, the answer is False. Output A: There are female mayors. Therefore, the answer is True. Smoke is not the source of fire? Smoke is a result of fire. Therefore, the statement is False. One is a number that comes before zero? One is ... Therefore, the statement is True. One is a number that comes after zero? One is ... Therefore, the statement is True. Butterflies fly with 3 wings? Butterflies have 4 wings. Therefore, the statement is False. Butterflies have 2 wings on each side of their body. Therefore, the statement is False. ??? Figure Explanation-based prompting is intuitively motivated by the reasoning steps humans typically employ to solve a problem (Hausmann and To this end, we propose MAIEUTIC PROMPT- In a context of war, there's always a victor and a loser? False, because There can be cases where the loser is not clear. Depth-wise spanning Entail Contradict : 0.92, : 0.98 : 1.00, : 1.00, ... Defining the relations Inference Given a question Q, we generate maieutic tree consisting of abductive and recursive explanations, define the relations between them, and employ MAX-SAT to find the best truth-value assignments to the explanations and Q. ING, a novel few-shot inference method that infers a correct answer by enumerating a structure of explanations -possibly noisy and contradictory -and resolving them with a symbolic inference algorithm. Inspired by the maieutic method 2 of Socrates, MAIEUTIC PROMPTING induces the LM to generate abductive explanations for diverse hypotheses with deep recursive reasoning, then collectively eliminates the contradicting candidates, resulting in consistent answers. Figure To infer the answer for the original question, we quantify the strength of the LM's belief in each proposition and the logical relationships between propositions in the maieutic tree. We then employ the weighted MAX-SAT 2 Maieutic method brings out definitions implicit in the interlocutor's beliefs, ... is a method of hypothesis elimination, steadily identifying and eliminating those that lead to contradictions Our experiments show that the performance of MAIEUTIC PROMPTING exceeds that of all the fewshot prompting baselines (e.g., Chain of Thought; Our goal is to infer whether a given statement Q makes sense, i.e. inferring the truth value A of Q. Conventionally, this can be done through prompting an LM with the following two methods: Standard Prompting Let Q be a statement we want to infer the truth value of (i.e., either True or False). In standard few-shot prompting, the modelinferred answer  is defined as: where Explanation-based Prompting In explanationbased prompting, the inference process is factorized into two steps: Here, E denotes the explanation generated prior to inferring the answer label, and C = {(q 1 , e 1 , a 1 ), • • • , (q k , e k , a k )} includes k examples of questions, explanations and answers. Since marginalizing over all E is intractable, prior works resort to a sampling based approximation: where E ∼ pLM (E|Q, C) (3) In this section, we introduce MAIEUTIC PROMPT-ING, which performs inference over a maieutic tree of generated explanations. First, we introduce logical integrity, a key concept that is used to determine the reliability of propositions. 
Language models often generate logically inconsistent propositions; for instance, in Figure A statement is considered to be logically integral / True when condition 1 is met, and logically integral / False when condition 2 is met. Intuitively, the truth values of logically integral propositions are more credible than non-integral ones, to which LMs are inconsistent given a simple negation. For example, "One is a number that comes before zero." in Figure For the rest of section, we first search for logically integral propositions by constructing the maieutic tree (Section 3.1), then quantify the relations between the propositions (Section 3.2), based on which we infer the final answer (Section 3.3). Given a question, we require the LM to post-hoc rationalize both True and False labels. This abductive explanation generation has several advantages over an ad-hoc approach that first generates an explanation, then predicts the label. First, in the ad-hoc setting, the model is required to generate a discriminative explanation that helps in choosing one label over the other. Abductive generation Concretely, we define a function abduction which gets the statement Q as the input and outputs a tuple of two abductive explanations with True, False given as the answer, respectively: (5) Figure As shown in Figure To enhance the robustness of reasoning, we hypothesize that the inference process should entail not only the breadth of reasoning, but also the depth of reasoning -whether the reasoning paths themselves are credible and consistent with each other. To do this, we require the LM itself to validate its own generations -by recursively prompting the LM with the generated explanations. As Figure Let S i denote the set of nodes at depth i in the maieutic tree T . Each node in S i is an explanation for an answer label (True or False), recursively generated given its parent node as the question: Note that T is a full tree when the equality holds for all depths. For instance, in Figure In practice, we sample multiple explanations with the same Q and A through nucleus sampling Generating a full tree could be computationally expensive, as the number of generation grows exponentially with the maximum tree depth. Therefore, in each branch, we stop generating further once we reach a logically integral proposition; intuitively, this aligns with our goal to identify propositions that can be validated by the LM with confidence. Figure Now that we have generated the maieutic tree, we seek to define the relations between propositions and quantify their strength into scalar weights. For illustration, assume that an LM has generated the following E F for the given Q: The generation can be logically interpreted as follows: (1) the LM believes that Captain Kirk is a character in Star Trek, (2) the LM believes that the proposition Captain Kirk is a character in Star Trek can be a reason to deny that Captain Kirk is part of Star Wars. Accordingly, we define belief and consistency to represent the two dimensions of the logical relationship. Belief w E corresponds to the LM's belief that the proposition E is true (and therefore, ¬E is false). To quantify belief, we prompt the LM with E and ¬E respectively as a question, then comparing the probability assigned to True: Note that calculating this does not require any additional prompting, as we already gained access to these values while checking for the logical integrity of each proposition. 
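To make the logical-integrity check and the belief weight above more concrete, here is a minimal sketch. It assumes a hypothetical wrapper `prob_true(stmt)` around the few-shot prompted LM that returns P(True | stmt) under the standard prompt, and a naive placeholder `negate` for the textual negation of a statement; the belief normalisation is illustrative, not the paper's exact weighting scheme. The same `logical_integrity` test is also what terminates the depth-wise expansion of the maieutic tree.

```python
# Sketch of logical integrity and belief, assuming a hypothetical
# `prob_true(stmt)` that queries the prompted LM for P(True | stmt).
def negate(stmt):
    # naive placeholder for textual negation
    return "It is not true that " + stmt[0].lower() + stmt[1:]

def logical_integrity(stmt, prob_true):
    """Return True / False if the LM answers consistently on stmt and its
    negation (logically integral with that value), else None."""
    p, p_neg = prob_true(stmt), prob_true(negate(stmt))
    if p > 0.5 and p_neg < 0.5:
        return True
    if p < 0.5 and p_neg > 0.5:
        return False
    return None

def belief(stmt, prob_true):
    """w_E: how strongly the LM believes E over not-E; reuses the two
    probabilities already computed for the integrity check.
    (Illustrative normalisation; the paper's exact formula is not reproduced.)"""
    p, p_neg = prob_true(stmt), prob_true(negate(stmt))
    return p / (p + (1.0 - p_neg) + 1e-9)
```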
Consistency w E,Q,A corresponds to the consistency of the generated E with the given Q and A. Intuitively, if the LM is logically consistent, the likelihood of E being generated given an answer (e.g., E F being generated given False) should be larger than its likelihood given the opposite answer (e.g., E F being generated given True). Following this intuition, we compute the consistency as: The two types of relations formulate a set of unary and binary logical constraints, based on which we assign the truth values to all nodes in the maieutic tree T , and in consequence, infer the answer to the original question. First, we represent C blf as the set of unary constraints. For each leaf node E in T , Note that all the leaf nodes in T are logically integral, hence we can count on the credibility of belief for these nodes. We now define the set of all belief constraints C blf as: For example, the nodes E F and E T F in Figure Likewise, for consistency, we define C con as the set of binary constraints using logical implication. For each edge (E l , E lA ) in T , Our objective is to assign the truth values for all Es and the root node Q in T , such that we maximize which sums up the weights of satisfied constraints. This problem is naturally formulated as weighted MAX-SAT, which is a problem of determining truth values of variables that maximize the weight of satisfied clauses. The problem can be algorithmically solved using an off-the-shelf solver. One limitation of the consistency definition in Section 3.2 is that it only considers the relationship between a parent node and a child node. Since the definition builds upon the likelihood of each generation from an LM, we cannot take into account the relationships across branches, e.g. E T and E F in Figure For all pairs of nodes For NLI-based clauses, we fix the weights to 1. Datasets We evaluate MAIEUTIC PROMPTING on three commonsense reasoning and fact verification benchmarks in binary QA format: Com2Sense Baselines We compare our method with both the few-shot prompting methods and supervised models. Along with the standard prompting, we include Chain of Thought For supervised models, we consider the strong baselines used for the respective dataset, such as T5 Configuration Details For all prompting methods, we use the same set of 6 demonstration examples and the same version of GPT-3 (text-davinci-001) as the LM. We determine the hyperparameters of MAIEUTIC PROMPTING and baselines based on the dev set performance on the benchmarks. In maieutic tree generation, we set the maximum depth to 2. For depth 1, we use nucleus sampling (p = 1.0) Table We perform additional analyses to understand the working of our method under semantic perturbations and different prompt formats. Robustness to semantic perturbations In addition to the standard accuracy, we report two additional metrics called pairwise accuracy and contrast set accuracy in Table We ablate different components of MAIEUTIC PROMPTING to investigate their respective contributions as shown in Table Generation First, we consider MAIEUTIC PROMPTING without abductive generation -we generate each explanation without providing an answer label, i.e. in an identical fashion to Chain of Thought. In this setting, the performance of MAIEUTIC PROMPTING degrades by 4%, alluding to the importance of abductive generation in eliciting the latent knowledge from LM. 
Next, we ablate the depth-adaptive decoding mechanism (Section 4), by applying either greedy decoding or nucleus sampling for all depths of the maieutic tree. All greedy decoding restrains width-wise spanning of knowledge, hence leads to large degradation of performance. All nucleus sampling performs much more comparably with our best configuration, although the stochastic decoding produces slightly more errors in the explanations. To minimize subjectivity, we use a strict 3-level scale, where annotators choose All only when all the statements in the true Es are desirable (e.g. grammatical) on its own, Mixed when at least one E is undesirable, and None otherwise. Consistency We ablate the NLI-based clauses and replace them with the original C con discussed in Section 3.2. With the likelihood-based C con , the accuracy reduces by about 7%, but still prevails over the prompting baselines in Table Effect of tree size We also investigate how the size of the maieutic tree influences the performance. In Table We qualitatively analyze actual inference results of MAIEUTIC PROMPTING through human evaluation. For each sample, we first retrieve true Es (the set of generated Es that are inferred to be True by MAIEUTIC PROMPTING), then evaluate them over the four criteria from : War cannot have a tie. : In order for one side to win a war, the other side must lose. : In the context of a war, there is always a victor and a loser. : In any conflict there is a winner and a loser. : There can be cases where both sides claim victory or where the loser is not clear. : Historically there have been many wars where no victor was declared. : The Korean War ended in a military armistice, meaning that the war ended in a draw and neither side could claim victory. True s : , , , , Ground-Truth : : In football, the top division almost always contains the same clubs. : The Football League is a hierarchical organization with a promotion and relegation system between its member clubs. : There is little movement of clubs between football's top division, known as the Premier League, and the second division, known as the Championship. : There is a high level of parity between clubs in the Premier League and the Championship. : There are many teams that change divisional placements from one year to the next. : There are many teams that get relegated (move down a division) in football. Inferred Answer : False True 50 were answered correctly (Set 1) and 50 were answered wrongly by the model (Set 2). Figure Prior works have leveraged natural language explanations (NLEs) to promote model reasoning, either by training a model to explain Meanwhile, recent observations reveal that LM explanations are unreliable, as they often lack logical consistency and are not factually grounded Another line of works apply symbolic methods on top of LMs to improve their consistency, spanning from a lexical constraint on sequence decoding In this work, we propose MAIEUTIC PROMPTING, a novel few-shot inference method inspired by the Socratic way of conversation. We systematically generate a tree of explanations that bear logical relations between each other, then find the truth values that max-satisfy these relations. Empirical results show that MAIEUTIC PROMPTING is both competitive and robust compared to diverse baselines, while providing intrinsic interpretations over its inference. Extension to different task formats In this work, we limit our experiments to validating a given statement. 
In future works, we aim to extend our method over a broader range of tasks, e.g. multiple-choice QA. A potential strategy could be binarizing multiple-choice options to respective statements and scoring them with MAIEUTIC PROMPTING, e.g. using the sum of weight of satisfied clauses from MAX-SAT. Modeling relationships between trees MAIEU-TIC PROMPTING models the relations between the nodes in each maieutic tree to infer a consistent answer. The scope of modeled relationships, however, could be further generalized beyond a single tree -a span of knowledge generated for one question could serve as the evidence for another question. Indeed, modeling the relationship between questions is an active area of research : A city is a place where many people live. E T 2 T 0 : A city is a place where people live and work. E T 2 F 0 : A city will have residents who have permanent addresses and commuters who have temporal addresses.
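To make the MAX-SAT inference step from Section 3.3 concrete, the toy encoding below uses the python-sat (PySAT) RC2 solver. The variable numbering, the belief and consistency weights, and the edge structure are illustrative assumptions for a two-explanation tree, not values produced by the actual model; weights are scaled to positive integers because RC2 expects integer clause weights.

```python
# Toy weighted MAX-SAT encoding of a maieutic tree with root Q and two
# abductive explanations E_T (generated for True) and E_F (generated for False).
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

Q, E_T, E_F = 1, 2, 3                       # propositional variables

def to_int(w, scale=100):
    return max(1, int(round(w * scale)))    # RC2 expects positive integer weights

wcnf = WCNF()
# belief constraints (unary, on logically integral leaf nodes)
wcnf.append([E_T], weight=to_int(0.92))
wcnf.append([E_F], weight=to_int(0.98))
# consistency constraints (binary): an explanation generated for label A
# implies its parent takes value A, i.e. E_T -> Q and E_F -> not Q
wcnf.append([-E_T, Q], weight=to_int(0.71))
wcnf.append([-E_F, -Q], weight=to_int(0.88))

with RC2(wcnf) as solver:
    model = solver.compute()                # e.g. [-1, 2, 3]: Q assigned False
    answer = Q in model                     # True iff Q is assigned True
    print(model, "->", answer)
```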
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data. A famous result by In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation. This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications. An IBFP Elman-RNN is finite state. But what about other RNN variants? In particular, we consider the Elman RNN (SRNN) The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust. The LSTM and GRU are often considered as almost equivalent variants of each other. We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot. This makes the LSTM a variant of a k-counter machine These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs. In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure. Figure 1 Is the ability to perform unbounded counting relevant to "real world" NLP tasks? In some cases it might be. For example, processing linearized parse trees
An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t-1 and returns a state vector h t : The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1). Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means). Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ). An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1. Typically, f is a log-linear classifier or multi-layer perceptron. We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. In the the function R takes the form of an affine transform followed by a tanh nonlinearity: Elman-RNNs are known to be at-least finitestate. IRNN The IRNN model, explored by The computational power of such RNNs (given infinite precision) is explored in Gated Recurrent Unit (GRU) In the GRU Where σ is the sigmoid function and • is the Hadamard product (element-wise product). Long Short Term Memory (LSTM) In the LSTM where g can be either tanh or the identity. Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation. Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0. This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values. Thus, all the above RNNs can recognize finitestate languages. Power beyond finite state can be obtained by introducing counters. Counting languages and kcounter machines are discussed in depth in SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM). A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0). Informally, In what follows, we consider the effect on the state-update equations on a single dimension, h t [j]. We omit the index [j] for readability. LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters. In non-counting steps, set i t = 0, f t = 1 through equations (8-9). In counting steps, the counter direction (+1 or -1) is set in ct (equation 11) based on the input x t and state h t-1 . The counting itself is performed in equation ( Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0. 3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice. SRNN The finite-precision SRNN cannot designate unbounded counting dimensions. The SRNN update equation is: By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t However, this counting behavior is within a tanh activation. Theoretically, this means unbounded counting cannot be achieved without infinite precision. Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region. 
While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge. IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols. This requires representing each counter as two diand implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0. Indeed, Appendix A in Relation to known architectural variants: Adding peephole connections ReLU activation more powerful than IBFP-RNN with a squashing activation. Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem. GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension. The tanh in equation ( Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot. This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU. Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation? We show empirically that: 1. LSTMs can be trained to recognize a n b n and a n b n c n . 2. These LSTMs generalize to much higher n than seen in the training set (though not infinitely so). 3. The trained LSTM learn to use the perdimension counting mechanism. 4. The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain. 5. Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n . Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n . For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50. Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows. On a n b n c n the LSTM recognizes well until n = 100. It then starts accepting also a n b n+1 c n . At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows. The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63. Figure Finally, we created 1000-sample test sets for each of the languages. For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(-2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(-2, 2). The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively. All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement "unbounded" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations. We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice. 
This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages. The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN. We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice. We leave further exploration of this question for future work.
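As a toy, hand-wired illustration (no training involved) of the LSTM counting construction described above: one cell of c_t serves as a counter of #a − #b, with the gates saturated to i_t = f_t = o_t = 1 on counting steps and the candidate cell value set to +1 for 'a' and −1 for 'b'. The acceptance check is a simplification that only performs COMP0 at the end of the word.

```python
# Hand-set single-cell "LSTM" counter for a^n b^n; weights are chosen by hand
# for illustration, not learned.
def lstm_counter(word):
    c, h = 0.0, 0.0
    for ch in word:
        i_t, f_t, o_t = 1.0, 1.0, 1.0         # saturated gates on counting steps
        c_tilde = 1.0 if ch == "a" else -1.0  # counting direction chosen from the input symbol
        c = f_t * c + i_t * c_tilde           # c_t = f_t * c_{t-1} + i_t * ~c_t
        h = o_t * c                           # counter exposed through h_t (g = identity)
    return h

def counter_says_anbn(word):
    # COMP0 at the end of the word; a finite-state component would additionally
    # have to verify that all a's precede all b's.
    return lstm_counter(word) == 0.0

print(counter_says_anbn("a" * 500 + "b" * 500),   # True: counting is exact for any n
      counter_says_anbn("a" * 500 + "b" * 499))   # False
```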
Evaluating Explanation Methods for Neural Machine Translation
Recently, many efforts have been devoted to interpreting black-box NMT models, but little progress has been made on metrics to evaluate explanation methods. Word Alignment Error Rate can be used as such a metric that matches human understanding; however, it cannot measure explanation methods on those target words that are not aligned to any source word. This paper thereby makes an initial attempt to evaluate explanation methods from an alternative viewpoint. To this end, it proposes a principled metric based on fidelity with regard to the predictive behavior of the NMT model. As the exact computation of this metric is intractable, we employ an efficient approach as its approximation. On six standard translation tasks, we quantitatively evaluate several explanation methods in terms of the proposed metric and reveal some valuable findings for these explanation methods in our experiments.
Neural machine translation (NMT) has witnessed great success during recent years Generally speaking, we recognize two orthogonal dimensions for evaluating the explanation methods: i) how much the pattern (such as source words) extracted by an explanation method matches human understanding on predicting a target word; or ii) how the pattern matches predictive behavior of the NMT model on predicting a target word. In terms of i), Word Alignment Error Rate (AER) can be used as a metric to evaluate an explanation method by measuring agreement between human-annotated word alignment and that derived from the explanation method. However, AER can not measure explanation methods on those target words that are not aligned to any source words according to human annotation. In this paper, we thereby make an initial attempt to measure explanation methods for NMT according to the second dimension of interpretability, which covers all target words. The key to our approach can be highlighted as fidelity: when extracting the most relevant words with an explanation method, if those relevant words have the potential to construct an optimal proxy model that agrees well with the NMT model on making a translation decision, then this explanation method is good ( §3). To this end, we formalize a principled evaluation metric as an optimization problem over the expected disagreement between the optimal proxy model and the NMT model( §3.1). Since it is intractable to exactly calculate the principled metric for a given explanation method, we propose an approximate metric to address the optimization problem. Specifically, inspired by statistical learning theory We apply the approximate metric to evaluate four explanation methods including attention This paper makes the following contributions: • It presents an attempt at evaluating the explanation methods for neural machine translation from a new viewpoint of fidelity. • It proposes a principled metric for evaluation, and to put it into practice it derives a simple yet efficient approach to approximately calculate the metric. • It quantitatively compares several different explanation methods and evaluates their effects in terms of the proposed metric.
Suppose Most NMT literature models the following conditional probability P (y | x) in an encoder-decoder fashion: where y <t = {y 1 , • • • , y t-1 } denotes a prefix of y with length t -1, and s t is the decoding state vector of timestep t. In the encoding stage, the encoder of a NMT model transforms the source sentence x into a sequence of hidden vectors h = In the decoding stage, the decoder module summarizes the hidden vectors h and the history decoding states s <t = {s 1 , • • • , s t-1 } into the decoding state vector s t . In this paper, we consider two popular NMT translation architectures, RNN-SEARCH where Attn is the attention function, which is defined as follows: where q and v i are vectors, e is a similarity function over a pair of vectors and α is its normalized function. Different from RNN-SEARCH, which relies on RNN, TRANSFORMER employs an attention network to define h, and two additional attention networks to define s t as follows: (4) In this section, we describe several popular explanation methods that will be evaluated with our proposed metric. Suppose c t = y <t , x denotes the context at timestep t, w (or w ) denotes either a source or a target word in the context c t . According to Attention Since To interpret RNN-SEARCH and TRANS-FORMER, we define different φ for them based on attention. For RNN-SEARCH, since attention is only defined on source side, φ(w; y, c t ) can be defined only for the source words: where α is the attention weight defined in Eq.(3), and s t-1 is the decoding state of RNN-SEARCH defined in Eq.(2). In contrast, TRANSFORMER defines the attention on both sides and thus φ(w; y, c t ) is not constrained to source words: where s t-1 and s t+ 1 2 are defined in Eq.( Gradient Different from attention that is restricted to a specific family of networks, the explanation methods based on gradient are more general. Suppose g(w, y) denotes the gradient of P (y | c t ) w.r.t to the variable w in c t : where ∂w denotes the gradient w.r.t the embedding of the word w, since a word itself is discrete and can not be taken gradient. Therefore, g(w, y) returns a vector with the same shape as the embedding of w. In this paper, we implement two different gradient-based explanation methods and derive different definitions of φ(w; y, c t ) as follows. • Gradient Norm • Weighted Gradient It is worth noting that for each sentence x, y , one has to independently calculate ∂P (y|ct) ∂w for each timestep t. Therefore, one has to calculate |y| times of gradient for each sentence. In contrast, when training NMT, one only requires calculating sentence level gradient and it only calculates one gradient thanks to gradient accumulation in back propagation algorithm. Prediction Difference where P (y | c t ) is the NMT probability of y defined in Eq.( The key to our metric is described as follow: to define an explanation method φ good enough in terms of our metric, the relevant words selected by φ from the context c t should have the potential to construct an optimal model that exhibits similar behavior to the target model P (y | c t ). To formalize this metric, we first specify some necessary notations. Assume that f (c t ) is the target word predicted by P (y | c t ), i.e., f (c t ) = arg max y P (y | c t ). In addition, let W k φ (c t ) be the top-k relevant words on the source side and target side of the context c t : where ∪ denotes the union of two sets, and top k w∈x φ(w; f (c t ), c t ) returns words corresponding to the k largest φ values. 
) is a proxy model that makes a translation decision on top of W k φ (c t ) rather than the entire context c t like a standard NMT model. Formally, we define a principled metric as follows: Definition 1 The metric of φ is defined by where E ct [•] denotes the expectation with respect to the data distribution of c t , and Q is minimized over all possible proxy models. The underlying idea of the above metric is to measure the expectation of the disagreement between an optimal proxy model Q constructed from φ and the NMT model P . Here the disagreement is measured by the minus log-likelihood of Q over the data Definition of Fidelity The metric of φ actually defines fidelity by measuring how much the optimal proxy model defined on W k φ (c t ) disagrees with P (y | c t ). The mention of fidelity is widely used in model compression Generally, it is intractable to exactly calculate the principled metric due to two main challenges. On one hand, the real data distribution of c t is unknowable, making it impossible to exactly define the expectation with respect to an unknown distribution. On the other hand, the domain of a proxy model Q is not bounded, and it is difficult to minimize a model Q within an unbounded domain. Empirical Risk Minimization Inspired by the statistical learning theory (Vapnik, 1999), we calculate the expected disagreement over c t by a twostep strategy: we minimize the empirical risk to obtain an optimized θ for a given Q; and then we estimate the risk defined on a held-out test set by using the optimized θ. In this way, we cast the principled metric into a standard machine learning task. For a given model architecture Q, to optimize θ, we first collect the training set as } for each sentence pair x, y at every time step t, where x, y is a sentence pair from a given bilingual corpus Then we optimize θ by the empirical risk minimization: Proxy Model Selection In response to the second challenge of the unbounded domain, we define a surrogate distribution family Q, and then approximately calculate Eq.( We consider three different proxy models including multi-layer feedforward network (FN), recurrent network (RN) and self-attention network (SA). In details, for different networks ∈ {FN, RN, SA}, the proxy model Q is defined as follows: where s t is the decoding state regarding different architecture . Specifically, for feedforward network, the decoding state is defined by For ∈ {RN, SA}, the decoding state s t is defined by where x and ỹ are source and target side words from W k φ (c t ), s 0 is the query of init state, h is the position-aware representations of words, generated by the encoder of RN or SA as defined in Eq.(3) and Eq.( 11: end for 12: end for 13: Return min standard process of addressing a machine learning problem, Algorithm 1 summarizes the procedure to approximately calculate the metric of φ on the test dataset D test , which returns the preplexity (PPL) on FW test . In this section, we conduct experiments to prove the effectiveness of our metric from two viewpoints: how good an explanation method is and which explanation method is better than others. Datasets We carry out our experiments on three standard IWSLT translation tasks including IWSLT14 De⇒En (167k sentence pairs), IWSLT17 Zh⇒En (237k sentence pairs) and IWSLT17 Fr⇒En (229k sentence pairs). All these datasets are tokenized and applied BPE (Byte-Pair Encoding) following NMT Systems To examine the generality of our evaluation method, we conduct experiments on two NMT systems, i.e. 
RNN-SEARCH (denoted by RNN) and TRANSFORMER (denoted by Trans.), both of which are implemented with fairseq Explanation Methods On both NMT systems, we implement four explanation methods, i.e. Attention (ATTN), gradient norm (NGRAD), weighted gradient (WGRAD), and prediction difference (PD) as mentioned in Section §2. Our metric We implemented five instantiations of the proposed metric including FN, RN, SA, Comb, and Baseline (Base for brevity) as presented in section §3.3. To configurate them, we adopt the same settings from NMT systems to train SA and RN. FN is implemented with feeding the features of bag of words through a 3-layer fully connected network. As given in algorithm 1, the approximate fidelity is estimated through Q with the lowest PPL, therefore the best metric is that achieves the lowest PPL since it results in a closer approximation to the real fidelity. In this subsection, we first conduct experiments and analysis on the IWSLT De⇒En task to configurate fidelity-based metric and then extend the experiments to other IWSLT tasks. We calculate PPL on the IWSLT De⇒En dataset for four metric instantiations (FN, RN, SA, Comb) and Baseline (Base) with k = 1 to extract the most relevant words. Table Density of generalizable rules To understand possible reasons for why one explanation method is better under our metric, we make a naive conjecture: when it tries to reveal the patterns that the well-trained NMT has captured, it extracted more concentrated patterns. In other words, a generalized rule W k φ (c t ) → f (c t ) from one sentence pair can often be observed among other examples. To measure the density of the extracted rules, we first divide all extracted rules into five bins according to their frequencies. Then we collect the number of rules in each bin as well as the total number of rules. Table In Table Testing on other scenarios In the previous experiments, our metric instantiations are trained and evaluated under the same scenario, where c t used to extract relevant words is obtained from gold data and its label f (c t ) is the prediction from NMT f , namely Teacher Forcing Decode. To examine the robustness of our metric, we apply the trained metric to two different scenarios: real decoding scenario (Real-Decode) where both c t and its label f (c t ) are from the NMT output; and golden data scenario (Golden-Data) where both c t and its label are from golden test data. The results for both scenarios are shown in Table From Table Since our metric such as SA requires to extract generalized rules for each explanation method from the entire training dataset, it is computationally expensive for some explanation methods such as gradient methods to directly run on WMT tasks with large scale training data. four explanation methods remains unchanged with respect to different sample sizes. Secondly, with the increase of the sample size, the metric score decreases slower and slower and there is no significant drop from sampling 2 million sentence pairs to sampling 1 million. Results on WMT With the analysis of effects on various sample sizes, we choose a sample size of 1 million for the following scaling experiments. 
The PPL results for WMT De⇒En , Zh⇒En ,and Fr⇒En are listed in Table Figure Since the calculation of the Alignment Error Rate (AER) requires manually annotated test datasets with ground-truth word alignments, we select three different test datasets contained such alignments for experiments, namely, IWSLT Zh⇒En , NIST05 Zh⇒En Table In recent years, explaining deep neural models has been a growing interest in the deep learning community, aiming at more comprehensible and trustworthy neural models. In this section, we mainly discuss two dominating ways towards it. One way is to develop explanation methods to interpret a target black-box neural network The other way is to construct an interpretable model for the target network and then indirectly interpret its behavior to understand the target network on classification tasks With the increasing efforts on designing new explanation methods, yet there are only a few works proposed to evaluate them. This paper has made an initial attempt to evaluate explanation methods from a new viewpoint. It has presented a principled metric based on fidelity in regard to the predictive behavior of the NMT model. Since it is intractable to exactly calculate the principled metric for a given explanation method, it thereby proposes an approximate approach to address the minimization problem. The proposed approach does not rely on human annotation and can be used to evaluate explanation methods on all target words. On six standard translation tasks, the metric quantitatively evaluates and compares four different explanation methods for two popular translation models. Experiments reveal that PD, NGRAD, and ATTN are all good explanation methods that are able to construct the NMT model's predictions with relatively low perplexity and PD shows the best fidelity among them.
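Two small PyTorch sketches follow to make the gradient-based relevance scores and the FN proxy evaluation above more concrete. The toy model, vocabulary sizes, and data are placeholders standing in for the real NMT system and corpora, and the exact NGRAD/WGRAD definitions and proxy training details are not reproduced; these are illustrative instantiations only.

```python
# Gradient-based relevance for one decoding step: take the gradient of
# log P(y | c_t) w.r.t. each context word embedding, then reduce it either
# by its norm ("gradient norm") or by a dot product with the embedding
# itself ("weighted gradient"). The tiny model is a stand-in for the NMT system.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim = 20, 8
embed = torch.nn.Embedding(vocab, dim)
out_proj = torch.nn.Linear(dim, vocab)            # toy "NMT" prediction head

context_ids = torch.tensor([3, 7, 5, 11])         # source + target-prefix words in c_t
target_id = 9                                     # the predicted word y

emb = embed(context_ids).detach().requires_grad_(True)   # leaf tensor so .grad is populated
logits = out_proj(emb.mean(dim=0))                # toy aggregation of the context
log_p_y = F.log_softmax(logits, dim=-1)[target_id]
log_p_y.backward()

grad = emb.grad                                   # one gradient vector per context word
gradient_norm = grad.norm(dim=-1)                 # phi_NGRAD(w): ||g(w, y)||
weighted_gradient = (grad * emb).sum(dim=-1)      # phi_WGRAD(w): g(w, y) . emb(w)
print(gradient_norm, weighted_gradient)
```

The FN instantiation of the metric feeds a bag-of-words encoding of the top-k relevant words through a 3-layer fully connected network to predict the NMT model's decision f(c_t); exp(average NLL) on held-out data then approximates the fidelity metric. The sketch below omits the empirical-risk-minimisation training loop (standard cross-entropy training) and uses placeholder sizes.

```python
import torch
import torch.nn as nn

vocab = 1000                                      # joint source/target vocabulary (placeholder)
fn_proxy = nn.Sequential(
    nn.Linear(vocab, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, vocab),
)

def bag_of_words(word_ids):
    x = torch.zeros(vocab)
    x[word_ids] = 1.0
    return x

def proxy_ppl(proxy, test_pairs):
    """test_pairs: list of (top-k word ids W_k, NMT prediction f(c_t))."""
    loss_fn = nn.CrossEntropyLoss()
    with torch.no_grad():
        nll = torch.stack([
            loss_fn(proxy(bag_of_words(w_k)).unsqueeze(0), torch.tensor([y]))
            for w_k, y in test_pairs
        ]).mean()
    return torch.exp(nll).item()                  # perplexity of the proxy on the held-out set
```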
SARAL: A Low-Resource Cross-Lingual Domain-Focused Information Retrieval System for Effective Rapid Document Triage
With the increasing democratization of electronic media, vast information resources are available in less-frequently-taught languages such as Swahili or Somali. That information, which may be crucially important and not available elsewhere, can be difficult for monolingual English speakers to effectively access. In this paper we present SARAL, an end-to-end cross-lingual information retrieval (CLIR) and summarization system for low-resource languages that 1) enables English speakers to search foreign-language repositories of text and audio using English queries, 2) summarizes the retrieved documents in English with respect to a particular information need, and 3) provides complete transcriptions and translations as needed. The SARAL system achieved the top end-to-end performance in the most recent IARPA MATERIAL CLIR+summarization evaluations.
The task of searching for a needle of relevant information in a haystack of documents is not as daunting as in previous eras, thanks to decades of information retrieval research progress. Most of us engage in this behavior daily when we search the web. Powerful IR algorithms choose the most likely matches for our queries, but humans also play a crucial role: we are typically presented with a list of ranked results, accompanied by small snippets of relevant content, and we make the final decision with this information in hand. Unfortunately, when the information content is in a language the searcher does not understand, serious challenges can arise. This is the problem of cross-lingual information retrieval (CLIR), and there are several straightforward approaches to this problem, many of which have been well-studied. One can translate queries into the language of the search corpus before matching, or conversely translate the documents into the language of the query. Both approaches naturally rely on the availability of good-quality translation, which improves as more parallel data is available. Thus, CLIR may be adequate when the languages are English, French, Spanish, etc., but will be less effective for lowerresourced languages such as Swahili or Somali. Moreover, the crucial role played by humans in triaging results is complicated in a low-resource cross-lingual setting, since the system must somehow present the user with the context for its retrieval, e.g. an English speaker with the context for a Swahili document. But if the quality of the machine translation (MT) is too poor, just showing the surrounding text (à la Google) will be insufficiently helpful. This problem is exacerbated when the original source is audio transcribed by a low-resource automatic speech recognition (ASR) model, since ASR errors will propagate through MT. In this paper we present SARAL (Summarization and domain-Adaptive Retrieval Across Languages 1 ), an end-to-end system that addresses these challenges. SARAL operates over both text and audio input documents from a diverse set of genres (e.g. news, conversational speech, etc.), answering user queries by summarizing the retrieved documents in English with respect to a user's particular information need. Requests can be expressed as a combination of a query phrase (e.g. foreign investments) and a set of one or more desired document domains (e.g. Health or Military). The SARAL system achieved the top end-to-end performance in the most recent CLIR+summarization evaluations conducted by 1
) is a Hindi word which can be translated as ingenious or simple, depending on the relevant context. 1. SEARCHER, a novel CLIR approach designed for low-resource conditions that relies on the construction of a shared semantic space learned from bitext and monolingual corpora 2. An intuitive snippet extraction and presentation design which has been shown in human studies to provide readers with sufficient evidence to filter out erroneous query matches and preserve good ones, even in low-resource conditions 3. The entire operable SARAL system itself, an end-to-end CLIR and summarization system that combines SEARCHER and traditional IR techniques and applies them to text and speech documents in low-resource languages An example of the user interface is shown in Figure We transcribe audio data using two systems developed for SARAL by Idiap and ISI. The Idiap system trains 3 Kaldi-based LF-MMI models with a CNN-BLSTM architecture, with targets derived from alignments produced by HMM/GMM models. The first model is trained with standard data augmented by perturbing audio speeds, the second with data augmented by adding noise and then speed perturbation, and the third with bottleneck features extracted from a multilingual system (Tagalog, Swahili, Zulu, Turkish and Somali). The three systems are then fused by stacking lattices and minimum Bayes Risk (MBR) rescoring. The ISI system uses eight Kaldi-based end-to-end LF-MMI trained TDNN-F grapheme acoustic models. Audio data is decoded with each of the models with a trigram LM, followed by rescoring with an RNN-LM to generate lattices. Similar to the Idiap system, the final transcript is generated by stacking lattices from these models, followed by MBR rescoring on the composite lattice. Based on performance on a development set, we use the Idiap system for conversational speech and the ISI system for topical and news broadcasts. All models are trained with 40 hours of the transcribed audio provided in the MATERIAL program, as well as ∼500hrs of YouTube data used for unsupervised training. For Somali, language models use ∼320M words, primarily composed of webcrawl data (∼230M words) and the so16 Somali Web Corpus (∼70M words); for Swahili, they use ∼100M words of webcrawl data. For comparison, a highresource language would typically be trained with thousands of hours of speech and a language model generated from more than a billion words of data. Our low-resource MT architecture is a system combination (Heafield and Lavie, 2010) of a Transformer-based neural model We employ a combination of two approaches to cross-lingual information retrieval. The first relies on term-level matching in both the original document and its machine translation(s). Sourcelanguage matching is mediated via translation tables derived from the word alignments used by our syntax-based MT system. Terms are expanded using transformations of varying expected accuracy, e.g. stemming, WordNet transformations Our second approach, SEARCHER (Shared Embedding ARCHitecture for Effective Retrieval), maps both queries and documents into a shared embedding space and performs retrieval there, rather than relying on translation of either the document or the query terms. However, during development, we found that standard cross-lingual embeddings derived from monolingual corpora, even when aligned using sophisticated transformation techniques (e.g. To obtain sufficient precision, we train a proxy task based on sentence relevancy. 
Here, a sentence S is considered responsive to a query q if at least one plausible translation of S contains the term q. Training samples are derived from parallel corpora. Sample queries are drawn from the English side, with their corresponding foreign-language sentences as positive examples and other randomlydrawn foreign-language sentences as negative examples. The SEARCHER model consists of a convolutional encoder (similar to The New York Times Annotated corpus The goal of summarization is to concisely explain, in English, a particular document's relevance to a query. Our primary approach highlights in blue those terms ranked most highly by our CLIR and displays them in a fixed-context window. Semantically related words are colored in lighter blue, as with tension in Figure The primary barrier to providing accurate summaries is poor MT quality. Even if an exact match is highlighted, the context may be so garbled that a reader is unable to label it as a reliably relevant match. To mitigate this, we provide additional context for the MT system's decisions, specifically the set of options the system considers when producing word(s) matching the query. For instance, consider a summary for back injuries. If the word back was translated from the Swahili word mgongo, we might show alternate translations spine, backbone, and spinal, reassuring the reader that the translation of back is correct and of the appropriate word sense. In contrast, if the word was originally translated from kurejea, we would present alternative translations return, returning, referring, leading the reader to correctly identify a false alarm. For the purposes of summarization, we provide this kind of information via footnotes (see Figure We generate summaries for domains using the n-grams extracted for domain classification (Section 2.4). We identify these n-grams in an English machine translation of a document and create multiple candidate display windows of varying size for each. We then employ a greedy search to select and merge such windows to (a) include as much domain-relevant information as possible (a function of both the number of domain-relevant terms and their quality), (b) present exactly as much context as is necessary to make the terms understandable, and (c) avoid redundancy / prefer diversity. When presenting summaries to the user, we highlight domain-relevant terms in blue, with the shade intensity indicating the strength of its relevance to the domain. A sample summary for the Law and Order domain is shown in Figure SARAL's user interface allows users to search for a single English query phrase. Following the most common practice of the MATERIAL program, we focus on direct cross-lingual search rather than conceptual expansion. So, for the query vaccine, synonyms (e.g. immunization) and morphological variations (e.g. vaccinated) would be considered responsive, but a sentence generically discussing methods for the prevention of the flu would not. (Users may also opt to exclude morphological variations.) Users also select the target language and optionally restrict to either text or audio documents. In the MATERIAL program, queries typically require exactly one domain. However, a user's interests might extend to more than one domain at a time. We therefore allow the user to select multiple domains; any document that matches at least one domain of interest is allowed to be returned as relevant. 
To avoid crowding the screen when a document is relevant to multiple domains, we show instead, for each document, a bar graph displaying the relative strength of each domain that the system identified as being potentially represented in a document. Clicking on the Why? button next to a domain displays the evidence that the system found for that domain, i.e. the domain-specific summary, as shown in Figure For the purposes of the demonstration, we restrict query summaries to 50 words, keeping them comfortably at the top of the page and quickly gistable. We allow 80 words for each domain summary, enough to provide convincing evidence without being too verbose to skim quickly. Finally, we provide full access to each source document (original text or audio; if audio, we also provide the automatically-generated transcription) and an English machine translation, for the user who wants to dig deeper into the context of a response. A small excerpt is shown in Figure It is simple to add a new language to the system. In a recent exercise, we brought up an end-to-end system in Lithuanian in three days using the speech and parallel text resources provided by the MATE-RIAL program; this required only a few hours of actual human effort. The two largest bottlenecks for improved performance over the three-day system are data collection (scraping monolingual data from the web to improve ASR language models) and ASR model training. With ten days, we were able to bring up a significantly improved ASR system in Lithuanian; with more efficient use of compute resources (e.g. parallelizing the web scraping), this time could be significantly reduced. The Phase 1 MATERIAL evaluation was performed on a corpus of ∼15K Somali documents annotated for relevance for 1,000 queries by native speakers. The official evaluation metric is AQWV (Average Query Weighted Value), End-to-end AQWV was calculated after human readers triaged an initial set of system results, removing those documents they judged to be false alarms using only the English summaries generated by the system. Documents were sampled evenly across queries and across true positives and false alarms; system performance was then projected to any unassessed documents. For the SARAL system, ∼15K query/document summaries were assessed, using Amazon Mechanical Turk. Overall, the SARAL system was the top-ranked end-to-end system in the evaluation. The majority of errors on true positive documents come from insufficient summaries. For instance, a query about deception results in the summary text Punamin was arrested for trafficking, but he made amazing cheating that he thought about the long arrest. Two alternative translations provided for cheating are deception and trick. Still, the English context is difficult to understand. Thus although it is in reality a true positive, it is not unreasonable that a human rejected it. Human acceptance of a false positive happens most frequently when readers accept an alternate translation as accurate when the context did not make sense. For instance, a query for midwife returns summary text I would like to advise you to be united people who create their own skills ... you will be a company that will support themselves. Our system indicates that an alternate translation for skills could be midwife, which is accepted by the reader even though clearly incorrect in context. 
A so-called false positive found by the system-and retained by human readers during triage-can actually be a true positive that was missed by the original foreign-language annotator. For instance, a query for mockery returns will present a exhibition to show insults to our Prophet ... aimed at presenting images of insulting Prophet Muhammed. It seems reasonable that insults here is a translation variant for mockery; both our system and a human reader think so. This shows the strength of the system; not only can it provide a monolingual speaker with access to content in low-resource foreign languages, but it can sometimes surpass search by native speakers. Recent research in CLIR and query-based summarization uses expansive, concept-based definitions of relevance. For example, given the query agriculture, documents are relevant if they describe fields, pastures, or crops, even if the word agriculture is not used, and the goal of summarization is to show that the document as a whole is relevant. In contrast, in this work we aim to retrieve documents that meet a more precise notion of relevance, similar to that used for keyword spotting. This goal influences our retrieval approach, which seeks to account for variation in translation but does not perform more expansive embedding-based query expansion, and the summarization approach, which presents in-context search term matches rather than a narrative summary of the document as a whole. The SARAL system provides a monolingual user with effective access to multimodal information in lower-resourced languages through a user interface that enables rapid triage of system results. We look forward to future work improving the quality of the underlying components for low-resource settings as well as expanding the user interface to incorporate additional semantic constraints or requests.
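Two short sketches follow to illustrate pieces of the pipeline described above: the construction of training samples for SEARCHER's sentence-relevance proxy task, and a simplified version of the greedy snippet-window selection used for domain summaries. The query-sampling heuristic, the scoring function, and the data structures are placeholders, not the system's actual implementation.

```python
# SEARCHER proxy-task data: query phrases drawn from the English side of a
# bitext, the aligned foreign sentence as a positive example, randomly drawn
# foreign sentences as negatives.
import random

def build_searcher_samples(bitext, negatives_per_positive=4):
    """bitext: list of (english_sentence, foreign_sentence) pairs."""
    samples = []
    for eng, foreign in bitext:
        tokens = eng.split()
        if not tokens:
            continue
        query = random.choice(tokens)                  # placeholder query sampling
        samples.append((query, foreign, 1))            # positive: aligned sentence
        for _ in range(negatives_per_positive):
            _, neg_foreign = random.choice(bitext)     # negative: random foreign sentence
            if neg_foreign != foreign:
                samples.append((query, neg_foreign, 0))
    return samples
```

```python
# Simplified greedy window selection under a word budget (e.g. 80 words for a
# domain summary): pick the candidate window with the largest gain in newly
# covered domain-relevant terms, skipping windows that add nothing new.
def greedy_select_windows(candidates, term_weights, budget_words=80):
    """candidates: list of token lists (display windows over the MT output);
    term_weights: dict mapping domain-relevant terms to relevance scores."""
    selected, covered, used = [], set(), 0

    def gain(window):
        return sum(term_weights[t] for t in set(window)
                   if t in term_weights and t not in covered)

    while True:
        scored = [(gain(w), w) for w in candidates
                  if w not in selected and used + len(w) <= budget_words]
        scored = [(g, w) for g, w in scored if g > 0]   # prefer diversity: require new terms
        if not scored:
            break
        _, best = max(scored, key=lambda x: x[0])
        selected.append(best)
        covered.update(t for t in best if t in term_weights)
        used += len(best)
    return selected
```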
Composing extended top-down tree transducers
A composition procedure for linear and nondeleting extended top-down tree transducers is presented. It is demonstrated that the new procedure is more widely applicable than the existing methods. In general, the result of the composition is an extended top-down tree transducer that is no longer linear or nondeleting, but in a number of cases these properties can easily be recovered by a post-processing step.
Tree-based translation models such as synchronous tree substitution grammars In this way we model the word drop by an ln-XTOP M and reordering by an ln-XTOP N . The syntactic properties of linearity and nondeletion yield nice algorithmic properties, and the mod- ular approach is desirable for better design and parametrization of the translation model Because ln-XTOP is not closed under composition, the composition of M and N might be outside ln-XTOP. These cases have been identified by We will demonstrate how to compose two linear and nondeleting XTOPs into a single XTOP, which might however no longer be linear or nondeleting. However, when the syntactic form of The positions are indicated in t as superscripts. The subtree t| 2 is σ(α, q(x 2 )). the composed XTOP has only bounded overlapping cuts, post-processing will get rid of them and restore an ln-XTOP. In the remaining cases, in which unbounded overlapping is necessary or occurs in the syntactic form but would not be necessary, we will compute an XTOP. This is still an improvement on the existing methods that just fail. Since general XTOPs are implemented in TIBURON and the new composition covers (essentially) all cases currently possible, our new composition procedure could replace the existing one in TIBURON. Our approach to composition is the same as in
Our trees have labels taken from an alphabet Σ of symbols, and in addition, leaves might be labeled by elements of the countably infinite set X = {x 1 , x 2 , . . . } of formal variables. Formally, for every V ⊆ X the set T Σ (V ) of Σ-trees with V -leaves is the smallest set such that V ⊆ T Σ (V ) and σ(t 1 , . . . , t k ) ∈ T Σ (V ) for all k ∈ N, σ ∈ Σ, and t 1 , . . . , t k ∈ T Σ (V ). To avoid excessive universal quantifications, we drop them if they are obvious from the context. For each tree t ∈ T Σ (X) we identify nodes by positions. The root of t has position ε and the position iw with i ∈ N and w ∈ N * addresses the position w in the i-th direct subtree at the root. The set of all positions in t is pos(t). We write t(w) for the label (taken from Σ ∪ X) of t at position w ∈ pos(t). Similarly, we use • t| w to address the subtree of t that is rooted in position w, and • t[u] w to represent the tree that is obtained from replacing the subtree t| w at w by u ∈ T Σ (X). For a given set L ⊆ Σ ∪ X of labels, we let pos L (t) = {w ∈ pos(t) | t(w) ∈ L} be the set of all positions whose label belongs to L. We also write pos l (t) instead of pos {l} (t). collects all variables that occur in t. If the variables occur in the order x 1 , x 2 , . . . in a pre-order traversal of the tree t, then t is normalized. Given a finite set Q, we write Q(T ) with T ⊆ T Σ (X) for the set {q(t) | q ∈ Q, t ∈ T }. We will treat elements of Q(T ) as special trees of T Σ∪Q (X). The previous notions are illustrated in Figure A substitution θ is a mapping θ : X → T Σ (X). When applied to a tree t ∈ T Σ (X), it will return the tree tθ, which is obtained from t by replacing all occurrences of x ∈ X (in parallel) by θ(x). This can be defined recursively by xθ = θ(x) for all x ∈ X and σ(t 1 , . . . , t k )θ = σ(t 1 θ, . . . , t k θ) for all σ ∈ Σ and t 1 , . . . , t k ∈ T Σ (X). The effect of a substitution is displayed in Figure Next, we define two notions of compatibility for trees. Let t, t ∈ T Σ (X) be two trees. If there exists a substitution θ such that t = tθ, then t is an instance of t. Note that this relation is not symmetric. A unifier θ for t and t is a substitution θ such that tθ = t θ. The unifier θ is a most general unifier (short: mgu) for t and t if for every unifier θ for t and t there exists a substitution θ such that θθ = θ . The set mgu(t, t ) is the set of all mgus for t and t . Most general unifiers can be computed efficiently The discussed model in this contribution is an extension of the classical top-down tree transducer, which was introduced by Rounds (1970) and • Σ and ∆ are alphabets of input and output symbols, respectively, ) is linear and r ∈ T ∆ (Q(var( ))), and • c : R × X → T Σ (X) assigns a look-ahead restriction to each rule and variable such that c(ρ, x) is linear for each ρ ∈ R and x ∈ X. The XTOP F M is linear (respectively, nondeleting) if r is linear (respectively, var(r) = var( )) for every rule → r ∈ R. It has no look-ahead (or it is an XTOP) if c(ρ, x) ∈ X for all rules ρ ∈ R and x ∈ X. In this case, we drop the lookahead component c from the description. A rule → r ∈ R is consuming (respectively, producing) if pos Σ ( ) = ∅ (respectively, pos ∆ (r) = ∅). We let Lhs(M ) = {l | ∃q, r : q(l) → r ∈ R}. Let M = (Q, Σ, ∆, I, R, c) be an XTOP F . In order to facilitate composition, we define sentential forms more generally than immediately necessary. Let Σ and ∆ be such that Σ ⊆ Σ and ∆ ⊆ ∆ . 
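The notation in this section is dense; the following sketch makes the core operations (positions, t|_w, t[u]_w, substitutions, most general unifiers) concrete under an illustrative encoding — trees as (label, children) tuples and variables as bare strings such as "x2" — which is an assumption of the sketch rather than anything prescribed above.

```python
# Minimal sketch of the tree notions in this section, not code from the paper.

def is_var(t):
    return isinstance(t, str)

def node(label, *children):
    return (label, tuple(children))

def positions(t, prefix=()):
    """All positions of t; the root is the empty tuple (epsilon in the text)."""
    yield prefix
    if not is_var(t):
        for i, child in enumerate(t[1], start=1):
            yield from positions(child, prefix + (i,))

def subtree(t, pos):
    """t|_w: the subtree of t rooted at position pos."""
    for i in pos:
        t = t[1][i - 1]
    return t

def replace(t, pos, u):
    """t[u]_w: the tree obtained by replacing the subtree at pos by u."""
    if not pos:
        return u
    label, children = t
    children = list(children)
    children[pos[0] - 1] = replace(children[pos[0] - 1], pos[1:], u)
    return (label, tuple(children))

def substitute(t, theta):
    """Apply a substitution (dict: variable -> tree) to all variable leaves."""
    if is_var(t):
        return theta.get(t, t)
    return (t[0], tuple(substitute(c, theta) for c in t[1]))

def mgu(s, t, theta=None):
    """Syntactic unification; returns a most general unifier or None.
    The occurs-check is omitted to keep the sketch short."""
    theta = dict(theta or {})
    s, t = substitute(s, theta), substitute(t, theta)
    if s == t:
        return theta
    if is_var(s):
        return {v: substitute(u, {s: t}) for v, u in theta.items()} | {s: t}
    if is_var(t):
        return mgu(t, s, theta)
    if s[0] != t[0] or len(s[1]) != len(t[1]):
        return None
    for cs, ct in zip(s[1], t[1]):
        theta = mgu(cs, ct, theta)
        if theta is None:
            return None
    return theta

# Example echoing the running example: t = sigma(alpha, q(x2)).
t = node("sigma", node("alpha"), node("q", "x2"))
print(list(positions(t)))                     # [(), (1,), (2,), (2, 1)]
print(subtree(t, (2,)))                       # ('q', ('x2',)), i.e. q(x2)
print(substitute(t, {"x2": node("beta")}))    # sigma(alpha, q(beta))
print(mgu(node("q", "x1"), node("q", node("beta"))))   # {'x1': ('beta', ())}
```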
To keep the presentation simple, we assume that If the applicable rules are restricted to a certain subset R ⊆ R, then we also write ξ ⇒ R ζ. Figure where ⇒ * M is the reflexive, transitive closure of ⇒ M . It can easily be verified that the definition of τ M is independent of the choice of Σ and ∆ . Moreover, it is known A linear and nondeleting XTOP M with rules R can easily be reversed to obtain a linear and nondeleting XTOP M -1 with rules R -1 , which computes the inverse transformation τ M -1 = τ -1 M , by reversing all its rules. A (suitable) rule is reversed by exchanging the locations of the states. More precisely, given a rule q(l) → r ∈ R, we obtain the rule q(r ) → l of R -1 , where l = lθ and r is the unique tree such that there exists a substitution θ : X → Q(X) with θ(x) ∈ Q({x}) for every x ∈ X and r = r θ. Figure Finally, let us formally define composition. The XTOP M computes the tree transformation τ M ⊆ T Σ × T ∆ . Given another XTOP N that computes a tree transformation τ N ⊆ T ∆ × T Γ , we might be interested in the tree transformation computed by the composition of M and N (i.e., running M first and then N ). Formally, the composition τ M ; τ N of the tree transformations τ M and τ N is defined by and we often also use the notion 'composition' for XTOP with the expectation that the composition of M and N computes exactly τ M ; τ N . We want to compose two linear and nondeleting XTOPs M = (P, Σ, ∆, I M , R M ) and N = (Q, ∆, Γ, I N , R N ). Before we actually perform the composition, we will prepare M and N in two pre-processing steps. After these two steps, the composition is very simple. To avoid complications, we assume that (i) all rules of M are producing and (ii) all rules of N are consuming. For convenience, we also assume that the XTOPs M and N only use variables of the disjoint sets Y ⊆ X and Z ⊆ X, respectively. In the existing composition results for subclasses of XTOPs between a subtree at a ∆-labeled position w ∈ pos ∆ (l 1 ) in a left-hand side l 1 ∈ Lhs(M -1 ) and a left-hand side l 2 ∈ Lhs(N ). Intuitively, for every ∆-labeled position w in a right-hand side r 1 of M and any left-hand side l 2 of N , we require (ignoring the states) that either (i) r 1 | w and l 2 are not unifiable or (ii) r 1 | w is an instance of l 2 . Example 3. The XTOPs for the English-to-German translation task in the Introduction are not compatible. This can be observed on the left-hand side l 1 ∈ Lhs(M -1 ) of Figure Theorem 4. There exists an XTOP F N that is equivalent to N and compatible with M . Proof. We achieve compatibility by cutting offending rules of the XTOP N into smaller pieces. Unfortunately, both linearity and nondeletion of N might be lost in the process. We first let N = (Q, ∆, Γ, I N , R N , c N ) be the XTOP F such that c N (ρ, x) = x for every ρ ∈ R N and x ∈ X. If N is compatible with M , then we are done. Otherwise, let l 1 ∈ Lhs(M -1 ) be a left-hand side, q(l 2 ) → r 2 ∈ R N be a rule, and w ∈ pos ∆ (l 1 ) be a position such that θ(y) / ∈ X for some θ ∈ mgu(l 1 | w , l 2 ) and y ∈ Y . Let v ∈ pos y (l 1 | w ) be the unique position of y in l 1 | w . Now we have to distinguish two cases: (i) Either var(l 2 | v ) = ∅ and there is no leaf in r 2 labeled by a symbol from Γ. In this case, we have to introduce deletion and look-ahead into N . We replace the old rule ρ = q(l 2 ) → r 2 by the new rule ρ = q(l 2 [z] v ) → r 2 , where z ∈ X \ var(l 2 ) is a variable that does not appear in l 2 . 
In addition, we let c N (ρ , z) = l 2 | v and c N (ρ , x) = c N (ρ, x) for all x ∈ X \ {z}. (ii) Otherwise, let V ⊆ var(l 2 | v ) be a maximal set such that there exists a minimal (with respect to the prefix order) position w ∈ pos(r 2 ) with var(r 2 | w ) ⊆ var(l 2 | v ) and var(r 2 [β] w )∩V = ∅, where β ∈ Γ is arbitrary. Let z ∈ X \ var(l 2 ) be a fresh variable, q be a new state of N , and The look-ahead for z is trivial and otherwise we simply copy the old look-ahead, so Finally, we collect all newly generated states of the form l, q, v in Q l and for every such state with l = δ(l 1 , . . . , l k ) and v = iw, let l = δ(z 1 , . . . , z k ) and Overall, we run the procedure until N is compatible with M . The procedure eventually terminates since the left-hand sides of the newly added rules are always smaller than the replaced rules. Moreover, each step preserves the semantics of N , which completes the proof. We note that the look-ahead of N after the construction used in the proof of Theorem 4 is either trivial (i.e., a variable) or a ground tree (i.e., a tree without variables). Let us illustrate the construction used in the proof of Theorem 4. Example 5. Let us consider the rules illustrated in Figure Secondly, with this new rule there is an mgu, in which y 2 is mapped to σ(z 1 , z 2 ). Clearly, we are now in case (ii). Furthermore, we can select the set V = {z 1 , z 2 } and position w = . Correspondingly, the following two new rules for N replace the old rule: where the look-ahead for z remains β. Figure where q 3 = σ(z 2 , z 3 ), q 3 , 2 . Let us use the construction in the proof of Theorem 4 to resolve the incompatibility (see Example 3) between the XTOPs presented in the Introduction. Fortunately, the incompatibility can be resolved easily by cutting the rule of N (see Figure After the first pre-processing step, we have the original linear and nondeleting XTOP M and an XTOP F N = (Q , ∆, Γ, I N , R N , c N ) that is equivalent to N and compatible with M . However, in the first pre-processing step we might have introduced some non-linear (copying) rules in N (see rule ( ) in Example 5), and it is known that "nondeterminism [in M ] followed by copying [in N ]" is a feature that prevents composition to work Let L ⊆ T ∆ (Z) be the set of trees l such that • l, q, v appears as a state of Q l , or To keep the presentation uniform, we assume that for every l ∈ L, there exists a state of the form l, q, v ∈ Q . If this is not already the case, then we can simply add useless states without rules for them. In other words, we assume that the first case applies to each l ∈ L. Next, we add two sets of rules to R M , which will not change the semantics but prove to be useful in the composition construction. First, for every tree t ∈ L, let R t contain all the rules p(l) → r, where p = p(l) → r is a new state with p ∈ P , minimal normalized tree l ∈ T Σ (X), and an instance r ∈ T ∆ (P (X)) of t such that M ξ ⇒ M r for some ξ that is not an instance of t. In other words, we construct each rule of R t by applying existing rules of R M in sequence to generate a (minimal) right-hand side that is an instance of t. We thus potentially make the right-hand sides of M bigger by joining several existing rules into a single rule. Note that this affects neither compatibility nor the semantics. In the second step, we add pure ε-rules that allow us to change the state to one that we constructed in the previous step. For every new state p = p(l) → r, let base(p) = p. 
Then R M = R M ∪ R L ∪ R E and P = P ∪ t∈L P t where Clearly, this does not change the semantics because each rule of R M can be simulated by a chain of rules of R M . Let us now do a full example for the pre-processing step. We consider a nondeterministic variant of the classical example by p(δ(y 1 , y 2 , y 3 )) → σ(p s (y 1 ), σ(p s (y 2 ), p α (y 3 ))) p s (s (y 1 )) → s(p s (y 1 )) for every s, s ∈ {α, β}. Similarly, we let N = (Q, Σ, Σ, {q}, R N ) be the linear and nondeleting XTOP such that Q = {q, i} and R N contains the following rules for all s ∈ {α, β}. It can easily be verified that M and N meet our requirements. However, N is not yet compatible with M because an mgu between rules ( †) of M and ( ‡) of N might map y 2 to σ(z 2 , z 3 ). Thus, we decompose ( ‡) into q(σ(z 1 , z)) → δ(i(z 1 ), q(z), q (z)) where q = σ(z 2 , z 3 ), i, 1 . This newly obtained XTOP N is compatible with M . In addition, we only have one special tree σ(z 2 , z 3 ) that occurs in states of the form l, q, v . Thus, we need to compute all minimal derivations whose output trees are instances of σ(z 2 , z 3 ). This is again simple since the first three rule schemes ρ s , ρ s,s , and ρ s,s of M create such instances, so we simply create copies of them: ρ s (σ(y 1 , y 2 )) → σ(p s (y 1 ), p(y 2 )) ρ s,s (δ(y 1 , y 2 , y 3 )) → σ(p s (y 1 ), σ(p s (y 2 ), p(y 3 ))) ρ s,s (δ(y 1 , y 2 , y 3 )) → σ(p s (y 1 ), σ(p s (y 2 ), p α (y 3 ))) for all s, s ∈ {α, β}. These are all the rules of R σ(z 2 ,z 3 ) . In addition, we create the following rules of R E : for all s, s ∈ {α, β}. Especially after reading the example it might seem useless to create the rule copies in R l [in Example 6 for l = σ(z 2 , z 3 )]. However, each such rule has a distinct state at the root of the left-hand side, which can be used to trigger only this rule. In this way, the state selects the next rule to apply, which yields the desired local determinism. Now we are ready for the actual composition. For space efficiency reasons we reuse the notations used in Section 4. Moreover, we identify trees of T Γ (Q (P (X))) with trees of T Γ ((Q × P )(X)). In other words, when meeting a subtree q(p(x)) with q ∈ Q , p ∈ P , and x ∈ X, then we also view this equivalently as the tree q, p (x), which could be part of a rule of our composed XTOP. However, not all combinations of states will be allowed in our composed XTOP, so some combinations will never yield valid rules. Generally, we construct a rule of M ; N by applying a single rule of M followed by any number of pure ε-rules of R E , which can turn states base(p) into p. Then we apply any number of rules of N and try to obtain a sentential form that has the required shape of a rule of M ; N . Definition 7. Let M = (P , Σ, ∆, I M , R M ) and N = (Q , ∆, Γ, I N , R N ) be the XTOPs constructed in Section 4, where l∈L P l ⊆ P and and R contains all normalized rules → r (of the required shape) such that The required rule shape is given by the definition of an XTOP. Most importantly, we must have that ∈ S(T Σ (X)), which we identify with a certain subset of Q (P (T Σ (X))), and r ∈ T Γ (S(X)), which similarly corresponds to a subset of T Γ (Q (P (X))). The states are simply combinations of the states of M and N , of which however the combinations of a state q ∈ Q l with a state p / ∈ P l are forbidden. This reflects the intuition of the previous section. If we entered a special state of the form l, q, v , then we should use a corresponding state p ∈ P l of M , which only has rules producing instances of l. 
We note that look-ahead of N is checked normally in the derivation process. Example 8. Now let us illustrate the composition on Example 6. Let us start with rule ( †) of M . is a rule of M ; N for every s, s , s ∈ {α, β}. Note if we had not applied the R E -step, then we would not have obtained a rule of M ; N (because we would have obtained the state combination q, p instead of q, ρ s ,s , and q, p is not a state of M ; N ). Let us also construct a rule for the state combination q, ρ s ,s . q(ρ s ,s (δ(x 1 , x 2 , x 3 ))) ⇒ M q(σ(p s (x 1 ), σ(p s (x 2 ), p(x 3 )))) ⇒ N q (p s (x 1 )) Finally, let us construct a rule for the state combination q , ρ s ,s . q (ρ s ,s (δ(x 1 , x 2 , x 3 ))) ⇒ M q(σ(p s (x 1 ), σ(p s (x 2 ), p(x 3 )))) ⇒ R E q(σ(p s (x 1 ), σ(p s (x 2 ), ρ s (x 3 )))) ⇒ N q(σ(p s (x 2 ), ρ s (x 3 ))) ⇒ N δ(q (p s (x 1 )), q(ρ s (x 2 )), q (ρ s (x 2 ))) for every s ∈ {α, β}. After having pre-processed the XTOPs in our introductory example, the devices M and N can be composed into M ; N . One rule of the composed XTOP is illustrated in Figure Finally, we will compose rules again in an effort to restore linearity (and nondeletion). Since the composition of two linear and nondeleting XTOPs cannot always be computed by a single XTOP Let M ; N = (S, Σ, Γ, I, R) be the composed XTOP constructed in Section 5. We simply inspect each non-linear rule (i.e., each rule with a non-linear right-hand side) and expand it by all rule options at the copied variables. Since the method is pretty standard and variants have already been used in the pre-processing steps, we only illustrate it on the rules of Figure Example 9. The first (top row, left-most) rule of Figure
Action-Sensitive Phonological Dependencies
This paper defines a subregular class of functions called the tier-based synchronized strictly local (TSSL) functions. These functions are similar to the tier-based input-output strictly local (TIOSL) functions, except that the locality condition is enforced not on the input and output streams, but on the computation history of the minimal subsequential finite-state transducer. We show that TSSL functions naturally describe rhythmic syncope while TIOSL functions cannot, and we argue that TSSL functions provide a more restricted characterization of rhythmic syncope than existing treatments within Optimality Theory.
The subregular program in phonology seeks to define subclasses of the regular languages and finitestate functions that describe attested phonotactic constraints and phonological processes. These subclasses provide a natural framework for typological classification of linguistic phenomena while allowing for the development of precise theories of language learning and processing. The traditional view in subregular phonology is that most phonotactic dependencies are described by tier-based strictly local languages (TSL, Recent work in subregular phonology has identified a number of exceptions to the traditional view. On the language side, unbounded culminative stress systems This paper identifies rhythmic syncope as an additional example of a phonological process that is not strictly local. In rhythmic syncope, every second vowel of an underlying form is deleted in the surface form, starting with either the first or the second vowel. While rhythmic syncope cannot be expressed as a local dependency between symbols, it can be viewed as a local dependency between actions in the computation history of the minimal subsequential finite-state transducer (SFST). We formalize such dependencies by proposing the tier-based synchronized strictly local functions (TSSL). See This paper is structured as follows. Section 2 enumerates standard definitions and notation used throughout the paper, while Section 3 reviews existing work on strictly local functions. Section 4 introduces rhythmic syncope and shows that it is not strictly local. Section 5 presents two equivalent definitions of the TSSL functions-an al-gebraic definition and a definition in terms of a canonical SFST. Section 6 develops some formal properties of the TSSL functions, showing that they are incomparable to the full class strictly local functions. Section 7 compares our proposal to existing OT treatments of rhythmic syncope, and Section 8 concludes.
As usual, N denotes the set of nonnegative integers. Σ and Γ denote finite alphabets not including the left and right word boundary symbols and , respectively. The length of a string x is denoted by |x|, and λ denotes the empty string. Alphabet symbols are identified with strings of length 1, and individual strings are identified with singleton sets of strings. For k ∈ N, α k denotes α concatenated with itself k-many times, α <k denotes k- A subsequential finite-state transducer (SFST) is a 6-tuple T = Q, Σ, Γ, q 0 , →, σ , where • Q is the set of states, with q 0 ∈ Q being the start state; • Σ and Γ are the input and output alphabets, respectively; For x ∈ Σ * ; y ∈ Γ * ; and q, r ∈ Q, the notation q x:y --→ r means that T emits y to the output stream and transitions to state r if it reads x in the input stream while it is in state q. Letting f : Σ * → Γ * , we say that T computes f if for every x ∈ Σ * , f (x) = yσ(q), where q 0 x:y --→ q. A function is subsequential if it is computed by an SFST. An SFST T = Q, Σ, Γ, q 0 , →, σ is onward if for every state q other than q 0 , lcp y ∃x∃r.q x:y --→ r ∪ {σ(q)} = λ. Putting T in onward form allows us to impose structure on the timing with which SFSTs produce output symbols. x (y). We refer to f → x as the translation of f by x and to f ← as f top. 1 Suppose T computes f . The following facts are apparent. • Fix w, x ∈ Σ * and write q 0 x:y --→ q and q 0 • T is onward if and only if for all q ∈ Q\{q 0 }, if q 0 x:y --→ q, then y = f ← (x). These observations allow us to construct the minimal SFST for f by identifying each state with a possible translation f → x The strictly local functions are classes of subsequential functions proposed by Intuitively, strictly local functions are functions computed by SFSTs in which each state represents the i -1 most recent symbols in the input stream and the j -1 most recent symbols in the output stream along with the current input symbol, for some parameter values i, j fixed. Such functions are "local" in the sense that the action performed on each input symbol depends only on information about symbols in the input and output streams within a bounded distance. In this paper, we augment strictly local functions with tier projection, a mechanism introduced by Definition 2. For any alphabet Σ, a tier on Σ is a homomorphism τ : Σ * → Σ * such that for each a ∈ Σ, either τ (a) = a or τ (a) = λ. In the former case, we say that a is on τ ; in the latter case, we say that a is off τ . Definition 3. Fix i, j > 0 and let τ be a tier on Σ ∪ Γ. A function f : Σ * → Γ * is i, j-inputoutput strictly local on tier τ (i, j-TIOSL) if for all w, x ∈ Σ * , if A function is i-input strictly local on tier τ (i-TISL) if it is i, 1-TIOSL on tier τ , and it is j-output strictly local on tier τ (j-TOSL) if it is 1, j-TIOSL on tier τ . Secondly, they define strictly local functions in terms of canonical SFSTs that directly encode (i-1)-suffixes of the input stream and (j -1)-suffixes of the output stream in their state names. Definition 4. Fix i, j > 0 and let τ be a tier on Σ ∪ Γ. An SFST T = Q, Σ, Γ, q 0 , →, σ is i, jinput-output strictly local on tier τ (i, j-TIOSL) if the following conditions hold. An SFST is i-input strictly local on tier τ (i-TISL) if it is i, 1-TIOSL on tier τ , and it is j-output strictly local on tier τ (j-TOSL) if it is 1, j-TIOSL on tier τ . These definitions turn out to be equivalent when the canonical SFSTs are required to be onward. Theorem 5 Example 6. 
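To make the SFST definitions concrete, the following sketch implements the transition relation and the final output function under simple encoding assumptions (states and symbols as strings, the relation as a dictionary keyed by state and input symbol); it is illustrative only and not tied to any particular construction above.

```python
# A small sketch of the SFST machinery defined above, under assumed encodings.

class SFST:
    def __init__(self, q0, delta, sigma):
        self.q0 = q0          # start state
        self.delta = delta    # (state, input symbol) -> (next state, output string)
        self.sigma = sigma    # state -> final output string

    def run(self, x):
        """Compute f(x) = y . sigma(q), where q0 --x:y--> q."""
        q, out = self.q0, []
        for a in x:
            q, y = self.delta[(q, a)]
            out.append(y)
        return "".join(out) + self.sigma[q]

# Toy example: copy the input over {a, b}, but emit each symbol one step late
# and flush the last one with the final output function sigma.
delayed_copy = SFST(
    q0="q0",
    delta={
        ("q0", "a"): ("qa", ""), ("q0", "b"): ("qb", ""),
        ("qa", "a"): ("qa", "a"), ("qa", "b"): ("qb", "a"),
        ("qb", "a"): ("qa", "b"), ("qb", "b"): ("qb", "b"),
    },
    sigma={"q0": "", "qa": "a", "qb": "b"},
)

print(delayed_copy.run("abba"))   # 'abba' -- the identity function
# Note: this transducer is not onward, since every emission leaving state qa
# begins with "a" (and sigma(qa) = "a"), so the longest common prefix there is
# not the empty string.
```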
Rhythmic reduction is a phonological process in which alternating vowels in a word undergo reduction. The examples in ( (7) Rhythmic reduction in Ojibwe circa 1912 [g@tIg@mIn@gIb@na:d] 'if he rolls him' Figure Rhythmic syncope is a phonological process in which every second vowel in a word is deleted. The examples of (8) show rhythmic syncope in Macushi, in which deletion begins with the first vowel. In this section, we show that rhythmic syncope is not TIOSL. To see this, we formalize rhythmic syncope as a function over two alphabet symbols: C, representing consonants, and V , representing vowels. This idealization does not affect the argument that rhythmic syncope is not TIOSL, presented in Proposition 10. Definition 9. The rhythmic syncope function ρ : The intuition underlying the argument presented below is that (i -1)-suffixes of the input and (j -1)-suffixes of the output do not contain information about whether vowels occupy even or odd positions within the input and output strings. Therefore, while an i, j-TIOSL SFST can record the most recent vowels read from the input stream and emitted to the output stream, this information is not sufficient for determining whether or not the SFST should delete a vowel. Proposition 10. The rhythmic syncope function is not i, j-TIOSL on tier τ for any i, j > 0 and any τ : This means that ρ → w (V ) = λ but ρ → x (V ) = V , so ρ is not i, j-TIOSL on tier τ . Proposition 10 raises the question of how to characterize the kind of computation that effects rhythmic syncope. To investigate this question, Figure Recall that at each time step, an SFST must read exactly one input symbol while producing an output string of any length. Since the minimal SFST for a function f must produce f ← (z) after reading the input string z, we can determine the possible actions of f by comparing f ← (z) with f ← (zx) for arbitrary z ∈ Σ * and x ∈ Σ. We denote elements x, y of A(f ) by x : y. Strings over A(f ) represent computation histories of the minimal SFST for f . Definition 12. Let x ∈ Σ * and let f : Σ * → Γ * . The run of f on input x is the string f ⇐ (x) ∈ A(f ) * defined as follows. • If |x| ≤ 1, then f ⇐ (x) := x : f ← (x). • If x = yz, where |y| ≥ 1 and |z| = 1, then f ⇐ (x) := f ⇐ (y)(z : w), where w is the unique string such that f ← (x) = f ← (y)w. The notation f ⇐ allows us to define the TSSL functions in a straightforward manner, highlighting the analogy to the TIOSL functions. Definition 13. Fix k > 0 and let τ be a tier on us define the canonical SFSTs for TSSL functions. We define the actions of an SFST to be its possible transition labels. Definition 14. Let T = Q, Σ, Γ, q 0 , →, σ be an SFST. The actions of T are the alphabet A(T ) := { x, y |∃q∃r.→(q, x) = r, y } . We denote elements x, y of A(T ) by x : y. Again, the definition of the TSSL SFSTs is directly analogous to that of the TIOSL SFSTs. Definition 15. Fix k > 0 and let τ be a tier on Σ × Γ * . An SFST T = Q, Σ, Γ, q 0 , →, σ is ksynchronized strictly local on tier τ (k-TSSL) if the following conditions hold. • Q = ({ } ∪ A(T )) k-1 and q 0 = k-1 . • For every q ∈ Q, if →(q, x) = r, y , then r = suff k-1 (τ (q(x : y))) . As is the case with TIOSL SFSTs, TSSL SFSTs compute exactly the class of TSSL functions when they are required to be onward. Theorem 16. Fix k > 0, and let τ be a tier on Σ × Γ * . A function is k-TSSL on tier τ if and only if it is computed by an onward SFST that is k-TSSL on tier τ . We leave the proof of this fact to Appendix A. 
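The following sketch illustrates Definition 9 and the run notation under one assumption the surrounding text leaves implicit: deletion is taken to begin with the first vowel, as in the Macushi examples (the body of Definition 9 is elided above). The shortcut of computing the top of ρ as ρ itself holds for this particular function and is not a general procedure.

```python
def rho(x):
    """Rhythmic syncope over the idealized alphabet {C, V}: every second vowel
    is deleted. Deletion is assumed here to begin with the FIRST vowel."""
    out, vowels_seen = [], 0
    for a in x:
        if a == "V":
            vowels_seen += 1
            if vowels_seen % 2 == 1:       # odd-numbered vowels are dropped
                continue
        out.append(a)
    return "".join(out)

def run(f, x):
    """The run f<=(x): the sequence of actions (input symbol : emitted string)
    of the minimal SFST for f. For rhythmic syncope the top f<-(prefix) equals
    f(prefix), which is what makes this short computation valid; that shortcut
    is specific to this function."""
    actions, prev = [], ""
    for i, a in enumerate(x, start=1):
        cur = f(x[:i])
        actions.append((a, cur[len(prev):]))   # newly committed output
        prev = cur
    return actions

print(rho("CVCVCVC"))     # 'CCVCC' -- the first and third vowels are deleted
print(run(rho, "CVCV"))   # [('C', 'C'), ('V', ''), ('C', 'C'), ('V', 'V')]
```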
Having now defined the TSSL functions, this section investigates some of their formal properties. Subsection 6.1 compares the TSSL functions to the TISL, TOSL, and TIOSL functions. Subsection 6.2 observes that TSSL SFSTs compute a large class of functions when they are not required to be onward. A natural first question regarding the TSSL functions is that of how they relate to previouslyproposed classes of subregular functions. We know from the discussion of rhythmic syncope that the TSSL functions are not a subset of the TIOSL functions: we have already seen that the rhythmic syncope function is 2-TSSL but not i, j-TIOSL for any i, j. We will see in this subsection that the TIOSL functions are not a subset of the TSSL functions, though both function classes fully contain the TISL and TOSL functions. Therefore, the two function classes are incomparable, and offer two different ways to generalize the TISL and TOSL functions. The fact that the TSSL functions contain the TISL and TOSL functions follows from the observation that actions contain information about input and output symbols. Remembering the i most recent actions automatically entails remembering the i most recent input symbols, and the j most recent output symbols can be extracted from the j most recent actions if deletions are ignored. Proposition 17. Fix k > 0. Every k-TISL function and every k-TOSL function is k-TSSL. Proof. Let f : Σ * → Γ * , and let τ be a tier on Σ∪Γ. First, suppose that f is k-TISL on tier τ . Let υ be a tier on Σ × Γ * defined as follows: an action x : y is on υ if and only if x is on τ . Now, suppose w, x ∈ Σ * are such that suff k-1 (υ(f Then, we have τ (w) = w 1 w 2 . . . w n and τ (x) = x 1 x 2 . . . x n . For all i > n -k + 1, w i : y i = x i : z i , and therefore Next, suppose that f is k-TOSL on tier τ . Let ϕ be a tier on Σ×Γ * defined as follows: an action x : y is on ϕ if and only if τ (y) = λ. Now, suppose w, x ∈ Σ * are such that suff This intuition does not carry over to the TIOSL functions. In Proposition 17, the proposed action tiers ignore symbols off the input and output tiers, thus ensuring that the relevant input and output symbols can always be recovered from the computation history. This approach encounters problems when an onward TIOSL SFST deletes symbols on the tier. Such SFSTs perform actions of the form x : λ, where x is on the tier. These actions do not record any output symbols, but they must be kept on the tier in a TSSL implementation so that the input symbol x can be recovered. If too many (x : λ)s are performed consecutively, they can overwhelm the memory of a TSSL SFST, causing it to forget the most recent output symbols. The following construction features exactly this kind of behavior. Proposition 18. There exists a function that is i, j-TIOSL for some i, j but not k-TSSL for any k. Proof. Let T be the SFST shown in Figure output. Thereafter, T behaves as follows: all as are deleted; a b is changed to a c if the most recent input symbol is the same as the first input symbol; a b is changed to a d otherwise. For example, f (baabb) = bdc. Let k > 0, and let υ be a tier on {a, b} × {a, b, c, d} * . Suppose that either k = 1 or a : λ is not on υ, and consider the strings w := ba and Next, suppose that k > 1 and a : λ is on υ. Consider the input strings w := a k+1 and x := ba k . Observe that f ⇐ (w) = (a : a)(a : λ) k and The equivalence between the two definitions of the TSSL functions presented in Section 5 crucially depends on the criterion that TSSL SFSTs be onward. 
In this subsection we show that without this criterion, TSSL SFSTs compute a rich class of subsequential functions. To illustrate how this is possible, let us consider an example that witnesses the separation between TSSL functions and TSSL SFSTs. Proposition 19. There exists a 2-TSSL SFST that computes a function that is not k-TSSL for any k. Proof. Consider the SFST in Figure We need to show that f is not k-TSSL for any k > 0 and for any tier τ over {a, b} × {a, b} * . Fix k and τ . Suppose a : a is on τ , and consider the input strings w = a k+1 and x = ba k . Observe Next, suppose a : a is not on τ , and consider the input strings w = b and x = ab. We have Let f be the function described in Proposition 19. As discussed in the proof, an onward SFST computing f must copy the current input symbol to the output stream during each time step. At the end of the computation, the final output function is responsible for adding the first input symbol to the end of the output string. Any onward TSSL SFST that attempts to compute f will eventually forget the identity of the first input symbol, so the final output function cannot determine what to add to the output. The SFST T in Figure The view of rhythmic syncope we have presented here differs substantially in approach from existing treatments of rhythmic syncope in phonolog-ical theory. Proposition 21. Every subsequential function f can be written in the form f = h • g, where g is 2-TOSL and h is a homomorphism. Proof. Let T = Q, Σ, Γ, q 0 , →, σ be the minimal SFST for f . Define g as follows. Let g(λ) := σ, f (λ) . For x 1 , x 2 , . . . , x n ∈ Σ, write q 0 Then, g(x 1 x 2 . . . x n ) := q 1 , y 1 q 2 , y 2 . . . q n , y n σ, σ(q n ) . Next, define h so that for any q, y , h( q, y ) = y. It is clear that f (x) = h(g(x)) for every x. We now show that g is 2-TOSL on a tier containing the full output alphabet. Fix w, x ∈ Σ * . Observe that for all z ∈ Σ * , g ← (z) ∈ (Q × Γ * ) * . Therefore, suppose that suff 1 (g ← (w)) = suff 1 (g ← (x)) = q, y . This means that q 0 w:u --→ q and q 0 x:v --→ q for some u, v ∈ Γ * , so g → u = g → v by definition. In both pseudo-deletion and Harmonic Serialism, non-segmental phonological symbols are used to encode state information in the output, making rhythmic syncope 2-TOSL. Proposition 21 shows that this technique can be applied to arbitrary SFSTs, and therefore results in massive overgeneration. By contrast, we have already seen that the TSSL functions are a proper subset of the subsequential functions, making action-sensitivity a more restrictive alternative to current approaches to rhythmic syncope. The classic examples of TIOSL phenomena in phonology are local processes and unidirectional spreading processes A potential risk of such an analysis is that the notion of "action" is specific to the computational system used to implement rhythmic syncope, and therefore potentially subject to a broad range of interpretations. In this paper, we have used onwardness and the existence of the minimal SFST to formulate a notion of "action-sensitivity" that is both formalism-independent and implementationindependent. In Subsection 6.2, we have seen that action-sensitivity can be made very powerful if we relax our assumptions about the nature of the computation. This means that if actionsensitivity is to be incorporated into phonological analyses of rhythmic syncope, then care should be taken to avoid loopholes like the one featured in Proposition 19. 
Based on Proposition 21, a similar warning can be made regarding the composition of phonological processes. When decomposing phonemena into several processes, as Outstanding formal questions regarding the TSSL functions include their closure properties and the complexity of learning TSSL functions. We leave such questions to future work. This appendix proves the equivalence between TSSL functions and onward TSSL SFSTs. We begin by showing how to construct an onward TSSL SFST computing any given TSSL function. Definition 22. Let f : Σ * → Γ * be k-TSSL on tier τ . Define the SFST transducer T (f ) = Q, Σ, Γ, q 0 , →, σ as follows. • Q := ({ } ∪ A(f )) k-1 and q 0 := k-1 . • For each x ∈ Σ, →(q 0 , x) := r, f ← (x) , where r = suff k-1 (τ (x : f ← (x))). • For each q ∈ Q\{q 0 }, let x ∈ Σ * be such that suff k-1 (τ (f ⇐ (x))) = q, and let w : We define →(q, w) := r, y , where r = suff k-1 (τ (q(w : y))). • Fix q ∈ Q. If q = q 0 , then σ(q) := f (λ). Otherwise, we define σ(q Note that in the third and fourth bullet points of Definition 22, the action w : y and the string f → x (λ) only depend on q and not on x, since f is k-TSSL on tier τ . We now need to show that T (f ) computes f and that it is onward. Lemma 24. Let f : Σ * → Γ * be k-TSSL on tier τ , and write Proof. Let us induct on |x|. For the base case, suppose |x| = 1. Then, y = f ← (x) by definition. Now, fix n > 1, and suppose that if 0 < |u| < n and q 0 u:v --→ r, then v = f ← (u). Fix w ∈ Σ n-1 and x ∈ Σ, and suppose that q 0 w:y --→ s x:z --→ t. By the induction hypothesis, y = f ← (w). The definition of T (f ) states that z is the unique string such that f ← (wx) = f ← (w)z. Thus, yz = f ← (w)z = f ← (wx), and the proof is complete. Lemma 25. Let f : Σ * → Γ * be k-TSSL on tier τ , and write T (f ) = Q, Σ, Γ, q 0 , →, σ . For all x ∈ Σ + , if q 0 x:y --→ r, then r = suff k-1 (τ (f ⇐ (x))). Proof. Let us induct on |x|. For the base case, suppose |x| = 1. Since f ⇐ (x) = x : f ← (x), by definition r = suff k-1 (τ (f ⇐ (x))). Now, fix n > 1, and suppose that if |w| < n and q 0 w:y --→ r, then r = suff k-1 (τ (f ⇐ (w))). We need to show that for all w ∈ Σ n-1 and x ∈ Σ, if q 0 w:y --→ r x:z --→ s, then s = suff k-1 (τ (f ⇐ (wx))). The induction hypothesis gives us r = suff k-1 (τ (f ⇐ (w))). Since s, z = →(r, x), by the definition of T (f ), s = suff k-1 (τ (r(x : z))) = suff k-1 (τ (r)τ (x : z)) = suff k-1 τ suff k-1 (τ (f ⇐ (w))) τ (x : z) = suff k-1 (τ (τ (f ⇐ (w)))τ (x : z)) = suff k-1 (τ (f ⇐ (w))τ (x : z)) = suff k-1 (τ (f ⇐ (w)(x : z))) = suff k-1 (τ (f ⇐ (wx))) , (26) as desired. Proposition 27. If f : Σ * → Γ * is k-TSSL on tier τ , then T (f ) computes f. Proof. We need to show that for every x ∈ Σ * , T (f ) outputs f (x) on input x. Write T (f ) = Q, Σ, Γ, q 0 , →, σ and q 0 x:y --→ q. By Lemma 24, y = f ← (x), and by Lemma 25, q = suff k-1 (τ (f ⇐ (x))). Definition 22 then states that σ(q) = f → x (λ), so yσ(q) = f ← (x)f → x (λ) = f (x), thus T (f ) outputs f (x) on input x. Corollary 28. If f : Σ * → Γ * is k-TSSL on tier τ , then T (f ) is onward. We then complete the proof by showing that every onward TSSL SFST computes a TSSL function. Lemma 29. Let T = Q, Σ, Γ, q 0 , →, σ be onward and k-TSSL on tier τ . Let f be the function computed by T . For all x ∈ Σ * , if q 0 x:y --→ q, then q = suff k-1 (τ (f ⇐ (x))). Proof. Let us induct on |x|. For the base case, suppose |x| = 1. 
Since T is onward, y = f ← (x), so q = suff k-1 τ k-1 (x : y) Now, fix n > 1, and suppose that if |w| < n and q 0 w:y --→ q, then q = suff k-1 (τ (f ⇐ (w))). We need to show that for all w ∈ Σ n-1 and x ∈ Σ, if q 0 w:y --→ r x:z --→ s, then s = suff k-1 (τ (f ⇐ (wx))). The induction hypothesis gives us r = suff k-1 (τ (f ⇐ (w))), and Definition 15 states that s = suff k-1 (τ (r(x : z))). A derivation similar to equation ( Proof of Theorem 16. Proposition 27 has already shown the forward direction. Let T = Q, Σ, Γ, q 0 , →, σ be an onward SFST computing f that is k-TSSL on tier τ . Suppose x, y ∈ Σ * are such that suff k-1 (τ (f ⇐ (w))) = suff k-1 (τ (f ⇐ (x))). Write q 0 w:y --→ r and q 0 x:z --→ s. By Lemma 29, r = suff k-1 (τ (f ⇐ (w))) = suff k-1 (τ (f ⇐ (x))) = s, so f → w = f → x , thus f is k-TSSL on tier τ .
Using Human Attention to Extract Keyphrase from Microblog Post
This paper studies automatic keyphrase extraction on social media. Previous works have achieved promising results, but they neglect human reading behavior during keyphrase annotation. Human attention is a crucial element of reading behavior: it reveals the relevance of words to the main topics of the target text. Thus, this paper aims to integrate human attention into keyphrase extraction models. First, human attention is represented by the reading duration estimated from an eye-tracking corpus. Then, we merge human attention into neural network models via an attention mechanism. In addition, we also integrate human attention into unsupervised models. To the best of our knowledge, we are the first to utilize human attention on keyphrase extraction tasks. The experimental results show that our models achieve significant improvements on two Twitter datasets.
Rapidly growth of user-generated content on social media has far outpaced human beings' reading and understanding capacity. Keyphrase extraction is one of the technologies that can organize this massive content. A keyphrase consists of one or more salient words, which represents the main topics of a document. It has a series of downstream applications, e.g., text summarization Generally, corpus with human annotated keyphrases are needed to train models in supervised keyphrase extraction frameworks. The premise for annotators to annotate keyphrases is to read the corresponding content. Intuitively, features estimated from human reading behavior can be leveraged to assist keyphrase extraction. Previous studies on keyphrase extraction have ignored these features When human reading, they do not pay the same attention to all words To integrate human attention into keyphrase extraction models, this paper constructs a neural network model with attention mechanism. Attention mechanism is a neural module designed to imitate human visual attention when they reading and looking
Recently, keyphrase extraction technologies have been extended to social media The open source eye-tracking corpus of natural reading include the Dundee corpus Formally, given a target microblog post x i formulated as word sequence where y i,w indicates whether x i,w is part of a keyphrase. As shown in Figure where h i,w is the representation of x i,w after passing through the Bi-directional LSTM (BiLSTM) layer, W y and b y are parameters of the function where Single represents that x i,w is a one-word keyword. Begin, Middle and End represent that x i,w is the first word, the middle word and the last word of a keyphrase, respectively. Not represents that x i,w is not a keyword or part of a keyphrase. From the hidden states, we directly predict word level raw attention scores a i,w : where W e and b e are parameters of function tanh(•). Then, we normalize these predictions to attention weights a i,w : where k is the length of x i . Inspired by The attention-level objective, similarly, is to minimize the squared error between the attention weights a i,w and real human attention âi,w estimated from eye-tracking corpus. When combined, λ word and λ att (between 0 and 1) are utilized to trade off loss functions at the wordlevel and attention-level, respectively. In addition to above mentioned single layer models, we also use joint-layer BiLSTM proposed by 4 Experiment Settings Our experiments are conducted on two datasets, i.e., Daily-Life dataset and Election-Trec dataset. Daily-Life This is collected from January of 2018 to April of 2018 using Twitter's steaming API with a set of daily life keywords. Election-Trec This is constructed based on opensource dataset TREC2011 track 1 and Election corpus For keyphrase annotation, we follow This paper estimates human attention from GECO corpus Human attention correlates with word frequency In the training phrase, we choose BiL-STM The epoch is set to 5. We initialize target post by embeddings pre-trained on 99M tweets with 27B tokens and 4.6M words in the vocabulary. We compare our models with CRF BiLSTM model This model is merely constructed by the character-level word embedding and the BiLSTM layer. STM layer and attention mechanism. Different with HA-BiLSTM, the attention mechanism in A-BiLSTM is not modified by human attention. Human attention estimated from eye-tracking corpus is helpful in improving the performance of neural network keyphrase extraction. As shown in Table The open-source eye-tracking corpus can improve the performance of models on datasets in different genres. Although the genre of the GECO eye-tracking corpus is fiction, which is different with the genre of the target dataset (Microblog), it has the ability to improve the performance of keyphrase extraction on target datasets. To qualitatively analyze why models with human attention generally perform better in comparison, we conduct a case study on two simple instances in Table In Table In this section, we explore the idea of using human attention on TextRank is appeared within the window of x i,j , there is an edge e(x i,m , x i,j ) between these two words. Based on the graph composited by word vertices and edges, the importance of each word vertices can be calculated. In TextRank, the value of x i,j 4.0 12.1 6.0 7.4 24.9 11.4 Table and e(x i,m , x i,j ) are initialized unprivileged. In our models, we utilize human attention to normalize the initialized value of x i,j and e(x i,m , x i,j ). The initialized value of x i,j depends on the N-ATRT value of itself. 
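A rough sketch of this attention-initialized TextRank variant follows; it is an illustrative simplification rather than the HATR implementation, and the choice to combine the two endpoint N-ATRT values of an edge by averaging is an assumption, since the exact combination is not specified here.

```python
# Sketch of TextRank with node and edge initialization scaled by a normalized
# human-attention score (N-ATRT). Illustrative only.

from collections import defaultdict

def hatr_scores(tokens, natt, window=2, damping=0.85, iters=30):
    # Build an undirected co-occurrence graph within the given window.
    weight = defaultdict(float)
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u = tokens[j]
            if u == w:
                continue
            # Edge initialization from the attention of both endpoints (assumed: mean).
            weight[(w, u)] = weight[(u, w)] = 0.5 * (natt.get(w, 0.1) + natt.get(u, 0.1))
            neighbors[w].add(u)
            neighbors[u].add(w)
    # Node initialization from each word's own attention score.
    score = {w: natt.get(w, 0.1) for w in neighbors}
    for _ in range(iters):
        new_score = {}
        for w in neighbors:
            s = 0.0
            for u in neighbors[w]:
                denom = sum(weight[(u, v)] for v in neighbors[u])
                s += weight[(u, w)] / denom * score[u]
            new_score[w] = (1 - damping) * natt.get(w, 0.1) + damping * s
        score = new_score
    return score

# Hypothetical usage: attention scores would come from the eye-tracking-based
# N-ATRT estimates; the tokens and numbers below are made up for illustration.
tokens = ["anyone", "want", "free", "tickets", "to", "the", "concert"]
natt = {"tickets": 0.9, "concert": 0.8, "free": 0.6, "want": 0.3}
ranking = sorted(hatr_scores(tokens, natt).items(), key=lambda kv: -kv[1])
print(ranking[:3])   # the highest-scoring words are keyword candidates
```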
The initialized value of e(x i,m , x i,j ) depends on the N-ATRT value of x i,m and x i,j . After extracting candidate words by HATR, we generate keyphrases by combining candidate words if words are connected together in target posts. As shown in Table In this paper, we consolidate the neural network keyphrase extraction algorithm with human attention represented by total reading time (TRT) estimated from GECO eye-tracking corpus. The proposed models yield a better performance on two Twitter datasets. Moreover, human attention is also effective on unsupervised models. In the future, first, we try to utilize more eyetracking corpus and estimate more features of reading behavior. Then, we will attempt to analyze real human reading behavior on social media and thereby explore more specific human attention features on social media.
Learning to Exploit Structured Resources for Lexical Inference
Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations that are relevant for a particular task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.
Recognizing lexical inference is an important component in semantic tasks. Various lexicalsemantic relations, such as synonomy, classmembership, part-of, and causality may be used to infer the meaning of one word from another, in order to address lexical variability. For instance, a question answering system asked "which artist's net worth is $450 million?" might retrieve the candidates Beyoncé Knowles and Lloyd Blankf ein, who are both worth $450 million. To correctly answer the question, the application needs to know that Beyoncé is an artist, and that Lloyd Blankfein is not. Corpus-based methods are often employed to recognize lexical inferences, based on either cooccurrence patterns While corpus-based methods usually enjoy high recall, their precision is often limited, hindering their applicability. An alternative common practice is to mine high-precision lexical inferences from structured resources, particularly WordNet We begin by examining whether the common usage of WordNet for lexical inference can be extended to larger resources. Typically, a subset of WordNet relations is manually selected (e.g. all synonyms and hypernyms). By nature, each application captures a different aspect of lexical inference, and thus defines different relations as indicative of its particular flavor of lexical infer- Since WordNet has a relatively simple schema, manually finding such an optimal subset is feasible. However, structured knowledge resources' schemas contain thousands of relations, dozens of which may be indicative. Many of these are not trivial to identify by hand, as shown in Table We present a principled supervised framework, which automates the selection process of resource relations, and optimizes this subset for a given target inference relation. This automation allows us to leverage large-scale resources, and extract many high-precision inferences over propernames, which are absent from WordNet. Finally, we show that our framework complements stateof-the-art corpus-based methods. Combining the two approaches can particularly benefit real-world tasks in which proper-names are prominent.
WordNet One approach looks for chains of these predefined relations While there is a broad consensus that synonyms entail each other (elevator ↔ lif t) and hyponyms entail their hypernyms (cat → animal), other relations, such as meronymy, are not agreed Overall, there is no principled way to select the subset of relevant relations, and a suitable subset is usually tailored to each dataset and task. This work addresses this issue by automatically learning the subset of relations relevant to the task. While WordNet is quite extensive, it is handcrafted by expert lexicographers, and thus cannot compete in terms of scale with community-built knowledge bases such as Wikidata In this paper, we experimented with such resources, in addition to WordNet. DBPedia We wish to leverage the information in structured resources to identify whether a certain lexicalinference relation R holds between a pair of terms. Formally, we wish to classify whether a term-pair (x, y) satisfies the relation R. R is implicitly defined by a training set of (x, y) pairs, annotated as positive or negative examples. We are also given a set of structured resources, which we will utilize to classify (x, y). Each resource can be naturally viewed as a directed graph G (Figure When using multiple resources, G is a disconnected graph composed of a subgraph per resource, without edges connecting nodes from different resources. One may consider connecting multiple resource graphs at the term nodes. However, this may cause sense-shifts, i.e. connect two distinct concepts (in different resources) through the same term. For example, the concept January 1 st in Wikidata is connected to the concept f ruit in WordNet through the polysemous term date. The alternative, aligning resources in the concept space, is not trivial. Some partial mappings exist (e.g. Yago-WordNet), which can be explored in future work. We present an algorithmic framework for learning whether a term-pair (x, y) satisfies a relation R, given an annotated set of term-pairs and a resource graph G. We first represent (x, y) as the set of paths connecting x and y in G ( §4.1). We then classify each such path as indicative or not of R, and decide accordingly whether xRy ( §4.2). We represent each (x, y) pair as the set of paths that link x and y within each resource. We retain only the shortest paths (all paths x ; y of minimal length) as they yielded better performance. Resource graphs are densely connected, and thus have a huge branching factor b. We thus limited the maximum path length to = 8 and employed bidirectional search To further reduce complexity, we split the search to two phases: we first find all nodes along the shortest paths between x and y, and then reconstruct the actual paths. Searching for relevant nodes ignores edge types, inducing a simpler resource graph, which can be represented as a sparse adjacency matrix and manipulated efficiently with matrix operations (elaborated in appendix A). Once the search space is limited to relevant nodes alone, path-finding becomes trivial. We consider edge types that typically connect between concepts in R to be "indicative"; for example, the occupation edge type is indicative of the is a relation, as in "Beyoncé is a musician". Our framework's goal is to learn which edge types are indicative of a given relation R, and use that information to classify new (x, y) term-pairs. Figure In this work, we are not only interested in optimizing accuracy or F 1 , but in exploring the entire recall-precision trade-off. 
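The two-phase path extraction can be sketched as follows, with plain breadth-first search standing in for the matrix-based implementation described above; the resource-graph encoding and the toy nodes and edge types in the example are assumptions for illustration, not taken from any actual resource.

```python
# Sketch of shortest-path extraction between a term pair: compute distances
# from both endpoints first, then reconstruct only the edges that lie on some
# shortest path, returning each path as its sequence of edge types.

from collections import deque

def bfs_distances(adj, source, max_len):
    """Breadth-first distances from source, capped at max_len edges."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        if dist[u] == max_len:
            continue
        for _, v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def reverse(adj):
    """Reverse adjacency list, keeping the original edge-type labels."""
    radj = {}
    for u, edges in adj.items():
        for e, v in edges:
            radj.setdefault(v, []).append((e, u))
    return radj

def shortest_paths(adj, x, y, max_len=8):
    """All shortest x->y paths, each returned as its sequence of edge types."""
    dx = bfs_distances(adj, x, max_len)            # phase 1a: distances from x
    dy = bfs_distances(reverse(adj), y, max_len)   # phase 1b: distances to y
    if y not in dx:
        return []
    d = dx[y]
    paths, stack = [], [(x, [])]
    while stack:                                   # phase 2: reconstruct paths
        u, edges = stack.pop()
        if u == y:
            paths.append(edges)
            continue
        for e, v in adj.get(u, ()):
            # Keep only edges that lie on some shortest x->y path.
            if dx.get(v, d + 1) == dx[u] + 1 and dx[u] + 1 + dy.get(v, d + 1) == d:
                stack.append((v, edges + [e]))
    return paths

# Toy resource graph; the nodes and edge types below are made up for illustration.
adj = {
    "Beyonce": [("occupation", "singer"), ("instance_of", "human")],
    "singer": [("subclass_of", "musician")],
    "musician": [("subclass_of", "artist")],
    "human": [("subclass_of", "person")],
}
print(shortest_paths(adj, "Beyonce", "artist"))
# [['occupation', 'subclass_of', 'subclass_of']]
```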
Therefore, we optimize the F β objective, where β 2 balances the recallprecision trade-off. 4 In particular, we expect structured resources to facilitate high-precision inferences, and are thus more interested in lower values of β 2 , which emphasize precision over recall. A typical neural network approach is to assign a weight w i to each edge type e i , where more indicative edge types should have higher values of w i . The indicativeness of a path (p) is modeled using logistic regression: p σ( w • φ), where φ is the path's "bag of edges" representation, i.e. a feature vector of each edge type's frequency in the path. The probability of a term-pair being positive can be determined using either the sum of all path scores or the score of its most indicative path (max-pooling). We trained both variants with back-propagation This model can theoretically quantify how indicative each edge type is of R. Specifically, it can differentiate weakly indicative edges (e.g. meronyms) from those that contradict R (e.g. antonyms). However, on our datasets, this model yielded sub-optimal results (see §6.1), and therefore serves as a baseline to the binary model presented in the following section. Preliminary experiments suggested that in most datasets, each edge type is either indicative or non-indicative of the target relation R. We therefore developed a binary model, which defines a 4 F β = (1+β 2 )•precision•recall β 2 •precision+recall 5 Classification We represent each path p as a binary "bag of edges" φ, i.e. the set of edge types that were applied in p. Given a term-pair (x, y) represented as a path-set paths(x, y), and a whitelist w, the model classifies (x, y) as positive if: ∃φ ∈ paths(x, y) : In other words: 1. A path is classified as indicative if all its edge types are whitelisted. 2. A term-pair is classified as positive if at least one of its paths is indicative. The first design choice essentially assumes that R is a transitive relation. This is usually the case in most inference relations (e.g. hypernymy, causality). In addition, notice that the second modeling assumption is unidirectional; in some cases xRy, yet an indicative path between them does not exist. This can happen, for example, if the relation between them is not covered by the resource, e.g. causality in WordNet. Training Learning the optimal whitelist over a training set can be cast as a subset selection problem: given a set of possible edge types E = {e 1 , ..., e n } and a utility function u : 2 E → R, find the subset (whitelist) w ⊆ E that maximizes the utility, i.e. w * = arg max w u(w). In our case, the utility u is the F β score over the training set. Structured knowledge resources contain hundreds of different edge types, making E very large, and an exhaustive search over its powerset infeasible. The standard approach to this class of subset selection problems is to apply local search algorithms, which find an approximation of the optimal subset. We tried several local search algorithms, and found that genetic search In our application of genetic search, each individual (candidate solution) is a whitelist, represented by a bit vector with a bit for each edge type. We defined the fitness function of a whitelist w according to the F β score of w over the training set. We also applied L 2 regularization to reduce the fitness of large whitelists. The binary edge model works well in practice, successfully replicating the common practice of manually selected relations from WordNet (see §6.1). 
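The whitelist learner can be sketched as a small genetic search over edge-type bit-vectors whose fitness is the training-set F_β minus a size penalty; the genetic operators and hyperparameter values below are assumptions made for illustration, not the settings used in the experiments.

```python
# Sketch of binary whitelist learning: a pair is positive iff some path uses
# only whitelisted edge types; the whitelist is chosen by genetic search.

import random

def classify(pair_paths, whitelist):
    """A pair is positive iff at least one of its paths is fully whitelisted."""
    return any(path and all(e in whitelist for e in path) for path in pair_paths)

def f_beta(tp, fp, fn, beta2):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom else 0.0

def fitness(bits, edge_types, train, beta2, reg=0.01):
    whitelist = {e for e, b in zip(edge_types, bits) if b}
    tp = fp = fn = 0
    for paths, label in train:
        pred = classify(paths, whitelist)
        tp += pred and label
        fp += pred and not label
        fn += (not pred) and label
    return f_beta(tp, fp, fn, beta2) - reg * sum(bits)   # penalize large whitelists

def genetic_search(edge_types, train, beta2=0.5, pop=40, gens=50, mut=0.05):
    n = len(edge_types)
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda b: -fitness(b, edge_types, train, beta2))
        parents = scored[: pop // 2]                       # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut else g for g in child]  # mutation
            children.append(child)
        population = parents + children
    best = max(population, key=lambda b: fitness(b, edge_types, train, beta2))
    return {e for e, bit in zip(edge_types, best) if bit}

# Hypothetical usage: each training example is (paths, label), where a path is
# a tuple of edge-type names extracted as in the previous section.
edge_types = ["occupation", "subclass_of", "instance_of", "spouse"]
train = [
    ([("occupation", "subclass_of")], True),
    ([("instance_of", "subclass_of")], True),
    ([("spouse",)], False),
]
print(genetic_search(edge_types, train))
```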
In addition, the model outputs a humaninterpretable set of indicative edges. Although the weighted model's hypothesis space subsumes the binary model's, the binary model performed better on our datasets. We conjecture that this stems from the limited amount of training instances, which prevents a more general model from converging into an optimal solution. We used 3 existing common-noun datasets and one new proper-name dataset. Each dataset consists of annotated (x, y) term-pairs, where both x and y are noun phrases. Since each dataset was created in a slightly different manner, the underlying semantic relation R varies as well. kotlerman2010 Two additional datasets were created using WordNet An important linguistic component that is missing from these lexical-inference datasets is propernames. We conjecture that much of the added value in utilizing structured resources is the ability to cover terms such as celebrities (Lady Gaga) and recent terminology (social networks) that do not appear in WordNet. We thus created a new dataset of (x, y) pairs in which x is a proper-name, y is a common noun, and R is the is a relation. For instance, (Lady Gaga, singer) is true, but (Lady Gaga, f ilm) is false. To construct the dataset, we sampled 70 articles in 9 different topics from a corpus of recent events (online magazines). As candidate (x, y) pairs, we extracted 24,000 pairs of noun phrases x and y that belonged to the same paragraph in the original text, selecting those in which x is a propername. These pairs were manually annotated by graduate students, who were instructed to use their world knowledge and the original text for disambiguation (e.g. England → team in the context of football). The agreement on a subset of 4,500 pairs was κ = 0.954. After annotation, we had roughly 800 positive and 23,000 negative pairs. To balance the dataset, we sampled negative examples according to the frequency of y in positive pairs, creating "harder" negative examples, such as (Sherlock, lady) and (Kylie M inogue, vice president). We first validate our framework by checking whether it can automatically replicate the common manual usage of WordNet. We then evaluate it on the proper-name dataset using additional resources. Finally, we compare our method to stateof-the-art distributional methods. Experimental Setup While F 1 is a standard measure of performance, it captures only one point on the recall-precision curve. Instead, we present the entire curve, while expecting the contribution of structured resources to be in the high-precision region. To create these curves, we optimized our method and the baselines using F β with 40 values of β 2 ∈ (0, 2). We randomly split each dataset into 70% train, 25% test and 5% validation. We examine whether our algorithm can replicate the common use of WordNet ( §2.1), by manually constructing 4 whitelists based on the literature (see Table We also observe that, in most cases, our algorithm outperforms Resnik's similarity. In addition, the weighted model does not perform as well as the binary model, as discussed in §4.2. We therefore focus our presentation on the binary model. We evaluated our model on the new proper-name dataset proper2015 described in §5.2. This time, we incorporated all the resources described in §2.2 (including WordNet) into our framework, and compared the performance to that of using WordNet alone. 
Indeed, our algorithm is able to exploit the information in the additional resources and greatly increase performance, particularly recall, on this dataset (Figure ). The performance boost in proper2015 demonstrates that community-built resources have much added value when considering proper names. As expected, many proper names do not appear in WordNet (Doctor Who). That said, even when both terms appear in WordNet, they often lack important properties covered by other resources (Louisa May Alcott is a woman). Lexical inference has been thoroughly explored in distributional semantics, with recent supervised methods. To represent term-pairs with distributional features, we downloaded the pre-trained word2vec embeddings. In levy2014, there is an overwhelming advantage to our resource-based method over the corpus-based method. This dataset contains healthcare terms and might require a domain-specific corpus to train the embeddings. Having said that, many of its examples are of an ontological nature (drug x treats disease y), which may be more suited to our resource-based approach, regardless of domain. Since resource-based methods are precision-oriented, we analyzed our binary model by selecting the setting with the highest attainable recall that maintains high precision. This point is often at the top of a "precision cliff" in Figures . The high-precision settings we chose resulted in few false positives, most of which are caused by annotation errors or resource errors. Naturally, regions of higher recall and lower precision will yield more false positives and fewer false negatives. We thus focus the rest of our discussion on false negatives (Table ). While structured resources cover most terms, the majority of false negatives stem from the lack of indicative paths between them. Many important relations are not explicitly covered by the resources, such as noun-quality (saint → holiness), which are abundant in turney2014, or causality (germ → infection), which appear in levy2014. These examples are occasionally captured by other (more specific) relations, and tend to be domain-specific. In kotlerman2010, we found that many false negatives are caused by annotation errors in this dataset. Pairs are often annotated as positive based on associative similarity (e.g. transport → environment, financing → management), making it difficult to even manually construct a coherent whitelist for this dataset. This may explain the poor performance of our method and other baselines on this dataset. In this paper, we presented a supervised framework for utilizing structured resources to recognize lexical inference. We demonstrated that our framework replicates the common practice of WordNet and can increase the coverage of proper names by exploiting larger structured resources. Compared to the prior practice of manually identifying useful relations in structured resources, our contribution offers a principled learning approach for automating and optimizing this common need. While our method enjoys high precision, its recall is limited by the resources' coverage. In future work, combining our method with high-recall corpus-based methods may have synergistic results. Another direction for increasing recall is to use cross-resource mappings to allow cross-resource paths (connected at the concept level). Finally, our method can be extended to become context-sensitive, that is, deciding whether the lexical inference holds in a given context. This may be done by applying a resource-based WSD approach similar to existing ones.
Selective Labeling: How to Radically Lower Data-Labeling Costs for Document Extraction Models
Building automatic extraction models for visually rich documents like invoices, receipts, bills, tax forms, etc. has received significant attention lately. A key bottleneck in developing extraction models for new document types is the cost of acquiring the several thousand high-quality labeled documents that are needed to train a model with acceptable accuracy. In this paper, we propose selective labeling as a solution to this problem. The key insight is to simplify the labeling task to provide "yes/no" labels for candidate extractions predicted by a model trained on partially labeled documents. We combine this with a custom active learning strategy to find the predictions that the model is most uncertain about. We show through experiments on document types drawn from 3 different domains that selective labeling can reduce the cost of acquiring labeled data by 10× with a negligible loss in accuracy.
Visually rich documents such as invoices, receipts, paystubs, insurance statements, tax forms, etc. are pervasive in business workflows. The tedious and error-prone nature of these workflows has led to much recent research into machine learning methods for automatically extracting structured information from such documents. A critical hurdle in the development of high-quality extraction systems is the large cost of acquiring and annotating training documents belonging to the target types. The human annotators often require training not only on the use of the annotation tools but also on the definitions and semantics of the target document type. The annotation task can be tedious and cognitively taxing, requiring the annotator to identify and draw bounding boxes around dozens of target fields in each document. This data efficiency requirement has not gone unnoticed in the research literature on this topic. However, even with model pre-training, the cost of acquiring high-quality labeled data for hundreds of document types is prohibitively expensive and is currently a key bottleneck. We could apply active learning strategies to select a small number of informative documents for human review. We interleave rounds of such human annotation with training a model that is capable of consuming partially labeled documents. In combination, our proposed approach dramatically improves the efficiency of the annotation workflow for this extraction task. In fact, through experiments on document types drawn from multiple domains, we show that selective labeling allows us to build models with 10× lower annotation cost while achieving nearly the same accuracy as a model trained on several thousand labeled documents. Note that our goal in this paper is not to advance the state-of-the-art in active learning, nor to propose a more data-efficient model for extraction from layout-heavy documents. Our main contribution is that we demonstrate that a novel combination of an existing active-learning strategy with an existing extraction model can be used to dramatically cut down the primary bottleneck in developing extraction models for visually rich documents.
We first describe how a typical annotation task is set up to acquire labeled documents. We point out two major deficiencies with this approach before outlining an alternative that takes advantage of the characteristics of this domain. We then describe the assumptions underlying our approach. Given a document type for which we want to learn an extraction model, we begin by listing out the fields that we want to extract, along with humanreadable descriptions, viz., "labeling instructions". We provide these instructions to human annotators and present them with various document images to label. The classic annotation task is to draw a bounding box around each instance of any of the target fields and label it with the corresponding field name (Figure The high cognitive burden of the classic annotation workflow leads to two major drawbacks. First, it makes training data collection extremely expensive. In one annotation task for paystub-like documents with 25 target fields, the average time to label each document was about 6 minutes. Scaling this to hundreds of document types with thousands of documents each would be prohibitively expensive. Second, the resulting annotation quality is often quite poor. We have observed systematic errors such as missing labels for fields that occur infrequently in the documents or for instances that are in the bottom third of the page. To obtain acceptable training and test data quality, each document must be labeled multiple times, further exacerbating the annotation cost issue. We propose the following alternative to the classic annotation workflow: 1. We speed up labeling throughput by simplifying the task: rather than drawing bounding boxes, we ask annotators to accept or reject a candidate extraction. Figure 2. We further cut down annotation cost by only labeling a subset of documents and only a subset of fields in each document. 3. We use a model trained on partially labeled documents to propose the candidate extraction spans for labeling. This allows us to interleave model training and labeling so that the model keeps improving as more labels are collected. 4. We use a customized active learning strategy to identify the most useful labels to collect, viz., the candidate extraction spans about which the model is most uncertain. In successive labeling rounds, we focus our labeling budget on the fields that the model has not yet learned to extract well, such as the more infrequent ones. In Section 5, we show empirical evidence that this improved workflow allows us to get to nearly the same quality as a model trained on 10k docs by spending an order-of-magnitude less on datalabeling. Note that naively switching the labeling task to the "yes/no" approach does not cut down the labeling cost -if we were to highlight every span that might potentially be an amount and present an "Is this the tax_amount?" question, with the dozens of numbers that are typically present in an invoice, this workflow will be much more expensive than the classic one. A key insight we contribute is that a model trained on a modest amount of data can be used to determine a highly effective subset of "yes/no" questions to ask. We make the following four assumptions about the problem setting: 1. We assume access to a pool of unlabeled documents. This is a natural assumption in any work on managing cost of acquiring labeled training data. We assume the extraction model can be trained on partially labeled documents. 3. 
We assume the model can generate candidate spans for each field and a measure of uncertainty -this is used to decide the set of "yes/no" questions to present to the annotator. 4. The analysis in this paper uses empirical measurements for labeling tasks on documents with roughly 25 fields to model the costs of the traditional approach (6 minutes per document, details in Appendix) and the proposed approach (10 seconds per "yes/no" question Throughout this work, we use an extraction system similar to the architecture described in We first provide an overview of the selective labeling framework before describing various uncertainty measures and ways to deal with the unique characteristics of this setting, such as varying difficulty for different fields. Figure We begin by fully labeling a small randomly sampled subset of documents S d ⊆ U d , say 50-250 documents, using the classic annotation workflow. We learn an initial document extraction model f (x|S c ), where S c represents the candidate set contained in S d and we mark all the remaining unlabeled candidates in U d \S d as U c . Our labeling workflow proceeds in rounds. In each round j, the model is used to select candidates S c j from U c and have them reviewed by human annotators. The annotators answer a "yes/no" question either accepting or rejecting this proposed label. As a result, S c = S c ∪ S c j and U c = U c \S c j , meaning the newly labeled examples are merged into the training set and removed from the unlabeled set. The model is retrained on S c in each round and we repeat this iterative labeling-and-training procedure until we exhaust our annotation budget or reach our target F1 score. We select the candidates that the model is most uncertain about. In this work, we explored two metrics to quantify a model's prediction uncertainty. Score distance. This method assigns a metric to each candidate based on the distance that the score is from some threshold Our model's predicted scores tend to be uncalibrated (as is very typical of neural networks By calibrating the scores, threshold selection becomes much more intuitive for the score-based uncertainty metric. For example, if we specify a threshold of 0.5, we expect that to mean we will select candidates for which the model has a 50% chance of classifying correctly across all fields. Once the uncertainty metric is calculated for each candidate in the unlabeled set, the next step is to select a subset of those candidates for human review. The most obvious method is to select the top-k candidates, thereby selecting the candidates for which the model is most uncertain. In practice, this can lead to sub-optimal results when the model finds many examples for which it is uncertain but may in fact be very similar to one another. The most common approach to break out of this trap is to introduce some notion of diversity in the sampling methodology After candidates have been selected and labeled, we merge the newly-labeled candidates into our training set. At this point, there is another opportunity to draw additional value from the unlabeled corpus by utilizing the structure of the extraction problem, in particular, for fields that are defined in the domain's schema to only have a single value per document (such as a document identifier, statement date, amount due, etc.). The key insight here is that when a positive label is revealed via selective labeling, we can infer negative labels for some remaining candidates in the document. 
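A minimal sketch of the score-distance uncertainty and the top-k-plus-random selection described above might look as follows; the function names, the default random fraction, and the threshold value are illustrative assumptions rather than the paper's actual configuration.

```python
import random
from typing import List, Tuple

def uncertainty(score: float, threshold: float = 0.5) -> float:
    """Score-distance uncertainty: candidates whose calibrated score is
    closest to the threshold are the ones the model is least sure about.
    Smaller distance means higher uncertainty, so we negate it."""
    return -abs(score - threshold)

def select_candidates(scored: List[Tuple[str, float]],
                      budget: int,
                      random_fraction: float = 0.2,
                      threshold: float = 0.5,
                      seed: int = 0) -> List[str]:
    """Pick `budget` candidate ids for 'yes/no' review: mostly the top-k
    most uncertain ones, plus a random slice to inject some diversity."""
    rng = random.Random(seed)
    ranked = sorted(scored, key=lambda cs: uncertainty(cs[1], threshold), reverse=True)
    n_random = int(budget * random_fraction)
    top = [cid for cid, _ in ranked[: budget - n_random]]
    rest = [cid for cid, _ in ranked[budget - n_random:]]
    top += rng.sample(rest, min(n_random, len(rest)))
    return top
```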
If the schema indicates that a particular field is non-repeating, we can automatically infer that all of that field's remaining candidates in the document are negative. To evaluate the performance of our proposed methods, we use datasets belonging to three different domains, summarized in Table In this section, we provide evidence to prove that selective labeling reduces the annotation cost by 10X in different domains and analysis to support the design choices including number of selection rounds, selection and sampling strategies. We train three initial models on a randomly sampled and labeled set of 100 documents for each domain. For example, as shown in Figure Starting from the same initial model, we apply our best selective labeling strategy (which we discuss in the following sections) to reveal the labels from a subset of candidates that comprises only 10% of the annotation cost of fully labeling the hidden-label dataset. For the Supply Chain domain, this achieves an F1 score of 0.687, which closes the performance gap by 89%. Similarly, we close the gap by 88% and 92% for the Retail Finance and Tax Forms domains, respectively. This demonstrates that our method can dramatically decrease the annotation cost without sacrificing much performance and can be generalized well to other document types. In Figure Figure The top-k strategies produce much more impressive results. Furthermore, we observe in later rounds that injecting some diversity via randomness achieves slightly better performance than the vanilla top-k approach. We believe this mimics the aggregation of exploitation (top-k) and exploration (random) processes, proven to be beneficial in reinforcement learning applications In Figure As we increase the total number of rounds, the model tends to yield better extraction performance until it peaks at about 12 rounds. This finer-grained strategy usually performs better than coarser ones but the gains become marginal at a higher number of rounds. Interestingly, we find that using up just half the budget in the first 8 rounds of a 16-round experiment achieves slightly better performance than exhausting the entire budget in the 1-round experiment. This comparison underscores the importance of employing a multi-round approach. Table It is reasonable to conclude that increasing diversity intelligently helps us select more useful candidates than relying on the uncertainty metric alone. Given the dependence of the selective labeling method on an initially labeled small dataset, it is imperative that we evaluate how the approach is affected by the number of documents in this initial dataset. We experiment with initial datasets of 50, 100, and 250 documents in the Supply Chain domain using our best selective labeling strategy and a budget equivalent of 10% cost of annotating the "unlabeled" dataset. Figure Note that the model can extrapolate to fields that are not present in the initial set of documents. For each document type, a schema is defined to include all types of fields that users may be interested in. The model can generate candidates no matter if the field exists in the initial document set or not, as long as the field is included in the schema. We examine the extraction performances of eight fields from the Supplier Chain document type in Figure For frequent fields such as date_invoiced and in-voice_number, the initial model performs well, and there is not much room for improvement. 
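The negative-label inference for non-repeating fields can be made concrete with a small helper; the data layout (candidate ids keyed by field name) is a hypothetical simplification of however the system actually stores candidates.

```python
from typing import Dict, List, Set

def infer_negatives(doc_candidates: Dict[str, List[str]],
                    confirmed_positive: Dict[str, str],
                    non_repeating_fields: Set[str]) -> Dict[str, List[str]]:
    """Once a 'yes' answer reveals the value of a non-repeating field,
    every other candidate for that field in the same document can be
    labeled negative without asking the annotator."""
    inferred: Dict[str, List[str]] = {}
    for field, candidates in doc_candidates.items():
        if field in non_repeating_fields and field in confirmed_positive:
            positive = confirmed_positive[field]
            inferred[field] = [c for c in candidates if c != positive]
    return inferred

# Example with hypothetical candidate ids:
# infer_negatives({"invoice_date": ["c1", "c2", "c3"]},
#                 {"invoice_date": "c2"},
#                 {"invoice_date"})   ->  {"invoice_date": ["c1", "c3"]}
```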
Consequently, few candidates are selected and the resulting Selective Labeling model remains competitive on these fields. Form Extraction. There have been numerous recent studies on information extraction for form-like documents. Existing approaches either individually categorize every text span in the document We propose selective labeling that dramatically cuts down the primary dataset annotation bottleneck in developing extraction models for visually rich documents. There are several future avenues for investigation. First, we simplified the annotation task to a binary "yes/no" question. Another approach is to allow the annotator to either accept the candidate annotation, or correct it -either by deleting it or by adjusting the bounding box. For certain text fields it can be valuable to adjust spans to include/exclude details like salutations from a name field ("Mr.", "Dr." etc.) or names from an address. The cost model for such an option is more complex than "yes/no", but can be used to build on the results in this paper. Second, many recent approaches Within the scope of this paper, the proposed method is limited to utilizing combinations of candidate generators and scorers. As explained in Section 7, many recent attractive approaches treat document extraction as a sequence labeling problem using a layout-aware language model. This model family is attractive because it does not require a generator. However, constructing selective labeling on sequence labeling models is not a simple task, as we must figure out how to obtain an uncertainty estimate for each span from a sequence labeling model, how to define spans without a candidate generator, and how to train the model with partially labeled documents, etc. We understand the limitation of the availability of datasets. We are currently unable to open-source them since the datasets contain proprietary information (such as vendors and suppliers) that prevent us from sharing publicly. We use internal datasets in this work because they reflect the real-world needs of our institution and its customers better than public datasets. Compared to the few available public datasets, such as FUNSD To explore how the size of the initial labeled dataset impacts our methods, we create three initial splits for the Supply Chain domain with 50, 100, and 250 documents. In all of our experiments, we split the train set into 80-20 training-validation sets. The validation set is used to pick the best model by AUC-ROC, and we use the test split to report the performance metrics. We train using the Rectified Adam We evaluate our methods by measuring the overall extraction system's performance on the test set using the maximum F1 averaged across all fields, denoted as "Average E2E Max F1" in We acquired stats from our team of annotators on how long the classic annotation takes for various document types. We found it averaged 6-8 min for an annotator to label a single-page document with fewer than 20 fields while it averaged 10-30 min for an annotator to label a multi-page document with 25 fields. So we picked a very conservative value (6 min) as the estimated time of labeling one document in this paper. We employ two annotation methods: the classic annotation method, which is always applied to the initial training set, and the proposed "yes/no" method, which is applied during the selective labeling procedure on the unlabeled dataset. The annotation budget is computed based on the time needed to annotate a full document and to answer a yes/no question. 
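Using the time estimates quoted earlier (roughly 6 minutes per fully labeled document and about 10 seconds per "yes/no" question), the budget conversion is simple arithmetic. The corpus size in the example below is our own assumption, chosen because it lines up with the 36k questions reported for the Supply Chain domain if that unlabeled set is on the order of 10k documents.

```python
def yes_no_budget(num_unlabeled_docs: int,
                  budget_fraction: float = 0.10,
                  minutes_per_doc: float = 6.0,
                  seconds_per_question: float = 10.0) -> int:
    """Convert a fraction of the classic annotation cost into an equivalent
    number of 'yes/no' questions, using per-document and per-question time."""
    total_minutes = num_unlabeled_docs * minutes_per_doc
    budget_minutes = total_minutes * budget_fraction
    return int(budget_minutes * 60 / seconds_per_question)

# e.g. a 10,000-document corpus at 6 min/doc gives 60,000 annotation minutes;
# 10% of that (6,000 min = 360,000 s) buys 360,000 / 10 = 36,000 questions.
print(yes_no_budget(10_000))  # -> 36000
```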
Targeting 10% of the cost to fully label the unlabeled dataset via the classic annotation method, translates to selectively labeling 36k, 14k, and 27k "yes/no" questions for Supply Chain, Retail Finance, and Tax Forms domains according to the estimation of same amount of annotation hours. If we bootstrap the model using the classic annotation workflow on a small number of documents, we simply subtract that cost from the budget for selective annotation. We believe that the problem of imperfect candidate generation requires more discussion. We build our Selective Labeling framework on the model architecture introduced in We gradually increased the annotation budget and observed the corresponding results in Table As explained in Section 2.2, we use an extraction system similar to the architecture described in To predict whether an extraction candidate is a valid value for a given target field, the scorer model takes the target field and the extraction candidate as input and outputs a prediction score. The model is trained and evaluated as a binary classifier, which means that it predicts whether an extraction can-didate is valid or invalid. The features of each extraction candidate used in the scorer model are its neighboring words and their relative positions. The model learns a dense representation for each extraction candidate using a simple self-attention based architecture. This representation captures the semantics of the extraction candidate. The model also learns dense representations for each field in the target schema. These representations capture the semantics of the fields. Based on the learned candidate and field representations, each extraction candidate is scored based on the similarity to its corresponding field embedding. The model is trained as a binary classifier using cross-entropy loss. The target labels are obtained by comparing the candidate to the ground truth. Details of the model architecture can be found in
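To make the scorer description above concrete, here is a toy PyTorch sketch of a candidate-vs-field-embedding scorer trained as a binary classifier. The real model encodes neighboring words and their relative positions with a self-attention architecture; this stand-in uses a small MLP, and all dimensions, names, and the similarity scaling are placeholders, not the actual architecture.

```python
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    """Scores an extraction candidate by comparing its learned representation
    to a learned embedding of the target field, trained with binary
    cross-entropy against ground-truth validity labels."""
    def __init__(self, num_fields: int, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.field_emb = nn.Embedding(num_fields, hidden_dim)
        # Placeholder encoder over pre-extracted neighborhood features.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim))
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, candidate_feats, field_ids, labels=None):
        cand = self.encoder(candidate_feats)                     # (B, H)
        field = self.field_emb(field_ids)                        # (B, H)
        # Cosine similarity scaled into a logit range (scale is arbitrary).
        logits = nn.functional.cosine_similarity(cand, field) * 10.0
        if labels is not None:
            return logits, self.loss_fn(logits, labels.float())
        return logits
```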
A Semi-Markov Structured Support Vector Machine Model for High-Precision Named Entity Recognition
Named entity recognition (NER) is the backbone of many NLP solutions. F1 score, the harmonic mean of precision and recall, is often used to select/evaluate the best models. However, when precision needs to be prioritized over recall, a state-of-the-art model might not be the best choice. There is little in the literature that directly addresses training-time modifications to achieve higher precision information extraction. In this paper, we propose a neural semi-Markov structured support vector machine model that controls the precision-recall trade-off by assigning weights to different types of errors in the loss-augmented inference during training. The semi-Markov property provides more accurate phrase-level predictions, thereby improving performance. We empirically demonstrate the advantage of our model when high precision is required by comparing against strong baselines based on CRF. In our experiments with the CoNLL 2003 dataset, our model achieves a better precision-recall trade-off at various precision levels.
Named Entity Recognition (NER) is the task of locating and categorizing phrases into a closed set of classes, such as organizations, people, and locations. NER is an information extraction task that is important for understanding large bodies of text and is an essential component for many natural language processing (NLP) pipelines. The most common evaluation metric for information extraction tasks is F1, which is the harmonic mean between precision and recall: that is, false positives and false negatives are weighted equally. In certain real-world applications (e.g., medicine and finance), extracting wrong information is much worse than extracting nothing: hence, in such domains, high precision is emphasized. Trade-offs between precision and recall have been well researched for classification. By defining custom loss objectives for the structured SVM (SSVM) model, we extend cost-sensitive learning to structured prediction. We compare our semi-Markov SSVM model with several competitive inference-time baselines that have been proposed for high-precision NER. Our results show that our model outperforms competitive baselines on organization names, and is at least as good as the best inference-time approaches at some precision levels for other NER classes.
For classification, several papers try to optimize different evaluation metrics directly. Cost-sensitive classification assigns different costs to different error types. For sequence tagging problems, inference-time heuristics for tuning the precision-recall trade-off for information extraction models have been proposed. We adopt the BiLSTM-CNNs architecture with word and character-level embeddings. At the output layer, instead of using a CRF, we train with a structured hinge (SSVM) loss of the form Σ_i max_{y ∈ Y_{x_i}} [∆(y_i, y) + s(x_i, y) − s(x_i, y_i)], where ∆ is the Hamming loss between two sequences, Y_{x_i} contains all possible label assignments for the sentence x_i, and s is the decoding score between input sentence x and label sequence y. Without modifications, the SSVM performs similarly to the CRF. However, the presence of ∆(y_i, y) in the SSVM loss allows us to design custom loss functions for high precision NER. No inference-time changes are introduced. The first modification we make is to pick a target entity class and modify ∆(y_i, y) to have a word-wise loss of ε_tgt for false positives on the target class and a loss of ε_other for false positives on other classes. That is, let y_i^j be the j-th element of sequence y_i; we define ∆(y_i, y) = Σ_j w_j, where w_j = 0 if y_i^j = y^j, w_j = ε_tgt if y_i^j ≠ y^j and y^j belongs to the target class, and w_j = ε_other if y_i^j ≠ y^j and y^j does not belong to the target class. Note that the target class in the above equation contains all the labels related to the target entity type; that is, if the target class is ORG, we consider B-ORG and I-ORG to be the related labels. Typically ε_tgt ≫ ε_other, so that false positives on the target class will generate more loss, thereby discouraging the model from making such decisions. Both ε_tgt and ε_other are determined through hyper-parameter tuning. Setting ε_tgt = ε_other = 1 falls back to the standard Hamming loss. Semi-Markov SSVM A problem with token-level loss is that it does not always reflect phrase-level errors accurately; it may over-generate loss since a phrase could consist of multiple tokens. It is unclear how individual token false positives contribute to phrase-level false positives. Therefore, we try a semi-Markov variation of the SSVM following prior work. To tune the semi-Markov SSVM model to high precision for a specific class, a segment will contribute ε_tgt to the loss if it is predicted as the target class and this segment does not exist in the gold segmentation. Other types of errors in the prediction have a loss of ε_other. This is similar to the class-specific loss used at the token level in the SSVM formulation. In our experiments, we refer to the token-level model simply as SSVM, and the segment-level model as semi-Markov SSVM. All experiments were conducted on the CoNLL 2003 English dataset. We first show the performance of CRF, SSVM, and semi-Markov SSVM models without tuning for high precision in Table 1. We see that all three models perform similarly, with CRF being slightly better. These numbers are the starting points for the rest of the experiments. We compare the proposed models with the following inference-time baselines: Bootstrap CRF By generating bootstrap samples of the CoNLL training set, we generate 100 BiLSTM CRF models. To increase precision over a single CRF, we decode each sentence with each of the 100 models and compute the votes for each proposed named entity. The threshold (percent of votes) for a candidate entity is hyper-tuned. Using the dev set, we tune the hyper-parameters of each model at which the desired precision is achieved. For our proposed SSVM-based models, the tuned hyper-parameters are ε_tgt and ε_other. We set several precision levels from 90 to 100.
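Before turning to the results, here is a concrete reading of the class-specific Hamming cost defined above as it would be computed during loss-augmented inference. The function name is ours, and the default weights are just values inside the search ranges reported later, not the authors' settings.

```python
from typing import List, Set

def class_specific_hamming(gold: List[str],
                           pred: List[str],
                           target_labels: Set[str],
                           eps_tgt: float = 3.0,
                           eps_other: float = 0.01) -> float:
    """Token-level cost Delta(y_gold, y_pred): a mistake whose *predicted*
    label belongs to the target class (e.g. B-ORG, I-ORG) costs eps_tgt,
    any other mistake costs eps_other, and matching tokens cost nothing.
    Choosing eps_tgt >> eps_other discourages target-class false positives."""
    cost = 0.0
    for g, p in zip(gold, pred):
        if g == p:
            continue
        cost += eps_tgt if p in target_labels else eps_other
    return cost

# Example: with target class ORG, predicting B-ORG where gold is O is heavily
# penalized, while predicting B-LOC where gold is O adds only a small cost.
print(class_specific_hamming(["O", "O"], ["B-ORG", "B-LOC"], {"B-ORG", "I-ORG"}))  # 3.01
```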
For each precision level, we choose the hyper-parameters which have precision higher than the target precision level and obtain the maximum recall (ε_tgt is searched in the range between 1 and 5, and ε_other between 0.0001 and 0.1). We can see that semi-Markov SSVM clearly outperforms all the other models for ORG, is on par with Thresholded CRF for LOC, and has some strong points in the high precision region for PER. The good performance on ORG is consistent with the observation in Ye and Ling (2018) that semi-Markov models have advantages in longer phrases because labels are assigned at the segment level directly. Since longer mentions tend to have a smaller phrase probability and the length of ORG mentions varies more than the length of the other two types, Thresholded CRF is less robust for ORG. The token-based SSVM is consistently worse than semi-Markov SSVM and fails to achieve higher precision, especially for PER. This shows that the semi-Markov property penalizes false positives at the phrase level more accurately. Bootstrap CRF does not perform well for ORG and LOC, but is pretty strong for PER at some precision levels. We believe the higher performance of bootstrap CRF on the PER class comes from the fact that the baseline CRF model itself achieves very high precision for this class, which allows the bootstrapping technique to reduce the variance of the predictions accurately. This makes the bootstrapping approach more promising in situations where models have already achieved very high precision. We perform error analysis for the two main methods: Thresholded CRF and semi-Markov SSVM. We pick model settings such that both models achieve the same precision level (ORG: 94.5, PER: 97.9, LOC: 95.5) for a given class (see Table ). These two methods can be applied together to achieve even better results. For example, thresholding and bootstrap techniques can be applied to semi-Markov SSVM models as well. In this work, we focus on showing the performance of individual approaches. Another question is what types of errors are reduced when tuning towards precision. We find that precision tuning reduces all error types, but especially the MISC type errors for all 3 classes (i.e., MISC being classified as one of the other 3 classes). We proposed a semi-Markov SSVM model for high-precision NER. To the best of our knowledge, it is the first training-time model for high-precision structured prediction. Experimental results show that our model performs better than inference-time approaches at several precision levels, especially for longer mentions. The proposed model offers promising future extensions in terms of directly optimizing other metrics such as recall and F_β. This work also opens up a range of questions from modeling to evaluation methodology.
Dealing with Semantic Underspecification in Multimodal NLP
Intelligent systems that aim at mastering language as humans do must deal with its semantic underspecification, namely, the possibility for a linguistic signal to convey only part of the information needed for communication to succeed. Consider the usages of the pronoun they, which can leave the gender and number of its referent(s) underspecified. Semantic underspecification is not a bug but a crucial language feature that boosts its storage and processing efficiency. Indeed, human speakers can quickly and effortlessly integrate semantically underspecified linguistic signals with a wide range of non-linguistic information, e.g., the multimodal context, social or cultural conventions, and shared knowledge. Standard NLP models have, in principle, no or limited access to such extra information, while multimodal systems grounding language into other modalities, such as vision, are naturally equipped to account for this phenomenon. However, we show that they struggle with it, which could negatively affect their performance and lead to harmful consequences when used for applications. In this position paper, we argue that our community should be aware of semantic underspecification if it aims to develop language technology that can successfully interact with human users. We discuss some applications where mastering it is crucial and outline a few directions toward achieving this goal.
They put the flowers there. Speakers of a language hear sentences like this every day and have no trouble understanding what they mean-and what message they convey. This is because, in a normal state of affairs, they can count on a wide range of information from the surrounding context, personal knowledge and experience, social or cultural conventions, and so on. Upon hearing this sentence, for example, they would know that flowers go into vases, look in the direction where their interlocutor nodded their chin, see a vase with tulips on the windowsill, and infer that this is where someone put the flowers. Every time listeners need to count on extra, non-linguistic information to understand a linguistic signal, like in this example, it is because the language used is semantically underspecified The reason why semantic underspecification is so widespread has to do with language efficiency, which is a trade-off between informativeness and conciseness Semantic underspecification allows our limited repertoire of symbols to be used in many contexts and with different intentions without compromising its communicative effectiveness. For example, we can use the pronoun they to omit a person's gender or refer to a group of friends; the locative here to refer to a free table at a café or the institution you work for. Semantic underspecification is not a bug but a crucial feature of language that is ubiquitous in human communication In this position paper, we argue that semantic underspecification should be high on the NLP community agenda, particularly within approaches combining language and vision. We report that SotA multimodal NLP models struggle with it, and advocate a comprehensive, thorough investigation of the phenomenon along several research directions and concrete steps. Mastering semantic underspecification is a long-term goal that implies shifting the paradigm to a scenario where models use language as humans do, that is, with a communicative goal. In line with what was argued elsewhere
Semantic Underspecification? The field of multimodal or visually grounded NLP is currently dominated by pre-trained multimodal Transformers. Since their introduction, models like CLIP These models differ from each other in several dimensions. For example, they either concatenate and jointly process the visual and textual embeddings (single-stream models), or process the two modalities by means of separate encoders with an optional cross-modal fusion (dualstream models); or, they use visual features extracted with either CNN-based (e.g., region features from Faster R-CNN; Given this impressive performance, it is reasonable to expect that these models are robust to se-mantically underspecified language. Describing an image, asking a question, or entertaining a conversation about it are all communicative scenarios that admit a varying degree of semantic underspecification. For example, the question What are they doing? referred to a visual context with people playing an unusual sport is perfectly acceptableand indeed likely to be asked; or, the sentence A person is typing on their laptop to describe an office environment is not only a very good description of that context but perhaps even a desirable one. Therefore, mastering semantically underspecified language is a requisite for any multimodal NLP model which aims at both genuinely solving these tasks and being used for user-facing applications. To scratch the surface of the problem, we carry out two Proofs of Concept (hence, PoCs) using image descriptions and the CLIP model. When talking about a visual context, speakers of a language can convey the same message with varying levels of semantic specification. For example, they can describe someone waiting for the bus by referring to them as an elderly lady, a woman, a person, or they. Similarly, they can mention a location, i.e., the bus stop, or use the locatives here or there; an object, i.e., the bus, or use the demonstratives this or that; and so on. This is possible because the visual context provides enough information for the addressee to understand the message, even when it is extremely semantically underspecified. Almost by definition, standard image descriptions as those in COCO In the two PoCs below, we explore these two hypotheses. Note that we do so for illustrative purposes, highlighting general trends that can be useful for further, more thorough research. Moreover, it is worth stressing that, while we employ CLIP due to its effectiveness and accessibility, the point we make is more general in scope than focused on this specific model. The point is that models should not be affected by semantic underspecification when assessing the validity or applicability of an image description. Concretely, we use 100 images and corresponding descriptions (495 in total) from the 2014 train partition of COCO. Data and code available at: We compute CLIPScore for each of the 495 ⟨image, description⟩ pairs in our sample and select the 100 with the highest score. We refer to these 100 descriptions as Original. We then create up to 6 underspecified versions of each description in Original by manually perturbing their text to account for various underspecification phenomena. Such an annotation task was performed by a single annotator, the author of this paper, with a background in formal and computational linguistics. Perturbations are carried out only where possible (thus, not all descriptions have all 6 versions), without altering the grammatical structure of the sentence. 
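Before listing the perturbation types, here is a rough sketch of how such CLIPScore computations can be reproduced, scoring an ⟨image, description⟩ pair with the Hugging Face CLIP implementation and the commonly used reference-free CLIPScore formulation (2.5 · max(cosine, 0)). The paper does not state which implementation or checkpoint it uses, so the model name, scaling constant, and file name below are assumptions for illustration only.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clipscore(image: Image.Image, description: str) -> float:
    """Reference-free CLIPScore: 2.5 * max(cosine(image, text), 0)."""
    inputs = processor(text=[description], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    cos = torch.nn.functional.cosine_similarity(img, txt).item()
    return 2.5 * max(cos, 0.0)

# e.g. scoring an Original description against its (hypothetical) COCO image file:
# clipscore(Image.open("coco_example.jpg"), "A woman is standing above two packed suitcases.")
```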
The semantic underspecification phenomena we consider are illustrated in the example in Figure : starting from the description The woman is standing above the two packed suitcases, we derive underspecified versions such as The woman is standing above some packed suitcases, The person is standing above the two packed suitcases, They are standing above the two packed suitcases, The woman is standing here, and The woman is standing above this. • Quantity: We replace numbers (e.g., two) and quantity expressions (e.g., a couple) with the quantifier some. We compute CLIPScore for each underspecified description and report the results in Figure . These observations are surprising and go against our expectations that underspecified descriptions, if semantically valid, should be considered as good as overspecified ones. Indeed, why should a sentence containing a quantifier, a pronoun, or a locative be considered a poor description of a visual context? One possible explanation is that models like CLIP are sensitive to the amount of detail provided by an image description. More specifically, the more words there are in the sentence with a clear and unique visual referent, the more the description is deemed 'aligned' to an image. Are Underspecified Descriptions Better than Unrelated Ones? Even if CLIP was sensitive to the amount of detail provided by an image description (the more, the better), a valid underspecified description should always be deemed more aligned than an unrelated, overspecified one. That is, even a highly underspecified sentence like They are doing something here (if semantically valid for the image, which is the case in our small sample) should always be preferred over a description that is fully unrelated to the image. To test this hypothesis, we experiment with the Full description They are doing something here and, for each image, we test it against 10 randomly sampled Original descriptions of other images (in the example case discussed below, the sampled descriptions include A woman in a white dress is sitting with her cell phone, A girl with long brown hair with streaks of red lays on a bed and looks at an open laptop computer, A lady holding a bottle of ketchup and a dog in a hot dog bun costume, An infant sits next to a stuffed teddy bear toy, Woman sitting on a bench holding a hotdog in her hand, and Two small children playing with their refrigerator magnets). Surprisingly, for 82 images out of 100, at least one random caption achieves a higher CLIPScore than Full. While the actual numbers may depend on the number and the type of random descriptions being sampled, some qualitative observations are helpful to highlight the behavior of the model. Consider, as an example, the case reported in Figure . There are various possible explanations for this behavior. For example, the model could be 'dazzled' by the presence of words that have a grounded referent in the image (e.g., woman, girl, or lady in some of the unrelated descriptions), which could lead it to assign some similarity even when the sentence is completely out of place. Conversely, the absence of words, and particularly nouns, with a clear grounded referent in the Full description would be considered by the model as an indicator of misalignment. This could be a result of the model training data and learning objective. On the one hand, the ⟨image, text⟩ pairs scraped from the web may be poorly representative of language uses in real communicative contexts, where semantic underspecification is ubiquitous.
On the other hand, the contrastive learning objective being employed may be too aggressive with texts that do not conform to those typically seen in training. In both cases, the similarity assigned to an underspecified description would be lower than the (possibly small) similarity assigned to an unrelated sentence with one or a few matching elements. Moving forward Taken together, the results of the two PoCs show that CLIP struggles with semantically underspecified language. This limitation must be taken into consideration if we want to use this and similar systems to model real communicative scenarios or use them in applications that interact with human users-which is not the case for most of the tasks these models are trained and tested on. Indeed, these models may fail to retrieve an image if the language query used does not conform to the standard type of descriptions seen in training. Or, they could misunderstand inclusive uses of certain pronouns (e.g., they), and exhibit unwanted overspecification biases when producing an image description or referring utterance. We argue that our community, if it aims at developing language technology that can successfully and efficiently communicate with human users, should be aware of semantic underspecification and take steps toward making our models master it properly. In the next section, we discuss how this is relevant to a range of studies exploring multimodal tasks in communicative settings. Mastering semantic underspecification is relevant to a wide range of studies that take a communicative or pragmatic approach to multimodal tasks. Below, we focus on a select sample of them Standard image captioning 3 consists in generating a description that is as close as possible to the content of the image. Typically, the task is not tied to a real communicative goal: image descriptions are provided by crowdworkers who are asked to mention all the important aspects of an image To make the task more pragmatically valid, some work proposed a discriminative version of it where models need to generate a description for an image that is pragmatically informative, i.e., that is good for the image in the context of other distractor images 5 Captions of images in the context of news articles are a prototypical example the specular task of image-to-text generation, as recently claimed by Goal-oriented visual question answering Standard visual question answering datasets Similarly, these models may better integrate the complementary information conveyed by language and vision in, e.g., BD2BB Object naming and referring expressions Multimodal models should be robust to variation in object naming. For example, they should not consider as an error the use of the noun artisan to refer to the person in Figure Naming variation is also observed in more complex visually grounded reference games, where the task is to produce a referring expression that is pragmatically informative, i.e., that allows a listener to pick the target object (image). This task is the ideal benchmark for testing how various pragmatic frameworks, such as the Rational Speech Acts Turning to naturalistic scenarios, recent work used CLIP to quantify the properties of human referring expressions. 
The model was shown to capture the degree of discriminativeness of a referring expression over a set of images, though it assigned lower alignment scores (computed without taking into account the broader visual context) to progressively more compact utterances Visually-grounded goal-oriented dialogue All the abilities mentioned above are relevant to the development of dialogue systems that can entertain a goal-oriented conversation with human users. Examples of visually grounded goal-oriented dialogue encompass reference tasks where either yes/no questions In the next section, we outline a few research directions and provide examples of concrete steps that can guide work aimed at achieving this goal. As discussed in Section 1, semantic underspecification can be generally defined as the lack, in a linguistic signal, of part of the semantic information required to understand the message, which is typically obtained from other linguistic and nonlinguistic sources. To tackle the problem at a computational level, it is important to formally define and operationalize the phenomenon. For example, by identifying which linguistic phenomena, words, or classes of words are considered by the linguistic theory as instances of semantic underspecification and under which circumstances (top-down approach). Or, by means of a data-driven measure, such as the applicability of a text to a more or less large number of visual contexts (bottom-up approach). In either case, computational methods can be used to refine or validate such definition (this is the approach used, for example, by a recent work testing the Uniform Information Density theory using language models; Novel datasets or ad hoc annotations of existing resources can be collected to study underspecified language. These datasets can encompass the standard multimodal tasks (image captioning, visual question answering, etc.) and therefore be used as evaluation benchmarks to test existing models; or, new tasks can be proposed, including the prediction of an underspecification score, the para-phrasing or explanation of an underspecifed sentence (or, vice versa, the de-overspecification of a sentence), and so on. Moreover, annotations may be collected at the sample and dataset level to investigate, for example, whether overspecified and underspecified image descriptions or referring utterances are equally good, informative, or inclusive 7 according to human speakers, how many and which non-linguistic cues are needed to understand them, which visual and communicative contexts elicit more underspecified language, and so on. Operationalizing and annotating semantic underspecification can be useful, in turn, for training and testing purposes. As for the former, sampling cases from a dataset with a varying degree of semantic underspecification can be helpful for training or finetuning models to make them more robust to any language. As for the latter, benchmarking a model with underspecified language can shed light on its generalization abilities and applicability to truly communicative scenarios. Moreover, a measure of a sample's semantic underspecification could be used as an additional learning signal for the training of foundational, task-agnostic multimodal models. Indeed, such a measure may indicate the extent to which language and vision convey redundant or complementary information, the relative importance of each modality, and the relation between the correctness and self-sufficiency of a sample. 
Finally, it may be interesting to leverage the degree of semantic underspecification as a dimension to which NLG models can adapt, e.g., to generate text that is more or less specified depending on the context, the interlocutor's needs or style, and the communicative goal of the linguistic interaction. (These directions may also be relevant to the line of work exploring how to minimize biases and misrepresentations when describing images.) In this position paper, we argued that the NLP community must deal with semantic underspecification, that is, the possibility for a linguistic signal to convey only part of the information needed to understand a message. This is a ubiquitous phenomenon in human communication that speakers deal with by quickly and effortlessly integrating non-linguistic information, e.g., from the surrounding visual context. We argued that research in multimodal NLP combining language and vision is naturally equipped to account for this phenomenon, and yet current models struggle with it. On a technical level, our paper highlights the need to improve SotA models by making them robust to scenarios that may be different from those seen in training. In our case, CLIP suffers with sentences that resemble the language used in real communicative contexts, which poses a problem if we were to use it for modeling communicative tasks or embed it in user-facing applications. This general weakness of SotA models has been recently illustrated in the literature. On a theoretical level, the ideas presented in our paper are consonant with a recent line of thought that advocates approaches that are aware of communicative and pragmatic aspects in language understanding and generation. Semantic underspecification has been extensively studied in semantics, pragmatics, psycholinguistics, communication sciences, and cognitive sciences. In this position paper, we review this literature only superficially, although we are aware that a generalized and exhaustive understanding of the phenomenon necessarily requires knowledge of this previous work. We encourage the scholars working on this topic to embrace its complexity and depth. The paper focuses on approaches, tasks, and models within multimodal NLP. As such, it almost completely neglects a discussion of semantic underspecification within text-only NLP. However, we are aware of the growing interest in the community at large for frameworks that propose and evaluate models in pragmatic or communicative contexts. The two proofs of concept we report in the paper consider a rather narrow set of semantic underspecification phenomena, which may not be entirely representative. Moreover, the manual annotation that we perform, though consistent, does not adhere to any strict guidelines, and borderline cases are entrusted to the linguistic competence of the annotator. Finally, and more in general, these proofs of concept are mostly intended to serve as a basis for the discussion and as an indication of patterns and trends. Therefore, future work should further and more thoroughly investigate this issue.
NLP Workbench: Efficient and Extensible Integration of State-of-the-art Text Mining Tools
NLP Workbench is a web-based platform for text mining that allows non-expert users to obtain semantic understanding of large-scale corpora using state-of-the-art text mining models. The platform is built upon latest pretrained models and open source systems from academia that provide semantic analysis functionalities, including but not limited to entity linking, sentiment analysis, semantic parsing, and relation extraction. Its extensible design enables researchers and developers to smoothly replace an existing model or integrate a new one. To improve efficiency, we employ a microservice architecture that facilitates allocation of acceleration hardware and parallelization of computation. This paper presents the architecture of NLP Workbench and discusses the challenges we faced in designing it. We also discuss diverse use cases of NLP Workbench and the benefits of using it over other approaches. The platform is under active development, with its source code released under the MIT license 1 . A website 2 and a short video 3 demonstrating our platform are also available.
Text mining, also known as text analytics or text analysis, is the process where a user interacts with machine-supported analysis tools that transform natural language text into structured data, to gain insights and new knowledge from the text Nearly every subfield of NLP involved in text mining has been rapidly evolving in recent years, with records on benchmarks being continuously broken NLP Workbench is designed with two fundamental principals in mind: for developers and NLP researchers, fast and easy adaptation of off-the-shelf models and tools; and for non-expert users such as sociologists, a user-friendly interface for both document-level and corpus-level analysis. Following these principles, NLP Workbench offers the following key features: Platform NLP Workbench unifies corpus management, text mining tools, and visualization in a single platform. It provides a growing list of models and tools that are based on state-of-the-art research, currently offering functionalities like named entity recognition, entity linking, relation extraction, semantic parsing, summarization, sentiment analysis, and social network analysis. Interaction A web interface is included for user interactions with the ability to visualize the model results at document and corpus levels. Users could choose to interactively apply a model on a given document and have the results saved for future queries, or to apply models in batches on selected documents in a corpus. Architecture For development, NLP Workbench adopts containerization, allowing new models to be added independent of the software stack of existing models. For deployment, its microservice architecture allows models to be deployed in a distributed way on machines that meet the computing and networking requirements of individual models, enabling horizontal scaling. Interface Tools in NLP Workbench can be accessed in versatile ways. Besides the web interface, non-expert users could import new documents into the platform via a browser extension. For developers and researchers, NLP Workbench provides RESTful API and remote procedure call (RPC) interfaces for easy integration with other applications and pipelines. 2 Related Work A plethora of NLP toolkits focusing on building NLP pipelines have been developed in the past few decades, including Stanford CoreNLP Several libraries and tools are able to manage corpora or models from multiple sources. For example, Datasets library 3 Architecture NLP Workbench is built on top of various open source software and incorporates the code and models from many research projects. We design our architecture to leverage off-the-shelf functionalities provided by these software and projects, and to minimize the effort of integrating new code from a research project.
(Figure: the connected Neo4j Browser provides an interactive web interface for exploring social networks constructed from a corpus.)

System Perspective: From the perspective of the system, both the corpus and the outputs of text mining tools are stored and indexed in Elasticsearch. Persisting the outputs means that results can be re-used rather than re-computed, which improves efficiency. In addition, Elasticsearch provides convenient tools for filtering documents based on the outputs of text mining tools and for visualizing statistics, which are very useful for downstream analytics. When a tool is requested on a document, the platform first checks whether its output is already available; if running the tool is indeed necessary, the task is added to a priority queue. Ad hoc and interactive requests, issued when a user is examining a single document and applying tools on it, are prioritized over batched requests that run in the background. Each tool or model has workers processing the tasks in the queue. This ensures that users performing interactive analysis experience little latency even when the number of workers is limited, which is usually the case in practice, as deep learning models are often resource-intensive and it is infeasible to have multiple instances running in parallel.

Text mining tools often rely on the outputs of other tools or NLP models and are built as pipelines. For example, both entity linking and relation extraction require named entity recognition and coreference resolution. To ensure efficiency, re-computing the outputs of tools that are already available should be avoided, and tools should be run in parallel if possible. In addition to persisting and re-using outputs as discussed in Section 3.1, we design a pipelining and scheduling system that automatically detects the dependencies between tools and schedules the tasks in a way that eliminates re-computation and encourages parallelism. The dependencies of a tool can naturally be represented as a directed acyclic graph (DAG), where inbound edges represent the dependencies. When multiple tools are requested to be run on a single document, we gather the direct and transitive dependencies of these tools in a single graph, as illustrated in the figure.

A major obstacle to integrating third-party code is the dependency hell problem: it is an NP-complete problem to find a set of compatible versions of all software library dependencies. For deployment, a practical problem is that it is often difficult or costly to find a single physical machine that satisfies the computing and networking requirements of all the components: deep learning models require GPUs for inference, database management systems consume large amounts of memory and disk space, and web servers need access to the Internet. One solution to this problem is the ability to deploy components of NLP Workbench on multiple machines, which is achieved by our design. We solve both problems at once by deploying both text mining tools and infrastructure components as containerized microservices. Each component is deployed as a Docker container.
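Before turning to the individual tools, the sketch below illustrates the kind of dependency-aware scheduling described above: the requested tools and their transitive dependencies are gathered into a DAG, tools whose outputs are already cached are skipped, and independent tasks run in parallel. The dependency fact used here (entity linking and relation extraction both requiring NER and coreference resolution) comes from the text; the tool names, dependency table, and cache/run interfaces are simplified assumptions rather than the platform's actual implementation.

```python
# Sketch of dependency-aware scheduling for text mining tools.
# DEPENDENCIES, `cached`, and `run_tool` are simplified assumptions.
from concurrent.futures import ThreadPoolExecutor

# Direct dependencies of each tool (inbound edges of the DAG).
DEPENDENCIES = {
    "ner": [],
    "coreference": [],
    "entity_linking": ["ner", "coreference"],
    "relation_extraction": ["ner", "coreference"],
}


def transitive_closure(requested):
    """Gather the requested tools plus all their transitive dependencies."""
    needed, stack = set(), list(requested)
    while stack:
        tool = stack.pop()
        if tool not in needed:
            needed.add(tool)
            stack.extend(DEPENDENCIES[tool])
    return needed


def schedule(requested, cached, run_tool):
    """Run tools level by level: a tool runs once all its dependencies are
    done. Cached outputs are never re-computed, and independent tools in
    the same level run in parallel."""
    remaining = transitive_closure(requested) - set(cached)
    done = set(cached)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [t for t in remaining if set(DEPENDENCIES[t]) <= done]
            if not ready:
                raise ValueError("cyclic dependency detected")
            list(pool.map(run_tool, ready))  # run this level in parallel
            done.update(ready)
            remaining.difference_update(ready)


# Example: entity linking and relation extraction are requested on a
# document whose NER output is already stored, so NER is not re-run.
schedule({"entity_linking", "relation_extraction"},
         cached={"ner"},
         run_tool=lambda t: print("running", t))
```

In this sketch, only coreference resolution and then the two requested tools are executed, with the latter two running in parallel, which mirrors the re-use and parallelism goals stated above.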
NLP Workbench already includes a variety of tools and models for text mining. Most of the components come from state-of-the-art research in the respective subfields; others are baseline implementations that demonstrate NLP Workbench's extensibility, showing that developers can straightforwardly incorporate new tools and build pipelines from existing ones. One benefit of the flexible and modular design described in §3 is that all built-in tools and models can easily be replaced or upgraded. Existing tools and models in NLP Workbench include:

Named Entity Recognition: The task, known as NER, is to identify mentions of entities such as people, organizations, and locations. We incorporated the NER model from PURE.

Coreference Resolution: To determine which entity a pronoun refers to, we adopted an existing heuristic algorithm.

Entity Linking: Mentions of entities in the text are disambiguated and linked to Wikidata (Vrandečić and Krötzsch, 2014) entities. Candidate entities are generated by a fuzzy match on name. In addition to name similarity, the ranking of candidates utilizes the cosine similarity between sentence embeddings.

Relation Extraction: The user can extract structured facts in the form of knowledge triples, such as (Annie Ernaux, Country, France), from a text, using a pretrained relation extraction model.

Semantic Parsing: Semantic parsing provides a structured representation of the meaning of a sentence, allowing users to obtain information like who did what to whom, when, and where, without caring about the surface form. NLP Workbench uses AMRBART to parse sentences into Abstract Meaning Representation (AMR) graphs.

Summarization: We build an application on top of semantic parsing to create natural language summaries of events related to people in the document, partly to demonstrate the simplicity of building pipelines in NLP Workbench. For each sentence in the document, we prune its AMR graph so that it only contains the nodes and edges of the pattern subject-predicate-object, where the subject or object is a person. The pruned AMR graphs are then converted back to natural language using AMRBART.

Sentiment Analysis: The sentiment of a document is predicted by VADER, a lexicon- and rule-based sentiment analyzer.

Social Network Analysis: For corpora consisting of social media posts, NLP Workbench is equipped with a tool that builds graphs of social network interactions from posts, powered by the graph database Neo4j.

Text mining has been proven useful in a variety of domains, such as corporate finance, patent research, and the life sciences, among many others. Below we describe several representative use cases of NLP Workbench.

Digital Humanities: Accessible and reliable NLP tools are useful in digital humanities projects such as the Linked Infrastructure for Networked Cultural Scholarship (LINCS). Another example is the Centre for Artificial Intelligence, Data, and Conflict (CAIDAC).

Business Analytics: Business analysts ask questions like "are recent news reports about Apple Inc. positive or negative?". These types of questions can easily be answered with NLP Workbench. After performing NER and entity linking on the news articles, the analyst can conduct a semantic search to find the articles that are related to Apple Inc. rather than to apple the fruit. Then, the analyst can use the sentiment analysis tool and visualize the distribution of sentiment polarity scores with Kibana Lens, as shown in the bottom-right screenshot in the figure; a sketch of such a corpus-level query is given at the end of this section.

NLP Research: All NLP models in NLP Workbench can be accessed via the RESTful API and RPC, or used directly as containers. Researchers who wish to perform inference with the models on their own data can use the interfaces provided by NLP Workbench without needing to set up the environment to run the models.
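The following is a minimal sketch of the kind of corpus-level query behind the Business Analytics scenario above: documents whose linked entities include Apple Inc. (Wikidata Q312) are filtered in Elasticsearch, and their sentiment scores are aggregated into a histogram. The index name and field names ("entities.wikidata_id", "sentiment.compound") are assumed for illustration and are not necessarily the schema NLP Workbench actually uses; the snippet targets the Elasticsearch 8.x Python client.

```python
# Sketch of a corpus-level query that combines entity linking and
# sentiment outputs stored in Elasticsearch. Index and field names are
# illustrative assumptions, not the platform's actual schema.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "bool": {
        # Keep only documents linked to Apple Inc. (Wikidata Q312),
        # excluding documents about apple the fruit.
        "filter": [{"term": {"entities.wikidata_id": "Q312"}}]
    }
}
aggs = {
    # Histogram over VADER-style compound sentiment scores in [-1, 1].
    "sentiment_distribution": {
        "histogram": {"field": "sentiment.compound", "interval": 0.2}
    }
}

resp = es.search(index="news_articles", query=query, aggs=aggs, size=0)
for bucket in resp["aggregations"]["sentiment_distribution"]["buckets"]:
    print(f"score {bucket['key']:+.1f}: {bucket['doc_count']} articles")
```

The same filter-and-aggregate pattern underlies the Kibana Lens visualization mentioned above, which builds an equivalent Elasticsearch query through a graphical interface.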
NLP Workbench is still in its early stages of development, and we are actively working on improving the system. Besides usability, stability, and security updates, we plan to work on the following major features in the near future:

Human-in-the-loop NLP: Adding annotation support to the web interface will allow users to provide feedback on the outputs of models. This will help researchers collect domain-specific labelled data and improve the performance of the models in a human-in-the-loop fashion.

Improved Corpus Management: Managing document collections is a crucial aspect of text mining. We plan to give the platform access to commonly used public corpora.

More broadly, the extensible design of NLP Workbench allows us to keep existing tools and models up to date by replacing them when better models are released, and to integrate emerging text mining tools into the system. For example, we hope to add claim extraction models to facilitate fact-checking tasks.

Multi-modal Analysis: Social media posts often refer to or contain information of interest in other modalities (images, video, audio). At the same time, there is growing interest in grounding NLP models and analyses on knowledge extracted from videos and other sources. While adding support for processing different media in NLP Workbench is as easy as adding more NLP tools, we are interested in integrating these models so that co-training or grounding can be automated to the extent possible.

We introduced NLP Workbench, a platform that caters to all three major aspects of text mining systems: corpus management, text mining tools, and the user interface. We explained which design features make NLP Workbench efficient and extensible, and how it can be used in a variety of applications. We have already identified several important features that are not yet implemented in NLP Workbench, as discussed in §6: the platform needs an annotation feature for human-in-the-loop AI; it should have access to commonly used public corpora; and it should include additional text mining tools, such as one for claim extraction.

There are some intrinsic limitations that even the state-of-the-art models in NLP Workbench do not solve. For example, long-tail entities may not be covered by the knowledge graph, and current entity linking models have no notion of out-of-knowledge-graph entities.

By encapsulating the models, NLP Workbench lowers the entry barrier for non-experts to use state-of-the-art AI models. The microservice architecture, which allows models to be deployed on multiple servers with different capabilities rather than on a single omnipotent server, also makes the text mining platform more accessible. Containerizing third-party models also helps with reproducibility and transparency. There have been attempts to use NLP Workbench to analyze datasets to help understand propaganda, misinformation, and disinformation related to war and terrorism. However, users must be warned that, as NLP Workbench uses third-party data and models without modification, outputs obtained from NLP Workbench are inevitably affected by the biases inherent in those datasets and models.