| title (string, 15-188 chars) | abstract (string, 400-1.8k chars) | introduction (string, 9-10.5k chars) | content (string, 778-41.9k chars) | abstract_len (int64, 400-1.8k) | intro_len (int64, 9-10.5k) | abs_len (int64, 400-1.8k) |
|---|---|---|---|---|---|---|
Improving Model Generalization: A Chinese Named Entity Recognition Case Study
|
Generalization is an important ability that helps to ensure that a machine learning model can perform well on unseen data. In this paper, we study the effect of data bias on model generalization, using Chinese Named Entity Recognition (NER) as a case study. Specifically, we analyzed five benchmarking datasets for Chinese NER, and observed the following two types of data bias that may compromise model generalization ability. Firstly, the test sets of all the five datasets contain a significant proportion of entities that have been seen in the training sets. These test data are therefore not suitable for evaluating how well a model can handle unseen data. Secondly, all datasets are dominated by a few fat-head entities, i.e., entities appearing with particularly high frequency. As a result, a model might be able to produce high prediction accuracy simply by keyword memorization. To address these data biases, we first refine each test set by excluding seen entities from it, so as to better evaluate a model's generalization ability. Then, we propose a simple yet effective entity rebalancing method to make entities within the same category distributed equally, encouraging a model to leverage both name and context knowledge in the training process. Experimental results demonstrate that the proposed entity resampling method significantly improves a model's ability in detecting unseen entities, especially for company, organization and position categories.
|
Named Entity Recognition (NER) is a fundamental building block for various downstream natural language processing tasks such as relation extraction. Recently, performance has been further advanced by leveraging pretrained language models (e.g., BERT). However, we observe two types of data bias in existing benchmarks. First, in widely used Chinese NER datasets, 50% to 70% of the entities in the test data are seen in the training data; such test data therefore cannot evaluate the true generalization ability of a model. Second, the datasets are dominated by a few fat-head entities, i.e., entities appearing with particularly high frequency, as is evident, for example, within the organization category of the Cluener dataset. To address these data biases, we first refine each test set by excluding seen entities from it, so as to better evaluate a model's generalization ability. Then, we propose a simple yet effective entity rebalancing method to make entities within the same category distributed equally, encouraging a model to leverage both name and context knowledge in the training process. The contributions of this paper are as follows.

• We design a simple yet effective algorithm to rebalance the entity distribution. The experiments show that the proposed method significantly improves model generalization. In particular, the F1 score is improved by 12.61% and 37.14% on the organization category of the Cluener and MSRA datasets, respectively.

2 Dataset Observation
|
In this study, we analyze five benchmarking Chinese NER datasets: MSRA, OntoNotes, Resume, Weibo, and Cluener. If an entity in the dev/test data has been covered by the training data, we refer to it as a seen entity; otherwise, it is an unseen entity. To quantify the degree to which entities in the dev/test data have been seen in the training data, we define a measurement called the entity coverage ratio. The entity coverage ratio of data D_te is denoted by r(D_te) and is calculated as

r(D_te) = |{e ∈ Ent(D_te) : e ∈ Ent(D_train)}| / |Ent(D_te)|,

where Ent(·) denotes a function that obtains the list of annotated entities and D_train represents the training data. As the coverage statistics show, a significant proportion of test entities are seen in training.

A fat-head entity is defined as an entity appearing with particularly high frequency, while a long-tail entity is defined as an entity with very few mentions. To identify the existence of fat-head entities, we use kurtosis as the measure.

Observation 2: Fat-head entities prevail in different categories of Chinese NER datasets. We think this finding also holds for other NER datasets, since the annotated corpus is usually collected within a certain time frame, during which some entities (e.g., celebrities, organizations) get much more exposure than others. We hypothesize that the dominance of fat-head entities will cause the model to simply memorize the frequent entity names.

To improve the model's generalization ability in detecting unseen entities, we argue that the model should be trained to leverage both name and context knowledge. There are two major reasons why the proposed entity rebalancing algorithm works. First, the equal distribution will encourage the model to leverage both name knowledge and context knowledge, since there are no simple statistical cues to exploit.

The proposed algorithm works as follows. First, rebalance the annotated entity frequency in the training data. Let C_l denote the original entity frequency counter of category l. For example, C_l = {e1: 11, e2: 1, e3: 1} means that entity e1 is annotated 11 times and both e2 and e3 are annotated once in category l, which is very imbalanced. We then turn C_l into the balanced entity frequency counter C_l^b = {e1: 5, e2: 4, e3: 4}. In C_l^b, the difference between the maximum and minimum entity frequency is at most 1. Second, replace a fat-head entity with a randomly sampled entity of the same category once its accumulated occurrence surpasses the rebalanced frequency in C_l^b. Details are shown in Algorithm 1.

4 Experiments

According to Observation 1, the test sets of Chinese NER datasets contain a significant proportion of seen entities, which fails to evaluate the true model generalization ability. In our study, a test sample is excluded if it contains entities that are covered in the training data; this refinement is applied to each dataset, including Cluener. We use BERT+CRF as the model architecture. In particular, we use the bert-base-chinese pre-trained model (12-layer, 768-hidden, 12-heads) released by Google.

For the Weibo dataset, the proposed method outperforms the baseline by 8.89% in the PER.NAM category, but performs worse in the PER.NOM category. Note that the PER.NOM category contains entities such as man, woman and friend, which are hard to generalize based on context knowledge. For the Resume dataset, the proposed method does not work well. We think this is due to the structure of the resume corpus, which is a mere concatenation of name, education, organization, etc.; thus, there is very little context knowledge to leverage. Overall, the proposed entity rebalancing method is able to improve the model's generalization ability in detecting unseen entities.
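To make the coverage measurement and the rebalancing step concrete, the following sketch (our own illustration, not the paper's Algorithm 1; all function and variable names are ours) computes a coverage ratio, flattens a per-category frequency counter, and then replaces over-budget fat-head mentions with randomly sampled entities of the same category:

```python
import random
from collections import Counter

def entity_coverage_ratio(test_entities, train_entities):
    """Fraction of annotated test entities that already appear in the training data
    (one reading of the coverage-ratio definition above)."""
    train_set = set(train_entities)
    if not test_entities:
        return 0.0
    seen = sum(1 for e in test_entities if e in train_set)
    return seen / len(test_entities)

def rebalance_counter(freq):
    """Spread the total mention count of one category as evenly as possible,
    e.g. {"e1": 11, "e2": 1, "e3": 1} -> {"e1": 5, "e2": 4, "e3": 4}."""
    total, n = sum(freq.values()), len(freq)
    base, rem = divmod(total, n)
    balanced = {}
    for i, (ent, _) in enumerate(sorted(freq.items(), key=lambda kv: -kv[1])):
        balanced[ent] = base + (1 if i < rem else 0)  # max - min frequency is at most 1
    return balanced

def rebalance_mentions(mentions, balanced):
    """Replace fat-head mentions that exceed their balanced budget with a randomly
    sampled entity of the same category that still has budget left."""
    used, out = Counter(), []
    for ent in mentions:
        if used[ent] < balanced.get(ent, 0):
            used[ent] += 1
            out.append(ent)
        else:
            candidates = [e for e, b in balanced.items() if used[e] < b]
            repl = random.choice(candidates) if candidates else ent
            used[repl] += 1
            out.append(repl)
    return out
```

With C_l = {e1: 11, e2: 1, e3: 1}, rebalance_counter reproduces the balanced counter {e1: 5, e2: 4, e3: 4} from the example above; rebalance_mentions then caps e1 at five mentions and redistributes the surplus to other entities of the category.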
However, the proposed method only works for categories that meet certain conditions. First, the entities of the same category need to be semantically interchangeable. Second, the entities should be dependent on context knowledge. In this paper, we take Chinese NER as a case study, aiming to improve model generalization by mitigating data bias. We first refine each test set by excluding seen entities from it, so as to better evaluate a model's generalization ability. Then, we propose an entity rebalancing method to make entities within the same category distributed equally. Experimental results show that the proposed entity rebalancing method significantly improves a model's ability to detect unseen entities. As future work, we will first investigate the generalizability of this study to non-Chinese NER. Second, we will improve the entity replacement algorithm by leveraging a language model so that the replaced entity is more semantically plausible.
| 1,470 | 1,378 | 1,470 |
HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision
|
Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i.e., subsets of input sentences used to derive the answers. This problem has been extensively studied under the supervised setting, where both answer and rationale annotations are given. Because rationale annotations are expensive to collect and not always available, recent efforts have been devoted to developing methods that do not rely on supervision for rationales. However, such methods have limited capacities in modeling interactions between sentences, let alone reasoning across multiple documents. This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document. Experimental results show that our approach is more accurate at selecting rationales than the previous methods, while maintaining similar accuracy in predicting answers.
|
Multi-hop reasoning is an important capability for any intelligent machine comprehension system. Question answering (QA) is a common application for evaluating a system's ability to reason across multiple steps. Researchers have thus explored approaches that do not require rationale annotations. We propose HOP, UNION, GENERATE (HUG), a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. HUG overcomes the two-sentence limitation of previous methods by directly reasoning about rationales as sets of sentences, while also extending rationale prediction to the multi-document setting. We show an overview of HUG in the accompanying figure.

Training a set-prediction model quickly becomes intractable as the set size increases. We make two algorithmic choices that lead to tractable training for HUG. First, treating rationales as a latent variable requires HUG to marginalize over all possible rationales, leading to an intractable learning objective; HUG overcomes this issue by performing sampling in a hierarchical way: it first identifies the most promising documents and then the most promising sentences within those documents. Second, multi-hop QA often involves reasoning over long documents, which is challenging due to the computational complexity of encoding long documents with neural models such as transformers. To make this encoding efficient, HUG performs computation in the embedding space. We empirically evaluate HUG on three different multi-hop QA datasets, including HotpotQA.
|
Explainable methods for multi-hop QA. Active research has been devoted to collecting human rationales for a wide range of QA tasks; a recent survey has identified 65 datasets that provide explanation annotations. Other works have explored multi-hop QA with only answer supervision but not rationale supervision; as in our work, Retrieve and Generate (RAG) relies only on answer supervision. Outside of unsupervised methods, supervised and semi-supervised approaches have also been studied.

Rationales as latent variables. A focus for rationale methods in NLP outside of multi-hop QA has been identifying subsets of input tokens to justify decisions, for example in text classification. Outside of using input tokens for rationales, other granularities such as sentences have also been considered. While prior works propose to model rationales as a latent variable, we additionally introduce a hierarchical structure in our probabilistic model, enabling efficient inference.

Unsupervised retrieval. A task closely related to our setting (i.e., no access to rationale supervision) is unsupervised retrieval, which searches for sentences relevant to the question but does not predict answers. For example, one could apply an off-the-shelf retrieval method such as BM25 to our setting.

3 Generative Multi-Hop QA

In the standard multi-hop QA setting, an example consists of a question x, a set of documents D, and an answer y. Within D, some documents are relevant to the question, while the others are distractors. Explainable multi-hop QA models predict a rationale z, a minimal set of sentences across the relevant documents, in addition to predicting the answer y. We show a multi-hop QA example (with distracting documents omitted) below:

x: Emily Beecham is best known for her role in a television series whose second season premiered on what date?
d1: [1] Emily Beecham is an English-American actress. [2] She is best known for her role in the AMC television series "Into the Badlands". [3] In 2011, she received the Best Actress award at the London Independent Film Festival.
d2: [4] Into the Badlands is an American television series that premiered on AMC November 15, 2015. [5] The series features a story about a warrior and a young boy who journey through a dangerous feudal land together seeking enlightenment. [6] AMC renewed the show for a 10-episode second season, which premiered on March 19, 2017. [7] On April 25, 2017, AMC renewed the series for a 16-episode third season.

We propose the following generative model for multi-hop QA. Given the question x, we first select a subset of documents d = {d_1, d_2, . . .} ⊆ D. Next, within each document d_i, we select a subset of sentences z_i. Finally, conditioned on the union of sentence sets from each document, z = ∪_i z_i, we generate an answer y. The only assumption we make in the model is that sentence sets are selected independently among documents. Formally, we write the model as

p(d, z, y | x) = p(d | x) · ∏_i p(z_i | d_i, x) · p(y | z, x),

where the three factors correspond to Eq. 1, Eq. 2, and Eq. 3, respectively. We refer to Eq. 1 as the document set selection model, Eq. 2 as the sentence set selection model, and Eq. 3 as the answer generation model.

We select a set of documents d by directly parameterizing a distribution over all valid document sets. We rely on a document set scoring function f(d, x), which captures both the relevance of the document set d to the question x and the dependencies among the documents in the set. The document set selection model is given by

p(d | x) = exp(f(d, x)) / Σ_{d' ⊆ D, d' valid} exp(f(d', x)).

This distribution is globally normalized over all valid subsets of documents D, requiring the evaluation of the document scoring function f on all valid document subsets. Document set validity is dataset specific, and is discussed in Section 4.
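A minimal sketch of one way such a globally normalized subset distribution could be implemented is shown below; the parameterization details follow in the next paragraphs, so the fixed set size (document pairs), the concatenation of embeddings, and the MLP shape here are our own illustrative assumptions rather than HUG's exact architecture:

```python
import itertools
import torch
import torch.nn as nn

class DocSetSelector(nn.Module):
    """Sketch of a globally normalized distribution over document subsets.
    `emb_dim` is the dimension of precomputed question/document embeddings;
    `set_size` = 2 assumes two-document rationales (an illustrative choice)."""
    def __init__(self, emb_dim, hidden=256, set_size=2):
        super().__init__()
        self.set_size = set_size
        self.mlp = nn.Sequential(
            nn.Linear((set_size + 1) * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q_emb, doc_embs):
        # q_emb: (emb_dim,); doc_embs: (num_docs, emb_dim), embedded independently
        subsets = list(itertools.combinations(range(doc_embs.size(0)), self.set_size))
        feats = torch.stack(
            [torch.cat([q_emb] + [doc_embs[i] for i in s]) for s in subsets]
        )
        scores = self.mlp(feats).squeeze(-1)           # f(d, x) for every valid subset
        log_probs = torch.log_softmax(scores, dim=0)   # globally normalize over subsets
        return subsets, log_probs
```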
For efficiency, the document set scoring function f first computes embeddings of each document in the set d independently, then combines them with a neural network (MLP). Formally, let emb : V* → R^n be an embedding function that maps a sequence of text to an n-dimensional vector, where V is the vocabulary. The document set scoring function applies the MLP to the concatenation of the individual document embeddings. We provide the details of the MLP in Appendix A and the details of the embedding function below, as part of the sentence selection model description.

Sentence Set Selection. Within each document d_i, we select z_i ∈ P(d_i), where P(d_i) is the power set of the sentences in d_i. We rely on a sentence set scoring function g(z_i, x), similar to the document set scoring function, which captures all relationships between the selected sentences and the question. The sentence set selection model is given by

p(z_i | d_i, x) = exp(g(z_i, x)) / Σ_{z' ∈ P(d_i), z' valid} exp(g(z', x)),

which is globally normalized over all valid subsets of sentences in the document d_i. Computing p(z_i | d_i, x) requires enumerating all sentence subsets, which is intractable; we instead extend a previously proposed approach. We obtain the sentence subset embedding emb(z_i, x) by feeding the selected sentences together with the question to an encoder-only model such as BERT. Finally, let v be a learnable vector; then g(z_i, x) = v^T emb(z_i, x). In practice, we note that encoder methods have a maximum input length, which can prevent full document encodings; we provide the details of long document encoding in Appendix B. We also only consider subsets up to a fixed maximum size.

Answer Generation. The answer generation model, p(y | z, x), is parameterized using a sequence-to-sequence model where the question and rationale are fed to an encoder and the answer is generated by the decoder. This process is complicated by the fact that answers can take different forms, depending on the specific QA task (Boolean QA, multiple-choice QA, extractive QA, abstractive QA, etc.). We can therefore use a sequence-to-sequence model such as BART.

To learn an explainable multi-hop QA system, HUG optimizes an approximation of the marginal likelihood

p(y | x) = Σ_d p(d | x) Σ_z ∏_i p(z_i | d_i, x) p(y | z, x),

which is intractable, as it requires computing p(y | z, x) under the answer generation model for every valid set of sentences across documents. We instead optimize a top-K Viterbi approximation of the marginal likelihood: given the top-K scoring document sets and, within them, the top-K scoring sentence sets, we sum the joint probability p(z | x) p(y | z, x) over only these candidates and use the result as our training objective.

At test time, we must choose the best documents and rationales. Similar to training, we first choose the most likely pair of documents from S^1, then the most likely rationale from S^1_d. Finally, for span-based QA, we generate an answer by performing greedy search on the answer generation model p(y | z, x); for Boolean QA or multiple-choice QA, we normalize the answers between the different choices and take arg max_y p(y | z, x).

Datasets and Their Representations. We evaluate HUG on four multi-hop QA datasets: HotpotQA in the distractor setting, MuSiQue, FEVER, and MultiRC.

Metrics and Comparison Systems. We compute F1 scores for rationale and document selection, and for answer prediction. F1 scores for rationales are computed at the sentence level. Because the QA datasets are in different formats, F1 scores for answers are computed differently: for extractive QA, F1 scores are computed at the token level over the answer spans; for Boolean QA and multiple-choice QA, F1 scores measure categorical answers.
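Returning to the training objective above: the top-K approximation replaces the intractable sum with a sum over only the K highest-scoring rationales. A minimal sketch of the resulting loss, assuming the K rationale log-probabilities and the corresponding answer log-likelihoods have already been computed (how the top-K sets are found is not reproduced here), might look as follows:

```python
import torch

def topk_marginal_nll(log_p_z, log_p_y_given_z):
    """Negative log of the top-K approximate marginal likelihood.
    log_p_z:         (K,) log p(z | x) for the K highest-scoring rationale sets
    log_p_y_given_z: (K,) log p(y | z, x) from the answer generation model
    Returns -log sum_k p(z_k | x) * p(y | z_k, x)."""
    return -torch.logsumexp(log_p_z + log_p_y_given_z, dim=0)
```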
For each dataset, we compare to (1) state-of-the-art approaches that require no rationale supervision and (2) at least one fully supervised method (i.e., answers and rationales available for training); the latter provides an upper bound on performance.

- On HotpotQA and MuSiQue, we compare to a rule-based approach, BM25, and to RAG. We also consider a semi-supervised approach, CHAIN.
- On FEVER and MultiRC, we also compare to RAG as an unsupervised baseline (predicting the top-2 sentences for MultiRC and the top-1 sentence for FEVER). Additionally, we consider diagnostics-guided explanation generation (DIAGNOSTICS).

Implementation and Hyperparameters. We test HUG with language models of both small and large sizes. For the small version (HUG-Small), we use distilBERT.

HotpotQA. We summarize the results on HotpotQA in the corresponding results table.

FEVER. We summarize the results on FEVER in the corresponding results table.

MultiRC. Results on MultiRC are likewise reported in a results table.

Scaling HUG to Larger Models. On all three datasets, increasing the number of model parameters lets HUG consistently achieve better performance. Additionally, as the number of reasoning hops increases, HUG benefits more from larger language models: compared to HUG-Small, HUG shows the least improvement on FEVER and the most improvement on MuSiQue.

Document Dependencies. HUG explicitly models the dependencies between documents for multi-hop reasoning; as an ablation, we consider independent document selection. To understand how document modeling impacts rationale selection performance, we break the performance down by previously proposed reasoning types. In addition to the quantitative analysis, we also qualitatively compare the two models.

Speed evaluation. While HUG obtains strong sentence F1 scores, training is more expensive because the model must consider a set of rationales for every example. In particular, the answer model p(y | z, x) must be run for every sampled z for each training example. At inference, the answer model requires only a single evaluation of p(y | z, x) for arg max_z p(z | x). We empirically measure the runtime overhead of HUG compared to FAITHFUL on MultiRC, using 80 samples of z at training time, and report the total training and inference time.

We present HUG, a probabilistic, principled approach for explainable multi-hop reasoning without rationale supervision. HUG explicitly models multi-hop reasoning by considering the dependency between documents and between sentences within a document. Experimental results demonstrate that HUG outperforms other state-of-the-art methods that do not rely on rationale labels. The goal of explainable methods is to improve the trustworthiness of systems. HUG presents a method for fine-tuning language models to select rationales, without rationale annotations, that exploits the knowledge already present in pretrained language models. While this has the potential of improving the trustworthiness of the model, it may also reinforce existing harmful biases in the language model.

For extending this parameterization to large document sets, we could use a parameterization similar to the sentence set scoring function. Transformer-based text encoders can only accept inputs shorter than a fixed length (e.g., 512 tokens). To address this limitation, we partition documents into slices of m sentences and compute the embedding for each slice individually. We denote a slice of a document d as d_{i:j}, which starts at the i-th sentence and ends before the j-th sentence. We set the slice length m purely based on whether the longest slice is under 512 tokens.
m is set to 3 for HotpotQA, 5 for FEVER, and 9 for MultiRC. In the unsupervised sentence selection setting, we cannot perform model selection by choosing the model with the highest validation sentence F1 score. Instead, we must rely on answer-based measures: validation answer F1, answer EM, or likelihood. We train HUG for three epochs, checkpoint every 2500 steps, and evaluate sentence F1 for the checkpoint with the best validation performance measure; the results of these selection methods are reported in a table. We also show that HUG is able to discover examples in which answers can be derived with reasoning shortcuts.
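The slicing of documents into groups of m sentences described above is simple to reproduce; a small sketch (the function name and list-of-sentences representation are our own) is:

```python
def slice_document(sentences, m):
    """Partition a document (a list of sentences) into consecutive slices of at most
    m sentences, so that each slice fits under the encoder's input length limit."""
    return [sentences[i:i + m] for i in range(0, len(sentences), m)]

# e.g. with m = 3 (the HotpotQA setting), a 7-sentence document is split into
# slices of sizes 3, 3 and 1, each of which is embedded separately.
```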
| 1,094 | 1,515 | 1,094 |
MUTE: A Multimodal Dataset for Detecting Hateful Memes
|
The exponential surge of social media has enabled information propagation at an unprecedented rate. However, it has also led to the generation of a vast amount of malign content, such as hateful memes. To eradicate the detrimental impact of this content, the hateful meme detection problem has attracted the attention of researchers over the last few years. However, most past studies were conducted primarily for English memes, while memes in resource-constrained languages (e.g., Bengali) remain under-studied. Moreover, current research considers memes with captions written in monolingual (either English or Bengali) form. However, memes might have code-mixed captions (English+Bangla), and the existing models cannot provide accurate inference in such cases. Therefore, to facilitate research in this arena, this paper introduces a multimodal hate speech dataset (named MUTE) consisting of 4158 memes having Bengali and code-mixed captions. A detailed annotation guideline is provided to aid dataset creation in other resource-constrained languages. Additionally, extensive experiments have been carried out on MUTE, considering only visual, only textual, and both modalities. The results demonstrate that joint evaluation of visual and textual features significantly improves (≈ 3%) hateful meme classification compared to unimodal evaluation.
|
With the advent of the Internet, social media platforms (e.g., Facebook, Twitter, Instagram) significantly impact people's day-to-day lives. As a result, many users communicate by posting various content on these platforms, including hate speech, misinformation, and aggressive or offensive views. While some content is beneficial and enriches our knowledge, it can also trigger human emotions in ways that can be considered harmful. Among such content, the propagation of hateful material can directly or indirectly attack social harmony based on race, gender, religion, nationality, political support, immigration status, and personal beliefs. In recent years, memes have become a popular form of circulating hate speech.

WARNING: This paper contains meme examples and words that are offensive in nature.

The key contributions of this work are as follows:

• Created a multimodal hate speech dataset (MUTE) in Bengali consisting of 4158 memes annotated with Hate and Not-Hate labels.

• Performed extensive experiments with state-of-the-art visual and textual models and then integrated the features of both modalities using an early fusion approach.

2 Related Work

Differences from existing research: Though a considerable amount of work has been done on multimodal hate speech detection, only a few works have studied low-resource languages (e.g., Bengali). In our exploration, we found only one closely related work.
|
This work developed MUTE: a novel multimodal dataset for Bengali hateful meme detection. MUTE includes memes with code-mixed and code-switched captions. For developing the dataset, we follow annotation guidelines from prior work.

For dataset construction, we manually collected memes from various social media platforms such as Facebook, Twitter, and Instagram. We searched for memes using a set of keywords such as Bengali Memes, Bangla Troll Memes, Bangla Celebrity Troll Memes, Bangla Funny Memes, etc. Besides, some popular public meme pages were also considered for data collection, such as Keu Amare Mairala, Ovodro Memes, etc. We accumulated 4210 memes from January 10, 2022, to April 15, 2022. During data collection, some inappropriate memes were discarded following guidelines from prior studies.

The collected memes were manually labelled into two distinct categories: Hate and Not-Hate. To ensure the dataset's quality, it is essential to follow standard definitions for separating the two categories. After exploring some existing works on multimodal hate speech detection, we framed definitions for both classes. Not-Hate: a meme is reckoned as not hateful if it does not express any inappropriate cogitation and conveys positive emotions (i.e., affection, gratitude, support, and motivation) explicitly or implicitly.

We instructed the annotators to follow the class definitions when performing the annotation. We also asked them to mention the reasons for assigning a meme to a particular class; these explanations aid the expert in selecting the correct label in case of contradiction. Initially, we trained the annotators with some sample memes. Four annotators (computer science graduate students) performed the manual annotation process, and an expert (a professor conducting NLP research for more than 20 years) verified the labels. Annotators were equally divided into two groups, each annotating a subset of memes. In case of disagreement, the expert decided the final label: the expert re-labelled a total of 113 memes from not-hateful to hateful and 217 memes from hateful to not-hateful. Inter-annotator agreement was measured using Cohen's kappa. For training and evaluation, MUTE is split into train (80%), test (10%), and validation (10%) sets.

Several computational models have been explored to identify hateful memes by considering a single modality (image or text) and the combination of both modalities (image and text). This section briefly discusses the methods and parameters utilized to construct the models.

This work employed convolutional neural networks (CNNs) to classify hateful memes based on visual information. Initially, the images are resized to 150 × 150 × 3 and then fed into pre-trained CNN models; specifically, we used VGG19 and VGG16.

For text-based hateful meme analysis, various deep learning models are employed, including BiLSTM + CNN, BiLSTM + Attention, and transformers. BiLSTM + CNN: at first, word embeddings of the caption are fed to the network. BiLSTM + Attention: we applied additive attention; the attention layer tries to give higher weight to the words that are significant for inferring a particular class. Transformers: pretrained transformer models have recently obtained remarkable performance in almost every NLP task.

In recent years, joint evaluation of visual and textual data has proven superior in solving many complex NLP problems. The training set is used to train the models, whereas the validation set is used for tuning the hyperparameters. We empirically tried several hyperparameter settings to obtain better model performance and report the best one.
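An early-fusion classifier of the kind described above can be sketched as follows; the feature dimensions, dropout rate, and fusion MLP are illustrative assumptions rather than the paper's exact configuration (for instance, vis_feat could come from a pretrained VGG backbone and txt_feat from a Bangla-BERT [CLS] vector):

```python
import torch
import torch.nn as nn

class EarlyFusionClassifier(nn.Module):
    """Early fusion of a visual feature vector and a textual feature vector
    for binary (Hate / Not-Hate) meme classification (a sketch, not the
    paper's released model)."""
    def __init__(self, vis_dim=512, txt_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, vis_feat, txt_feat):
        # vis_feat: (batch, vis_dim)  visual features from a CNN backbone
        # txt_feat: (batch, txt_dim)  textual features from a caption encoder
        fused = torch.cat([vis_feat, txt_feat], dim=-1)  # early fusion: concatenate
        return self.classifier(fused)
```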
The final evaluation of the models is done on the test set. This work selects the weighted F1-score (WF) as the primary evaluation metric due to the class-imbalanced nature of the dataset. Apart from this, we used a class weighting technique during training to counter the imbalance. The benchmark results are reported in the results table.

We conducted a quantitative error analysis to investigate the model's mistakes across the two classes; to illustrate the errors, the number of misclassified instances is reported in the corresponding figure.

This paper presented a multimodal framework for hateful meme classification and investigated its performance on a newly developed multimodal dataset (MUTE) having Bengali and code-mixed (Bangla + English) captions. For benchmarking the framework, this work exploited several computational models for detecting hateful content. The key finding of the experiments is that the joint evaluation of multimodal features is more effective than using the memes' visual or textual information alone. Moreover, the cross-lingual embeddings (XLM-R) did not provide the expected performance compared to the monolingual embeddings (Bangla-BERT) when jointly evaluated with the visual features. The error analysis reveals that the model's performance gets biased toward a particular class due to the class imbalance. In the future, we aim to alleviate this problem by extending the dataset to a larger scale and framing the task as a multi-class classification problem. Secondly, for more robust inference, advanced fusion techniques (i.e., co-attention) and multitask learning approaches will be explored. Finally, future research will explore the impact of dataset sampling and conduct ablation studies (i.e., experimenting with only English, only Bangla, code-mixed, and code-switched text) to convey valuable insights about the models' performance.
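For reference, the weighted F1 evaluation and the class weighting used in the experiments above can be reproduced with standard scikit-learn utilities; the toy labels below are purely illustrative and not from the dataset:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import f1_score

# toy labels: 0 = Not-Hate, 1 = Hate (deliberately imbalanced)
y_train = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_test  = np.array([0, 0, 0, 1])
y_pred  = np.array([0, 0, 1, 1])

# class weights to counter the imbalance (e.g. passed to the training loss)
weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y_train)
print(dict(zip([0, 1], weights)))            # the rarer class receives the larger weight

# weighted F1: per-class F1 averaged with class-frequency weights
print(f1_score(y_test, y_pred, average="weighted"))
```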
| 1,357 | 1,351 | 1,357 |
Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT
|
The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial for task-oriented dialogue (TOD) systems. Previous methods address them by fine-tuning discriminative models. Recently, although some studies have been exploring the application of large language models (LLMs) represented by ChatGPT to various downstream tasks, it is still unclear whether ChatGPT can discover and incrementally extend OOD intents. In this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT exhibits consistent advantages under zero-shot settings, but is still at a disadvantage compared to fine-tuned models. More deeply, through a series of analytical experiments, we summarize and discuss the challenges faced by LLMs, including clustering, domain-specific understanding, and cross-domain in-context learning scenarios. Finally, we provide empirical guidance for future directions to address these challenges.
|
Traditional task-oriented dialogue (TOD) systems are based on the closed-set hypothesis. Previous work studied these OOD tasks by fine-tuning the discriminative pre-trained model BERT. As one of the representative LLMs, ChatGPT, developed by OpenAI, has attracted significant attention from researchers and practitioners in a short period of time. While the NLP community has been studying the ability of LLMs to perform various downstream tasks, such as translation, it is still unclear how well ChatGPT can discover and incrementally extend OOD intents.

To the best of our knowledge, we are the first to comprehensively evaluate ChatGPT's performance on OOD intent discovery and GID. In detail, we first design three prompt-based methods based on different IND priors to guide ChatGPT to perform OOD discovery in an end-to-end manner. For GID, we innovatively propose a pipeline framework for performing the GID task with a generative LLM (Section 3). Then we conduct detailed comparative experiments between ChatGPT and representative baselines under three dataset partitions (Section 4). To further explore the underlying reasons behind the results, we conduct a series of analytical experiments, including in-context learning under cross-domain demonstrations, recall analysis, and factors that affect the performance of ChatGPT on OOD discovery and GID. Finally, we compare the performance of different LLMs on these OOD tasks (Section 5).

Our findings. The major findings of the study include the following.

What ChatGPT does well:

• ChatGPT can perform far better than non-fine-tuned BERT on OOD tasks without any IND prior, thanks to its powerful semantic understanding ability.

• For OOD intent discovery, when there are few samples for clustering, ChatGPT's performance can rival that of fine-tuned baselines.

• ChatGPT can simultaneously perform text clustering and induce the intent of each cluster, which is not possible with the discriminative models.
|
What ChatGPT does not do well:

• For OOD intent discovery, ChatGPT performs far worse than the fine-tuned baselines in multi-sample or multi-category scenarios, and is severely affected by the number of clusters and samples, showing poor robustness.

• For GID, the overall performance of ChatGPT is inferior to that of the fine-tuned baselines. The main reason is the lack of domain knowledge, and the secondary reason is the quality of the pseudo-intent set.

• There are obvious recall errors in both OOD discovery and GID. In OOD discovery, this is mainly due to the generative architecture of ChatGPT. In GID, recall errors are mainly caused by ChatGPT's lack of domain knowledge and unclear understanding of intent set boundaries.

• ChatGPT can hardly learn knowledge from IND demonstrations that helps OOD tasks and may treat IND demonstrations as noise, which negatively affects OOD tasks.

In addition to the above findings, we further summarize and discuss the challenging scenarios faced by LLMs, including large-scale clustering, domain-specific semantic understanding, and cross-domain in-context learning, in Section 6, and provide guidance for future directions.

Recently, there has been growing interest in leveraging large language models (LLMs) to perform various NLP tasks, and especially in evaluating ChatGPT in various aspects.

We evaluate the performance of ChatGPT on OOD intent discovery by designing prompts that include task instructions, test samples, and IND priors. We heuristically propose the following three methods based on different IND priors. Direct clustering (DC): since OOD intent discovery is essentially a clustering task, a naive approach is to cluster directly without utilizing any IND prior; the prompt is given in the appendix. The ZSD variant additionally provides the names of the IND categories as prior knowledge, and the FSD variant further provides labeled IND samples as in-context demonstrations.

Previous discriminative GID frameworks first assign a pseudo-label index to each OOD sample through clustering and then jointly train the classifier with labeled IND data. However, the classification of queries by generative LLMs depends on specific intent semantics rather than abstract pseudo-label index symbols. Based on this, we innovatively propose a new framework that is suitable for generative LLMs, which relies on the LLM to generate an intent description with specific semantics as the pseudo-intent of each cluster, as shown in Fig. 3. In the first stage, on the basis of the OOD intent discovery prompts, we add an additional instruction asking the model to induce such an intent description for each discovered cluster.

We conduct experiments on the widely used intent dataset Banking. For OOD intent discovery, we use BERT directly for k-means clustering as a non-fine-tuned baseline, and compare with the following fine-tuned methods:

• DeepAligned;

• DeepAligned-GID, a representative pipeline method constructed following prior work.

We only use the samples belonging to IND and OOD intents in the training set of Banking to train all fine-tuned methods. For OOD intent discovery, we adopt three widely used metrics to evaluate the clustering results, including Accuracy (ACC).

Next, we analyze the results from three aspects.

(1) Comparing methods without IND prior. The overall results are reported in the results table.

(2) Comparing ChatGPT with fine-tuned BERT. For OOD discovery, when the OOD ratio is relatively low, the best ChatGPT method is slightly inferior to the fine-tuned baselines. However, as the OOD ratio increases, ChatGPT falls significantly behind the fine-tuned models. We believe this is because, as the OOD ratio increases, the number of clustered samples increases, and more data brings more difficult semantic understanding challenges to generative LLMs. In contrast, discriminative fine-tuned methods encode the samples one by one and are therefore less affected by the OOD ratio.
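The clustering ACC used in these comparisons aligns predicted cluster ids with gold intent labels before scoring; a standard sketch of this metric (not the paper's code) uses the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Clustering ACC: find the best one-to-one mapping between predicted cluster
    ids and gold intent labels (both non-negative integers), then score accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    d = int(max(y_pred.max(), y_true.max())) + 1
    cost = np.zeros((d, d), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        cost[p, t] += 1                          # co-occurrence counts
    row, col = linear_sum_assignment(cost.max() - cost)  # maximize matched counts
    return cost[row, col].sum() / y_pred.size

# e.g. clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
```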
For GID, ChatGPT is significantly weaker than the fine-tuned models on both IND and OOD metrics. According to Table 2, on average across the three scenarios, the best ChatGPT method is weaker than the best fine-tuned method by 17.37% (IND ACC), 20.56% (OOD ACC), and 23.40% (ALL ACC), respectively. We believe this is because ChatGPT is pre-trained on large-scale general training data, which makes it difficult to outperform fine-tuned models on domain-specific data.

(3) Comparing different ChatGPT methods. For OOD discovery, DC generally achieves the best performance, ZSD is slightly inferior, and FSD performs the worst. Although DC is slightly inferior to ZSD in the IND/OOD=3:1 scenario, it significantly outperforms the other ChatGPT methods in the other two scenarios. FSD almost always performs the worst among the three methods. ZSD provides additional prior knowledge of IND categories, while FSD provides labeled IND samples as context; however, more IND priors actually lead to worse performance for ChatGPT. For GID, GID-FSD performs best on IND classification, while GID-DC performs best on OOD intents. Comparing GID-ZSD and GID-DC, the difference lies in the pseudo-intent set used; GID-ZSD lags GID-DC by 6.22% (ALL ACC) on average, indicating the importance of the pseudo-intent set. For GID-FSD, thanks to the IND demonstrations, IND classification ability is significantly improved through in-context learning. However, its OOD classification metric is not as good as that of GID-DC. We think this is because the quality of the pseudo-intent set induced by FSD is poor and the IND demonstrations may be treated as noise. We leave the further analysis of demonstrations to Section 5.1 and the GID exploration to Section 5.2.

5 Qualitative Analysis

With OOD demonstrations, the demonstration and test data can be considered to come from the same distribution, so ChatGPT can improve task performance through in-context learning. With IND demonstrations, the distribution mismatch between demonstration and test data means that ChatGPT not only fails to gain from in-context learning but may also regard the demonstrations as in-context noise that interferes with task performance. This shows that the distribution of the demonstration text has a great impact on the effect of in-context learning, as also noted in prior work.

Since ChatGPT performs GID in a pipeline manner, we analyze ChatGPT's performance separately for generating pseudo-intent sets and for performing joint classification. We show a generated pseudo-intent set in the corresponding table.

As mentioned in Section 4.3, ChatGPT has the problem of missing and repeated recall, which we analyze here. For OOD discovery, as the OOD ratio (and thus the number of clusters) increases, the proportions of missing recall and repeated recall both increase significantly. For example, under IND/OOD=1:1, the rates of missing and repeated recall reach 24.15% and 4.44%, respectively, which seriously damages task performance. Since the clustering task requires inputting all samples into ChatGPT simultaneously, more samples make the task harder for ChatGPT to understand and process, resulting in higher incorrect recall rates. For GID, the proportion of incorrect recall is almost unaffected by the OOD ratio, as GID is performed on a sample-by-sample basis. Furthermore, we find that incorrect recall in GID is mainly due to the lack of domain knowledge.
As a result, ChatGPT is unable to clearly identify intent set boundaries and may proactively allocate a query to multiple intents or refuse to allocate the query to the predefined intent set.

We explore the effect of the number of clustered samples on ChatGPT by changing the ground-truth number of samples per OOD intent. As shown in Fig. 7, ChatGPT has poor robustness to the number of samples: the clustering performance first reaches an optimum between 5 and 10 samples per class, and then drops rapidly. In contrast, discriminative fine-tuned methods exhibit good robustness. We believe this is because when there are too few samples, it is difficult for ChatGPT to discover clustering patterns, and when there are too many samples, ChatGPT needs to process too many samples at the same time, making clustering more difficult.

In the experiments above, the number of OOD classes was assumed to be the ground truth. However, in real-world applications, the number of OOD clusters often needs to be estimated automatically. Following prior work, we use the same estimation algorithm as DeepAligned as a baseline.

In this section, we evaluate the performance of other mainstream LLMs and compare them with ChatGPT. Text-davinci-002 and text-davinci-003 belong to the InstructGPT family, and text-davinci-003 is an improved version of text-davinci-002. In addition to the GPT family models, we also evaluate Claude, a newer LLM developed by Anthropic.

Thorough prompt engineering is crucial to mitigate the variability introduced by different prompts. To address this, we devise three additional prompt variations (Paraphrase, Verbosity, Simplification) for ChatGPT (DC/GID-DC) beyond the original prompt and conduct experiments with an IND/OOD ratio of 1:1. The results are shown in the corresponding table.

Based on the above experiments and analysis, we summarize three challenging scenarios faced by LLMs and provide guidance for the future.

Experiments show that there are three main reasons why LLMs are limited in performing large-scale clustering tasks: (1) the maximum input token length limits the number of clusters; (2) when the number of clusters increases, LLMs exhibit serious recall errors; (3) LLMs have poor robustness to the number of cluster samples. There have been attempts to relax the sequence length constraints of transformer-based models.

In Section 5.2, we find that the main reason for the limited performance of LLMs on GID is the lack of semantic understanding of specific domains. To improve the performance of general LLMs in specific domains, one approach is to fine-tune the LLMs, which often requires high training costs and hardware resources. Another approach is to inject domain knowledge into prompts to enhance LLMs. Sections 5.1 and 5.2 show that providing demonstration examples or describing label sets can significantly improve performance, but long prompts increase the inference cost of each query. How to efficiently adapt LLMs to specific domains without increasing inference costs is still under exploration.

In some practical scenarios, such as the need to perform a new task or expand business scope, there is often a lack of demonstration examples directly related to the new task. We hope to improve performance on the new task by leveraging demonstrations from previous domains. However, our experiments show that cross-domain in-context learning fails in current LLMs. A meaningful but challenging question is how in-context learning with IND demonstrations can be made to work well on OOD tasks.
A preliminary idea is to use manual chains of thought to provide inference paths from IND demonstration samples to their labels, thereby producing more fine-grained, domain-specific knowledge. This fine-grained intermediate knowledge may help generalize to OOD tasks.

In this paper, we conduct a comprehensive evaluation of ChatGPT on OOD intent discovery and GID, and summarize the pros and cons of ChatGPT on these two tasks. Although ChatGPT has made significant improvements in zero- or few-shot performance, our experiments show that it still lags behind fine-tuned models. In addition, we perform extensive analysis experiments to explore three challenging scenarios faced by LLMs: large-scale clustering, domain-specific understanding, and cross-domain in-context learning, and we provide guidance for future directions.

In this paper, we investigate the advantages, disadvantages and challenges of large language models on these open-world intent tasks. Furthermore, we undertake an exploration of experimental outcomes on another widely used dataset, CLINC. As we perform the GID task with ChatGPT in a pipeline manner, we list the complete prompts used in each stage of the three GID methods in the appendix tables, and we provide cases of incorrect recall in OOD discovery and GID in the appendix figures. The original clustering prompt and its three variations (Paraphrase, Verbosity, Simplification) are shown below.

Next, I will first give you a set of sentences, which will be recorded as Set 1. First, please classify the sentences in Set 1 into 5 categories according to their intentions. You only need to output the category number and the corresponding sentence number in the following format: Category 1: 1, 2, 3, 4, 5 . . .

I will provide you with a collection of sentences, noted as Set 1. Your task is to categorize the sentences in Set 1 into 5 distinct groups based on their underlying intentions. Your output should include the category number along with the corresponding sentence number, formatted as follows: Category 1: 1, 2, 3, 4, 5, and so on...

Next, I will be presenting you with a compilation of sentences, collectively labeled as "Set 1". Your task is to categorize these sentences into 5 distinct groups according to their underlying intentions. Upon completing the task, your response is anticipated to take the form of a structured enumeration. Your response should consist of the assigned category number along with the respective sentence numbers following this format: Category 1: 1, 2, 3, 4, 5...

Next, I'll provide sentences in Set 1. Please categorize them into 5 groups based on intentions. Output the category number and sentence number in this format: Category 1: 1, 2, 3, 4, 5. . .
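For illustration, a prompt in the Direct Clustering style shown above can be assembled programmatically; the exact wording below paraphrases the prompts listed above, and the function name is ours:

```python
def direct_clustering_prompt(sentences, num_clusters):
    """Build a DC-style prompt: number the test utterances, ask the model to group
    them into `num_clusters` categories, and request the 'Category k: i, j, ...'
    output format used above. The wording is illustrative, not the paper's exact prompt."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    return (
        "Next, I will give you a set of sentences, recorded as Set 1.\n"
        f"{numbered}\n"
        f"Please classify the sentences in Set 1 into {num_clusters} categories "
        "according to their intentions. Output only the category number and the "
        "corresponding sentence numbers, e.g. 'Category 1: 1, 2, 3'."
    )
```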
| 1,109 | 1,876 | 1,109 |
Chinese NER Using Lattice LSTM
|
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
|
As a fundamental task in information extraction, named entity recognition (NER) has received constant research attention over recent years. The task has traditionally been solved as a sequence labeling problem, where entity boundary and category labels are jointly predicted. The current state-of-the-art for English NER has been achieved using LSTM-CRF models.

Chinese NER is correlated with word segmentation. In particular, named entity boundaries are also word boundaries. One intuitive way of performing Chinese NER is to perform word segmentation first, before applying word sequence labeling. The segmentation → NER pipeline, however, can suffer from the potential issue of error propagation, since NEs are an important source of OOV in segmentation, and incorrectly segmented entity boundaries lead to NER errors. This problem can be severe in the open domain, since cross-domain word segmentation remains an unsolved problem. One drawback of character-based NER, however, is that explicit word and word sequence information is not fully exploited, which can be potentially useful. To address this issue, we integrate latent word information into a character-based LSTM-CRF by representing lexicon words from the sentence using a lattice structure LSTM, as shown in the model figures. Since there are an exponential number of word-character paths in a lattice, we leverage a lattice LSTM structure to automatically control information flow from the beginning of the sentence to the end.
|
Our work is in line with existing methods using neural networks for NER. How to better leverage word information for Chinese NER has received continued research attention. External sources of information have also been leveraged for NER; in particular, lexicon features have been widely used. Lattice-structured RNNs can be viewed as a natural extension of tree-structured RNNs. We follow the best-performing English NER models in using an LSTM-CRF as our main network structure.

The character-based model is shown in the model figure. Each character c_j is represented as x^c_j = e^c(c_j), where e^c denotes a character embedding lookup table. A bidirectional LSTM (structurally the same as Eq. 11) is applied to x^c_1, x^c_2, . . . , x^c_m to obtain →h^c_1, . . . , →h^c_m and ←h^c_1, . . . , ←h^c_m in the left-to-right and right-to-left directions, respectively, with two distinct sets of parameters. The hidden vector representation of each character is h^c_j = [→h^c_j ; ←h^c_j]. A standard CRF model (Eq. 17) is used on h^c_1, h^c_2, . . . , h^c_m for sequence labelling.

• Char + bichar. Character bigrams have been shown to be useful for representing characters in word segmentation. We augment the character representation with bigram embeddings, x^c_j = [e^c(c_j); e^b(c_j, c_{j+1})], where e^b denotes a character bigram lookup table.

• Char + softword. It has been shown that using segmentation output as soft features for character-based NER models can lead to improved performance. We augment the character representation with a segmentation label embedding, x^c_j = [e^c(c_j); e^s(seg(c_j))], where e^s represents a segmentation label embedding lookup table and seg(c_j) denotes the segmentation label on the character c_j given by a word segmentor. We use the BMES scheme for representing segmentation labels.

The word-based model is shown in the model figure. Each word w_i is represented as x^w_i = e^w(w_i), where e^w denotes a word embedding lookup table. A bi-directional LSTM (Eq. 11) is used to obtain a left-to-right sequence of hidden states →h^w_1, . . . , →h^w_n and a right-to-left sequence ←h^w_1, . . . , ←h^w_n for the words w_1, w_2, . . . , w_n, respectively. Finally, for each word w_i, →h^w_i and ←h^w_i are concatenated as its representation h^w_i = [→h^w_i ; ←h^w_i]. Similar to the character-based case, a standard CRF model (Eq. 17) is used on h^w_1, h^w_2, . . . , h^w_n for sequence labelling.

Integrating character representations. Both character CNN and character LSTM representations have been used to augment word representations; we investigate both.

• Word + char LSTM. Denoting the embedding of each input character as e^c(c_j), we use a bi-directional LSTM (Eq. 11) to learn hidden states →h^c_{t(i,1)}, . . . , →h^c_{t(i,len(i))} and ←h^c_{t(i,1)}, . . . , ←h^c_{t(i,len(i))} for the characters c_{t(i,1)}, . . . , c_{t(i,len(i))} of w_i, where len(i) denotes the number of characters in w_i. The final character representation for w_i is the concatenation of the two end states.

• Word + char LSTM'. We investigate a variation of the word + char LSTM model that uses a single LSTM to obtain →h^c_j and ←h^c_j for each c_j; it is similar in structure to prior work.

• Word + char CNN. A standard CNN is used over the character sequence of each word to obtain its character representation, where W_CNN and b_CNN are parameters, ke = 3 is the kernel size, and max denotes max pooling.

The overall structure of the word-character lattice model is shown in the model figure. The basic recurrent structure of the model is constructed using a character cell vector c^c_j and a hidden vector h^c_j on each character c_j, where c^c_j serves to record the recurrent information flow from the beginning of the sentence to c_j and h^c_j is used for CRF sequence labelling via Eq. 17. The basic recurrent LSTM functions are

[i^c_j; f^c_j; o^c_j; c̃^c_j] = [σ; σ; σ; tanh](W^c [x^c_j; h^c_{j−1}] + b^c),
c^c_j = f^c_j ⊙ c^c_{j−1} + i^c_j ⊙ c̃^c_j,
h^c_j = o^c_j ⊙ tanh(c^c_j),

where i^c_j, f^c_j and o^c_j denote a set of input, forget and output gates, respectively, W^c and b^c are model parameters, and σ() represents the sigmoid function. Different from the character-based model, however, the computation of c^c_j now considers lexicon subsequences w^d_{b,e} in the sentence. In particular, each subsequence w^d_{b,e} is represented using x^w_{b,e} = e^w(w^d_{b,e}), where e^w denotes the same word embedding lookup table as in Section 3.2. In addition, a word cell c^w_{b,e} is used to represent the recurrent state of x^w_{b,e} from the beginning of the sentence.
The value of c^w_{b,e} is calculated by

[i^w_{b,e}; f^w_{b,e}; c̃^w_{b,e}] = [σ; σ; tanh](W^w [x^w_{b,e}; h^c_b] + b^w),
c^w_{b,e} = f^w_{b,e} ⊙ c^c_b + i^w_{b,e} ⊙ c̃^w_{b,e},

where i^w_{b,e} and f^w_{b,e} are a set of input and forget gates and W^w, b^w are model parameters. There is no output gate for word cells, since labeling is performed only at the character level.

With c^w_{b,e}, there are more recurrent paths for information flow into each c^c_j: the cell of a character receives input from every lexicon word that ends at that character. The calculation of cell values c^c_j thus becomes

c^c_j = Σ_{b: w^d_{b,j} ∈ D} α^c_{b,j} ⊙ c^w_{b,j} + α^c_j ⊙ c̃^c_j. (15)

In Eq. 15, the gate values i^c_{b,j} and i^c_j are normalised to α^c_{b,j} and α^c_j by setting their sum to 1. The final hidden vectors h^c_j are still computed as described by Eq. 11. During NER training, loss values back-propagate to the parameters of both the character and word cells. (We experimented with alternative configurations for indexing word and character path links, finding that this configuration gives the best results in preliminary experiments. Single-character words are excluded; the final performance drops slightly after integrating single-character words.)

A standard CRF layer is used on top of h_1, h_2, . . . , h_τ, where τ is n for character-based and lattice-based models and m for word-based models. The probability of a label sequence y = l_1, l_2, . . . , l_τ is

P(y | s) = exp(Σ_i (W^{l_i}_CRF h_i + b^{(l_{i−1}, l_i)}_CRF)) / Σ_{y'} exp(Σ_i (W^{l'_i}_CRF h_i + b^{(l'_{i−1}, l'_i)}_CRF)),

where y' represents an arbitrary label sequence, W^{l_i}_CRF is a model parameter specific to l_i, and b^{(l_{i−1}, l_i)}_CRF is a bias specific to l_{i−1} and l_i. We use the first-order Viterbi algorithm to find the highest-scored label sequence over a word-based or character-based input sequence. Given a set of manually labeled training data {(s_i, y_i)}_{i=1}^N, a sentence-level log-likelihood loss with L2 regularization is used to train the model:

L = −Σ_{i=1}^{N} log P(y_i | s_i) + (λ/2) ||Θ||²,

where λ is the L2 regularization parameter and Θ represents the parameter set.

We carry out an extensive set of experiments to investigate the effectiveness of word-character lattice LSTMs across different domains. In addition, we aim to empirically compare word-based and character-based neural Chinese NER under different settings. Standard precision (P), recall (R) and F1-score (F1) are used as evaluation metrics.

Data. Four datasets are used in this paper: OntoNotes 4, MSRA, Weibo NER, and a Chinese resume dataset.

Segmentation. For the OntoNotes and MSRA datasets, gold-standard segmentation is available in the training sections. For OntoNotes, gold segmentation is also available for the development and test sections. On the other hand, no segmentation is available for the MSRA test sections, nor for the Weibo / resume datasets. As a result, OntoNotes is leveraged for studying oracle situations where gold segmentation is given. We use a neural word segmentor to automatically segment the remaining data.

Hyper-parameter settings. Hyper-parameter values are listed in the corresponding table.

We compare various model configurations on the OntoNotes development set, in order to select the best settings for word-based and character-based NER models, and to learn the influence of lattice word information on character-based models.

Character-based NER. Development results are shown in the corresponding table. A CNN representation of character sequences gives a slightly higher F1-score compared to LSTM character representations. On the other hand, further using character bigram information leads to an increased F1-score over word+char LSTM, but a decreased F1-score over word+char CNN. A possible reason is that CNN inherently captures character n-gram information. As a result, we use word+char+bichar LSTM for word-based NER in the remaining experiments, which gives the best development results and is structurally consistent with the state-of-the-art English NER models in the literature.

Lattice-based NER. Development results for the lattice models are shown in the corresponding figure and table.

OntoNotes. The OntoNotes test results are shown in the corresponding table.

F1 against sentence length.
The corresponding figure plots F1-scores against sentence length. Note that both word+char+bichar and lattice use the same source of word information, namely the same pretrained word embedding lexicon. However, word+char+bichar first uses the lexicon in the segmentor, which imposes hard constraints (i.e., fixed words) on its subsequent use in NER. In contrast, lattice LSTM has the freedom of considering all lexicon words.

Entities in the lexicon. Relevant statistics are given in the corresponding table. The quality of the lexicon may affect the accuracy of our NER model, since noise words can potentially confuse NER. On the other hand, our lattice model can potentially learn to select more correct words during NER training. We leave the investigation of such influence to future work.

We empirically investigated lattice LSTM-CRF representations for Chinese NER, finding that they give consistently superior performance compared to word-based and character-based LSTM-CRF across different domains. The lattice method is fully independent of word segmentation, yet more effective in using word information thanks to the freedom of choosing lexicon words in a context for NER disambiguation.
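As a concrete illustration of the gating described in the model section above (the Eq. 15 normalization), the sketch below merges a character's candidate cell with the cells of lexicon words ending at that character; the tensor layout and the direct use of gate vectors are simplifying assumptions of ours, not the authors' implementation:

```python
import torch

def lattice_cell_update(cand_cell, char_gate, word_cells, word_gates):
    """Combine the character LSTM candidate cell with the cells of lexicon words
    ending at the current character, weighting them with softmax-normalized input
    gates so the weights sum to 1 per hidden dimension.
    cand_cell, char_gate: (hidden,); word_cells, word_gates: lists of (hidden,)."""
    gates = torch.stack(word_gates + [char_gate])   # (num_words + 1, hidden)
    alphas = torch.softmax(gates, dim=0)            # element-wise normalization
    cells = torch.stack(word_cells + [cand_cell])   # (num_words + 1, hidden)
    return (alphas * cells).sum(dim=0)              # new character cell state c^c_j
```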
| 630 | 1,521 | 630 |
NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language
|
Rule-based models are attractive for various tasks because they inherently lead to interpretable and explainable decisions and can easily incorporate prior knowledge. However, such systems are difficult to apply to problems involving natural language, due to its linguistic variability. In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but lead to models that are difficult to interpret. In this paper, we describe a model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language. Specifically, we propose to use a Prolog prover which we extend to utilize a similarity function over pretrained sentence encoders. We fine-tune the representations for the similarity function via backpropagation. This leads to a system that can apply rulebased reasoning to natural language, and induce domain-specific rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it outperforms two baselines -BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017b) on a subset of the WIKIHOP corpus and achieves competitive results on the MEDHOP data set
|
We consider the problem of multi-hop reasoning on natural language data. For instance, consider the statements "Socrates was born in Athens" and "Athens belongs to Greece", and the question "Where was Socrates born?". There are two possible answers following from the given statements, namely "Athens" and "Greece". While the answer "Athens" follows directly from "Socrates was born in Athens", the answer "Greece" requires the reader to combine both statements, using the knowledge that a person born in a city X, located in a country Y, is also born in Y. This step of combining multiple pieces of information is referred to as multi-hop reasoning. In contrast, rule-based models are easily interpretable, naturally produce explanations for their decisions, and can generalise from smaller quantities of data. However, these methods are not robust to noise and can hardly be applied to domains where data is ambiguous, such as vision and language. In this paper, we introduce NLPROLOG, a system combining a symbolic reasoner and a rule-learning method with distributed sentence and entity representations to perform rule-based multi-hop reasoning on natural language input. NLPROLOG generates partially interpretable and explainable models, and allows for easy incorporation of prior knowledge. It can be applied to natural language without the need of converting it to an intermediate logic form. At the core of NLPROLOG is a backward-chaining theorem prover, analogous to the backward-chaining algorithm used by Prolog reasoners. Our main contributions are the following: i) We show how backward-chaining reasoning can be applied to natural language data by using a combination of pretrained sentence embeddings, a logic prover, and fine-tuning via backpropagation, ii) We describe how a Prolog reasoner can be enhanced with a differentiable unification function based on distributed representations (embeddings), iii) We evaluate the proposed system on two different Question Answering (QA) datasets, and demonstrate that it achieves competitive results in comparison with strong neural QA models while providing interpretable proofs using learned rules.
|
Our work touches in general on weak-unification based fuzzy logic Multi-hop Reasoning for QA. One prominent approach for enabling multi-hop reasoning in neural QA models is to iteratively update a query embedding by integrating information from embeddings of context sentences, usually using an attention mechanism and some form of recurrency All of the methods above perform reasoning implicitly as a sequence of opaque differentiable operations, making the interpretation of the intermediate reasoning steps very challenging. Furthermore, it is not obvious how to leverage user-defined inference rules during the reasoning procedure. Combining Rule-based and Neural Models. In Artificial Intelligence literature, integrating symbolic and sub-symbolic representations is a longstanding problem An area in which neural multi-hop reasoning models have been investigated is Knowledge Base Completion (KBC) Very related to our approach are Neural Theorem Provers (NTPs) Theorem Proving for Question Answering. Our work is not the first to apply theorem proving to QA problems. Systems like Watson In the following, we briefly introduce the backward chaining algorithm and unification procedure (Russell and Norvig, 2016) used by Prolog reasoners, which lies at the core of NLPROLOG. We consider Prolog programs that consists of a set of rules in the form of Horn clauses: where h, p i are predicate symbols, and f i j are either function (denoted in lower case) or variable (upper case) symbols. The domain of function symbols is denoted by F, and the domain of predicate symbols by ) the body of the rule. We call B the body size of the rule and rules with a body size of zero are named atoms (short for atomic formula). If an atom does not contain any variable symbols it is termed fact. For simplicity, we only consider function-free Prolog in our experiments, i.e. Datalog A central component in a Prolog reasoner is the unification operator: given two atoms, it tries to find variable substitutions that make both atoms syntactically equal. For example, the atoms country(Greece, Socrates) and country(X, Y) result in the following variable substitutions after unification: {X/Greece, Y /Socrates}. Prolog uses backward chaining for proving assertions. Given a goal atom g, this procedure first checks whether g is explicitly stated in the KBin this case, it can be proven. If it is not, the algorithm attempts to prove it by applying suitable rules, thereby generating subgoals that are proved next. To find applicable rules, it attempts to unify g with the heads of all available rules. If this unification succeeds, the resulting variable substitutions are applied to the atoms in the rule body: each of those atoms becomes a subgoal, and each subgoal is recursively proven using the same strategy. For instance, the application of the rule country(X, Y ) ⇐ born_in(Y, X) to the goal country(Greece, Socrates) would yield the subgoal born_in(Socrates, Greece). Then the process is repeated for all subgoals until no subgoal is left to be proven. The result of this procedure is a set of rule applications and variable substitutions referred to as proof. Note that the number of possible proofs grows exponentially with its depth, as every rule might be used in the proof of each subgoal. Pseudo code for weak unification can be found in Appendix A -we refer the reader to Applying a logic reasoner to QA requires transforming the natural language paragraphs to logical representations, which is a brittle and error-prone process. 
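As a point of reference, the exact unification and backward-chaining procedure sketched above can be written in a few lines. This is a minimal, function-free (Datalog-style) illustration rather than a full Prolog engine: atoms are tuples such as ('born_in', 'Socrates', 'Athens'), variables are capitalized strings, and rule variables are not standardized apart, which is sufficient for the toy example below.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, theta):
    # Follow chains of variable bindings, e.g. W -> Z -> 'Greece'.
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def unify(a, b, theta=None):
    """Exact unification of two function-free atoms (tuples of symbols);
    returns an extended substitution dict, or None on failure."""
    theta = dict(theta or {})
    if len(a) != len(b):
        return None
    for x, y in zip(a, b):
        x, y = walk(x, theta), walk(y, theta)
        if x == y:
            continue
        if is_var(x):
            theta[x] = y
        elif is_var(y):
            theta[y] = x
        else:
            return None
    return theta

def prove(goal, facts, rules, theta=None, depth=2):
    """Backward chaining: yields substitutions under which `goal` is provable."""
    theta = theta or {}
    goal = tuple(walk(t, theta) for t in goal)
    for fact in facts:                       # match against known facts
        t = unify(goal, fact, theta)
        if t is not None:
            yield t
    if depth == 0:
        return
    for head, body in rules:                 # apply a rule, then prove its body
        t = unify(goal, head, theta)
        if t is None:
            continue
        def prove_all(subgoals, th):
            if not subgoals:
                yield th
                return
            for th2 in prove(subgoals[0], facts, rules, th, depth - 1):
                yield from prove_all(subgoals[1:], th2)
        yield from prove_all(list(body), t)

facts = [('born_in', 'Socrates', 'Athens'), ('located_in', 'Athens', 'Greece')]
rules = [(('born_in', 'X', 'Z'), (('born_in', 'X', 'Y'), ('located_in', 'Y', 'Z')))]
for answer in prove(('born_in', 'Socrates', 'W'), facts, rules):
    print(walk('W', answer))                 # Athens, then Greece
```

NLPROLOG replaces the exact symbol comparison inside unify with an embedding-based similarity, as described next.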
Our aim is reasoning with natural language representations in the form of triples, where entities and relations may appear under different surface forms. For instance, the textual mentions is located in and lies in express the same concept. We propose replacing the exact matching between symbols in the Prolog unification operator with a weak unification operator With the weak unification operator, the comparison between two logical atoms results in an unification score resulting from the aggregation of each similarity score. Inspired by fuzzy logic tnorms Each natural language statement is first translated into a triple, where the first and third element denote the entities involved in the sentence, and the second element denotes the textual surface pattern connecting the entities. All elements in each triple -both the entities and the textual surface pattern -are then embedded into a vector space. These vector representations are used by the similarity function ∼ θ for computing similarities between two entities or two textual surface patterns and, in turn, by the backward chaining algorithm with the weak unification operator for deriving a proof score for a given assertion. Note that the resulting proof score is fully end-to-end differentiable with respect to the model parameters θ: we can train NLPROLOG using gradient-based optimisation by back-propagating the prediction error to θ. Fig. To transform the support documents to natural language triples, we first detect entities by performing entity recognition with SPACY Embedding representations of the symbols in a triple are computed using an encoder e θ : F ∪P → R d parameterized by θ -where F, P denote the sets of entity and predicate symbols, and d denotes the embedding size. The resulting embeddings are used to induce the similarity function ∼ θ : (F ∪ P) 2 → [0, 1], given by their cosine similarity scaled to [0, 1]: In our experiments, for using textual surface patterns, we use a sentence encoder composed of a static pre-trained component -namely, SENT2VEC Additionally, we introduce a third lookup table and MLP for the predicate symbols of rules and goals. The main reason of this choice is that semantics of goal and rule predicates may differ from the semantics of fact predicates, even if they share the same surface form. For instance, the query (X, parent, Y) can be interpreted either as (X, is the parent of, Y) or as (X, has parent, Y), which are semantically dissimilar. We train the encoder parameters θ on a downstream task via gradient-based optimization. Specifically, we train NLPROLOG with backpropagation using a learning from entailment setting During training, we minimize the following loss: where a ∈ C is the correct answer. For simplicity, we assume that there is only one correct answer per example, but an adaptation to multiple correct answers would be straight-forward, e.g. by taking the minimum of all answer scores. To estimate p(c|R; θ), we enumerate all proofs for the triple c up to a given depth D, where D is a user-defined hyperparameter. This search yields a number of proofs, each with a success score S i . We set p(c|R; θ) to be the maximum of such proof scores: Note that the final proof score p(c|R; θ) only depends on the proof with maximum success score S max . Thus, we propose to first conduct the proof search by using a prover utilizing the similarity function induced by the current parameters ∼ θt , which allows us to compute the maximum proof score S max . 
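To make the scoring concrete, the following is a minimal sketch of weak unification between two triples, with the cosine similarity rescaled to [0, 1] and a t-norm-style minimum (or product) aggregation. The embedding table here is only a stand-in for the pretrained sentence encoder plus trainable MLP used in the paper; all names are illustrative.

```python
import torch

# Toy vocabulary of entity / predicate symbols; in the full system these
# vectors would come from a pretrained sentence encoder plus a fine-tuned MLP.
symbols = ['is located in', 'lies in', 'born in', 'Athens', 'Greece']
emb = torch.nn.Embedding(len(symbols), 16)
idx = {s: i for i, s in enumerate(symbols)}

def similarity(a: str, b: str) -> torch.Tensor:
    """Cosine similarity of two symbol embeddings, rescaled to [0, 1]."""
    va, vb = emb(torch.tensor(idx[a])), emb(torch.tensor(idx[b]))
    cos = torch.nn.functional.cosine_similarity(va, vb, dim=0)
    return (cos + 1.0) / 2.0

def weak_unify_score(atom1, atom2, tnorm='min'):
    """Score of weakly unifying two (predicate, arg1, arg2) triples, aggregating
    per-symbol similarities with a minimum or product (fuzzy-logic t-norms)."""
    scores = torch.stack([similarity(x, y) for x, y in zip(atom1, atom2)])
    return scores.min() if tnorm == 'min' else scores.prod()

score = weak_unify_score(('is located in', 'Athens', 'Greece'),
                         ('lies in', 'Athens', 'Greece'))
score.backward()   # gradients flow back into the symbol embeddings
print(float(score))
```

Because every operation is differentiable, the score can be backpropagated into the symbol representations.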
The score for each proof is given by the aggregation -either using the minimum or the product functions -of the weak unification scores, which in turn are computed via the differentiable similarity function ∼ θ . It follows that p(c|R; θ) is end-to-end differentiable, and can be used for updating the model parameters θ via Stochastic Gradient Descent. The worst case complexity vanilla logic programming is exponential in the depth of the proof (Russell and Norvig, 2010a). However, in our case, this is a particular problem because weak unification requires the prover to attempt unification between all entity and predicate symbols. To keep things tractable, NLPROLOG only attempts to unify symbols with a similarity greater than some user-defined threshold λ. Furthermore, in the search step for one statement q, for the rest of the search, λ is set to max(λ, S) whenever a proof for q with success score S is found. Due to the monotonicity of the employed aggregation functions, this allows to prune the search tree without losing the guarantee to find the proof yielding the maximum success score S max , provided that S max ≥ λ. We found this optimization to be crucial to make the proof search scale on the considered data sets. In NLPROLOG, the reasoning process depends on rules that describe the relations between predicates. While it is possible to write down rules involving natural language patterns, this approach does not scale. Thus, we follow For instance, to induce a rule that can model transitivity, we can use a rule template of the form p 1 (X, Z) ⇐ p 2 (X, Y ) ∧ p 3 (Y, Z), and NLPRO-LOG will instantiate multiple rules with randomly initialized embeddings for p 1 , p 2 , and p 3 , and finetune them on a downstream task. The exact number and structure of the rule templates is treated as a hyperparameter. Unless explicitly stated otherwise, all experiments were performed with the same set of rule templates containing two rules for each of the forms q(X, Y ) where q is the query predicate. The number and structure of these rule templates can be easily modified, allowing the user to incorporate additional domain-specific background knowledge, such as born_in(X, Z) ⇐ born_in(X, Y ) ∧ located_in(Y, Z) We evaluate our method on two QA datasets, namely MEDHOP, and several subsets of WIKI-HOP In both data sets, each data point consists of a query p(e, X), where e is an entity, X is a variable -representing the entity that needs to be predicted, C is a list of candidates entities, a ∈ C is an answer entity and p is the query predicate. Furthermore, every query is accompanied by a set of support documents which can be used to decide which of the candidate entities is the correct answer. MEDHOP is a challenging multi-hop QA data set, and contains only a single query predicate. The goal in MEDHOP is to predict whether two drugs interact with each other, by considering the interactions between proteins that are mentioned in the support documents. Entities in the support documents are mapped to data base identifiers. To compute better entity representations, we reverse this mapping and replace all mentions with the drug and proteins names gathered from DRUG-BANK To further validate the effectiveness of our method, we evaluate on different subsets of WIK-IHOP Following On MEDHOP we optimize the embeddings of predicate symbols of rules and query triples, as well as of entities. WIKIHOP has a large number of unique entity symbols and thus, learning their embeddings is prohibitive. 
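Returning to the proof-search pruning described earlier in this section, the following sketch shows the dynamic threshold in isolation (it is not the full prover): a branch is explored only if its weak-unification score can still exceed the current threshold λ, and λ is raised whenever a better proof is found, which is safe because the minimum aggregation is monotonic. The similarity function and facts below are hypothetical.

```python
def best_proof_score(goal, facts, similarity, lam=0.5):
    """Illustrative pruning only (single-step proofs): skip any branch whose
    aggregated score cannot beat the current threshold `lam`, and tighten
    `lam` to the best score found so far."""
    best = None
    for fact in facts:
        score = min(similarity(g, f) for g, f in zip(goal, fact))
        if score <= lam:                  # prune: cannot beat current best
            continue
        best = score
        lam = score                       # raise the threshold
    return best

# Toy run with a hypothetical similarity function.
sim = lambda a, b: 1.0 if a == b else 0.6
facts = [('lies in', 'Athens', 'Greece'), ('born in', 'Socrates', 'Athens')]
print(best_proof_score(('is located in', 'Athens', 'Greece'), facts, sim))  # 0.6
```

As noted above, the large WikiHop entity vocabulary makes training entity embeddings impractical in this setting.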
Thus, we only train the predicate symbols of rules and query triples on this data set. For MEDHOP we use bigram SENT2VEC embeddings trained on a large biomedical corpus 4 , and for WIKIHOP the wikiunigrams model 5 of SENT2VEC. All experiments were performed with the same set of rule templates containing two rules for each of the forms p(X, Y ) ⇐ q(X, Y ), p(X, Y ) ⇐ q(Y, X) and p(X, Z) ⇐ q(X, Y ) ∧ r(Y, Z) and set the similarity threshold λ to 0.5 and maximum proof depth to 3. We use Adam (Kingma and Ba, 2014) with default parameters. The results for the development portions of WIK-IHOP and MEDHOP are shown in Table Exemplary proofs generated by NLPROLOG for the predicates record_label and country can be found in Fig. To study the impact of the rule-based reasoning on the predictive performance, we perform an ablation experiment in which we train NLPROLOG without any rule templates. The results can be found in the bottom half of Table In a qualitative analysis, we observed that in many cases multi-hop reasoning was performed via aligning entities and not by applying a multi-hop rule. For instance, the proof of the statement country(Oktabrskiy Big Concert Hall, Russia) visualized in Figure We performed an error analysis for each of the WIKIHOP predicates. To this end, we examined all instances in which one of the neural QA models (with SENT2VEC) produced a correct prediction and NLPROLOG did not, and labeled them with predefined error categories. Of the 55 instances, 49% of the errors were due to NLPROLOG unifying the wrong entities, mainly because of an over-reliance on heuristics, such as predicting a record label if it is from the same country as the artist. In 25% of the cases, NLPROLOG produced a correct prediction, but another candidate was defined as the answer. In 22% the prediction was due to an error in predicate unification, i.e. NLPROLOG identified the correct entities, the sentence did not express the target relation. Furthermore, we performed an evaluation on all problems of the studied WIKI-HOP predicates that were unanimously labeled as containing the correct answer in the support texts by 6 Discussion and Future Work We proposed NLPROLOG, a system that is able to perform rule-based reasoning on natural language, and can learn domain-specific rules from data. To this end, we proposed to combine a symbolic prover with pretrained sentence embeddings, and to train the resulting system using backpropagation. We evaluated NLPROLOG on two different QA tasks, showing that it can learn domainspecific rules and produce predictions which outperform those of the two strong baselines BIDAF and FASTQA in most cases. While we focused on a subset of First Order Logic in this work, the expressiveness of NLPRO-LOG could be extended by incorporating a different symbolic prover. For instance, a prover for temporal logic
| 1,266 | 2,160 | 1,266 |
K-best Iterative Viterbi Parsing
|
This paper presents an efficient and optimal parsing algorithm for probabilistic context-free grammars (PCFGs). To achieve faster parsing, our proposal employs a pruning technique to reduce unnecessary edges in the search space. The key is to repetitively conduct Viterbi inside and outside parsing, while gradually expanding the search space to efficiently compute heuristic bounds used for pruning. This paper also shows how to extend this algorithm to extract K-best Viterbi trees. Our experimental results show that the proposed algorithm is faster than the standard CKY parsing algorithm. Moreover, its K-best version is much faster than the Lazy K-best algorithm when K is small.
|
The CKY or Viterbi inside algorithm is a well-known algorithm for PCFG parsing. Despite their practical success, both pruning methods are approximate, so the solution of the parser is not always optimal, i.e., the parser does not always output the Viterbi tree. Recently, another line of work has explored A* search algorithms, in which simpler problems are used to estimate heuristic scores for prioritizing edges to be processed during parsing. This paper presents an alternative way of pruning unnecessary edges while keeping the optimality of the parser. We call this algorithm iterative Viterbi parsing (IVP) for the reason that the iterative process plays a central role in our proposal. The IVP algorithm repetitively conducts Viterbi inside and outside parsing, while gradually expanding the search space to efficiently compute lower and upper bounds used for pruning. IVP is easy to implement and is much faster in practice than the standard CKY parsing algorithm. In addition, we also show how to extend the IVP algorithm to extract K-best Viterbi parse trees. The idea is to integrate the Lazy K-best algorithm with the iterative Viterbi parsing process.
|
Following We assume N = {A, B, C, D}. By grouping several symbols in the same cell of the chart table, we can make a smaller table than the original one. While the original chart table in Figure . Figure symbols but also new symbols X1 and X2. The new symbols, which are made by grouping several non-terminal symbols, are refered to as shrinkage symbols. For example, the shrinkage symbols X1 and X2 consist of non-terminal symbols {A, B} and {C, D}, respectively. In this paper, to make shrinkage symbols, we use hierarchical clustering of non-terminal symbols defined in By this construction, each derivation in a coarse chart gives an upper bound on its corresponding derivation in the original chart Lemma 1. If the best goal derivation d in the coarse chart does not include any shrinkage symbol, it is equivalent to the best goal derivation in the original chart. Proof . Let Y be the set of all goal derivations in the original chart, Y ⊂ Y be the subset of Y not appearing in the coarse chart, and Y be the set of all goal derivations in the coarse chart. For each derivation d ∈ Y , there exists its unique corresponding derivation d in Y (see Figure and this means that d is the best derivation in the original chart. 2 Algorithm 1 shows the pseudo code for IVP. The IVP algorithm starts by initializing coarse chart, which consists of only 0-th layer shrinkage symbols. It conducts Viterbi inside parsing to find the best goal derivation. If the derivation does not contain any shrinkage symbols, the algorithm returns it and terminates. Otherwise, the chart table is expanded, and the above procedure is repeated until the termination condition is satisfied. For efficient parsing, we integrate a pruning technique with IVP. For an edge e = (A, i, j), we denote by αβ(e) = α(e) + β(e) the score of the best goal derivation which passes through e, where β(e) and α(e) are Viterbi inside and outside scores for e. Then, if we obtain a lower bound lb such that lb ≤ max d∈Y s(d) where Y is the set of all goal derivations in the original chart, an edge e with αβ(e) < lb is no longer necessary to be processed. Though it is expensive to compute αβ(e) in the original chart, we can efficiently compute by Viterbi inside-outside parsing its upper bound in a coarse chart table: where α(e) and β(e) are the Viterbi inside and outside scores of e in the coarse chart table. If αβ(e) < lb, we can safely prune the edge e away from the coarse chart. Note that this pruning simply reduces the search space at each IVP iteration and does not affect the number of iterations taken until convergence at all. We initialize the lower bound lb with the score of a goal derivation obtained by deterministic parsing det() in the original chart. The deterministic parsing keeps only one non-terminal symbol with the highest score per chart cell and removes the other non-terminal symbols. The det() function is very fast but causes many search errors. For efficient pruning, a tighter lower bound is important, thus we update the current lower bound with the score of the best derivation, having non-terminals only, obtained by the best() function in the current coarse chart, if the former is less than the latter. At line 9, IVP expands the current chart table by replacing all shrinkage symbols in d with their next layer symbols using mapping π. While this expansion cannot derive a reasonable worst time complexity since it takes many iterations until convergence, we show from our experimental results that it is highly effective in practice. 
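To make the pruning test concrete, here is a minimal sketch (not the paper's implementation) of the edge filter applied at each IVP iteration, assuming the Viterbi inside and outside log-scores on the current coarse chart and a lower bound lb (initialized by deterministic parsing) are already available. Edge names and scores are illustrative.

```python
def prune_coarse_chart(inside, outside, lb):
    """Keep only edges whose Viterbi inside+outside score in the coarse chart
    (an upper bound on the true alpha-beta score) reaches the lower bound lb.

    inside / outside: dicts mapping an edge (symbol, i, j) to its Viterbi
    inside / outside log-score in the current coarse chart.
    """
    return {e for e, beta in inside.items()
            if beta + outside.get(e, float('-inf')) >= lb}

# Hypothetical toy chart: log-scores for three edges.
inside = {('X1', 0, 3): -4.0, ('A', 0, 1): -1.0, ('B', 1, 3): -9.0}
outside = {('X1', 0, 3): -0.5, ('A', 0, 1): -3.0, ('B', 1, 3): -3.0}
print(prune_coarse_chart(inside, outside, lb=-6.0))
# ('B', 1, 3) is pruned (-9.0 + -3.0 < -6.0); the other two edges are kept.
```

Edges that fail this test are dropped before the chart is expanded for the next iteration.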
Algorithm 2 shows the K-best IVP algorithm, which applies the iterative process to the Lazy K-best algorithm of Huang and Chiang (2005). The K-best IVP algorithm also prunes unnecessary edges and initializes the lower bound lb with the score of the k-th best derivation obtained by beam search parsing in the original chart. For efficient pruning, we update lb with the k-th best derivation, which consists of non-terminals only, obtained by the k-best() function in the current coarse chart. The getShrinkageDeriv() function seeks the best derivation, which contains shrinkage symbols, from [d2, ..., dk]. The K-best IVP algorithm inherits the other components from standard IVP. We used the Wall Street Journal (WSJ) part of the English Penn Treebank: Sections 02-21 were used for training, sentences of length 1-35 in Section 22 for testing. We estimated a Chomsky normal form PCFG by maximum likelihood from right-branching binarized trees without function labels and trace-fillers. Note that while this grammar is a proof-of-concept, CKY on a larger grammar does not work well even for short sentences. Next, we examine the K-best IVP algorithm. Huang and Chiang (2005) presented an efficient K-best parsing algorithm, which extracts K-best lists after a Viterbi inside pass. Huang (2005) also described a K-best extension of the Knuth parsing algorithm. The coarse-to-fine parsing approach is also related, as are iterative techniques for sequential decoding. This paper presents an efficient K-best parsing algorithm for PCFGs. It is based on standard Viterbi inside-outside algorithms and is easy to implement. Now, we plan to conduct experiments using latent-variable PCFGs.
| 685 | 1,093 | 685 |
ART: rule bAsed futuRe-inference deducTion
|
Deductive reasoning is a crucial cognitive ability of humanity, allowing us to derive valid conclusions from premises and observations. However, existing works mainly focus on language-based premises and generally neglect deductive reasoning from visual observations. In this work, we introduce rule bAsed futuRe-inference deducTion (ART), which aims at deducing the correct future event based on the visual phenomenon (a video) and the rule-based premises, along with an explanation of the reasoning process. To advance this field, we construct a large-scale densely annotated dataset (Video-ART), where the premises, future event candidates, the reasoning process explanation, and auxiliary commonsense knowledge (e.g., actions and appearance) are annotated by annotators. Upon Video-ART, we develop a strong baseline named ARTNet. In essence, guided by commonsense knowledge, ARTNet learns to identify the target video character and perceives its visual clues related to the future event. Then, ARTNet rigorously applies the given premises to conduct reasoning from the identified information to future events, through a non-parametric rule reasoning network and a reasoning-path review module. Empirical studies validate the rationality of ARTNet in deductive reasoning upon visual observations and its effectiveness over existing works.
|
Deductive reasoning is a systematic method that rigorously follows a set of explicitly given constraints (i.e., rules) to deduce valid conclusions from empirical facts through logical inferences.
|
+ Candidate Future Events: Event 1: The lady with yellow hair is putting the book on the desk into her bag. Event 2: The woman in green dress is putting down her bag. Event 3: The lady who opened the door is walking to the lamp and turning on it. stone of human psychological functioning, serving as an indispensable aspect of our daily cognitive processes. For example, human beings possess the capability to utilize the given rule set (R) to deduce future events (F) through the interpretation of observed phenomena (O). To illustrate: • Supposing that the rule R: as the selfprotection, the person will release the hot object, once burned. holds, and we observe O: a man holds a very hot teacup, it follows logically that we anticipate F: he will release the cup. • Under the premise that R: after getting home, my dad will definitely smoke to relieve anxiety;, the observation O: dad returns home from work at night, should lead to the future event F: he will smoke. Despite deductive reasoning being acknowledged as a fundamental cognitive competency of humanity To advance the research, we simulate the deductive reasoning of human beings and propose a rule bAsed futuRe-inference deducTion task (ART). Overall, in aligning with the established deductive reasoning studies within the NLP community To promote multi-modal deductive reasoning research and meet the demands of the ART task, we introduce a new dataset, named Video-ART, consisting of 23, 895 samples. Careful annotation was performed by annotators and verifiers with strong logical reasoning skills, who mainly focused on two key aspects: (1) They targeted to design the rule sets and the candidate future events that are closely associated with the visual information presented; (2) The annotators provided the correct future events and a rule-based explanation of the reasoning process. In addition, to enhance the AI system's deduction from visual semantics in highly unstructured videos, which are composed of densely arranged pixels, we have carefully annotated the commonsense knowledge of the target objects, including their appearance and related actions. To lay the groundwork for future research, we propose a strong baseline for the rule based future-inference deduction, named ARTNet. ART-Net mainly consists of three components, i.e., knowledge-guided target perception (KTP), nonparametric rule reasoning network (RRN), and reasoning path review (RPR). KTP learns to identify the target character and corresponding visual clues related to the upcoming event through multi-task learning and commonsense knowledge annotations such as actions and appearance. Inspired by traditional graph-theoretic algorithms, RRN performs layer-by-layer reasoning through a purpose-built non-parametric rule reasoning network, uncovering the reasoning paths from the identified visual clues to potential future events. RRN offers two advantages for the ART task over traditional models: (1) RRN provides explanations of its rule-based reasoning process. (2) RRN avoids rote memorization of rules within the training data and ensures the rigorous application of the sample-specific rule set. Furthermore, the RPR module validates the semantic consistency between the rule reasoning paths uncovered by RRN, the video observations, and the future event descriptions. Overall, the main contributions of this work are three-fold: • We propose the rule based future-inference deduction task, through imitating human cognition. 
To the best of our knowledge, this is an early exploration of deductive reasoning in the multi-modal domain. • We construct a large-scale dataset Video-ART • We contribute a strong baseline, ARTNet, tailored for the ART task. Experimental results on the Video-ART dataset validate the effectiveness of ARTNet over the state-of-the-arts. Video-Language Inference. As the development of the deep learning The video-language inference task aims at judging the correctness of the textual conclusion, based on the video information and the language description Deductive Reasoning. Reasoning is an important skill for human beings to understand the world Our rule bAsed futuRe-inference deducTion task (ART) requires the AI system to (1) select the correct future event from the candidate events by reasoning on the rule set and the observation (a We collect the videos in our dataset from two sources: (1) Parts of the video clips are manually intercepted from 80 American movies, including Broke Girls, Grey's Anatomy, Mr. Bean, etc. These videos are of high quality, with rich character actions and emotions, and rigorous plot logic. (2) Other videos are carefully selected from the existing datasets, Charades Both sources of data have their own characteristics and combined together may provide a relatively comprehensive testbed for the ART task. Some collected videos are not suitable for our ART task, such as videos with few actions or blurred videos in which key details cannot be clearly distinguished. With the collected videos, we rigorously design the ART task examples for each data and manually validate all examples. In addition, we annotate the commonsense knowledge for all video characteristics in detail to assist AI system training, including human appearance, clothing, actions, semantics, and scenes located. Commonsense Knowledge Annotation. The annotated categories and subcategories of commonsense knowledge for each video characteristic are shown in Table Validation. The verifiers with strong logical abilities are responsible for verifying the labeled examples. The examples not agreed by them are relabeled or discarded. Our dataset has the following characteristics: In addition, on average there are 4 future events in each example. The average length of the videos is 24.5 seconds. Detailed statistics are shown in Figure We propose a new task, rule bAsed futuReinference deducTion (ART), and design a targeted model named ARTNet. According to the task characteristics, we contribute the non-parametric rule reasoning module for ARTNet. In addition to the key reasoning module, the knowledge-guided perception module and the rechecking module are introduced to assist in the completion of the ART task. Task Formulation. Given an observation (a video) V, a rule set (multiple rules) S = {S i } N S i=1 , and candidate future events C = {C i } N C i=1 described by the natural language, the ART task aims to reason out the correct future event and explain the reasoning process based on the rules. We define the model with the parameter Θ for the ART task as M. Then, the training optimization function δ(.) of M is represented as: where Θ is a learnable parameter. The function ϵ(.) generates the ground truth and the function M(.) outputs the model prediction. The function ξ(.) calculates the consistency of ϵ(.) and M(.). Rule Transformation. Before describing our ARTNet structure, we shed light on the intriguing transformation of rules within the ART task. 
The ART task revolves around predicting future events based on observed information, employing rules as the means of inference. These rules can be perceived as an intricate mapping of crucial information bridging two consecutive events. It is important to note that the core driver of event progression for the target character lies in the changes in actions. Hence, in our proposed baseline, we adopt an approximation where rules are represented as mappings of actions. We employ the robust Stan-fordNLP toolkit The lady with yellow hair is put the book into the bag. Frame-Text Level Semantic Scene Step 1: Knowledge-guided Target Perception Step 2: Non-parametric Rule Reasoning Step Step 1: Knowledge-guided Target Perception, which focuses on the language-described person in the video and identifies her key actions. Step 2: Non-parametric Rule Reasoning, which constructs the rule graph (action graph), and finds the connected action path between the video actions and the future-event action with Dijkstra's algorithm. Step 3: Reasoning Path Review, which finally checks whether the future event, the video, and the found rule paths match to determine the correctness of the future event. Model Pipeline. As shown in Figure Step 2 (Section 4.2) The non-parametric rule reasoning network constructs the rule graph based on the action chains stored in the rule memory. Then, it finds the connected rule paths between the future event action and the visual action from the video V. If no path is found, we judge the future event F is wrong. Step 3 (Appendix): The review module of ARTNet reasons on the found connected rule paths and the cross-modal feature containing the semantics of the video V and the future event F. The module outputs the correct probability p e of the future event F. We choose the one with the highest probability from the candidate future events as the final prediction result of the deductive reasoning task, ART. The corresponding rule path in Step 2 is viewed as the explanation for the prediction result. Step 3 is introduced in the appendix in detail. Identifying the target character and the corresponding action knowledge related to the natural language future event F from the input video V is an essential step before the reasoning based on the rule set S (which is transformed into action chains in the preprocessing). Toward this target, we leverage human-annotated commonsense knowledge labels (human appearance, clothing, semantics, scene, and actions) of the target video person described by the textual future event F to train the model with transformer-based multi-task learning. Specifically, the knowledge-guided visual perception module is designed based on the transformer architecture. The transformer encoder extracts the cross-modal feature F c from the video V and the future event F. We define two types of query vectors to analyze the cross-modal feature F c with the transformer decoder. It includes the frame-text level queries i=1 and the video-text level queries Q v = {q i v } 3 i=1 used to analyze the cross-modal semantics, where N f is the number of frames. The transformer decoder distinguishes different types of query vectors according to the injected type embeddings. Then, it reasons the corresponding features (frame-text level features ) relying on the query vectors (Q f and Q v ). By analyzing the resulting features (F f and F v ), we predict all commonsense knowledge. We take the action knowledge predcition as an example, the others are shown in appendix. 
Action Knowledge Prediction. For action knowledge, we do not predict it frame-by-frame like sentiments and scenes. It is because there are multiple actions for the target person in each frame and the frame-by-frame prediction introduces too much burden to ARTNet, which leads to difficult model training. Therefore, the model directly counts the actions contained in the video rather than each frame. The process of judging whether the i-th action exists in the video V with the video-level query f 1 v is represented as: where the M LP i ac is the MLP applied specifically for the i-th action prediction. The p i ac is the probability of action existence or not. The rule-based reasoning of the ART task requires the model not to memorize the rules in the training set and has strong interpretability. Towards this end, we propose the non-parametric rule reasoning network, based on the traditional graph theory rather than the neural network. In detail, the action set contained in the rule set (action chain set) is represented as A = {A i } N A i=1 . As shown in the the Figure (3) Notably, for the action chain form of combinatorial inference in the action chain set A i + A j → A k , we store two single-step relational maps, (2) Graph Construction. We construct the action graph G(A, U) by connecting all the single-step relational maps (A i → A i+1 ), ..., (A i+n-1 → A i+n ) in the memory, where U represents the edges between the actions A in the graph. The construction process is formalized as: (3) Action Path Finding. Firstly, we need to find the starting nodes {A s i } N As i=1 and the ending node A e of the target action paths in the constructed graph G(A, U). The starting nodes {A s i } N As i=1 are determined by matching the actions predicted in step 1 (Section 4.1) and each graph node (action). Similarly, we find the ending node A e by matching each graph node (action) and the action of the future event F detected by the widely used tool, (5) Finally, we review all the found action paths again for violations of the action chains in the action chain set S and delete them. Notably, for the rule paths (like , these action paths need to be merged into one and then checked. After merging, checking, and deleting, multiple action paths may be preserved. They need to be further verified in the next review module. We experiment with our ARTNet model on our proposed Video-ART dataset to verify the 9517 model's effectiveness for the rule bAsed futuReinference deducTion task (ART). All experimental environments are deployed in Hikvision ( Dataset. The Video-ART dataset consists of data from real life scenes and movie scenes, which are randomly divided into 14, 029/706/3, 238 (train/val/test) and 3, 902/349/1, 671 (train/val/test), respectively. As stated in Section 3.1, both types of data have their own characteristics. To comprehensively evaluate the performance of the models, we conduct experiments in both scenarios. Evaluation Metrics. Following previous deductive reasoning tasks Baselines. Previous methods from other tasks cannot adopt our ART task in a direct manner. Thus, several state-of-the-art multi-modal and reasoning models are extended as the baselines to compare. Specifically, to make a comprehensive comparison, we take into account the following methods: (1) video-language inference methods: LF-VILA Comparison with State-of-the-arts. Our ARTNet model is compared with the baselines on the Video-ART dataset for the ART task. 
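Before turning to the experiments, the graph-based reasoning step described above can be pictured with a small sketch: single-step action maps extracted from the rule set form a directed graph, and the module searches for a connected path from an action observed in the video to the action of a candidate future event. The paper mentions Dijkstra's algorithm; since the toy graph below is unweighted, a breadth-first search is shown instead, and all action names are illustrative rather than taken from the dataset.

```python
from collections import deque

def build_action_graph(action_chains):
    """Each rule is approximated as a chain of actions (a1 -> a2 -> ...);
    every consecutive pair becomes a directed edge of the action graph."""
    graph = {}
    for chain in action_chains:
        for src, dst in zip(chain, chain[1:]):
            graph.setdefault(src, set()).add(dst)
    return graph

def find_path(graph, start, goal):
    """Breadth-first search for a connected action path from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None   # no connected path: the candidate future event is rejected

# Illustrative rules turned into action chains, e.g. "after cooking, someone
# will have a meal and then do the washing up".
chains = [['cook', 'have a meal'], ['have a meal', 'do the washing up']]
graph = build_action_graph(chains)
print(find_path(graph, 'cook', 'do the washing up'))
# ['cook', 'have a meal', 'do the washing up']
```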
The experiment results are shown in Table Comparison with baselines on the merged dataset. We are interested in the ARTNet model performance on the whole dataset. Thus, we merge the two dataset parts, including the real-life examples and the movie examples, and experiment on them. The results of the baseline comparison are shown in Table Comparison with baselines on different training data volumes. To evaluate the performance of the ARTNet model trained on different data volumes, we randomly select 25%, 50%, and 75% of the training data in the movie scene for experiments. The comparison results between ARTNet and baselines are shown in Figure Rule 1: After cooking, someone will have a meal. Rule 3: After cooking, someone will have a meal up and then do the washing up. Rule 2: Someone will continue to hug another person. Candidate Future Events: (Red Text is the error cause.) Event 1: There is an adult woman in a pink coat doing the washing up. Event 2: That adult woman wearing a black coat is standing up. Event 3: The adult woman in a black pant is continuing to hug another person. Event 4: There is an adult woman in a black coat doing the washing up. Ablation Study We are interested in the contribution of each key module in our ARTNet model and design the ablation study. Specifically, we surgically remove the Commonsense-Knowledge Guidance (CKG) and the non-parametric Rule Reasoning Network (RRN) from our ARTNet model and get different architectures. Without RRN, the ARTNet model totally losses the rules exploit capabilities, which is necessary for the ART task. Thus, we replace the RRN module with the advanced NLP model, transformer We study the deductive reasoning process in humans and propose a video-text deductive reasoning task, ART, which is an early exploration of deductive reasoning in the field of multi-modal. We propose a strong baseline, ARTNet, for the ART task, as a field foundation. The ARTNet baseline is limited to approximate the rules as the action chains to further process. In the future, we will update the ARTNet to improve the design of this part. We hope our work could promote the development of the multi-modal deductive reasoning.
| 1,339 | 194 | 1,339 |
Evaluating and Improving the Coreference Capabilities of Machine Translation Models
|
Machine translation (MT) requires a wide range of linguistic capabilities, which current end-to-end models are expected to learn implicitly by observing aligned sentences in bilingual corpora. In this work, we ask: How well do MT models learn coreference resolution from implicit signal? To answer this question, we develop an evaluation methodology that derives coreference clusters from MT output and evaluates them without requiring annotations in the target language. We further evaluate several prominent open-source and commercial MT systems, translating from English to six target languages, and compare them to state-of-the-art coreference resolvers on three challenging benchmarks. Our results show that the monolingual resolvers greatly outperform MT models. Motivated by this result, we experiment with different methods for incorporating the output of coreference resolution models in MT, showing improvement over strong baselines.
|
Machine translation (MT) may require coreference resolution to translate cases where the source and target language differ in their grammatical properties. For example, consider translating "The trophy didn't fit in the suitcase because it was too small" from English to French: "Le trophée ne rentrait pas dans la valise car elle était trop petite" Figure Such texts evade lexical one-to-one translation, and instead demand source-side coreference resolution as a prerequisite for a correct translation. The prominent end-to-end approach to MT assumes that translation models implicitly learn source-side coreference resolution by observing aligned sourcetarget pairs, without intermediate coreference supervision. While the importance of addressing such semantic phenomenon has been stated in various works In Section 3 we devise an evaluation paradigm that reduces MT output to source-side coreference resolution predictions by inferring coreference clusters from source inputs and predicted target translations. E.g., in the previous example, a feminine inflection for the pronoun "it" in French can infer linking "it" with "suitcase", while a masculine French inflection links "it" with "trophy", as shown in Figure We use this approach to evaluate the coreference capabilities of several commercial and open source MT systems, translating from English to six target languages. We conduct our experiments in both synthetic (WinoMT and Wino-X; Following this finding, in Section 4, we develop methods for improving coreference in MT, both implicitly and explicitly. Our implicit approach consists of fine-tuning MT models on texts that specifically require many coreference decisions, thus exposing the model to more implicit coreference signal. Our explicit approach further enriches source sentences with predicted coreference markers. We show that these approaches improve coreference over the end-to-end MT approach, achieving comparable or better results than much larger MT models, both commercial systems and open-source. More broadly, our approach can be applied to improve the translation of other semantic phenomena that diverge in realization between source and target languages, such as plurality in second-person pronouns
|
We start our work by extending the methodology developed in In particular, assuming a dataset of English sentences D, where each instance includes gold coreference annotation between a human entity and its pronoun (e.g., "The doctor asked the nurse to help her with the procedure."), they evaluate gender bias from English to language T with morphological gender in the following manner: 1. Predict word alignment between D and M (D), i.e., the output translations of an MT model M . This finds the translations for pronouns (e.g., "her") and possible entities (e.g "doctor", "nurse") in the target language T . 2. Automatically extract the gender of the possible entities and the pronouns in the target language based on morphological features. 3. Check whether the gender of the co-referred entity (e.g., "doctor") in T corresponds to the gender of the English pronoun (e.g., "her"). The gender bias of M is then defined as the difference in performance between stereotypical and anti-stereotypical gender role assignments. We use a similar setup to address a different question: rather than evaluating the gender bias of the model, we evaluate its coreference abilities, which may be hindered by bias, but also by the inherent difficulty to infer coreference in the absence of an explicit training signal. The approach taken in WinoMT is limited as it restricts the evaluation to sentences with a known gender in English, indicated by a gendered pronoun of a human entity (e.g., her). Consider the sentence in Figure Step 3 in Stanovsky et al.'s method will fail to assign a coreference label to the translation, because "it" does not have a gender in English. In this section, we extend the WinoMT approach in order to estimate more general coreference abilities of MT models. To achieve this, we note that many languages have gender agreement between pronouns and the noun that they refer to. Therefore, correct targetside gender agreement requires (implicitly) resolving the source-side coreference of the relevant entities. As exemplified in Figure We note that while this framing uses the morphological gender inflection of common nouns, it is different in motivation from measures of gender bias. In our example above, gender inflection allows us to determine whether an MT model correctly employs common sense rather than examining whether it tends to prefer stereotypical gender norms. While a model's gender bias may explain some loss in coreference abilities, the model's ability to resolve coreference need not be aligned with the degree of its bias (e.g., a random gender assignment would result in unbiased performance, but very poor coreference ability). Most importantly, by considering the gender of the entity and the pronoun, we obtain mention clusters which can be compared against those produced by coreference resolution models. In our example figure, both the first MT model and the coreference model produce the correct clustering: {{trophy}, {suitcase, it}}, while the second MT model errs by producing: {{trophy, it}, {suitcase}}. Another aspect of our evaluation methodology is its generality. Our method does not require a reference translation or make any particular assumptions about the generated output. As there are generally many correct translations, this flexibility allows us to accurately assess the model's coreference abilities. For instance, our methodology does not assume the gender of the entity's translation as can be seen in the first example in Table Evaluation datasets. 
The first dataset we use is Wino-X Second, we use WinoMT Table dure". The gendered pronoun reveals the gender of the entity and adds gender attributes to the source cluster. In our example, "her" refers to the "doctor", revealing the doctor's gender. Our third dataset is BUG Machine translation models. We apply our evaluation methodology to four Transformer-based machine translation models from EasyNMT: (3) Semitic languages: Hebrew and Arabic, each with a unique alphabet; both are partial pro-drop languages and have two grammatical genders. (4) Germanic languages: German with 3 grammatical genders. We first evaluate the accuracy of existing coreference resolvers on our three evaluation datasets, where accuracy is defined as the percentage of instances in which the model identifies that the pronoun is coreferring with the correct entity. We select state-of-the-art models trained on Table Similarly to Wino-X, target-side consistency results on WinoMT are consistently lower than coreference resolvers. Further, we observe that consistency is affected by two factors: the MT model and the target language. Regarding models, Opus-MT achieves lowest performance, with average consistency of 59.3, while mBART50 achieves high results with average consistency of 70.5, sometimes surpassing the second-best MT model by about 9 points. This might be due to the extensive pretraining of mBART50, as previously demonstrated for monolingual LMs Consistency results on BUG are higher than on WinoMT for most models, while sometimes surpassing English coreference resolvers, notably in Hebrew and Arabic (e.g., 91.8 for Google vs. 74.6 for LINGMESS). To understand this gap, we analyze the translation of 50 BUG sentences to Hebrew and French and find that most instances (45 in Hebrew and 33 in French) do not include a distracting entity which should be translated to a different gender in the target language. As mentioned above ( §3), our metric trivially indicates those examples as consistent. Overall, target-side consistency results across all datasets demonstrate that both open-source and commercial MT systems exhibit rather poor coreference capabilities compared to English coreference models. The use of automatic tools in the proposed methodology inevitably implies the introduction of noise into the process. To assess the quality of our measurements, we randomly sampled 50 translations of the Opus-MT model from all evaluation datasets and in all target languages (for a total of 750 annotations), annotating each sample in-house by a native speaker of the target language. The human annotators were asked to identify if the candidate pronoun is indeed the target pronoun and to verify that the gender prediction is correct. This way, we can account for both types of possible errors, i.e., alignment and gender extraction. We compare the human annotations to the output of our automatic method and find that the average agreement over all languages and datasets is above 90% (see full results in App. §A). These results are comparable to the ones reported by Some errors can be caused by idiosyncrasies that affect the morphological analysis, as In the previous section we showed that the coreference performance of MT systems, obtained through an implicit signal, seems inferior to that of coreference resolution learned from an explicit signal. This result raises the question of whether we can leverage dedicated conference resolvers to improve the consistency of MT coreference. 
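Before turning to that question, it is worth making the consistency test itself concrete. The sketch below is a minimal illustration, not the paper's implementation: the word alignment and the morphological gender lookup are assumed to be produced by external tools, and all names are hypothetical.

```python
def consistent(src_entities, src_pronoun, correct_entity, alignment, tgt_gender):
    """Target-side consistency check: align the pronoun and candidate entities
    to the translation, read off their morphological gender, and test whether
    the pronoun agrees with the entity it should corefer with.

    src_entities:   source entity tokens, e.g. ['trophy', 'suitcase']
    src_pronoun:    the source pronoun, e.g. 'it'
    correct_entity: the entity the pronoun refers to, e.g. 'suitcase'
    alignment:      dict mapping source tokens to their target-side tokens
    tgt_gender:     dict mapping target tokens to 'Fem' / 'Masc' / None
    Returns (is_consistent, inferred_source_cluster).
    """
    pron_gender = tgt_gender.get(alignment.get(src_pronoun))
    # The translation "links" the pronoun to whichever entities share its gender.
    linked = [e for e in src_entities
              if tgt_gender.get(alignment.get(e)) == pron_gender]
    cluster = set(linked) | {src_pronoun}
    return correct_entity in linked, cluster

alignment = {'trophy': 'trophée', 'suitcase': 'valise', 'it': 'elle'}
tgt_gender = {'trophée': 'Masc', 'valise': 'Fem', 'elle': 'Fem'}
print(consistent(['trophy', 'suitcase'], 'it', 'suitcase', alignment, tgt_gender))
# (True, {'suitcase', 'it'})
```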
To address this question, we propose two data augmentation techniques that leverage a sourceside English coreference model, and show that finetuning on them indeed improves coreference resolution in MT. Augmented fine-tuning with instances which require coreference resolution. First, we run a coreference resolution model on the source-side sentences. We then consider two approaches for constructing the augmented fine-tuning data: (1) Coref data with all sentences that have nonsingleton clusters and (2) Gender data, a subset of Coref data where there is at least one non-singleton cluster with a gendered pronoun (he, she, her, him, hers, his). The motivation for this augmented finetuning strategy is that further fine-tuning on such instances would expose the MT model to examples that may bear a coreference signal. Adding explicit source-side coreference markers. Second, we use the non-singleton clusters from the coreference model to add inline coreference markers in the source sentences. For our example sentence, this process produces the following source-side sequence: "The trophy didn't fit in the <ENT1> suitcase </ENT1> because <ENT1> it </ENT1> was too small", indicating that "suitcase" and "it" are coreferring. MT models. In our fine-tuning experiments, we opt for the Opus-MT model, since its size (68M parameters) and efficiency Training datasets. For fine-tuning data of Spanish, French, and German, we use Europarl Fine-tuning and inference. For each language, we fine-tune the Opus-MT model using four different finetuning datasets: (1) Coref data (2) Coref data with explicit coreference markers, (3) Gender data and (4) Gender data with explicit coreference markers. The inference on our three evaluation datasets (Wino-X, WinoMT, BUG) conforms with the fine-tuning procedure of each model. Namely, we run the models ( Table Effect of augmented fine-tuning data. The models fine-tuned on Coref data (1) and Gender data (3) outperform the Opus-MT baseline for all languages, both in Wino-X and WinoMT. This demonstrates that MT models learn implicitly linguistic phenomena from instances involving those phenomena. Furthermore, we point out that consistency scores on Wino-X are generally higher when fine-tuning on Coref data (1, 2a, 2b) while WinoMT results are better when fine-tuning on Gender data Effect of explicit coreference markers. In the majority of our experiments (13/18), the explicit fine-tuning models (2a and 4a) outperform the implicit data augmentation approach when using the same augmented data (1 and 3) (see examples in Table We turn to observing the empirical effect of the suggested fine-tuning strategies, using additional metrics. For each sentence in Wino-X, we have the gold target pronoun that should appear in its translation. We use it to compute pronoun translation accuracy by comparing the candidate pronoun with the gold target pronoun. Other metrics that The study of coreference has a long tradition in machine translation. 
A long line of work uses pronoun translation as a way of measuring coreference, since BLEU-based evaluation was shown to be insufficient for measuring improvement in coreference An alternative evaluation methodology is using automatic reference-based methods that produce a score based on word alignment between the source, reference translation, and translation output, and identification of pronouns in them, such as Auto-PRF The headphones blocked the noise but not the vibration, as it was relatively strong Baseline (RU) Наушники блокировали шум, но не вибрацию, поскольку он был относительно сильным. ✗ Ours (RU) Наушники блокировали шум, но не вибрацию, так как она была относительно сильной. ✓ substantial disagreement between these metrics and human annotators, especially because of the existence of valid alternative translations and pronouns than the ones used in the reference Our method extends Several previous methods aimed to improve the coreference abilities of MT models and reduce undesirable biases, by modifying the training data in ways that share some similarities with our method. Les poulets se sont échappés de la cour et ont fui vers le champ, comme ils l'ont trouvé si restreint. ✗ Ours (FR) Les poulets se sont échappés de la cour et ont fui vers le champ, car ils l'ont trouvée si encombrée. ✓ Our work is the first to present an automatic methodology for assessing the coreference capabilities of MT models, that can be applied in any target language and does not require any target side annotations. Furthermore, to the best of our knowledge, we are the first to conduct a large-scale multilingual coreference evaluation study on prominent opensource and commercial MT models, and compare them against state-of-the-art coreference resolvers on three challenging benchmarks. Finally, based on the superior results of coreference resolvers, we propose a novel approach to improve the coreference capabilities of MT models, that outperforms or achieves comparable results to strong and larger MT models. Despite this substantial gain, there is still a performance gap between our model and state-of-the-art coreference resolvers. We hope that our work, and specifically our automatic evaluation methodology, will encourage future research to improve the coreference capabilities of MT models. Future work can expand our approach to account for number and person agreement phenomena, investigate how to extend our approach to more coreference clusters and more mentions per cluster in intra-sentential as well as inter-sentential settings. Moreover, we intend to investigate how different morphological attributes affect MT models' coreference abilities. Even though our study presents the first large-scale multilingual coreference evaluation study in MT, it still has some limitations that could be addressed in future work. First, our methodology provides an upper bound to the coreference capabilities based on detecting gender valuations. While this could allow for a controlled evaluation experiment, this upper bound can become non-indicative in cases where gender assignment is not a discriminative factor. This can be addressed by accounting for more semantic and syntactic constraints that the translation needs to follow (e.g., singular/plural agreement). Second, our setting addresses one entity and a single co-referring pronoun in the naturalistic sentences experiment. Our methodology could in principle be augmented to deal with more coreference clusters and mentions per cluster. 
Another possible extension is to include event coreference in addition to entity coreference. For example, in this work, we focus only on the anaphoric function of the pronoun "it" but further research can also examine the event function of "it" Third, MT models should generally produce translations with accurate gender inflection for all words. However, in this work, we focus on the coreference capabilities of MT models by evaluating gender agreement between coreferring entity mentions. Future research can extend our evaluation methodology to assess the gender inflection of verb and adjective translation (e.g., the gender of "big" and "small" in Figure Finally, although in Section 4 we show big gains from the fine-tuning approach, it is clear that there is much room for improving the coreference capabilities of MT models, especially with regard to the performance of state-of-the-art coreference resolvers. We hope this work will help others develop MT models with better coreference capabilities. Table In Arabic and Hebrew, the alignment error occurs more. A possible explanation for that can be the fact that both those languages are partial pro-drop languages. To verify that those results will not affect our measurement, we verified that the error has similar consistency distributions as the rest of our results. Table Table Table
| 944 | 2,238 | 944 |
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
|
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low-dimensional sub-manifolds in the parameter space of arbitrary matrices: for one reason, the composition of two relations M1, M2 may match a third relation M3 (e.g., composing currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e., M1 • M2 ≈ M3). In this paper we investigate a dimension reduction technique that trains relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art results on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discover compositional constraints, and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
|
Broad-coverage knowledge bases (KBs) such as Freebase to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ? , by reasoning from known facts stored in the KB. As a most common approach However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations. Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 • M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process. We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4). We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank. We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).
|
A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g. The Matrix, country of film, Australia ). A relation r has its inverse r -1 ∈ R so that for every h, r, t ∈ T , we regard t, r -1 , h as also in the KB. Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ? triple. Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts. The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r . If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ? is calculated by u h M r (with each nonzero entry corresponds to an answer). Thus, we have u h M r v t > 0 if and only if h, r, t ∈ T . This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|. Thus, we define the score function as for the basic model. This is similar to the bilinear model of More generally, we consider composition of relations r 1 / . . . /r l to model paths in a KB to measure the plausibility of a path. It is explored in In order to learn parameters u h , v t , M r of the score function, we follow as our KB-learning objective. Here, k is the number of noises generated for each path. When the score function is regarded as probability, L 1 represents the log-likelihood of " h, r 1 / . . . , t being actual path and h, r 1 / . . . , t * being noise". Maximizing L 1 increases the scores of actual paths and decreases the scores of noises. Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding. By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e. relation matrices). Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder. m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. We define as the coding. Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function which measures the length of Bc r 2 projected to the direction of m r 1 . In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize as our reconstruction objective. Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * . During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well. Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices. In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold. Joint training with an autoencoder is not simple. Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster. 
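To make these quantities concrete, the following NumPy sketch implements the bilinear score and one plausible reading of the autoencoder coding and reconstruction similarity (including the 1/√(dc) scaling discussed below); the released glimvec implementation may differ in details:

```python
import numpy as np

d, c = 64, 16                                  # entity/relation dimension, coding dimension
rng = np.random.default_rng(0)

u_h = rng.normal(0, 1 / np.sqrt(d), d)         # head entity vector
v_t = rng.normal(0, 1 / np.sqrt(d), d)         # tail entity vector
M_r = rng.normal(0, 1 / np.sqrt(d), (d, d))    # relation matrix

def score(u_h, M_r, v_t):
    """Bilinear plausibility score of a triplet <h, r, t>."""
    return u_h @ M_r @ v_t

# --- autoencoder over relation matrices ---
A = rng.normal(0, 1 / np.sqrt(d), (c, d * d))  # encoder matrix (c x d^2)
B = rng.normal(0, 1 / np.sqrt(d), (d * d, c))  # decoder matrix (d^2 x c)

def vectorize(M_r):
    """Flatten M_r and rescale so that ||m_r|| = sqrt(d)."""
    m = M_r.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))

def coding(m_r):
    """Sparse coding c_r = ReLU(A m_r)."""
    return np.maximum(A @ m_r, 0.0)

def reconstruction_similarity(m_r, c_r):
    # Projection of the reconstruction B c_r onto m_r, with the 1/sqrt(dc) scaling.
    return (m_r @ (B @ c_r)) / np.sqrt(d * c)

m_r = vectorize(M_r)
c_r = coding(m_r)
print(score(u_h, M_r, v_t), reconstruction_similarity(m_r, c_r))
```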
Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization. Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work. After extensive pre-experiments, we have found some crucial settings for successful training. The most important "magic" is the scaling factor 1 √ dc in definition of the similarity function (3), perhaps being combined with other settings as we discuss below. We have tried different factors 1, and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings. When the scaling factor is too small (e.g. 1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g. 1), all codings get very close to 0. The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 . We use Stochastic Gradient Descent (SGD) for optimization, and the common practice Here, η, λ are hyper-parameters and τ is a counter of processed data points. In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting "number of updates" instead of data points The rule for setting η 1 , λ 1 and η 2 , λ 2 is that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds. In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later. But how to estimate ∆ 1 and ∆ 2 ? It seems that we can approximately calculate their scales from initialization. In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc. Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3), we have approximately It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 . This might not be a mere coincidence. Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below. In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set. Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random. The identity matrix I helps passing information from head to tail Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises. This is somehow counterintuitive compared to training word embeddings. KBs have a wide range of applications Among the previous works, TransE On the other hand, the base model used in this work originates from RESCAL Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints Moreover, we additionally focus on leveraging composition in KBC. 
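Returning to the settings just described, the sketch below shows one learning-rate schedule consistent with the text (it equals η at the start and approaches 1/(λτ_r) later), a per-relation step counter, and the (I + G)/2 initialization; the exact functional form is our assumption:

```python
import numpy as np
from collections import defaultdict

def lr_schedule(eta, lam, tau):
    """alpha(tau) = eta / (1 + eta * lam * tau):
    approximately eta for small tau, approximately 1/(lam * tau) for large tau."""
    return eta / (1.0 + eta * lam * tau)

# one update counter per relation, so each relation follows its own schedule
step_counter = defaultdict(int)

def sgd_update(M_r, grad1, grad2, r, eta1, eta2, lam1, lam2):
    """Apply the KB-learning and reconstruction gradients with separate learning rates."""
    tau = step_counter[r]
    M_r = M_r + lr_schedule(eta1, lam1, tau) * grad1   # from the KB objective L1
    M_r = M_r + lr_schedule(eta2, lam2, tau) * grad2   # from the reconstruction objective L2
    step_counter[r] += 1
    return M_r

def init_relation_matrix(d, rng):
    """(I + G) / 2 initialization: the identity part helps pass information head -> tail."""
    G = rng.normal(0.0, 1.0 / np.sqrt(d), (d, d))
    return (np.eye(d) + G) / 2.0
```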
Although this idea has been frequently explored before Autoencoders have been used solo for learning distributed representations of syntactic trees Jointly training an autoencoder is not simple because it takes non-stationary inputs. In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad We evaluate on standard KBC datasets, including WN18 and FB15k WN18 collects word relations from WordNet For any incomplete triple h, r, ? in KBC test, we calculate a score s(h, r, e) from ( We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving. The results are shown in Table Among the published results, STransE Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions. This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations. Figure In the first group of Figure In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film. As for the relation currency of film budget, it has large code values at both dimensions. This kind of relation clustering also seems independent of initialization. Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them. Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably. For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have In order to visualize the relation matrices learned by our joint and base models, we use UMAP In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 . Formally, the list is constructed as below. For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB. Similarly, we define C(r 1 /r 2 ) t-SNE (van der For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices. Then, we calculate MR and MRR for evaluation. We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2). The results are shown in Table In the KBC task, where are the losses and what are the gains of different settings? With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training. It is noteworthy that our base model already achieves strong results. This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training. Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer. 
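A minimal sketch of the ranking evaluation just described: for each extracted constraint (r1/r2, r3), rank M3 by cosine similarity between M1 M2 and every relation matrix, then report MR and MRR (variable names are ours):

```python
import numpy as np

def cosine(a, b):
    return float(a.ravel() @ b.ravel() / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_constraints(constraints, matrices):
    """constraints: list of (r1, r2, r3) relation ids;
    matrices: dict mapping relation id -> (d, d) numpy array."""
    ranks = []
    for r1, r2, r3 in constraints:
        composed = matrices[r1] @ matrices[r2]
        sims = {r: cosine(composed, M) for r, M in matrices.items()}
        # rank of r3 among all relations; 1 = most similar to M1 M2
        order = sorted(sims, key=sims.get, reverse=True)
        ranks.append(order.index(r3) + 1)
    mr = float(np.mean(ranks))
    mrr = float(np.mean([1.0 / r for r in ranks]))
    return mr, mrr
```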
In this work, path lengths are sampled from a Poisson distribution, so we vary the mean λ of the Poisson to control the strength of compositional training. The results on FB15k-237 are shown in the corresponding table. We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder. We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank. Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discover compositional constraints, and benefit from compositional training. We believe these findings provide an insightful understanding of KB embedding models and might be applied to other neural networks beyond the KBC task.
| 1,098 | 1,878 | 1,098 |
Finspector: A Human-Centered Visual Inspection Tool for Exploring and Comparing Biases among Foundation Models
|
An overview of Finspector. Users can launch Finspector in a Python notebook (e.g., Jupyter). It consists of four different sections to help users explore biases of foundation models applied to the given text: (A) users can change how (B) the distribution view of mean log probabilities is shown by selecting categories for highlights and split; (C) users can also read the text selected through actions performed in other views; (D) users can visually explore similarities among sentences using any embedding vector of their choice.
|
Recently, pre-trained large language models (LLMs), including 'foundation models,' that are trained on large amounts of data have shown striking performances in a variety of natural language processing (NLP) tasks such as language translation, text classification, and summarization. Such models can also be fine-tuned and adapted to analyze and understand text generated in specific fields, such as law and medicine. Despite their usefulness, there is a growing concern that the foundation models inherently reflect human biases, which might have originated from their large training corpora These social biases include stereotyping and negative generalizations of different social groups and communities, which could have been present in their training corpora A previous work We believe that experts in respective fields need to inspect the fairness and biases through a systematic, human-in-the-loop approach, including the lens of log-likelihood scores, before adapting them for any downstream tasks. Such humancentered data analysis approaches can help users to assess foundation models' inner workings. Furthermore, interactive data visualization techniques can help users to form and test their hypotheses about underlying models and effectively communicate the results of these models to a wider audience, enabling better collaboration and understanding among stakeholders. Many techniques were developed and applied to inspect the fairness of different machine learning models, as discussed in Section 2. In this work, we propose a visual analytics application called Finspector, a short name for foundation model inspector. Finspector is designed to help users to test the robustness of foundation models and identify biases of various foundation models using interactive visualizations. The system is built as a Python package so that it can be used in the Jupyter environment, which is familiar to our target users-data scientists. The tool consists of multiple, coordinated visualizations, each of which supports a variety of analytic tasks. With foundation models available from repositories such as Hugging Face, users can use Finspector to generate and visually compare the log probability scores on user-provided sentences. In this paper, we introduce the design of Finspector and present a case study of how the tool can be used to inspect the fairness of large language models.
|
Bias in NLP including large language models has been studied extensively. Garrido-Muñoz et al. provide a survey Tenny et al. presented Language Interpretability Tool (LIT) There are several other visualization tools that help users investigate the fairness of machine learning models, primarily focusing on aspects such as prediction discrepancy among different subgroups, group fairness, individual fairness, and counterfactual fairness. These include tools such as What-If Tool In contrast to these tools above, Finspector aims to inspect the fairness and bias of foundational models by exploring the log-likelihood scores generated by the models. Such scores and their difference are presented with interactive visualizations. In this section, we describe the design of Finspector. There are three main views of Finspector, 1) Distribution of Log Likelihoods, 2) Table The system requires users to provide three items: 1) text data with paired samples and bias category labels; 2) pre-trained foundation models; 3) 2d sentence embeddings. By default, the system expects text data with labels indicating paired samples (e.g., sample id) and bias categories, similar to the CrowS-Pairs dataset This view shows the distribution of aggregated conditional pseudo-log-likelihood scores of the set of input sentences as shown in Figure As Figure Users can use the view to explore the distributions of subgroups defined by users. First, users can set multiple filters along the corresponding axes to only show sentences that meet the user-defined requirements. Figure The table view shows the details of the input sentence data as Figure Sentence embeddings show similarities and differences among the input sentences using a scatterplot as shown in Figure In this section, we demonstrate how Finspector can be used to discover informative insights about foundation models and datasets. Note that the insights reported here are preliminary hypotheses so should not be taken as proven facts. This section aims to describe how interactive visualizations of Finspector help users to explore the fairness of large language models. In this use case, we used the CrowS-Pairs dataset for the analysis Initially, the three Foundation models show similar distributions of log probabilities, but the score of ALBERT is slightly lower than those of the two other models as shown in Figure We set a filter by drawing a rectangular box between the values of -5 and -4 on ALBERT. Figure We decided to explore differences in the log probabilities among bias categories using the split and highlight features of Finspector. The distribution view shows differences in the score of AL-BERT between a pair of base and stereotype sentences of the disability category, as Figure Lastly, we decided to generate some customized sentences. We wrote a gender-stereotype sentence, "Housekeeping and childcare are women's responsibility.", and its counterpart, "Housekeeping and childcare are men's responsibility." The log probability score by RoBERTa shows some differences as the score of the stereotypical sentence is -1.5, which is higher than that of the counterpart, -2.0. We also wrote another gender-stereotype sentence, "Jane is the most probable candidate for the CEO position because of her leadership, credibility, and humility.", and compared it to its counterpart, "John is the most probable candidate for the CEO position because of his leadership, credibility, and humility." 
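Pairs like these are scored with the masked-LM pseudo-log-likelihood described above; a minimal sketch using HuggingFace transformers follows (bert-base-uncased and mean aggregation are illustrative choices, Finspector itself works with whichever masked language models the user loads):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mean_pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and average the log-probability of the original token."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    positions = range(1, len(ids) - 1)          # skip [CLS] and [SEP]
    total = 0.0
    for i in positions:
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / len(positions)

stereo = "Housekeeping and childcare are women's responsibility."
base = "Housekeeping and childcare are men's responsibility."
print(mean_pseudo_log_likelihood(stereo), mean_pseudo_log_likelihood(base))
```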
As Figure In this paper, we introduce an interactive visualization system called Finspector, which helps users to explore pre-trained large language models by evaluating the pseudo-log-likelihood measures computed on text data. Adopting the workflow from our previous approach (Kwon and Our work of a human-centered approach for fairness inspection of LLMs opens new research av-enues for interdisciplinary research between AI, Visualization, and other fields. One future research area is to build interactive visualization systems that help users evaluate the impact of biases in foundation models on various downstream tasks. Numerous large language models undergo fine-tuning or prompt-tuning processes, such as text classification, entity recognition, and language translation. Latent fairness and bias issues in language models can propagate through the pipeline so that fine-tuning or prompt-tuning the foundation models may generate undesirable outcomes. Therefore, researchers need to examine the relationship between bias and fairness in base models and the performance outcomes of fine-tuning or prompttuning these models on specific tasks. Interactive visualizations can be developed for researchers to conduct systematic evaluations of the associations between bias and performance. Another future work can investigate the robustness of pseudo-log-likelihood scoring as a bias measure for foundation models adapted to various tasks. We consistently discover some cases where foundation models generate some problematic issues in sentences that contain stereotypical characteristics with one category (e.g., black) versus another (e.g., white). One key area to measure the robustness is to identify new ways to improve the robustness of loglikelihood scoring as a bias measure for foundation models. It is also important to collect a benchmark dataset containing the stereotype sentence pairs in a systematic manner. Ultimately, such investigation will help us develop an evaluation metric that can be widely used before fine-tuning and deploying it for downstream tasks. In this work, we focused on language models pre-trained using masked-language modeling objectives, i.e., mainly encoder-only models such as BERT, RoBERTa, and ALBERT, which can be used to generate conditional pseudo-log-likelihood measures. There are two other families of language models. First, decoder-only autoregressive models, such as GPT, are pre-trained by predicting the subsequent word in a sequence based on the preceding words or employing the next-sentence-prediction approach To inspect such models in the current Finspector framework, users need to develop ways to generate a log-likelihood-equivalent measure per sentence or we can adapt the visualization framework to fit the next-sentence-prediction models and evaluate their biases in different ways. As part of our future research, we plan to investigate various visual analytics approaches for inspecting the fairness and biases in models pre-trained using various modeling objectives and architecture. Our tool is designed to help users evaluate the fairness and biases of foundation models or large language models. Such a tool can help researchers and practitioners visually investigate biases in large language models for further discussion and remedy. Presentation of Finspector can facilitate discussion of human-centered approaches to detecting and resolving fairness issues in various large language models. 
However, readers should also note that there is no guarantee that all biases or fairness issues will be discovered by using the tool. We hope that the design of the tool described in the paper can inspire future technologies that help evaluate the bias and fairness of foundation models.
| 530 | 2,397 | 530 |
A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations
|
Recently, many studies are emerging towards building a retrieval-based dialogue system that is able to effectively leverage background knowledge (e.g., documents) when conversing with humans. However, it is non-trivial to collect large-scale dialogues that are naturally grounded on the background documents, which hinders the effective and adequate training of knowledge selection and response matching. To overcome the challenge, we consider decomposing the training of the knowledge-grounded response selection into three tasks including: 1) query-passage matching task; 2) query-dialogue history matching task; 3) multi-turn response matching task, and joint learning all these tasks in a unified pre-trained language model. The former two tasks could help the model in knowledge selection and comprehension, while the last task is designed for matching the proper response with the given query and background knowledge (dialogue history). By this means, the model can be learned to select relevant knowledge and distinguish proper response, with the help of ad-hoc retrieval corpora and a large number of ungrounded multi-turn dialogues. Experimental results on two benchmarks of knowledge-grounded response selection indicate that our model can achieve comparable performance with several existing methods that rely on crowd-sourced data for training.
|
Along with the very recent prosperity of artificial intelligence empowered conversation systems in the spotlight, many studies have been focused on building human-computer dialogue systems In this paper, we consider the response selection problem in knowledge-grounded conversion and specify the background knowledge as unstructured documents that are common sources in practice. The task is that given a conversation context and a set of knowledge entries, one is required 1): to select proper knowledge and grasp a good comprehension of the selected document materials (knowledge selection); 2): to distinguish the true response from a candidate pool that is relevant and consistent with both the conversation context and the background documents (knowledge matching). While there exists a number of knowledge documents on the Web, it is non-trivial to collect large-scale dialogues that are naturally grounded on the documents for training a neural response selection model, which hinders the effective and adequate training of knowledge selection and response matching. Although some benchmarks built upon crowd-sourcing have been released by recent works Since knowledge-grounded dialogues are unavailable in training, it raises greater challenges for learning the grounded response selection model. Fortunately, there exists a large number of unstructured knowledge (e.g., web pages or wiki articles), passage search datasets (e.g., query-passage pairs coming from ad-hoc retrieval tasks) (Khattab and Zaharia, 2020) and multi-turn dialogues (e.g., context-response pairs collected from Reddit) Based on the above intuition, in this paper, we consider decomposing the training of the grounded response selection task into several sub-tasks, and joint learning all those tasks in a unified model. To take advantage of the recent breakthrough on pretraining for natural language tasks, we build the grounded response matching models on the basis of a pre-trained language model (PLMs) In the first strategy, we directly concatenate the selected knowledge and dialogue history as a long sequence of background knowledge and feed into the model. In the second strategy, we first compute the matching degree between each queryknowledge and the response candidates, and then integrate all matching scores. We conduct experiments with benchmarks of knowledge-grounded dialogue that are constructed by crowd-sourcing, such as the Wizard-of-Wikipedia Corpus Our contributions are summarized as follows: • To the best of our knowledge, this is the first exploration of knowledge-grounded response selection under the zero-resource setting. • We propose decomposing the training of the grounded response selection models into several sub-tasks, so as to empower the model through these tasks in knowledge selection and response matching. • We achieve a comparable performance of response selection with several existing models learned from crowd-sourced training sets.
|
Early studies of retrieval-based dialogue focus on single-turn response selection where the input of a matching model is a message-response pair Recently, researchers pay more attention to multiturn context-response matching and usually adopt the representation-matching-aggregation paradigm to build the model. Representative methods include the dual-LSTM model To bridge the gap of the knowledge between the human and the machine, researchers have investigated into grounding dialogue agents with unstructured background knowledge In this section, we first formalize the knowledgegrounded response matching problem and then introduce our method from preliminary to response matching with PLMs to details of three pre-training tasks. We first describe a standard knowledge-grounded response selection task such as Wizard-of-Wikipedia. Suppose that we have a knowledgegrounded dialogue data set where k i = {p 1 , p 2 , . . . , p l k } represents a collection of knowledge with p j the j-th knowledge entry (a.k.a., passage) and l k is the number of entries; c i = {u 1 , u 2 , . . . , u lc } denotes multi-turn dialogue context with u j the j-th turn and l c is the number of dialogue turns. It should be noted that in this paper we denote the latest turn u lc as dialogue query q i , and dialogue context except for query is denoted as h i = c i /{q i }. r i stands for a candidate response. y i = 1 indicates that r i is a proper response for c i and k i , otherwise y i = 0. N is the number of samples in data set. The goal knowledge-grounded dialogue is to learn a matching model g(k, c, r) from D, and thus for any new (k, c, r), g(k, c, r) returns the matching degree between r and (k, c). Finally, one can collect the matching scores of a series of candidate responses and conduct response ranking. Zero-resource grounded response selection then is formally defined as follows. There is a standard multi-turn dialogue dataset and an ad-hoc retrieval dataset where q i is a query and p i stands a candidate passage, z i = 1 indicates that p i is a relevant passage for q i , otherwise z i = 0. Our goal is to learn a model g(k, h, q, r) from D c and D p , and thus for any new input (k, h, q, r), our model can select proper knowledge k from k and calculate the matching degree between r and ( k, q, h). Pre-trained language models have been widely used in many NLP tasks due to the strong ability of language representation and understanding. In this work, we consider building a knowledge-grounded response matching model with BERT. Specifically, given a query q, a dialogue history h = {u 1 , u 2 , ..., u n h } where u i is the i-th turn in the history, a response candidate r = {r 1 , r 2 , ..., r lr } with l r words, we concatenate all sequences as a single consecutive tokens sequence with special tokens, which can be represented as [CLS] and [SEP] are classification symbol and segment separation symbol respectively. For each token in x, BERT uses a summation of three kinds of embeddings, including WordPiece embedding Then, the embedding sequence of x is fed into BERT, giving us the contextualized embedding sequence {E [CLS] , E 2 , . . . , E lx }. E [CLS] is an aggregated representation vector that contains the Query-Dialogue History Matching Task Query-Passage Matching Task 𝑢 " [SEP] semantic interaction information between the query, history, and response candidate. 
Finaly, E [CLS] is fed into a non-linear layer to calculate the final matching score, which is formulated as: where W {1,2} and b {1,2} is training parameters for response selection task, σ is a sigmoid function. In knowledge-grounded dialogue, each dialogue is associated with a large collection of knowledge entries k = {p 1 , p 2 , . . . , p l k } 1 . The model is required to select m(m ≥ 1) knowledge entries based on semantic relevance between the query and each knowledge, and then performs the response matching with the query, dialogue history and the highly-relevant knowledge. Specifically, we denote k = (p 1 , . . . , pm ) as the selected knowledge entries, and feed the input sequence The final matching score g( k, h, q, r) can be computed based on [CLS] representation. On the basis of BERT, we further jointly train it with three tasks including 1) query-passage matching task; 2) query-dialogue history matching task; 3) multi-turn response matching task. The former two tasks could help the model in knowledge selection and knowledge (and dialogue history) comprehension, while the last task is designed for matching the proper response with the given query and background knowledge (dialogue 1 The scale of the knowledge referenced by each dialogue usually exceeds the limitation of input length in PLMs. history). By this means, the model can be learned to select relevant knowledge and distinguish the proper response, with the help of a large number of ungrounded dialogues and ad-hoc retrieval corpora. Although there exist a huge amount of conversation data on social media, it is hard to collect sufficient dialogues that are naturally grounded on knowledge documents. Existing studies Given a query-passage pair (q, p), we first concatenate the query q and the passage p as a single consecutive token sequence with special tokens separating them, which is formulated as: where w p i , w q j denotes the i-th and j-th token of knowledge entry p and query q respectively. For each token in S qp i , token, segment and position embeddings are summated and fed into BERT. It is worth noting that here we set the segment embedding of the knowledge to be the same as the dialogue history. Finally, we feed the output representation of [CLS] E qp [CLS] into a MLP to obtain the final query-passage matching score g(q, p). The loss function of each training sample for query-passage matching task is defined by = -log( e g(q,p + ) e g(q,p + ) + δp j=1 e g(q,pj ) ) where p + stands for the positive passage for q, p - j is the j-th negative passage and δ p is the number of negative passage. In multi-turn dialogues, the conversation history (excluding the latest query) is a piece of supplementary information for the current query and can be regarded as another format of background knowledge during the response matching. Besides, due to the natural sequential relationship between dialogue turns, the dialogue query usually shows a strong semantic relevance with the previous turns in the dialogue history. Inspired by such characteristics, we design a query-dialogue history matching task with the multi-turn dialogue context, so as to enhance the capability of the model to comprehend the dialogue history with the given dialogue query and to rank relevant passages with these pseudo query-passage pairs. Specifically, we first concatenate the dialogue history into a long sequence. The task requires the model to predict whether a query q = {w q 1 , . . . , w q nq } and a dialogue history sequence h = {w h 1 , . . . 
, w h n h } are consecutive and relevant. We concatenate two sequences into a single consecutive sequence with [SEP] tokens, For each word in S qh , token, segment and position embeddings are summated and fed into BERT. Finally, we feed E qh [CLS] into a MLP to obtain the final query-history matching score g(q, h). The loss function of each training sample for queryhistory matching task is defined by = -log( e g(q,h + ) e g(q,h + ) + δ h j=1 e g(q,hj ) ) where h + stands for the true dialogue history for q, h - j is the j-th negative dialogue history randomly sampled from the training set and δ h is the number of sampled dialogue history. The above two tasks are designed for empowering the model to knowledge or history comprehension and knowledge selection. In this task, we aim at training the model to match reasonable responses based on dialogue history and query. Since we treat the dialogue history as a special form of background knowledge and they share the same segment embeddings in the PLMs, our model can acquire the ability to identify the proper response with either dialogue history or the background knowledge through the multi-turn response matching task. Specifically, we format the multi-turn dialogues as query-history-response triples and requires the model to predict whether a response candidate r = {w r 1 , . . . , w r nr } is appropriate for a given query q = {w q 1 , . . . , w q nq } and a concatenated dialogue history sequence h = {w h 1 , . . . , w h n h }. Concretely, we concatenate three input sequences into a single consecutive tokens sequence with [SEP] tokens, 1 , . . . , w q nq , [SEP], w r 1 , . . . , w r nr } Similarly, we feed an embedding sequence of which each entry is a summation of token, segment and position embeddings into BERT. Finally, we feed E hqr [CLS] into a MLP to obtain the final response matching score g(h, q, r). The loss function of each training sample for multi-turn response matching task is defined by Lr(h, q, r + , r - 1 , . . . , r - δr ) = -log( e g(h,q,r + ) e g(h,q,r + ) + nr i=j e g(h,q,rj ) ) where r + is the true response for a given q and h, r - j is the j-th negative response candidate randomly sampled from the training set and δ r is the number of negative response candidate. We adopt a multi-task learning manner and define the final objective function as: In this way, all tasks are jointly learned so that the model can effectively leverage two training corpus and learn to select relevant knowledge and distinguish the proper response. After learning model from D c and D p , we first rank {p i } n k i=1 according to g(q, k i ) and then select top m knowledge entries {p 1 , . . . , p m } for the subsequent response matching process. Here we design two strategies to compute the final matching score g(k, h, q, r). In the first strategy, we directly concatenate the selected knowledge and dialogue history as a long sequence of background knowledge and feed into the model to obtain the final matching score, which is formulated as, where ⊕ denotes the concatenation operation. In the second strategy, we treat each selected knowledge entry and the dialogue history equally as the background knowledge, and compute the matching degree between each query, background knowledge, and the response candidates with the trained model. Consequently, the matching score is defined as an integration of a set of knowledgegrounded response matching scores, formulated as, g(k, h, q, r) = g(h, q, r)+ max i∈(0,m) g(p i , q, r) Training Set. 
We adopt MS MARCO passage ranking dataset For the query-dialogue history matching task and multi-turn response matching task, we use the multi-turn dialogue corpus constructed from the Reddit Test Set. We tested our proposed method on the Wizard-of-Wikipedia (WoW) Evaluation Metrics. Following previous works on knowledge-grounded response selection Our model is implemented by PyTorch Wizard Seen Wizard Unseen R@1 R@2 R@5 R@1 R@2 R@5 PTKGC sep (q+h) 84.9 93.9 97.8 64.9 81.7 94. as PTKGC sep -X, where X ∈ {L p , L h } meaning query-passage matching task and query-dialogue history matching task respectively. Table To further investigate the impact of our pretraining tasks on the performance of the multiturn response selection (without considering the grounded knowledge), we conduct an ablation study and the results are shown in Table The impact of the number of selected knowledge. We further study how the number of selected knowledge (m) influences the performance of PTKGC sep . Figure Since the characteristics of the two data sets are different (only WoW provides the golden knowledge label), we compare the proposed model with the baselines on both data sets individually. Baselines on WoW. 1) IR Baseline Starspace Baselines on CMU DoG 1) Starspace Performance of Response Selection. Our explanation to the phenomenon is that there is information loss when a long sequence composed of the knowledge and dialogue history passes through the deep architecture of BERT. Thus, the earlier different knowledge entries and dialogue history are fused together, the more information of dialogue history or background knowledge will be lost in matching. Particularly, on the WoW, in terms of R@1, our PTKGC sep achieves a comparable performance with the existing stateof-the-art models that are learned from the crowdsourced training set, indicating that the model can effectively learn how to leverage external knowledge feed for response selection through the proposed pre-training approach. Notably, we can observe that our PTKGC sep performs worse than DIM and FIRE on the CMU DoG. Our explanation to the phenomenon is that the dialogue and knowledge in CMU DoG focus on the movie domain while our train data including ad-hoc retrieval corpora and multi-turn Performance of Knowledge Selection. We also assess the ability of models to predict the knowledge selected by human wizards in WoW data. The results are shown in Table Ablation Study. We conduct a comprehensive ablation study to investigate the impact of different inputs and different tasks. First, we remove the dialogue history, knowledge, and both of them from the model, which is denoted as PTKGC sep (q+k), PTKGC sep (q+h) and PTKGC sep (q) respectively. According to the results of the first four rows in Table Then, we remove each training task individually from PTKGC sep , and denote the models In this paper, we study response matching in knowledge-grounded conversations under a zeroresource setting. In particular, we propose decomposing the training of the knowledge-grounded response selection into three tasks and joint train all tasks in a unified pre-trained language model. Our model can be learned to select relevant knowledge and distinguish proper response, with the help of ad-hoc retrieval corpora and amount of multiturn dialogues. Experimental results on two benchmarks indicate that our model achieves a comparable performance with several existing methods trained on crowd-sourced data. 
In the future, we would like to explore the applicability of our proposed method to retrieval-augmented dialogues.
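For concreteness, a minimal sketch of knowledge selection and the two inference-time scoring strategies described above, assuming a trained matcher is available as a black-box scoring function (names are ours):

```python
from typing import Callable, List

def select_knowledge(query: str, passages: List[str],
                     qp_score: Callable[[str, str], float], m: int) -> List[str]:
    """Keep the top-m passages ranked by the query-passage matching score."""
    return sorted(passages, key=lambda p: qp_score(query, p), reverse=True)[:m]

def score_strategy_1(selected: List[str], history: str, query: str, response: str,
                     match: Callable[[str, str, str], float]) -> float:
    """Concatenate selected knowledge and dialogue history into one background sequence."""
    background = " ".join(selected + [history])
    return match(background, query, response)

def score_strategy_2(selected: List[str], history: str, query: str, response: str,
                     match: Callable[[str, str, str], float]) -> float:
    """g(h, q, r) + max_i g(p_i, q, r): treat history and each passage as separate background."""
    return match(history, query, response) + max(match(p, query, response) for p in selected)
```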
| 1,357 | 2,963 | 1,357 |
Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation
|
We tackle the task of building supervised event trigger identification models which can generalize better across domains. Our work leverages the adversarial domain adaptation (ADA) framework to introduce domain-invariance. ADA uses adversarial training to construct representations that are predictive for trigger identification, but not predictive of the example's domain. It requires no labeled data from the target domain, making it completely unsupervised. Experiments with two domains (English literature and news) show that ADA leads to an average F1 score improvement of 3.9 on out-of-domain data. Our best-performing model (BERT-A) reaches 44-49 F1 across both domains, using no labeled target data. Preliminary experiments reveal that finetuning on 1% labeled data, followed by self-training, leads to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively.
|
Events are a key semantic phenomenon in natural language understanding. They embody a basic function of language: the ability to report happenings. Events are a basic building block for narratives across multiple domains such as news articles, stories and scientific abstracts, and are important for many downstream tasks such as question answering Prior work has explored unsupervised Concretely, we focus on event trigger identification, which aims to identify triggers (words) that instantiate an event. For example, in "John was born in Sussex", born is a trigger, invoking a BIRTH event. To introduce domain-invariance, we adopt the adversarial domain adaptation (ADA) framework
|
Throughout this work, we treat the task of event trigger identification as a token-level classification task. For each token in a sequence, we predict whether it is an event trigger. To ensure that our trigger identification model can transfer across domains, we leverage the adversarial domain adaptation (ADA) framework Figure In the second step, we train the representation learner and event classifier using D s to optimize the following loss: L refers to the cross-entropy loss and λ is a hyperparameter. In practice, the optimization in the above equation is performed using a gradient reversal layer (GRL) In our setup, the event classifier and domain predictors are MLP classifiers. For the representation learner, we experiment with several architectures. We experiment with the following models: 3 Experiments In our experiments, we use the following datasets: • LitBank Tables On average, ADA makes supervised models more robust on out-of-domain data, with an average F1 score improvement of 3.9, at no loss of in-domain performance. What cases does ADA improve on? To gain more insight into the improvements observed on using ADA, we perform a manual analysis of out-ofdomain examples that BERT labels incorrectly, but BERT-A gets right. We carry out this on 50 examples from TimeBank and LitBank each. We observe that an overwhelming number of cases from TimeBank use vocabulary in contexts unique to news (43/50 or 86%). This includes examples of financial events, political events and reporting events that are rarer in literature, indicating that ADA manages to reduce event extraction models' reliance on lexical features. We make similar observations for LitBank though the proportion of improvement cases with literature-specific vocabulary is more modest (22/50 or 44%). These cases include examples with archaic vocabulary, words that have a different meaning in literary contexts and human/ animal actions, which are not common in news. 4 Incorporating Minimal Labeled Data Finetuning on labeled data: We run finetuning experiments to study improvement in model performance on incorporating small amounts of labeled target domain data. For both domains, we finetune BERT-A, slowly increasing the percentage of labeled data used from 1%-5%. 5 We compare BERT-A with two other models. The first model is naive BERT with no domain adaptation (BERT-NoDA). The second model is a BERT model trained via supervised domain adaptation (BERT-FEDA), which we use as an indicator of ceiling performance. The supervised domain adaptation method we use is the neural modification of frustratingly easy domain adaptation developed in Frustratingly easy domain adaptation TimeBank 68.9 65.5 67.2 LitBank 40.3 71.5 51.5 Figures In this work, we tackled the task of building generalizable supervised event trigger identification models using adversarial domain adaptation (ADA) showed that ADA made supervised models more robust on out-of-domain data, with an average F1 score improvement of 3.9. Our best performing model (BERT-A) was able to reach 44-49 F1 across both domains using no labeled target domain data. Preliminary experiments showed that finetuning BERT-A on 1% labeled data, followed by selftraining led to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively. While these results are encouraging, we are yet to match supervised in-domain model performance. Future directions to explore include incorporating noise-robust training procedures (Goldberger and Ben-Reuven, 2017) and example weighting
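For reference, the adversarial objective described above is typically implemented with a gradient reversal layer; the PyTorch sketch below shows the standard wiring, with single linear heads standing in for the paper's MLP classifiers (an illustration, not the authors' code):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ADATriggerModel(nn.Module):
    def __init__(self, encoder, hidden_dim, num_domains=2, lambd=1.0):
        super().__init__()
        self.encoder = encoder                       # representation learner (e.g., BERT)
        self.trigger_clf = nn.Linear(hidden_dim, 2)  # trigger / non-trigger per token
        self.domain_clf = nn.Linear(hidden_dim, num_domains)
        self.lambd = lambd

    def forward(self, inputs):
        h = self.encoder(inputs)                     # (batch, seq_len, hidden_dim)
        trigger_logits = self.trigger_clf(h)
        # the domain predictor sees gradient-reversed features, making h domain-invariant
        domain_logits = self.domain_clf(GradReverse.apply(h, self.lambd))
        return trigger_logits, domain_logits
```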
| 898 | 683 | 898 |
Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism
|
The relational facts in sentences are often complicated: different relational triplets may overlap in a sentence. We divide sentences into three types according to triplet overlap degree: Normal, EntityPairOverlap and SingleEntityOverlap. Existing methods mainly focus on the Normal class and fail to extract relational triplets precisely. In this paper, we propose an end-to-end model based on sequence-to-sequence learning with a copy mechanism, which can jointly extract relational facts from sentences of any of these classes. We adopt two different strategies in the decoding process: employing only one unified decoder or applying multiple separated decoders. We test our models on two public datasets, and our models outperform the baseline method significantly.
|
Recently, to build large structural knowledge bases (KB), great efforts have been made on extracting relational facts from natural language texts. A relational fact is often represented as a triplet which consists of two entities (an entity pair) and a semantic relation between them, such as < Chicago, country, U nitedStates >. So far, most previous methods mainly focused on the task of relation extraction or classification which identifies the semantic relations between two pre-assigned entities. Although great progresses have been made Recently, with the success of deep learning on many NLP tasks, it is also applied on relational facts extraction. Nevertheless, the relational facts in sentences are often complicated. Different relational triplets may have overlaps in a sentence. Such phenomenon makes aforementioned methods, whatever deep learning based models and traditional feature engineering based joint models, always fail to extract relational triplets precisely. Generally, according to our observation, we divide the sentences into three types according to triplet overlap degree, including Normal, EntityPairOverlap (EPO) and SingleEntityOverlap (SEO). As shown in Figure To address the aforementioned challenge, we aim to design a model that could extract triplets, including entities and relations, from sentences of Normal, EntityPairOverlap and SingleEntityOverlap classes. To handle the problem of triplet overlap, one entity must be allowed to freely participate in multiple triplets. Different from previous neural methods, we propose an end2end model based on sequence-to-sequence (Seq2Seq) learning with copy mechanism, which can jointly extract relational facts from sentences of any of these classes. Specially, the main component of this model includes two parts: encoder and decoder. The encoder converts a natural language sentence (the source sentence) into a fixed length semantic vector. Then, the decoder reads in this vector and generates triplets directly. To generate a triplet, firstly, the decoder generates the relation. Secondly, by adopting the copy mechanism, the decoder copies the first entity (head entity) from the source sentence. Lastly, the decoder copies the second entity (tail entity) from the source sentence. In this way, multiple triplets can be extracted (In detail, we adopt two different strategies in decoding process: employing only one unified decoder (OneDecoder) to generate all triplets or applying multiple separated decoders (MultiDecoder) and each of them generating one triplet). In our model, one entity is allowed to be copied several times when it needs to participate in different triplets. Therefore, our model could handle the triplet overlap issue and deal with both of EntityPairOverlap and SingleEntityOverlap sentence types. Moreover, since extracting entities and relations in a single end2end neural network, our model could extract entities and relations jointly. The main contributions of our work are as follows: • We propose an end2end neural model based on sequence-to-sequence learning with copy mechanism to extract relational facts from sentences, where the entities and relations could be jointly extracted. • Our model could consider the relational triplet overlap problem through copy mechanism. In our knowledge, the relational triplet overlap problem has never been addressed before. • We conduct experiments on two public datasets. Experimental results show that we outperforms the state-of-the-arts with 39.8% and 31.1% improvements respectively. 
By giving a sentence without any annotated entities, researchers proposed several methods to extract both entities and relations. Pipeline based methods, like
|
In this section, we introduce a differentiable neural model based on Seq2Seq learning with copy mechanism, which is able to extract multiple relational facts in an end2end fashion. Our neural model encodes a variable-length sentence into a fixed-length vector representation first and then decodes this vector into the corresponding relational facts (triplets). When decoding, we can either decode all triplets with one unified decoder or decode every triplet with a separated decoder. We denote them as OneDecoder model and MultiDecoder model separately. The overall structure of OneDecoder model is shown in Figure To encode a sentence s = [w 1 , .., w n ], where w t represent the t-th word and n is the source sentence length, we first turn it into a matrix X , where x t is the embedding of t-th word. The canonical RNN encoder reads this matrix X sequentially and generates output o E t and hid- where f (• ) represents the encoder function. Following where to represent the concatenate result. Similarly, the concatenation of forward and backward RNN hidden states are used as the representation of sentence, that is s = [ The decoder is used to generate triplets directly. Firstly, the decoder generates a relation for the triplet. Secondly, the decoder copies an entity from the source sentence as the first entity of the triplet. Lastly, the decoder copies the second entity from the source sentence. Repeat this process, the decoder could generate multiple triplets. Once all valid triplets are generated, the decoder will generate NA triplets, which means "stopping" and is similar to the "eos" symbol in neural sentence generation. Note that, a NA triplet is composed of an NA-relation and an NA-entity pair. As shown in Figure where g(• ) is the decoder function and h D t-1 is the hidden state of time step t -1. We initialize h D 0 with the representation of source sentence s. u t is the decoder input in time step t and we calculate it as: where c t is the attention vector v t is the embedding of copied entity or predicted relation in time step t -1. W u is a weight matrix. Attention Vector. The attention vector c t is calculated as follows: (5) where o E i is the output of encoder in time step i, α = [α 1 , ..., α n ] and β = [β 1 , ..., β n ] are vectors, w c is a weight vector. selu(• ) is activation function After we get decoder output o D t in time step t (1 ≤ t), if t%3 = 1 (that is t = 1, 4, 7, ...), we use o D t to predict a relation, which means we are decoding a new triplet. Otherwise, if t%3 = 2 (that is t = 2, 5, 8, ...), we use o D t to copy the first entity from the source sentence, and if t%3 = 0 (that is t = 3, 6, 9, ...), we copy the second entity. Predict Relation. Suppose there are m valid relations in total. We use a fully connected layer to calculate the confidence vector q r = [q r 1 , ..., q r m ] of all valid relations: where W r is the weight matrix and b r is the bias. When predict the relation, it is possible to predict the NA-relation when the model try to generate NA-triplet. To take this into consideration, we calculate the confidence value of NA-relation as: where W N A is the weight matrix and b N A is the bias. We then concatenate q r and q N A to form the confidence vector of all relations (including the NA-relation) and apply softmax to obtain the probability distribution p r = [p r 1 , ..., p r m+1 ] as: We select the relation with the highest probability as the predict relation and use it's embedding as the next time step input v t+1 . 
The first decoder is initialized with s; Other decoder(s) are initialized with s and previous decoder's state. Copy the First Entity. To copy the first entity, we calculate the confidence vector q e = [q e 1 , ..., q e n ] of all words in source sentence as: where w e is the weight vector. Similar with the relation prediction, we concatenate q e and q N A to form the confidence vector and apply softmax to obtain the probability distribution p e = [p e 1 , ..., p e n+1 ]: Similarly, We select the word with the highest probability as the predict the word and use it's embedding as the next time step input v t+1 . Copy the Second Entity. Copy the second entity is almost the same as copy the first entity. The only difference is when copying the second entity, we cannot copy the first entity again. This is because in a valid triplet, two entities must be different. Suppose the first copied entity is the k-th word in the source sentence, we introduce a mask vector M with n (n is the length of source sentence) elements, where: then we calculate the probability distribution p e as: where ⊗ is element-wise multiplication. Just like copy the first entity, We select the word with the highest probability as the predict word and use it's embedding as the next time step input v t+1 . MultiDecoder model is an extension of the proposed OneDecoder model. The main difference is when decoding triplets, MultiDecoder model decode triplets with several separated decoders. is the decoder function of decoder i. u t is the decoder input in time step t and we calculated it as Eq 3. h D i t-1 is the hidden state of i-th decoder in time step t -1. ĥD i t-1 is the initial hidden state of i-th decoder, which is calculated as follows: Both OneDecoder and MultiDecoder models are trained with the negative log-likelihood loss function. Given a batch of data with B sentences S = {s 1 , ..., s B } with the target results Y = {y 1 , ..., y B }, where y i = [y 1 i , ..., y T i ] is the target result of s i , the loss function is defined as follows: T is the maximum time step of decoder. p(x|y) is the conditional probability of x given y. θ denotes parameters of the entire model. To evaluate the performance of our methods, we conduct experiments on two widely used datasets. The first is New York Times (NYT) dataset, which is produced by distant supervision method The second is WebNLG dataset The number of sentences of every class in NYT and WebNLG dataset are shown in Table In our experiments, for both dataset, we use LSTM We compare our models with NovelTagging model Following Table We can also observe that, in both NYT and WebNLG dataset, the NovelTagging model achieves the highest precision value and lowest recall value. By contrast, our models are much more balanced. We think that the reason is in the structure of the proposed models. The NovelTagging method finds triplets through tagging the words. However, they assume that only one tag could be assigned to just one word. As a result, one word can participate at most one triplet. Therefore, the NovelTagging model can only recall a small number of triplets, which harms the recall performance. Different from the NovelTagging model, our models apply copy mechanism to find entities for a triplet, and a word can be copied many times when this word needs to participate in multiple different triplets. Not surprisingly, our models recall more triplets and achieve higher recall value. Further experiments verified this. 
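Returning to the entity-copy step of the decoder, the sketch below illustrates how re-copying the first entity can be forbidden with the mask M. It is a hypothetical reconstruction under assumed tensor shapes (copy_logits, na_logit); here the mask is applied to the logits before the softmax, whereas the paper applies the 0/1 mask M element-wise to the scores.

```python
# Sketch of copying the second entity while masking out position k of the
# already-copied first entity (shapes and names are assumptions).
import torch
import torch.nn.functional as F

def copy_second_entity(copy_logits, na_logit, first_entity_pos):
    """copy_logits: (batch, n) confidences q^e over the source words.
    na_logit: (batch, 1) confidence of the NA-entity.
    first_entity_pos: (batch,) index k of the first copied entity."""
    # Build the mask M: 1 everywhere except the position of the first entity.
    mask = torch.ones_like(copy_logits)
    mask.scatter_(1, first_entity_pos.unsqueeze(1), 0.0)
    # Forbid re-copying the first entity by masking its logit.
    masked = copy_logits.masked_fill(mask == 0, float("-inf"))
    p_e = F.softmax(torch.cat([masked, na_logit], dim=1), dim=1)
    # Returned index in [0, n-1] is the copied source word; index n is the NA-entity.
    return p_e.argmax(dim=1)
```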
To verify the ability of our models to handle the overlapping problem, we conduct further experiments on the NYT dataset. Figure As shown in the previous experiments (Table In this paper, we proposed an end2end neural model based on the Seq2Seq learning framework with copy mechanism for relational fact extraction. Our model can jointly extract relations and entities from sentences, especially when the triplets in a sentence overlap. Moreover, we analyze the different overlap types and adopt two strategies for this issue, one with a unified decoder and one with multiple separated decoders. We conduct experiments on two public datasets to evaluate the effectiveness of our models. The results show that our models outperform the baseline method significantly and can extract relational facts from all three classes. This challenging task is far from being solved, and our future work will concentrate on improving the performance further. Another direction is to test our model on other NLP tasks such as event extraction.
| 777 | 3,709 | 777 |
Can Click Patterns across User's Query Logs Predict Answers to Definition Questions?
|
In this paper, we examined click patterns produced by users of the Yahoo! search engine when prompting definition questions. Regularities across these click patterns are then utilized for constructing a large and heterogeneous training corpus for answer ranking. In a nutshell, answers are extracted from clicked web-snippets originating from any class of web-site, including Knowledge Bases (KBs). Non-answers, on the other hand, are acquired from redundant pieces of text across web-snippets.
|
It is a well-known fact that definition queries are very popular across users of commercial search engines It is a standard practice of definition question answering (QA) systems to mine KBs (e.g., online encyclopedias and dictionaries) for reliable descriptive information on the definiendum As a means of dealing with this, current strategies try to construct general definition models inferred from a collection of definitions coming from the Internet or KBs Our approach has different innovative aspects compared to other research in the area of definition extraction. It is at the crossroads of query log analysis and QA systems. We study the click behavior of search engines' users with regard to definition questions. Based on this study, we propose a novel way of acquiring large-scale and heterogeneous training material for this task, which consists of: • automatically obtaining positive samples in accordance with click patterns of search engine users. This aids in harvesting a host of descriptions from non-KB sources in conjunction with descriptive information from KBs. • automatically acquiring negative data in consonance with redundancy patterns across snippets displayed within search engine results when processing definition queries. In brief, our experiments reveal that these patterns can be effectively exploited for devising efficient models. Given the huge amount of amassed data, we additionally contrast the performance of systems built on top of samples originated solely from KB, non-KB, and both combined. Our comparison corroborates that KBs yield massive trustworthy descriptive knowledge, but they do not bear enough diversity to discriminate all answering nuggets within any kind of text. Essentially, our experiments unveil that non-KB data is richer and therefore it is useful for discovering more descriptive nuggets than KB material. But its usage relies on its cleanness and on a negative set. Many people had these intuitions before, but to the best of our knowledge, we provide the first empirical confirmation and quantification. The road-map of this paper is as follows: section 2 touches on related works; section 3 digs deeper into click patterns for definition questions, subsequently section 4 explains our corpus construction strategy; section 5 describes our experiments, and section 6 draws final conclusions.
|
In recent years, definition QA systems have shown a trend towards the utilization of several discriminant and statistical learning techniques • centroid vector • • • Our contribution is a novel technique for obtaining heterogeneous training material for defi-nitional QA, that is to say, massive examples harvested from KBs and non-KBs. Fundamentally, positive examples are extracted from web snippets grounded on click patterns of users of a search engine, whereas the negative collection is acquired via redundancy patterns across web-snippets displayed to the user by the search engine. This data is capitalized by two state-of-the-art definition extractors, which are different in nature. In addition, our paper discusses the effect on the performance of different sorts (KBs and non-KBs) and amount of training data. As for user clicks, they provide valuable relevance feedback for a variety of tasks, cf. 3 User Click Analysis for Definition QA In this section, we examine a collection of queries submitted to Yahoo! search engine during the period from December 2010 to March 2011. More specifically, for this analysis, we considered a log encompassing a random sample of In the first place, we associate each query with a category in the taxonomy proposed by According to • • By the same token, queries containing keywords such as "homepage", "on-line", and "sign in" were also removed. • After the previous steps, many navigational queries (e.g., "facebook") still remained in the query log. We noticed that a substantial portion was signaled by several frequently and indistinctly clicked URLs. Take for instance "facebook": " With this in mind, we discarded entries embodied in a manually compiled black list. This list contains the 600 highest frequent cases. A third category in Subsequently, we profited from the remaining 44,928,652 (informational) entries for detecting queries where the intention of the user is finding descriptive information about a topic (i.e., definiendum). In the taxonomy delineated by In practice, we filtered definition questions as follows: 1. We exploited an array of expressions that are commonly utilized in query analysis for classifying definition questions 2. As stated in Unfortunately, since query logs stored by search engines are not publicly available due to privacy and legal concerns, there is no accessible training material to build models on top of annotated data. Thus, we exploited the aforementioned hand-crafted rules to connect queries to their respective category in this taxonomy. In substance, the first filter recognizes the intention of the user by means of the formulation given by the user (e.g., "What is a/the/an..."). With regard to this filter, some interesting observations are as follows: • In 40.27% of the entries, users did not visit any of the displayed web-sites. Consequently, we concluded that the information conveyed within the multiple snippets was often enough to answer the respective definition question. In other words, a significant fraction of the users were satisfied with a small set of brief, but quickly generated descriptions. • In 2.18% of these cases, the search engine returned no results, and a few times users tried another paraphrase or query, due to useless results or misspellings. • We also noticed that definition questions matched by these expressions are seldom related to more than one click, although informational queries produce several clicks, in general. 
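A rough sketch of the first, query-side filter discussed above is given below. The surface patterns listed are illustrative stand-ins, since the exact hand-crafted rules applied to the query log are not reproduced here.

```python
# Toy query-side filter: flag queries whose surface form signals a definition
# intent. The pattern list is an assumption, not the paper's actual rule set.
import re

DEFINITION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"^what\s+(is|are|was|were)\s+(a|an|the)\b",
        r"^who\s+(is|was)\s+",
        r"\bdefinition\s+of\b",
        r"^define\s+",
        r"\bmeaning\s+of\b",
    )
]

def looks_like_definition_query(query: str) -> bool:
    q = query.strip()
    return any(p.search(q) for p in DEFINITION_PATTERNS)

# looks_like_definition_query("What is an ETA")  -> True
# looks_like_definition_query("facebook login")  -> False
```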
In 46.44% of the cases, the user clicked a sole document, and more surprisingly, we observed that users are likely to click sources different from KBs, in contrast to the widespread belief in definition QA research. Users pick hits originating from small but domain-specific web-sites as a result of at least two effects: a) they are looking for minor or ancillary senses of the definiendum (e.g., "ETA" in " While the first filter infers the intention of the user from the query itself, the second deduces it from the origin of the clicked documents. With regard to this second filter, clicking patterns are more disperse. Here, the first two clicks normally correspond to the top two/three ranked hits returned by the search engine, see also All in all, the insight gained in this analysis allows the construction of an heterogeneous corpus for definition question answering. Put differently, these user click patterns offer a way to obtain huge amounts of heterogeneous training material. In this way the heavy dependence of open-domain description identifiers on KB data can be alleviated. Since queries obtained by the previous two filters are not associated with the actual snippets seen by the users (due to storage limitations), snippets were recovered by means of submitting the queries to Yahoo! search engine. After retrieval, we benefited from OpenNLP Along with numbers, sequences of full and partial matches of the definiendum were also substituted with placeholders, "#Q#" and "#QT#", respectively. To exemplify, consider this pre-processed snippet regarding "Benjamin Millepied" from " ) is a principal dancer at New York City Ballet and a ballet choreographer... We benefit from these templates for building both a positive and a negative training set. The negative set comprised templates appearing across all (clicked and unclicked) web-snippets, which at the same time, are related to more than five distinct queries. We hypothesize that these prominent elements correspond to noninformative, and thus non-descriptive, content as they appear within snippets across several questions. In other words: "If it seems to answer every question, it will probably answer no question". Take for instance: Conversely, templates that are more plausible to be answers are strongly related to their specific definition questions, and consequently, they are low in frequency and unlikely to be in the result set of a large number of queries. This negative set was expanded with templates coming from titles of snippets, which at the same time, have a frequency higher than four across all snippets (independent on which queries they appear). This process cooperated on gathering 1,021,571 different negative examples. In order to measure the precision of this process, we randomly selected and checked 1,000 elements, and we found an error of 1.3%. As for the positive set, this was constructed only from the summary section of web-snippets clicked by the users. We constrained these snippets to bear a title template associated with at least two web-snippets clicked for two distinct queries. What is #Q# ? Choices and Consequences. Biology question : What is an #Q# ? Since clicks are linked with entire snippets, it is uncertain which sentences are genuine descriptions (see the previous example). Therefore, we removed those templates already contained in the negative set, along with those samples that matched an array of well-known handcrafted rules. This set included: a. 
sentences containing words such as "ask", "report", "say", and "unless" This process assisted in acquiring 881,726 different examples, where 673,548 came from KBs. Here, we also randomly selected 1,000 instances and manually checked if they were actual descriptions. The error of this set was 12.2%. To put things into perspective, in contrast to other corpus acquisition approaches, the present method generated more than 1,800,000 positive and negative training samples combined, while the open-domain strategy of In our experiments, we checked the effectiveness of our user click-based corpus acquisition technique by studying its impact on two state-of-theart systems. The first one is based on the bi-term LMs proposed by With regard to the test set, this was constructed by manually annotating 113,184 sentence templates corresponding to 3,162 unseen definienda. In total, this array of unseen testing instances encompassed 11,566 different positive samples. In order to build a balanced testing collection, the same number of negative examples were randomly selected. Overall, our testing set contains 2 As to a baseline system, we accounted for the centroid vector Experiments. We trained both models by systematically increasing the size of the training material by 1%. For this, we randomly split the training data into 100 equally sized packs, and systematically added one to the previously selected sets (i. e., 1%, 2%, 3%, . . ., 99%, 100%). We also experimented with: 1) positive examples originated solely from KBs; 2) positive samples harvested only from non-KBs; and eventually 3) all positive examples combined. Figure Further, the improvement of about 9%-10% by means of exploiting our negative set makes its positive contribution clear. In particular, this supports our hypothesis that redundancy across websnippets pertaining to several definition questions can be exploited as negative evidence. On the whole, this enhancement also suggests that ME models are a better option than LMs. Furthermore, in the case of ME models, putting together evidence from KB and non-KBs betters the performance. Conversely, in the case of LMs, we do not observe a noticeable improvement when unifying both sources. We attribute this difference to the fact that non-KB data is noisier, and thus negative examples are necessary to cushion this noise. By and large, the outcomes show that the usage of descriptive information derived exclusively from KBs is not the best, but a cost-efficient solution. Incidentally, Figure In detail, when contrasting the confusion matrices of the best configurations accomplished by ME-combined (80.72%), ME-KB (80.33%) and ME-N-KB (78.99%), one can find that MEcombined correctly identified 88% of the answers (true positives), while ME-KB 89.37% and ME-N-KB 93.38% (see Table Interestingly enough, non-KB data only embodies 23.61% of all positive training material, but it still has the ability to recognize more answers. Despite of that, the other two strategies outperform ME-N-KB, because they are able We verified this synergy by inspecting the number of answers from non-KBs detected by the three top configurations in Table In addition, we performed significance tests utilizing two-tailed paired t-test at 95% confidence interval on twenty samples. For this, we used only the top three configurations in Table In summary, the results show that both negative examples and combining positive examples from heterogeneous sources are indispensable to tackle any class of text. 
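To summarize the corpus construction described above, the following schematic reconstruction (ours, not the original pipeline) shows how the redundancy-based negative set and the click-based positive set could be derived from the pre-processed templates.

```python
# Sketch of the redundancy-based split: templates that recur in the result sets
# of many distinct queries become negatives; remaining templates from clicked
# snippets become positive candidates. Thresholds follow the description above.
from collections import defaultdict

def build_training_sets(snippet_templates, clicked_templates, min_queries=5):
    """snippet_templates: iterable of (query_id, template) over ALL web-snippets.
    clicked_templates: set of templates taken from clicked snippets only."""
    queries_per_template = defaultdict(set)
    for query_id, template in snippet_templates:
        queries_per_template[template].add(query_id)

    # "If it seems to answer every question, it will probably answer no question."
    negatives = {t for t, qs in queries_per_template.items() if len(qs) > min_queries}
    positives = {t for t in clicked_templates if t not in negatives}
    return positives, negatives
```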
However, it is vital to lessen the noise in non-KB data, since this causes a more adverse effect on the performance. Given the upperbound in accuracy, our outcomes indicate that cleanness and quality are more important than the size of the corpus. Our figures additionally suggest that more effort should go into increasing diversity than the number of training instances. In light of these observations, we also conjecture that a more reduced, but diverse and manually annotated, corpus might be more effective. In particular, a manually checked corpus distilled by inspecting click patterns across query logs of search engines. Lastly, in order to evaluate how good a click predictor the three top ME-configurations are, we focused our attention only on the manually labeled positive samples (answers) that were clicked by the users. Overall, 86.33% (MEcombined), 88.85% (ME-KB) and 92.45% (ME-N-KB) of these responses were correctly predicted. In light of that, one can conclude that (clicked and non-clicked) answers to definition questions can be identified/predicted on the basis of user's click patterns across query logs. From the viewpoint of search engines, web snippets are computed off-line, in general. In so doing, some methods select the spans of text bearing query terms with the potential of putting the document on top of the rank Benjamin Millepied / News & This work investigates into the click behavior of commercial search engine users regarding definition questions. These behaviour patterns are then exploited as a corpus acquisition technique for definition QA, which offers the advantage of encompassing positive samples from heterogoneous sources. In contrast, negative examples are obtained in conformity to redundancy patterns across snippets, which are returned by the search engine when processing several definition queries. The effectiveness of these patterns, and hence of the obtained corpus, was tested by means of two models different in nature, where both were capable of achieving an accuracy higher than 70%. As a future work, we envision that answers detected by our strategy can aid in determining some query expansion terms, and thus to devise some relevance feedback methods that can bring about an improvement in terms of the recall of answers. Along the same lines, it can cooperate on the visualization of the results by highlighting and/or extending truncated answers, that is more informative snippets, which is one of the holy grail of search operators, especially when processing informational queries. NLP tools (e.g., parsers and name entity recognizers) can also be exploited for designing better training data filters and more discriminative features for our models that can assist in enhancing the performance, cf.
| 488 | 2,361 | 488 |
Composing Simple Image Descriptions using Web-scale N-grams
|
Studying natural language, and especially how people describe the world around them can help us better understand the visual world. In turn, it can also help us in the quest to generate natural language that describes this world in a human manner. We present a simple yet effective approach to automatically compose image descriptions given computer vision based inputs and using web-scale n-grams. Unlike most previous work that summarizes or retrieves pre-existing text relevant to an image, our method composes sentences entirely from scratch. Experimental results indicate that it is viable to generate simple textual descriptions that are pertinent to the specific content of an image, while permitting creativity in the description -making for more human-like annotations than previous approaches.
|
Gaining a better understanding of natural language, and especially natural language associated with images helps drive research in both computer vision and natural language processing (e.g., Our work contrasts to most previous approaches in four key aspects: first, we compose fresh sentences from scratch, instead of retrieving In this work, we propose a novel surface realization technique based on web-scale n-gram data. Our approach consists of two steps: (n-gram) phrase selection and (n-gram) phrase fusion. The first step phrase selection -collects candidate phrases that may be potentially useful for generating the description of a given image. This step naturally accommodates uncertainty in image recognition inputs as well as synonymous words and word re-ordering to improve fluency. The second step -phrase fusion -finds the optimal compatible set of phrases using dynamic programming to compose a new (and more complex) phrase that describes the image. We compare the performance of our proposed approach to three baselines based on conventional techniques: language models, parsers, and templates. Despite its simplicity, our approach is highly effective for composing image descriptions: it generates mostly appealing and presentable language, while permitting creative writing at times (see
|
Accommodating Uncertainty We extend candidate phrase selection in order to cope with uncertainty from the image recognition. In particular, for each object detection obj i , we include its top 3 predicted modifiers adj i1 , adj i2 , adj i3 determined by the attribute classifiers (see §2) to expand the set O 1 and O 2 accordingly. For instance, given adj i =(shiny or white) and obj i = sheep, we can consider both <shiny,sheep> and <white,sheep> pairs to predict more compatible pairs of words. Accommodating Synonyms Additionally, we augment each modifier adj i and each object name obj i with synonyms to further expand our sets O 1 , O 2 , and R. These expanded sets of phrases enable resulting generations that are more fluent and creative. This section explores three baseline surface realization approaches: language models ( §3.1), randomized local search ( §3.2), and template-based ( §3.3). Our best approach, phrase fusion using web-scale ngrams follows in §4. For each triple, as described in §2, we construct a sentence. For instance, given the triple <<white, cloud>, in, <blue, sky>>, we might generate "There is a white cloud in the blue sky". We begin with a simple decoding scheme based on language models. Let t be a triple, and let V t be the set of words in t. We perform surface realization by adding function words in-between words in V t . As a concrete example, suppose we want to determine whether to insert a function word x between a pair of words α ∈ V t and β ∈ V t . Then, we need to compare the length-normalized probability p(αxβ) with p(αβ), where p takes the n'th root of the probability p for n-word sequences. We insert the new function word x if p(αxβ) ≥ p(αβ) using the n-gram models, where the probability of any given sequence w 1 , ..., w m is approximated by Note that if we wish to reorder words in V t based on n-gram based language models, then the decoding problem becomes an instance of asymmetric traveler's salesman problem (NP-hard). For brevity, we retain the original order of words in the given triple. We later lift this restriction using the web-scale ngram based phrase fusion method introduced in §4. enforce long distance regularities for more grammatically correct generation. However, optimizing both language-model-based probabilities and parser-based probabilities is intractable. Therefore, we explore a randomized local search approach that makes greedy revisions using both language models and parsers. Randomized local search has been successfully applied to intractable optimization problems in AI (e.g., Table where X is a given sentence (image description), pLM (X) is the length normalized probability of X based on the language model, and pP CF G (X) is the length normalized probability of X based on the probabilistic context free grammar (PCFG) model. The loop is repeated until convergence or a fixed number of iterations is reached. Note that this approach can be extended to simulated annealing to allow temporary downward steps to escape from local maxima. We use the PCFG implementation of The third approach is a template-based approach with linguistic constraints, a technique that has often been used for various practical applications such as summarization We also include templates that encode basic discourse constraints. For instance, the template that generated the first sentences in Figure , where x i is the name of an object (e.g. "cow"), #(x i ) is the number of instances of x i (e.g. "one"), and PREFIX ∈ {"This picture shows", "This is a picture of", etc}. 
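The simple language-model decoding of §3.1 reduces to a per-gap decision; the sketch below illustrates it under an assumed lm_logprob interface that returns the log-probability of a word sequence under the n-gram model.

```python
# Sketch of the function-word insertion test: insert x between two content words
# when the length-normalized n-gram probability does not decrease.
import math

def length_normalized_prob(words, lm_logprob):
    # n-th root of the sequence probability == exp(mean log-probability).
    return math.exp(lm_logprob(words) / len(words))

def should_insert(alpha, x, beta, lm_logprob):
    with_x = length_normalized_prob([alpha, x, beta], lm_logprob)
    without_x = length_normalized_prob([alpha, beta], lm_logprob)
    return with_x >= without_x

# e.g. should_insert("cloud", "in", "sky", lm_logprob) decides whether to
# realize "... cloud in ... sky" rather than "... cloud ... sky".
```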
Although this approach can produce good looking sentences in a limited domain, there are many limitations. First, a template-based approach does not allow creative writing and produces somewhat stilted prose. In particular, it cannot add interesting new words, or replace existing content words with better ones. In addition, such an approach does not allow any reordering of words which might be necessary to create a fluent sentence. Finally, hand-written rules are domain-specific, and do not generalize well to new domains. We now introduce an entirely different approach that addresses the limitations of the conventional ap-proaches discussed in §3. This approach is based on web-scale n-gram, also known as Google Web 1T data, which provides the frequency count of each possible n-gram sequence for 1 ≤ n ≤ 5. We first define three different sets of phrases for each given triple <<adj1, obj1>, prep, <adj2, obj2>>: x is an n-gram phrase describing the first object using the words adj1 and obj1, and f is the frequency of x} x is an n-gram phrase describing the second object using the words adj2 and obj2, and f is the frequency of x} x is an n-gram describing the relation between the two objects using the words obj1 and obj2, and f is the frequency of x} We find n-gram phrases for O 1 , O 2 , and R from the Google Web 1T data. The search patterns for O 1 is: where [♣] is a wildcard word, and [♣] n- It is worthwhile to note that our pattern matching is case sensitive, and we only allow patterns that are Template This picture shows one cow, one building, one grass and one sky. The black cow is by the shiny building, and by the furry grass, and by the blue sky. The shiny building is by the furry grass, and by the blue sky. The furry grass is below the blue sky. Simple decoding the black cow or by the furry grass. the shiny building up by the blue sky. the furry grass be below one blue sky. all lower-case. From our pilot study, we found that n-grams with upper case characters are likely from named entities, which distort the n-gram frequency distribution that we rely on during the phrase fusion phase. To further reduce noise, we also discard any n-gram that a character that is not an alphabet. Given the expanded sets of phrases O 1 , O 2 , and R described above, we perform phrase fusion to generate simple image description. In this step, we find the best combination of three phrases, ( x1 , f1 ) ∈ O 1 , ( x2 , f2 ) ∈ O 2 , and ( xR , fR ) ∈ R as follows: Computational Efficiency One advantage of our phrase fusion method is its efficiency. If we were to attempt to re-order words with language models in a naive way, we would need to consider all possible permutations of words -an NP-hard problem ( §3.1). However, our phrase fusion method is clever in that it probes reordering only on selected pairs of words, where reordering is likely to be useful. In other words, our approach naturally ignores most word pairs that do not require reordering and has a time complexity of only O(K 2 n), where K is the maximum number of candidate phrases of any phrase type, and n is the number of phrase types in each sentence. K can be kept as a small constant by selecting K-best candidate phrases of each phrase type. We set K = 10 in this paper. To construct the training corpus for language models, we crawled Wikipedia pages that describe our object set. 
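The phrase-fusion step can be pictured with the brute-force sketch below. Since the scoring equation is not reproduced here, the product of Web 1T frequencies is used as an illustrative stand-in for the paper's objective, with K-best candidate lists as described above.

```python
# Simplified phrase fusion over the candidate sets O1, O2 and R, keeping the
# K best candidates per set and scoring a combination by frequency product
# (an illustrative objective, not the paper's exact equation).
from itertools import product

def fuse_phrases(O1, O2, R, K=10):
    """O1, O2, R: lists of (phrase, frequency) candidates for the first object,
    the second object, and the relation between them."""
    top = lambda cands: sorted(cands, key=lambda pf: pf[1], reverse=True)[:K]
    best, best_score = None, float("-inf")
    for (x1, f1), (x2, f2), (xr, fr) in product(top(O1), top(O2), top(R)):
        score = f1 * f2 * fr        # stand-in scoring over n-gram frequencies
        if score > best_score:
            best, best_score = (x1, xr, x2), score
    return best

# fuse_phrases([("the black cow", 900)], [("the blue sky", 1500)],
#              [("cow under the sky", 40)]) -> the phrase pieces to combine.
```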
For evaluation, we use the UIUC PAS-CAL sentence dataset Automatic Evaluation: BLEU The results are shown in Table There is one important factor to consider when interpreting Table From Table Human Evaluation: As mentioned earlier, BLEU score has some drawbacks including obliviousness to correctness of grammar and inability to evaluate the creativity of a composition. To directly quantify these aspects that could not be addressed by BLEU, we perform human judgments on 120 instances for the four proposed methods. Evaluators do not have any computer vision or natural language generation background. We consider the following three aspects to evaluate the our image descriptions: creativity, fluency, and relevance. For simplicity, human evaluators assign one set of scores for each aspect per image. The scores range from 1 to 3, where 1 is very good, 2 is ok, and 3 is bad. [Creativity] How creative is the generated sentence? 1 There is creativity either based on unexpected words (in particular, verbs), or describing things in a poetic way. 2 There is minor creativity based on re-ordering words that appeared in the triple 3 None. Looks like a robot talking. [Fluency] How grammatically correct is the generated sentence? 1 Mostly perfect English phrase or sentence. 2 There are some errors, but mostly comprehensible. 3 Terrible. [Relevance] How relevant is the generated description to the given image? 1 Very relevant. 2 Reasonably relevant. 3 Totally off. Notice that the relevance score of TEMPLATE is better than that of LANGUAGE MODEL, even though both approaches generate descriptions that consist of an almost identical set of words. This is presumably because the output from LANGUAGE MODEL contains grammatically incorrect sentences that are not comprehendable enough to the evaluators. The relevance score of PHRASE FUSION is also slightly worse than that of TEMPLATE, presumably because PHRASE FUSION often generates poetic or creative expressions, as shown in Figure Error Analysis There are different sources of errors. Some errors are due to mistakes in the original visual recognition input. For example, in the 3rd image in Figure Other errors are from surface realization. For instance, in the 8th image, PHRASE FUSION selects the preposition "under", presumably because dogs are typically under the chair rather than on the chair according to Google n-gram statistics. In the 5th image, an unexpected word "burning" is selected to make the resulting output idiosyncratic. Word sense disambiguation sometimes causes a problem in surface realization as well. In the 3rd image, the word "way" is chosen to represent "path" or "street" by the image recognizer. However, a different sense of way -"very" -is being used in the final output. There has been relatively limited work on automatically generating natural language image descriptions. Most work related to our study is discussed in §1, hence we highlight only those that are closest to our work here. We use similar vision based inputs -object detectors, modifier classifiers, and prepositional functions -to some very recent work on generating simple descriptions for images In this paper, we presented a novel surface realization technique based on web-scale n-gram data to automatically generate image description. Despite its simplicity, our method is highly effective in generating mostly appealing and presentable language, while permitting creative writing at times. 
We conclude from our study that it is viable to generate simple textual descriptions that are germane to the specific image content while also sometimes producing almost poetic natural language. Furthermore, we demonstrate that world knowledge implicitly encoded in natural language can help enhance image content recognition.
| 803 | 1,307 | 803 |
VoiSeR: A New Benchmark for Voice-Based Search Refinement
|
Voice assistants, e.g., Alexa or Google Assistant, have dramatically improved in recent years. Supporting voice-based search, exploration, and refinement are fundamental tasks for voice assistants, and remain an open challenge. For example, when using voice to search an online shopping site, a user often needs to refine their search by some aspect or facet. This common user intent is usually available through a "filter-by" interface on online shopping websites, but is challenging to support naturally via voice, as the intent of refinements must be interpreted in the context of the original search, the initial results, and the available product catalogue facets. To our knowledge, no benchmark dataset exists for training or validating such contextual search understanding models. To bridge this gap, we introduce the first large-scale dataset of voicebased search refinements, VoiSeR, consisting of about 10,000 search refinement utterances, collected using a novel crowdsourcing task. These utterances are intended to refine a previous search, with respect to a search facet or attribute (e.g., brand, color, review rating, etc.), and are manually annotated with the specific intent. This paper reports qualitative and empirical insights into the most common and challenging types of refinements that a voicebased conversational search system must support. As we show, VoiSeR can support research in conversational query understanding, contextual user intent prediction, and other conversational search topics to facilitate the development of conversational search systems.
|
Modern voice assistants, such as Amazon Alexa or Apple Siri, make use of Natural Language Understanding (NLU) techniques to perform several tasks. Some of the most popular functions offered by these systems are based on voice-search: millions use voice assistants to access information or search for music, products or local restaurants and stores. However, search experience with a voice assistant remains limited. The current generation of these systems mostly supports single-turn interactions, and does not naturally support more complex search needs, which often require refinements to narrow, broaden or change the initial search. Supporting refinement is a fundamental aspect of search systems, and it is done in a variety of ways in Web-based user interfaces, e.g., through query suggestion or explicit facets navigation or filtering. For example, in an e-Commerce search, a user may want to refine their search with respect to some facet or attribute (e.g., brand or price); this critical functionality is supported on most e-Commerce websites. However, this kind of interaction is challenging to support via voice-based dialogue interfaces, as interpreting such refinements requires modeling the original search intent, the initial results, and the available result facets. To the best of our knowledge, no large scale dataset exists for training and validating NLU models for multi-turn voice-based search. To bridge this gap, we present a new Voice-based Search Refinement dataset, VoiSeR The dataset was collected through crowdsourcing via Amazon Mechanical Turk, between February and June 2020 in the US and India. We designed the task to minimize any bias towards partic-ular expressions or terminology which may not be natural to users. To achieve this goal, we provided clear and concise instructions; we intentionally did not provide examples to avoid biasing participants towards using specific linguistic expressions (see Figure We annotated the dataset to highlight some important aspects characterizing a voice refinement of product search. In particular, we annotated (i) the products and attributes mentioned in each utterance, if present; (ii) the specific refinement intent of each utterance (e.g., refinement by exact attribute value). In addition to the new VoiSeR benchmark dataset, our contributions include (i) an analysis of the data, where we highlight some linguistic aspects characterizing how people express the refinement intent, and (ii) an empirical investigation to demonstrates that VoiSeR can be successfully used to bootstrap NLU models for handling voicebased search refinements. Furthermore, we show that contextual information is beneficial for such NLU tasks. Next, §2 provides details about the data collection and annotation. §3 provides a detailed analysis of the dataset, while §4 reports the empirical investigation. In §5, we discuss the related works. Finally, §6 discusses our conclusions.
|
In order to collect a large number of voice search refinements from multiple participants, we designed a crowdsourcing task on Amazon Mechanical Turk The design of the task was intended to make it both easy for the participants (i.e., Amazon Mechanical Turk Worker) and as realistic as possible, to provide valid linguistic expressions of voice refinements. Thus, we tried to reproduce a real "customer journey" of product searches and refinements. With this idea in mind, we designed the Mechanical Turk task depicted in Figure • An initial set of products, i.e., up to five products in the top part of the image. • A target set of products, i.e., up to five products in the central part of the image. • A visual intent indicator, i.e., an image describing the attribute type the worker should focus on when expressing the refinement. In Figure The participant is asked to imagine they are searching for products and that her search led to the initial set of results. We ask the participant to record a voice utterance, modifying the search to achieve the target product set, cued by the provided visual intent indicator. In the example in Figure An Automatic Speech Recognition (ASR) system (we adopted Amazon AWS Transcribe To automatically generate the many examples to annotate, we used the Amazon.in product search engine: starting from a random product search, we collected the initially retrieved products, as well as those returned after the application of a filter. The type of the activated filter dictates the visual intent indicator shown to the Worker, while the products shown in the initial and target sets are a subset of those retrieved by Amazon.in before and after the filter application, respectively. To emphasize the difference between the initial set and the target set, we select the products so that (i) no product appears in both sets and (ii) the products in the initial set do not satisfy the activated filter. For instance, the task shown in Figure We kept the task instructions as simple as possible in order to not introduce linguistic biases: the complete instructions to the Workers are those shown at the left Figure In a preliminary experiment, we did not show the intent indicator, but in many cases the difference between the initial set and the target set was not obvious, so that the Workers ended up focusing on irrelevant details. As a consequence, the utterances that were collected in that setting were over-specific, and often the Workers simply read parts of the target product titles. Based on this experiment, the full dataset was collected using the intent indicator condition described above. In e-commerce websites selling a wide range of products, there are typically many possible at-tribute types customers can filter on. We decided to collect data about some of the most popular and generally applicable ones, namely brand, color, discount, material, price and review rating. After collecting the data, we asked domain experts to annotate them with respect to three different tasks. Voice Refinement Validity: Since crowdsourcing data can be noisy, we first design a preliminary annotation task to validate each single utterance collected on Mechanical Turk. In particular, we showed to the annotators the Mechanical Turk task associated to each sentence and asked them to state whether the utterance correctly refines the product search on the target attribute. 
We also asked the annotators to report whether ASR errors (or typos in case the Worker manually corrected the ASR transcription) occur in the utterance. We asked the annotators to mark product and attribute mentions in the utterance, for example in the utterance "Show me only red t-shirts," red is the attribute and t-shirt is the product. Given the Mechanical Turk task design discussed in §2.1, the collected utterances are supposed to refine on a single attribute. However, we noticed that some utterances contain multiple attribute mentions. For instance, Nike and red in the sentence "Show me only red Nike t-shirts". The annotators are required to extract all individual attribute mentions within the utter-ance and not only the attribute mention referring to the target attribute type (the one shown as intent indicator to the Worker). Refinement Intent Classification: This task consists of indicating how to change the search query based on the attribute mentioned in the utterance. We asked the annotators to indicate whether the provided refinement belongs to one of the following types: • EXACT: the customer asks to select products having a specific value for the attribute, e.g., "show me only purple". • EXCLUDE: the customer asks to exclude products having a specific value for the attribute, e.g., "exclude the purple ones". • RANGE: the customer asks to select products having attribute values in a closed interval, e.g., "Price between 200 and 300". • GREATER: the customer asks to select products with attribute value higher than a given value, e.g., "Show 4 stars and up", or "Exclude the products with less than four stars". • LOWER: the customer asks to select products with attribute value lower than a given value, e.g., "Price less than 100", or "Exclude the ones more expensive than 100". • OTHER: utterance not falling in the above categories, e.g., "Show me a different color", or 'Select top ratings". Each example in our data is annotated by a single domain expert, since we observed a very high annotation quality in a preliminary annotation phase where multiple annotators annotated the same instances. We registered an almost perfect agreement in all tasks: Cohen's Kappa 0.914 for the Voice Refinement Validity task, Cohen's Kappa 0.960 for the Attribute and Product Extraction task, and Cohen's Kappa 0.859 for the Refinement Intent Classification task. In this section, we provide the analysis of the data collected through the crowdsourcing experiment. First, we describe how we conducted the Mechanical Turk experiment and the statistics of the collected data in §3.1; then, we discuss some of the linguistic properties emerging in the context of voice refinements in §3.2. The data was crowdsourced using the Amazon Mechanical Turk platform, from workers in the U.S. and India, between February and June 2020. Both Indian and U.S. Workers were asked to provide English voice refinements with respect to the tasks shown. As reported in Table Table As expected, the majority of brand-related utterances are of type EXACT. Also, the number of utterances of the price and rating related attributes are mainly LOWER and GREATER, respectively. This is intuitive considering that usually users look for less expensive and better rated products (for example, "Show me items that are rated four stars or better"). Notice that price (and also discount) utterances include a good number of RANGE, EXACT, as for example, "Womens bags in the range of 1000 to 3300". 
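Purely to make the six refinement intents concrete, the toy keyword tagger below maps an utterance to one of the labels. The cue lists are our own guesses, the labels in the dataset were assigned manually, and such rules are far weaker than the models evaluated in §4.

```python
# Toy intent tagger over the six refinement intents (illustrative only).
import re

INTENTS = ["EXACT", "EXCLUDE", "RANGE", "GREATER", "LOWER", "OTHER"]

def guess_intent(utterance: str) -> str:
    u = utterance.lower()
    if re.search(r"\bbetween\b.*\band\b|\brange of\b", u):
        return "RANGE"
    # Note: negated thresholds ("exclude the ones with less than four stars",
    # labelled GREATER in the annotation) would need dedicated handling.
    if re.search(r"\bexclude\b|\bwithout\b|\bnot\b", u):
        return "EXCLUDE"
    if re.search(r"\bmore than\b|\babove\b|\bat least\b|\band up\b|\bor better\b", u):
        return "GREATER"
    if re.search(r"\bless than\b|\bbelow\b|\bunder\b|\bat most\b|\bcheaper\b", u):
        return "LOWER"
    if re.search(r"\bsort\b|\bhighest\b|\bcheapest\b|\bdifferent\b", u):
        return "OTHER"
    return "EXACT"

# guess_intent("Price between 200 and 300") -> "RANGE"
# guess_intent("Show 4 stars and up")       -> "GREATER"
```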
Most of the OTHER bucket are (i) utterances whose intent is to sort products with respect to an attribute type like price or review rating, e.g., "Sort by most rated laptops", (ii) utterances where no specific attribute value is mentioned, e.g., "Show items with high ratings". Other common cases are utterances with comparatives e.g., "Show me less expensive earphones" or superlatives e.g., "Show me the highest rated silk sets". We noticed that, despite the fact that our Mechanical Turk tasks focused on a single attribute, in about 27% of the cases (2,668 utterances) Workers provided utterances containing multiple attributes. For example, the utterance "Price should be less than 700 with discount" contains both the target attribute (discount) and a specification of the price. Finally, ∼80% of the utterances have a product mention, e.g., "Show me a Vega brand hair curler". The rest don't mention products, e.g., "Which ones are discounted", or "Higher price". To better understand the collected data, and to identify differences between regions, categories, etc..., we computed some basic features of the refinement utterances: word counts, entropy per word for a bi-gram language model, use of adjectives and modifiers, dependency tree depth, and whether the utterance could be parsed as a complete English sentence having subject, verb, and object. We find moderate but statistically significant differences between regional populations which may have implications for the applicability of this data set to other regions. Table Adjectives and modifiers are used much more in IN than US, which appears to reflect workers speaking not only the refinement, but the original product search query as well. For example "I'm looking for a blue straight fit trouser pant" includes much more than the target color refinement, "blue". US workers had an increased tendency to speak in complete sentences, e.g., "Please display more per-fumes from Calvin Klein" instead of simply "Calvin Klein", both of which are utterances found in the dataset. Perhaps the most curious distinction is US workers' increased tendency to use first-person pronouns compared to other workers. These are typically phrased as "I only want to see...", "Show me...", "I want the {attribute} to be {value}", "I'm looking for ..." etc. Unsurprisingly, the per-word entropy is quite a bit higher for brand refinements than any other type but rating, while the word count is smaller for brand refinements than most others. In brand refinements the usage of first person pronouns is higher than any other type. Ratings refinements are the longest overall, with the deepest dependency trees. We note that the distribution of refined attributes differs across regions. With one notable exception, we do not see any statistically significant, consequential, and consistent interactions between region and refined attribute when used to predict the features shown in Table In this section we present a set of experiments on intent detection, specifically the recognition of attribute and product categories in a search refinement utterance. We aim to show that the VoiSeR dataset enables building models for intent recognition in voice search refinement. Moreover, we show the contribution of contextual information, e.g., the previous utterance, is beneficial, highlighting the need for large scale datasets like VoiSeR for developing contextual intent recognition models. token, as usual in BERT Setup. 
We used the bert-base-uncased model from the Huggingface repository Experimental Results. Table We performed an in-depth analysis to find out how the model performs on different attribute types. Tables In both settings, the model achieves the best results on discount, while results are a bit worse on material, review-rating and brand. This is a consequence of the lower linguistic variability associated with the discount attribute. On the other hand, refinements on review-rating are on average the Table Finally, we conducted a set of experiments to study the model capability to generalize to attribute types rarely, or never, observed in training. Figure The average learning curve becomes almost flat after 400 examples, and overall the model demonstrates a very good generalization capability on new attribute types. This suggests that the collected dataset has a reasonable size and that it represent a valuable resource to bootstrap NLU models for voice-based search. Human-computer information retrieval (HCIR) Due to the increasing availability of smartphones and voice assistants like Amazon Alexa or Google Home, voice-based search is becoming ubiquitous There have been prior efforts in creating open multi-turn voice-based search datasets, but because of the lack of effective automated systems for these tasks, the datasets were collected in a lab using a Wizard-of-Oz approach, where a hidden human participant playing the part of the search engine, e.g., Recently, several dialog-related datasets have been In this paper, we discussed the problem of search refinement, a fundamental component for supporting multi-turn voice-based complex search tasks. We presented the challenges in the voice refinement problem, and introduced a large-scale, critically needed benchmark for training and evaluating models in this setting. Specifically, we introduced the first benchmark dataset, VoiSeR, specifically developed for analyzing and measuring the linguistic phenomena underlying the voice refinements in second-turn searches in the e-commerce domain. We emphasize that the target search facets and attributes are (by design) general, and thus the data and the resulting models can be used, with or without adaptation, for a wide range of conversational search refinement and intent prediction tasks. We provided a detailed description of the data collection and annotation processes, and identified interesting statistical and linguistic phenomena in the dataset. We complement the data release with an extensive empirical investigation, which demonstrates that (i) our dataset is a valuable resource for training NLU models for voice-based search and (ii) using contextual information for recognizing product and attribute mentions is beneficial. Together, the new VoiSeR dataset and the analysis in this paper enable productive research for developing systems for voice-based complex search tasks.
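The following sketch shows the kind of BERT token-classification setup described in this section, with the previous search query optionally supplied as context. The BIO label set and the context encoding are assumptions for illustration, not a specification of the authors' models.

```python
# Hedged sketch: bert-base-uncased fine-tuned to tag product and attribute
# mentions, with the previous query optionally prepended as context.
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-PRODUCT", "I-PRODUCT", "B-ATTRIBUTE", "I-ATTRIBUTE"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

def encode(utterance, previous_query=None):
    # Contextual variant: feed "previous query [SEP] refinement" as a sentence
    # pair so self-attention can condition the tags on the first-turn search.
    if previous_query is not None:
        return tokenizer(previous_query, utterance, return_tensors="pt", truncation=True)
    return tokenizer(utterance, return_tensors="pt", truncation=True)

batch = encode("show me only red nike t-shirts", previous_query="t-shirts for men")
logits = model(**batch).logits   # (1, seq_len, num_labels); head still to be fine-tuned
```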
| 1,582 | 2,944 | 1,582 |
On the use of Comparable Corpora to improve SMT performance
|
We present a simple and effective method for extracting parallel sentences from comparable corpora. We employ a statistical machine translation (SMT) system built from small amounts of parallel texts to translate the source side of the non-parallel corpus. The target-side texts are used, along with other corpora, in the language model of this SMT system. We then use information retrieval techniques and simple filters to create French/English parallel data from a comparable news corpus. We evaluate the quality of the extracted data by showing that it significantly improves the performance of an SMT system.
|
Parallel corpora have proved be an indispensable resource in Statistical Machine Translation (SMT). A parallel corpus, also called bitext, consists in bilingual texts aligned at the sentence level. They have also proved to be useful in a range of natural language processing applications like automatic lexical acquisition, cross language information retrieval and annotation projection. Unfortunately, parallel corpora are a limited resource, with insufficient coverage of many language pairs and application domains of interest. The performance of an SMT system heavily depends on the parallel corpus used for training. Generally, more bitexts lead to better performance. Current resources of parallel corpora cover few language pairs and mostly come from one domain (proceedings of the Canadian or European Parliament, or of the United Nations). This becomes specifically problematic when SMT systems trained on such corpora are used for general translations, as the language jargon heavily used in these corpora is not appropriate for everyday life translations or translations in some other domain. One option to increase this scarce resource could be to produce more human translations, but this is a very expensive option, in terms of both time and money. In recent work less expensive but very productive methods of creating such sentence aligned bilingual corpora were proposed. These are based on generating "parallel" texts from already available "almost parallel" or "not much parallel" texts. The term "comparable corpus" is often used to define such texts. A comparable corpus is a collection of texts composed independently in the respective languages and combined on the basis of similarity of content There has been considerable amount of work on bilingual comparable corpora to learn word translations as well as discovering parallel sentences. include Our technique is similar to that of (Munteanu and Marcu, 2005) but we bypass the need of the bilingual dictionary by using proper SMT translations and instead of a maximum entropy classifier we use simple measures like the word error rate (WER) and the translation error rate (TER) to decide whether sentences are parallel or not. Using the full SMT sentences, we get an added advantage of being able to detect one of the major errors of this technique, also identified by We apply this technique to create a parallel corpus for the French/English language pair using the LDC Gigaword comparable corpus. We show that we achieve significant improvements in the BLEU score by adding our extracted corpus to the already available human-translated corpora. This paper is organized as follows. In the next section we first describe the baseline SMT system trained on human-provided translations only. We then proceed by explaining our parallel sentence selection scheme and the post-processing. Section 4 summarizes our experimental results and the paper concludes with a discussion and perspectives of this work.
|
The goal of SMT is to produce a target sentence e from a source sentence f . Among all possible target language sentences the one with the highest probability is chosen: where Pr(f |e) is the translation model and Pr(e) is the target language model (LM). This approach is usually referred to as the noisy sourcechannel approach in SMT It is today common practice to use phrases as translation units The feature functions h i are the system models and the λ i weights are typically optimized to maximize a scoring function on a development set In the framework of the EuroMatrix project, a test set of general news data was provided for the shared translation task of the third workshop on SMT The general architecture of our parallel sentence extraction system is shown in figure We shall also be trying to answer the following question over the course of this study: do we need to use the best possible SMT systems to be able to retrieve the correct parallel sentences or any ordinary SMT system will serve the purpose ? LDC provides large collections of texts from multilingual news reporting agencies. We identified agencies that provided news feeds for the languages of our interest and chose AFP for our study. Using the ID and date information for each sentence of both corpora, we first collect all sentences from the SMT translations corresponding to the same day (query sentences) and then the corresponding articles from the English Gigaword cor-pus (search space for IR). These day-specific files are then used for information retrieval using a robust information retrieval system. The Lemur IR toolkit The information retrieval step is the most time consuming task in the whole system. The time taken depends upon various factors like size of the index to search in, length of the query sentence etc. To give a time estimate, using a ±5 day window required 9 seconds per query vs 15 seconds per query when a ±7 day window was used. The number of results retrieved per sentence also had an impact on retrieval time with 20 results taking 19 seconds per query, whereas 5 results taking 9 seconds per query. Query length also affected the speed of the sentence extraction process. But with the problem at we could differentiate among important and unimportant words as nouns, verbs and sometimes even numbers (year, date) could be the keywords. We, however did place a limit of approximately 90 words on the queries and the indexed sentences. This choice was motivated by the fact that the word alignment toolkit Giza++ does not process longer sentences. A Krovetz stemmer was used while building the index as provided by the toolkit. English stop words, i.e. frequently used words, such as "a" or "the", are normally not indexed because they are so common that they are not useful to query on. The stop word list provided by the IR Group of University of Glasgow The resources required by our system are minimal : translations of one side of the comparable corpus. We will be showing later in section 4.2 of this paper that with an SMT system trained on small amounts of human-translated data we can 'retrieve' potentially good parallel sentences. Once we have the results from information retrieval, we proceed on to decide whether sentences are parallel or not. At this stage we choose the best scoring sentence as determined by the toolkit and pass the sentence pair through further filters. Sentences pairs conforming to the previous criteria are then judged based on WER (Levenshtein distance) and translation error rate (TER). 
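The retrieval loop described in this section can be summarized with the sketch below; build_index and Index.search are placeholders standing in for the Lemur toolkit calls, and the sketch only illustrates the day-window candidate selection before the filtering step.

```python
# Schematic day-window retrieval: each automatically translated source sentence
# queries an index built only from target-side articles published within
# +/- N days of it. Index construction/search are placeholders.
from datetime import timedelta

MAX_WORDS = 90   # queries/indexed sentences longer than this are dropped (Giza++ limit)

def retrieve_candidates(translated_sentences, target_articles_by_day,
                        build_index, window_days=5, top_k=5):
    """translated_sentences: list of (date, sentence) from the SMT output.
    target_articles_by_day: dict date -> list of target-language sentences."""
    results = []
    for date, query in translated_sentences:
        if len(query.split()) > MAX_WORDS:
            continue
        # Gather the search space: all target sentences within the day window.
        pool = []
        for offset in range(-window_days, window_days + 1):
            pool.extend(target_articles_by_day.get(date + timedelta(days=offset), []))
        index = build_index(s for s in pool if len(s.split()) <= MAX_WORDS)
        results.append((query, index.search(query, top_k)))
    return results
```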
WER measures the number of operations required to transform one sentence into the other (insertions, deletions and substitutions). A zero WER would mean the two sentences are identical; consequently, lower-WER sentence pairs share most of their words. However, two correct translations may differ in the order in which the words appear, something that WER is incapable of taking into account as it works on a word-to-word basis. This shortcoming is addressed by TER, which allows block movements of words and thus takes into account the reordering of words and phrases in translation. Our main goal was to be able to create an additional parallel corpus to improve machine translation quality, especially for the domains where we have little or no parallel data available. In this section we report the results of adding these extracted parallel sentences to the already available human-translated parallel sentences. We conducted a range of experiments by adding our extracted corpus to various combinations of already available human-translated parallel corpora. We experimented with WER and TER as filters to select the best scoring sentences. Generally, sentences selected with the TER filter showed better BLEU and TER scores than their WER counterparts. So we chose the TER filter as the standard for our experiments with limited amounts of human-translated corpus. Two main classes of errors are common in such tasks: firstly, cases where the two sentences share many common words but actually convey different meanings, and secondly, cases where the two sentences are (exactly) parallel except at the sentence ends, where one sentence has more information than the other. This second class of errors can be detected using WER, as we have both sentences in English. We detected the extra insertions at the end of the IR result sentence and removed them. Some examples of such sentences, along with the tails detected and removed, are shown in the figure. The best BLEU score on the development data is obtained when adding 9.4M words of automatically aligned bitexts (11M in total). Adding the dictionary improves the baseline system (second line in the table). Having had very promising results with our previous experiments, we proceeded to experiment with larger human-translated data sets. We added our extracted corpus to the collection of News-commentary (1.56M) and Europarl (40.1M) bitexts. The corresponding SMT experiments yield an improvement of about 0.2 BLEU points on the Dev and Test sets respectively (see the table). Our motivation for this approach was to be able to improve SMT performance by 'creating' parallel texts for domains which do not have enough or any parallel corpora. The News-commentary bitext and the bilingual dictionary were used to train an SMT system that produced the queries for information retrieval. To investigate the impact of the SMT quality on our system, we built another SMT system trained on large amounts of human-translated corpora (116M), as detailed in section 2. Parallel sentence extraction was done using the translations performed by this big SMT system as IR queries. We found no experimental evidence that the improved automatic translations yielded better alignments of the comparable corpus. It is however interesting to note that we achieve almost the same performance when we add 9.4M words of automatically extracted sentences as with 40M words of human-provided (out-of-domain) translations (second versus fifth line in the table). Sentence-aligned parallel corpora are essential for any SMT system.
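The filtering and tail-removal steps described above can be sketched as follows. This is an illustrative simplification rather than the authors' code: the paper detects tail insertions via the WER alignment, whereas the sketch below uses a coarser word-coverage heuristic, and the 0.7 acceptance threshold is an assumed value, not one reported in the paper.

```python
# Illustrative sketch of WER-based pair filtering and tail removal:
# the SMT translation of a French sentence is compared against the
# English sentence returned by IR; pairs with high WER are discarded
# and trailing words never covered by the translation are trimmed.

def wer(hyp: list[str], ref: list[str]) -> float:
    """Word error rate = word-level Levenshtein distance / reference length."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(hyp)][len(ref)] / max(len(ref), 1)

def trim_tail(smt_translation: str, ir_result: str) -> str:
    """Drop trailing IR-result words that the SMT translation never covers
    (a rough stand-in for the WER-alignment-based tail detection)."""
    smt, ir = smt_translation.split(), ir_result.split()
    covered = set(smt)
    end = len(ir)
    while end > 0 and ir[end - 1] not in covered:
        end -= 1
    return " ".join(ir[:end])

def keep_pair(smt_translation: str, ir_result: str, threshold: float = 0.7) -> bool:
    """Accept the (French source, English IR result) pair if WER is low enough.
    The threshold value here is illustrative, not the paper's setting."""
    return wer(smt_translation.split(), ir_result.split()) <= threshold
```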
The amount of in-domain parallel data available largely determines the quality of the translations. Not having enough, or having no, in-domain corpus usually results in bad translations for that domain. This need for parallel corpora has led researchers to employ new techniques and methods in an attempt to reduce the dire need for this crucial resource of SMT systems. Our study also contributes in this regard by employing an SMT system itself and information retrieval techniques to produce additional parallel corpora from easily available comparable corpora. We use automatic translations of the comparable corpus in one language (source) to find the corresponding parallel sentences in the comparable corpus of the other language (target). We only used a limited amount of human-provided bilingual resources. Starting with a total of about 2.6M words of sentence-aligned bilingual data and a bilingual dictionary, large amounts of monolingual data are translated. These translations are then employed to find the corresponding matching sentences in the target-side corpus, using information retrieval methods. Simple filters are used to determine whether the retrieved sentences are parallel or not. By adding these retrieved parallel sentences to the already available human-translated parallel corpora, we were able to improve the BLEU score on the test set by almost 2.5 points. Almost one BLEU point of this improvement was obtained by removing additional words at the end of the aligned sentences in the target language. This technique is particularly useful for language pairs for which very little parallel data exists. Other probable sources of comparable corpora to be exploited include multilingual encyclopedias like Wikipedia or Encarta. There also exist domain-specific comparable corpora (which are probably potentially parallel), such as documentation produced in a national/regional language as well as in English, or the translations of many English research papers into French or some other language used for academic purposes. We are currently working on several extensions of the procedure described in this paper. We will investigate whether the same findings hold for other tasks and language pairs, in particular translating from Arabic to English, and we will try to compare our approach with related work.
| 613 | 2,979 | 613 |
MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization
|
Recently, large-scale datasets have vastly facilitated development in nearly all domains of Natural Language Processing. However, there is currently no cross-task dataset in NLP, which hinders the development of multi-task learning. We propose MATINF, the first jointly labeled large-scale dataset for classification, question answering and summarization. MATINF contains 1.07 million question-answer pairs with human-labeled categories and user-generated question descriptions. Based on such rich information, MATINF is applicable to three major NLP tasks, including classification, question answering, and summarization. We benchmark existing methods and a novel multi-task baseline over MATINF to inspire further research. Our comprehensive comparison and experiments over MATINF and other datasets demonstrate the merits of MATINF.
|
In recent years, large-scale datasets (e.g., Ima-geNet Due to the high cost of data annotation, existing NLP datasets are usually labeled for only one particular task (e.g., SQuAD In this paper, we propose Maternal and Infant Dataset (MATINF), the first large-scale dataset covering three major NLP tasks: text classification, question answering and summarization. MATINF consists of question answering data crawled from a large Chinese maternity and baby caring QA site. On this site, users can ask questions related to maternity and baby caring. When submitting a question, a detailed description is required to provide essential information and the asker also needs to assign a category for this question from a pre-defined topic list. Each user could submit an answer to a question post, and the asker will select the best answer out of all the candidates. To attract more attention, the askers are encouraged to set rewards using virtual coins when submitting the question and these coins will be given to the user who submitted the best answer selected by the asker. This rewarding mechanism could constantly ensure high-quality answers. MATINF supports three NLP tasks as follows. Text Classification. Given a question and its detailed description, the task is to select an appropriate category from the fine-grained category list. Different from previous news classification tasks whose category set is general topics like entertainment and sports, MATINF-C is a fine-grained classification under a single domain. That is, the distance between different categories is smaller, which provides a more challenging stage to test the continuously evolving state-of-the-art neural models. Question Answering. Given a question, the task is to produce an answer in natural language. This task is slightly different from previous Machine Reading Comprehension (MRC) since the document which contains the correct answer is not directly provided. Therefore, how to collect the domain knowledge from massive QA data becomes extremely important. Summarization. Given a question description, the task is to produce the corresponding question. Previous summarization datasets are all constructed with news or academic articles. The limited text genres covered in these datasets hinder the thorough evaluation of summarization models. Also, the noisy nature of MATINF encourages more robust models. MATINF can be considered as the first social media summarization dataset. MATINF holds the following merits: (1) Large. MATINF includes 1.07M unique QA pairs, making it an ideal playground for the new advancements of deeper and larger models (e.g., Pretrained Language Models). (2) Multi-task applicable. MAT-INF is the first dataset that simultaneously contains ground truths for three major NLP tasks, which could facilitate new multi-task learning methods for these tasks. Here, to set a baseline and inspire future research, we present Multi-task Field-shared Sequence to Sequence (MTF-S2S), a straightforward yet effective model, which achieves better performance on all three tasks compared to its singletask counterparts.
|
Topic classification is one of the most fundamental tasks in NLP. As a deeply explored task, many datasets have been used in previous research both in English (AGNews, DBPedia, Yahoo Answer However, as most of them are formal text and the target categories are general topics, even simply leveraging n-gram features could achieve acceptable results. Plus, some of them are small in scale. Nowadays, with the prevalence of neural models and pretraining techniques, recent algorithms Following the definition in Currently, several datasets are available for Chinese Question Answering. NLPCC Shared Task Summarization datasets can be roughly categorized into extractive and abstractive datasets, which respectively favor abstractive and extractive methods. Extractive datasets are composed of long documents and summaries. Since the summary is long, extracted sentences and spans from the document could compose a good summary. Newsroom Abstractive datasets often contain short documents and summaries, which encourages a thorough understanding of the document and style transfer between a document and its corresponding summary. Gigaword However, all of these existing datasets are composed of either news or academic articles. The narrow sources of these datasets bring two main drawbacks. First, due to the nature of news reporting and academic writing, the summary-eligible contents do not distribute uniformly We present Maternal and Infant (MATINF) Dataset, a large-scale dataset jointly labeled for classification, question answering and summarization in the domain of maternity and baby caring in Chinese. An entry in the dataset includes four fields: question (Q), description (D), class (C) and answer (A). An example is shown in Figure We collect nearly two million question-answer pairs with fine-grained human-labeled classes from a large Chinese maternity and baby caring QA site. We conduct both automatic and manual data cleansing and remove: (1) classes with insufficient samples; (2) entries in which the length of the description filed is less than the length of the question field; (3) data with any field longer than 256 characters; (4) human-spotted ill-formed data. After the data cleansing, we construct MATINF with the remaining 1.07 million entries. We first randomly split the whole data into training, validation and test sets with a proportion of 7:1:2. Then, we use the splits for summarization and QA. For classification, we further divide the data into two sub-tasks according to different classification standards within each split. In MATINF, the class labels are first selected by the users when submitting a question. Then, if the question is not in the right class, the forum administrators would manually re-categorize the question to the correct class. In our data, there are two parallel standards for classifying a question: topic class and age of the baby. We use these two standards to construct our two subsets. Thus, we define two tasks: (1) classifying a question to different age groups; (2) classifying a question into a fine-grained topic. We list the classes of the two tasks in between the two subsets. Formally, we define the task as predicting the class of a QA pair with its question and description fields (i.e., Q, D → C). Different from previous datasets, our task is a finegrained classification (i.e., to classify documents in a domain) rather than classifying general topics (e.g., politics, sports, entertainments), which means the semantic difference between classes is prominently smaller. 
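To make the task definition concrete, the sketch below shows how one jointly labeled record yields the two MATINF-C classification examples (Q, D → C). The field names and label strings are placeholders for illustration, not the dataset's actual schema.

```python
# Minimal sketch: deriving the two MATINF-C inputs (Q, D -> C) from one
# jointly labeled record. Field names and label values are placeholders,
# not the dataset's actual schema.
record = {
    "question": "<short user question>",            # Q
    "description": "<detailed user description>",   # D
    "topic_class": "<fine-grained topic label>",    # C for MATINF-C-TOPIC
    "age_class": "<age-group label>",               # C for MATINF-C-AGE
}

def classification_example(rec: dict, standard: str) -> tuple[str, str]:
    """Concatenate question and description as the model input; pick the
    label according to the chosen classification standard ('topic' or 'age')."""
    text = rec["question"] + " " + rec["description"]
    label = rec["topic_class"] if standard == "topic" else rec["age_class"]
    return text, label

x, y = classification_example(record, standard="topic")
```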
It requires meticulous exploitation of semantics instead of recognizing unique n-gram features for each class. We provide statistical comparison of MATINF-C with other datasets in Table Typically, to return an answer for a specific question, the model needs to retrieve from a pre-defined document set or query a manually-constructed knowledge base. MS-MARCO On the other hand, in a real-world scenario, it is impossible to define a document set covering all knowledge needed to answer a user question. Thus, we provide the training set of MATINF-QA as the possible document source and encourage all kinds of methods including retrieval, generation and hybrid models. Formally, the task is defined as replying a question with natural text (i.e., Q → A). The large scale of our dataset ensures that a model is able to generalize and learn enough knowledge to answer a user question. Note that we do not use description when defining this task since we observe a negative effect on the generalization in our experiment. Shown in Table All current datasets for summarization to date are in the domain of news and academic articles. However, as a custom of the report and academic writing, in extractive datasets, the summary-eligible contents often appear at the beginning or the end of an article, preventing the summarization model from a full understanding and resulting in impractically high performance in evaluation. On the other hand, current abstractive datasets are all formal news datasets, which are in lack of diversity. Models trained on such a single-source dataset is not robust enough to handle real-world complexity. In MATINF-SUMM, question description can be seen as an extended and specific version of the question itself, containing more detailed background information with respect to the question. Besides, the question itself is often a well-formed interrogative sentence rather than extracted phrases. Our task is to generate the question from the corresponding description (i.e., D → Q). Note that our task itself can support many meaningful real-world applications, e.g., generating an informative title for user-generated content (UGC). Also, there is only one public dataset for summarization in Chinese to date. Our dataset can be used to verify the effectiveness of existing models and eliminate the overfitting bias caused by evaluation on merely one dataset. We compare MATINF-SUMM with other datasets in Table Recently, many attempts have been made on multitask learning in NLP To set a baseline and also inspire future research, we design a multi-task learning network, named Multi-task Field-shared Sequence to Sequence (MTF-S2S). We illustrate the architecture of MTF-S2S in Figure When training, since the sizes of datasets for different tasks are not equal, we first determine the batch size for different tasks to make sure that the training progress for each task is approximately synchronized by: where T includes four tasks: summarization, QA, and two classification tasks. bs * is the batch size of each task, and n * is the sample numbers in each dataset for the task. If one task is iterated to the last data batch, it will start over from the first batch. For each iteration, we successively calculate the losses by Cross Entropy for each task in one batch. Then, we train the model to minimize the total loss: where λ * is the manually set weight for each task. We stop the co-training after one epoch, then finetune the model to obtain the peak performance for each task, separately. 
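The two displayed formulas in the MTF-S2S description (the per-task batch sizes and the weighted total loss) were lost in extraction. The sketch below implements the idea as described: batch sizes proportional to each task's dataset size so that all tasks advance through their epochs in sync, tasks restarting from their first batch when exhausted, and a weighted sum of per-task cross-entropy losses. The helper names (e.g. model.loss) and the rounding rule are assumptions, not the authors' code.

```python
# Assumed implementation of the MTF-S2S co-training loop described above
# (not the authors' code): proportional per-task batch sizes plus a
# weighted sum of per-task losses for each update.
import itertools

def proportional_batch_sizes(dataset_sizes: dict, total_batch: int) -> dict:
    """bs_t proportional to n_t, so every task finishes its epoch together."""
    n_total = sum(dataset_sizes.values())
    return {t: max(1, round(total_batch * n / n_total))
            for t, n in dataset_sizes.items()}

def co_train_one_epoch(model, loaders, loss_weights, optimizer, steps):
    # loaders: task -> DataLoader built with the batch sizes from
    # proportional_batch_sizes; cycle() restarts a task that runs out early,
    # mirroring "start over from the first batch".
    iters = {t: itertools.cycle(dl) for t, dl in loaders.items()}
    for _ in range(steps):
        total_loss = 0.0
        for task, it in iters.items():
            batch = next(it)
            # model.loss is a hypothetical helper returning the task's
            # cross-entropy loss on one batch.
            total_loss = total_loss + loss_weights[task] * model.loss(task, batch)
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
```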
In this section, we benchmark a few baselines and MTF-S2S on the three tasks of MATINF. We run each experiment with three different random seeds and report the average result of the three runs. MTF-S2S. For MTF-S2S, we set all λ i = 0.25 and use an Adam (Kingma and Ba, 2015) optimizer to co-train the model for one epoch with batch sizes of 64, 64, 12 and 52 for bs Summ , bs QA , bs CT opic , and bs CAge respectively with a learning rate of 0.001. Then we fine-tune the model for each task with a learning rate of 5 × 10 -5 . We report both the performance after co-training and after fine-tuning. The hidden size of all LSTM encoders/decoders and attentions is 200. For all tasks, we separately train MTF-S2S on each task only to provide a single-task baseline. Both MTF-S2S and Seq2Seq baselines are character-based and their embeddings are initialized with Tencent AI Lab Embedding Classification. For classification, we conduct experiments with a statistical learning baseline, several deep neural networks and pretrained large-scale language models. For the statistical baselines, we extract character-based unigram and bigram features and use a logistic classifier to predict the classes. For neural networks, we choose fastText For language models, we fine-tune BERT Classification. We show the experimental results of two classification sub-tasks in Table TextRank language models with an accuracy of 91.02. To analyze, this task has fewer training samples, which is in favor of a model with moderate parameter numbers instead of huge parameter numbers as in language models. Also, the task is relatively easier due to the class number, which makes the advantage of language models more trivial. For the multi-task baseline, MTF-S2S shows a satisfying performance on both MATINF-C-AGE and MATINF-C-TOPIC, outperforming the same model which is only trained on the single task by 0.14 and 0.19 in terms of accuracy. Notably, BERT-of-Theseus Question Answering. The experimental results are shown in Table Summarization. We further conduct performance comparison for summarization across three datasets, CNN/DM Since MATINF is a web-crawled dataset, it would be inevitable to be noisier than a dataset annotated by hired annotators though we have made every effort to clean the data. On the bright side, it can encourage more robust models and facilitate realworld applications. For future work, we would like to see more interesting work exploring new multi-task learning approaches. To conclude, in this paper, we present MATINF, a jointly labeled large-scale dataset for classification, question answering and summarization. We benchmark existing methods and a straightforward baseline with a novel multi-task paradigm on MAT-INF and analyze their performance on these three tasks. Our extensive experiments reveal the potential of the proposed dataset for accelerating the innovations in the three tasks and multi-task learning.
| 848 | 3,119 | 848 |
Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature
|
Although current state-of-the-art Transformer-based solutions have succeeded in a wide range of single-document NLP tasks, they still struggle to address multi-input tasks such as multi-document summarization. Many solutions truncate the inputs, thus ignoring potential summary-relevant content, which is unacceptable in the medical domain where every piece of information can be vital. Others leverage linear model approximations to apply multi-input concatenation, worsening the results because all information is considered, even if it is conflicting or noisy with respect to a shared background. Despite the importance and social impact of medicine, there are no ad-hoc solutions for multi-document summarization. For this reason, we propose a novel discriminative marginalized probabilistic method (DAMEN) trained to discriminate critical information from a cluster of topic-related medical documents and generate a multi-document summary via token probability marginalization. Results show that we outperform the previous state-of-the-art on a biomedical dataset for multi-document summarization of systematic literature reviews. Moreover, we perform extensive ablation studies to motivate the design choices and prove the importance of each module of our method.
|
The task of multi-document summarization aims to generate a compact and informative summary from a cluster of topic-related documents, which represents a very challenging natural language processing (NLP) application due to the presence of redundant and sometimes conflicting information among documents State-of-the-art approaches leverage two leading solutions: hierarchical networks that capture crossdocument relations via graph encodings Multi-document summarization requires models to have more robust capabilities for analyzing the cluster to discriminate the correct information from noise and merge it consistently. In this work, we propose a discriminative marginalized probabilistic neural method (DAMEN) that selects worthy documents in the cluster with respect to a shared background and generates the summary via token probability marginalization. The marginalization of the probability has been successfully applied in past NLP models such as pLSA To the best of our knowledge, we are the first that propose such a method for multi-document summarization. To this aim, we conduct experiments on the only medical dataset for multi-document summarization of systematic literature reviews (MS2). Besides, we perform extensive ablation studies to motivate the design choices and prove the importance of each component of our method. To sum up, our contributions are as follows: • We propose a novel probabilistic neural method for multi-document summarization (DAMEN) that discriminates the summaryrelevant information from a cluster of topicrelated documents and generates a final summary via token probability marginalization. • We advance the research in the medical domain, experimenting with a biomedical multidocument summarization dataset about the generation of systematic literature reviews. • We show that our solution outperforms previous state-of-the-art solutions, achieving better ROUGE scores. Furthermore, we extensively prove the contribution of each module of our method with ablation studies.
|
We describe related works on multi-document summarization categorized on model architectures. Flat solutions. Flat concatenation is a simple yet powerful solution because the generation of the multi-document summary is treated as a singledocument summarization task, thus it can leverage state-of-the-art pre-trained summarization models. Consequently, processing all documents as a flat input requires models capable of handling long sequences. As previously experimented by Hierarchical solutions. To better preserve crossdocument relations and obtain semantic-rich representations, hierarchical concatenation solutions leverage graph-based techniques to work from word and sentence-level Our solution. In this work, we show how the summary-relevant information can be discriminated from a cluster of medical documents by a probabilistic neural method trained end-to-end. In detail, our solution fully leverages pre-trained state-of-the-art Transformers without applying input truncation that causes performance drop and discards important contents, unacceptable for a high-social impact domain such as the medical one. We introduce DAMEN, a discriminative marginalized probabilistic neural method for the multidocument summarization of medical literature based on three components: • Indexer: it is a neural language model based on BERT architecture • Discriminator: it leverages a BERT model to create the background embedding, which is used to compute a distance score between the embedding of each document in the cluster in order to select the top K ones. • Generator: it uses a BART model In this phase, we index each document in the cluster with an embedding generated by a BERT-based model. Such a pre-trained language model is the state-of-the-art in semantic modeling from textual data thanks to the vast knowledge learned during pre-training The technique we use is known as dense passage retriever (DPR) The main idea of the Discriminator is to discriminate the critical information from noise in a cluster of topic-related documents with respect to a shared background without breaking the backpropagation chain. For this reason, we use a probabilistic deep neural model to draw a probability distribution over documents in the cluster < c 0 , c 1 , ..., c n >∈ C i , with the following formula: where θ represents the parameters of the neural network. Even in this case, the neural model is a BERT-based pre-trained language model as the one used for indexing, but this is trained during the learning process while the first is frozen. In detail, the Discriminator creates a latent projection for each background, which is used to fetch the more related documents in the cluster. More precisely, it applies the inner product to create a score for each document and selects the top K ones. We use the pre-trained encoder-decoder generative Transformer BART where tok is a special text separator token (<doc>) we add between x i and c ij to make BART aware of the background text boundary. The behavior of the Generator can be formally defined as follows: where γ are the Generator parameters, N = |y i | is the target length, and y i,1:z are the tokens from position 1 to z of the target y i . 
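As a concrete illustration of the discriminate-then-marginalize computation described above, the sketch below scores chunks against the background by inner product, keeps the top K (K=6 as in the paper's setting), and marginalizes the Generator's token log-probabilities over those chunks. The tensor layout and function are hypothetical rather than the released implementation, and the token-level marginalization shown is one reading of the description.

```python
# Condensed sketch (hypothetical wrappers, not the released code) of the
# discriminate-then-marginalize step: score chunks against the background,
# keep the top K, and marginalize token probabilities over them.
import torch
import torch.nn.functional as F

def marginal_nll(background_emb, chunk_embs, gen_token_logprobs, k=6):
    """
    background_emb:     (d,)   Discriminator embedding of the background x_i
    chunk_embs:         (n, d) frozen Indexer embeddings of the n chunks
    gen_token_logprobs: (n, T) log P(y_z | x_i, c_ij, y_<z) from BART, already
                               gathered at the gold target tokens
    Returns the negative marginal log-likelihood of the target summary.
    """
    scores = chunk_embs @ background_emb                 # inner-product relevance
    top = torch.topk(scores, k=min(k, scores.numel()))
    log_p_chunk = F.log_softmax(top.values, dim=-1)      # log p_theta(c_ij | x_i)
    log_p_tokens = gen_token_logprobs[top.indices]       # (k, T)
    # token-level marginalization:
    # log sum_j p(c_ij | x_i) * p(y_z | x_i, c_ij, y_<z)
    log_marginal = torch.logsumexp(log_p_chunk.unsqueeze(1) + log_p_tokens, dim=0)
    return -log_marginal.sum()
```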
The entire model aims to draw the probability distribution over the dictionary to generate the output tokens y i conditioned by x i and C i that we formally define as: We train the whole model by minimizing the negative marginal log-likelihood of each target with the following loss: 4 Experiments This section starts with describing the dataset in §4.1 and training details in §4.2. We then analyze model performance in §4.3 and finally conduct ablation studies in §4.4. We tested and evaluated our proposed method on the only medical dataset for multi-document summarization, as far as we know, about the generation of systematic literature reviews: the MS2 dataset. The dataset is provided in DeYoung et al. ( The problem can be formalized as follows: we have a target statement to generate about the background source, containing the topic specifications, and a cluster of related document abstracts from which to fetch and discriminate helpful knowledge with respect to the background. From here on, we use the terms "document" and "abstract" interchangeably since the elements in the cluster are just the abstracts of medical documents. We report the dataset statistics in Table We trained our solution for 3 epochs using a batch size of 1 and a learning rate with a linear schedule set to 1 × 10 -5 . We set the number of K equal to 6 because it gave best results and used 1024 tokens as the max input size for the Generator. During the evaluation, we adopted a beam size of 4 with a min and a max length set to 32 and 256, respectively. We implemented the code using PyTorch for tensor computations and Hugging Face 3 for language model checkpoints. We performed the experiments on a workstation with a GPU Nvidia RTX 3090 of 24GB memory, 64GB of RAM, and a processor Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz. Table We conducted ablation studies on the MS2 dataset to prove the importance of each module of our method. In detail, for all experiments we trained our solution for 1 epoch with the same training details reported in §4.2, and we performed the evaluation on the first 400 instances of the test set. 3 The importance of a highly abstractive largesized Generator. We report in Table • facebook/bart-base: the actual BART model pre-trained with a denoising masked language modeling. • gayanin/bart-mlm-pubmed: the BART model pre-trained exclusively on scientific corpora. • facebook/bart-large: the same BART model as the base version with a large architecture. • facebook/bart-large-cnn: the large BART fine-tuned on single-document summarization on the CNN/DailyMail dataset • facebook/bart-large-xsum: the large BART fine-tuned on single-document summarization on the XSum dataset Results prove that a large-sized BART model already fine-tuned on a summarization task achieves better performance. More precisely, the checkpoint fine-tuned on the XSum dataset obtains better results thanks to the higher abstractiveness and the shortness of the target summaries, which are made up of just 1-2 sentences, similar to the MS2 dataset. The importance of a full-sized chunked representation of documents in the cluster. Table • Document-level: the simpler configuration that considers the entire abstracts in the cluster. We truncated documents taking only the first 512 tokens before encoding by the Indexer. 
• Sentence-level: we considered the sentences of each document obtained using the stateof-the-art tokenizer PySBD • Chunk-level: our configuration, where each document is split into chunks of exact 512 tokens to consider all text information without input truncation. This configuration is similar to the "sentence-level" one but with the difference that each textual unit is 512 tokens in length and not 128. The results prove the better performance on a cluster with chunked documents. By considering 512 tokens for each document, we fully leverage the capability of BERT language modeling without truncating any information. Input truncation required by the "document-level" configuration plays an important role in final accuracy because it discards and ignores potential summary-relevant information, leading to a performance drop. The "sentence-level" setting lets us increase the top K sentences to retrieve, but it worsens the final summary because single sentences are too fine-grained. The importance of a background-first concatenation with special token. Table Results prove the importance of a backgroundfirst concatenation with the special token separator to make BART aware of the textual difference between the background and the documents. The importance of pre-trained DPR encoders. model checkpoints for the Indexer and Discriminator. First, we leveraged the checkpoint "sentencetransformers/allenai-specter" Results prove the importance of the DPR checkpoints for both the Indexer and Discriminator. We proposed a novel probabilistic method based on the combination of three language models to tackle multi-document summarization in the medical domain. This task is characterized by redundant information, noise, and the possible presence of vital information in each sentence that makes arbitrary input truncation unacceptable. For this reason, we proposed a multi-document summarization method able to discriminate salient contents from irrelevant before summarizing. In detail, the solution first leverages a BERT-based model (Indexer) for creating dense indices for each chunk of each document in the cluster. Then, a second BERT-based model (Discriminator) is used to process the shared background and select only the most relevant chunks. The final BART model is trained to perform a probability marginalization over each token prediction for each selected chunk. In this way, our solution reads all document information and selects just the most relevant chunks, discarding noise before feeding the Generator. The Discriminator and Generator are trained end-to-end, backpropagating the probability distribution as explained in §3. The Indexer is frozen; training would lead to some problems, such as the time to learn improved embeddings at each iteration and the larger memory occupation to save the gradient for each document. We tested our method on MS2, the only dataset on systematic literature reviews, and compared it with state-of-the-art models, finding that our novel approach outperforms competitors on the ROUGE evaluation metrics. Further, we performed extensive ablation studies to highlight the contribution of each component and motivate the design choices. At the edge of our knowledge, this is the first work that applies a probability marginalization method for multi-document summarization. We believe this work can inspire novel research towards endto-end multi-model collaboration instead of solutions with a single large model addressing the entire task. 
According to the divide et impera pattern, each model learns a specific sub-task, creating a more efficient and transparent cooperating solution. Tasks such as related work generation or text generation from multi-sourced inputs can get the most from our method, improving pre-existing solutions to discriminate helpful knowledge from noise. Further possible directions to deal with multiinputs are the following: i) extracting relevant snippets from documents with term weighting techniques The advancement of deep neural network architectures and the availability of large pre-trained language models has led to significant improvements for the multi-document summarization task, which has applications in high-impact domains, particularly in the medical one. Here, systematic literature reviews play an essential role for the medical and scientific community, and for that reason, they require strong guarantees about the factuality of the output summary. Current state-of-the-art NLP solutions cannot establish such assurance, so we do not believe our solution, like previous ones, is ready to be deployed. The research should explore more effective evaluation measures for text summarization to make it happen, and large-scale accuracy guarantees by medical experts are still needed. Finally, if the method will be applied to sensitive data such as medical patient records, it should also include privacy-preserving policies
| 1,251 | 2,022 | 1,251 |
AMR Parsing as Graph Prediction with Latent Alignment
|
Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).
|
Abstract meaning representations (AMRs)
|
In this work, we demonstrate that the alignments can be treated as latent variables in a joint probabilistic model and induced in such a way as to be beneficial for AMR parsing. Intuitively, in our probabilistic model, every node in a graph is assumed to be aligned to a word in a sentence: each concept is predicted based on the corresponding RNN state. Similarly, graph edges (i.e. relations) are predicted based on representations of concepts and aligned words (see Figure We assume injective alignments from concepts to words: every node in the graph is aligned to a single word in the sentence and every word is aligned to at most one node in the graph. This is necessary for two reasons. First, it lets us treat concept identification as sequence tagging at test time. For every word we would simply predict the corresponding concept or predict NULL to signify that no concept should be generated at this position. Secondly, Gumbel-Sinkhorn can only work under this assumption. This constraint, though often appropriate, is problematic for certain AMR constructions (e.g., named entities). In order to deal with these cases, we re-categorized AMR concepts. Similar recategorization strategies have been used in previous work The resulting parser achieves 74.4% Smatch score on the standard test set when using LDC2016E25 training set, • we introduce a joint probabilistic model for alignment, concept and relation identification; • we demonstrate that a continuous relaxation can be used to effectively estimate the model; • the model achieves the best reported results. In this section we describe our probabilistic model and the estimation technique. In section 3, we describe preprocessing and post-processing (including concept re-categorization, sense disambiguation, wikification and root selection). We will use the following notation throughout the paper. We refer to words in the sentences as w = (w 1 , . . . , w n ), where n is sentence length, w k ∈ V for k ∈ {1 . . . , n}. The concepts (i.e. labeled nodes) are c = (c 1 , . . . , c m ), where m is the number of concepts and c i ∈ C for i ∈ {1 . . . , m}. For example, in Figure A relation between 'predicate concept' i and 'argument concept' j is denoted by r ij ∈ R; it is set to NULL if j is not an argument of i. In our example, r 2,3 = ARG0 and r 1,3 = NULL. We will use R to denote all relations in the graph. To represent alignments, we will use a = {a 1 , . . . , a m }, where a i ∈ {1, . . . , n} returns the index of a word aligned to concept i. In our example, a 1 = 3. All three model components rely on bidirectional LSTM encoders We believe that using discrete alignments, rather than attention-based models Our model consists of three parts: (1) the concept identification model P θ (c|a, w); (2) the relation identification model P φ (R|a, w, c) and (3) the alignment model Q ψ (a|c, R, w). 4 Formally, (1) and (2) together with the uniform prior over alignments P (a) form the generative model of AMR graphs. In contrast, the alignment model Q ψ (a|c, R, w), as will be explained below, is approximating the intractable posterior P θ,φ (a|c, R, w) within that probabilistic model. In other words, we assume the following model for generating the AMR graph: 4 θ, φ and ψ denote all parameters of the models. AMR concepts are assumed to be generated conditional independently relying on the BiLSTM states and surface forms of the aligned words. 
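The displayed factorization referenced by "we assume the following model for generating the AMR graph:" did not survive extraction. The block below reconstructs it from the surrounding description (a uniform prior over alignments plus the two conditional models, with concepts generated conditionally independently given their aligned words); the exact notation may differ from the paper's typesetting.

```latex
% Generative model of an AMR graph given the sentence w, reconstructed
% from the description above.
P(c, R \mid w) \;=\; \sum_{a} P(a)\, P_{\theta}(c \mid a, w)\, P_{\phi}(R \mid a, w, c),
\qquad
P_{\theta}(c \mid a, w) \;=\; \prod_{i=1}^{m} P_{\theta}\!\left(c_i \mid w_{a_i}, h_{a_i}\right).
```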
Similarly, relations are predicted based only on AMR concept embeddings and LSTM states corresponding to words aligned to the involved concepts. Their combined representations are fed into a bi-affine classifier The expression involves intractable marginalization over all valid alignments. As standard in variational autoencoders, VAEs (Kingma and Welling, 2014), we lower-bound the loglikelihood as where Q ψ (a|c, R, w) is the variational posterior (aka the inference network), E Q [. . .] refers to the expectation under Q ψ (a|c, R, w) and D KL is the Kullback-Liebler divergence. In VAEs, the lower bound is maximized both with respect to model parameters (θ and φ in our case) and the parameters of the inference network (ψ). Unfortunately, gradient-based optimization with discrete latent variables is challenging. We use a continuous relaxation of our optimization problem, where realvalued vectors âi ∈ R n (for every concept i) approximate discrete alignment variables a i . This relaxation results in low-variance estimates of the gradient using the parameterization trick (Kingma and Welling, 2014), and ensures fast and stable training. We will describe the model components and the relaxed inference procedure in detail in sections 2.6 and 2.7. Though the estimation procedure requires the use of the relaxation, the learned parser is straightforward to use. Given our assumptions about the alignments, we can independently choose for each word w k (k = 1, . . . , m) the most probably concept according to P θ (c|h k ). If the highest scoring option is NULL, no concept is introduced. The relations could then be predicted relying on P φ (R|a, w, c). This would have led to generating inconsistent AMR graphs, so instead we search for the highest scoring valid graph (see Section 3.2). Note that the alignment model Q ψ is not used at test time and only necessary to train accurate concept and relation identification models. The concept identification model chooses a concept c (i.e. a labeled node) conditioned on the aligned word k or decides that no concept should be introduced (i.e. returns NULL). Though it can be modeled with a softmax classifier, it would not be effective in handling rare or unseen words. First, we split the decision into estimating the probability of concept category τ (c) ∈ T (e.g. 'number', 'frame') and estimating the probability of the specific concept within the chosen category. Second, based on a lemmatizer and training data 5 we prepare one candidate concept e k for each word k in vocabulary (e.g., it would propose want if the word is wants). Similar to where the first multiplicative term is a softmax classifier over categories (including NULL); ] denotes the indicator function and equals 1 if its argument is true and 0, otherwise; Z(h, θ) is the partition function ensuring that the scores sum to 1. We use the following arc-factored relation identification model: Each term is modeled in exactly the same way: 1. for both endpoints, embedding of the concept c is concatenated with the RNN state h; 2. they are linearly projected to a lower dimension separately through M h (h 3. a log-linear model with bilinear scores is used to compute the probabilities. 5 See supplementary materials. In the above discussion, we assumed that BiL-STM encodes a sentence once and the BiLSTM states are then used to predict concepts and relations. 
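The bi-affine relation scorer just described (concept embedding concatenated with the aligned BiLSTM state, projected to a lower dimension per endpoint, then scored with a bilinear form per relation label) can be sketched as follows. The projection sizes and the per-label parameterization are illustrative choices, not necessarily the authors' exact configuration.

```python
# Minimal sketch of the arc-factored bi-affine relation scorer described
# above (illustrative dimensions; not the authors' implementation).
import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    def __init__(self, enc_dim, proj_dim, num_relations):
        super().__init__()
        self.head_proj = nn.Linear(enc_dim, proj_dim)   # "predicate" endpoint
        self.dep_proj = nn.Linear(enc_dim, proj_dim)    # "argument" endpoint
        # one bilinear form per relation label (including NULL); the extra
        # dimension of ones below provides the bias terms
        self.bilinear = nn.Parameter(
            torch.zeros(num_relations, proj_dim + 1, proj_dim + 1))
        nn.init.xavier_uniform_(self.bilinear)

    def forward(self, states):
        """states: (m, enc_dim) = [concept embedding; aligned BiLSTM state]
        per node. Returns (m, m, num_relations) unnormalized relation scores."""
        ones = states.new_ones(states.size(0), 1)
        h = torch.cat([self.head_proj(states), ones], dim=-1)   # (m, p+1)
        d = torch.cat([self.dep_proj(states), ones], dim=-1)    # (m, p+1)
        # scores[i, j, r] = h_i^T  U_r  d_j
        return torch.einsum("ip,rpq,jq->ijr", h, self.bilinear, d)
```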
In semantic role labeling, a task closely related to the relation identification stage of AMR parsing, a slight modification of this approach was shown to be more effective. Recall that the alignment model is only used at training time, and hence it can rely both on the input (states h_1, ..., h_n) and on the list of concepts c_1, ..., c_m. Formally, we add (n - m) NULL concepts to the list. As with sentences, we use a BiLSTM model to encode the concepts c, yielding states g_i ∈ R^{d_g}, i ∈ {1, ..., n}. We use a globally-normalized alignment model Q_ψ(a|c, w) = exp(Σ_i ϕ(g_i, h_{a_i})) / Z_ψ(c, w), where Z_ψ(c, w) is the intractable partition function and the terms ϕ(g_i, h_{a_i}) = g_i^T B h_{a_i} score each alignment link according to a bilinear form, with B ∈ R^{d_g × d} a parameter matrix. Recall that our learning objective (1) involves an expectation under the alignment model. The partition function of the alignment model Z_ψ(c, w) is intractable, and it is tricky even to draw samples from the distribution. Luckily, a recently proposed relaxation can be used. Consider first the perturb-and-max view, â = argmax_{a∈P} [Σ_i ϕ(g_i, h_{a_i}) + ε_a], where P is the set of all permutations of n elements and ε_a is noise drawn independently for each a from the fixed Gumbel distribution G(0, 1). Unfortunately, this is also intractable, as there are n! permutations. Instead, in perturb-and-max an approximate scheme is used where the noise is assumed to be factorizable. In other words, noisy scores are first computed as ϕ̃(g_i, h_{a_i}) = ϕ(g_i, h_{a_i}) + ε_{i,a_i}, where ε_{i,a_i} ∼ G(0, 1), and an approximate sample is obtained as â = argmax_a Σ_{i=1}^{n} ϕ̃(g_i, h_{a_i}). Such a sampling procedure is still intractable in our case and also non-differentiable. The main contribution of the Gumbel-Sinkhorn construction is the operator S_t, which maps a (noisy) score matrix Φ to a relaxed, doubly-stochastic approximation of the argmax permutation. Note that Φ is a function of the alignment model Q_ψ, so we will write Φ_ψ in what follows. The variational bound (1) can now be approximated as E_{Σ∼G(0,1)}[log P_θ(c | S_t(Φ_ψ, Σ), w) + ...]. Using the Gumbel-Sinkhorn construction unfortunately does not guarantee that Σ_i â_{ij} = 1. To encourage this equality to hold, and equivalently to discourage overlapping alignments, we add another regularizer to the objective (5). Our final objective is fully differentiable with respect to all parameters (i.e. θ, φ and ψ) and has low variance, as sampling is performed from the fixed non-parameterized distribution, as in standard VAEs. One remaining question is how to use the soft input â = S_t(Φ_ψ, Σ) in the concept and relation identification models. The standard technique would be to pass to the models expectations under the relaxed variables, Σ_{k=1}^{n} â_{ik} h_k, instead of the vectors h_{a_i}. However, the concept prediction model log P_θ(c | S_t(Φ_ψ, Σ), w) relies on the pointing mechanism, i.e. it directly exploits the words w rather than relying only on BiLSTM states h_k. So instead we treat â_i as a prior in a hierarchical model. As we will show in our experiments, a softer version of the loss is even more effective, where we set the parameter α = 0.5. We believe that using this loss encourages the model to more actively explore the alignment space. Geometrically, a loss surface shaped as a ball in the 0.5-norm space pushes the model away from the corners, thus encouraging exploration. 3 Pre- and post-processing Such 'primary' concepts get encoded in the category of the concept (the set of categories is τ, see also section 2.3). Details of the re-categorization procedure and other pre-processing are provided in the appendix. For post-processing, we handle sense disambiguation, wikification and ensure the legitimacy of the produced AMR graph.
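A compact sketch of the Gumbel-Sinkhorn operator S_t used above: perturb the alignment score matrix with Gumbel noise, then alternate row and column normalization in log space. The temperature and iteration count are illustrative values, not the paper's settings.

```python
# Compact sketch of the Gumbel-Sinkhorn relaxation: perturb the alignment
# score matrix with Gumbel noise, then alternate row/column normalization
# in log space. Temperature and iteration count are illustrative.
import torch

def gumbel_sinkhorn(scores, temperature=1.0, n_iters=20, eps=1e-20):
    """scores: (n, n) alignment scores phi(g_i, h_k); returns an
    approximately doubly-stochastic soft alignment matrix of the same shape."""
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + eps) + eps)
    log_alpha = (scores + gumbel) / temperature
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)  # columns
    return log_alpha.exp()

# As the temperature goes to 0 and the iteration count grows, the output
# approaches a permutation matrix, i.e. a hard injective alignment of
# concepts to words.
```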
For sense disambiguation we pick the most frequent sense for that particular concept ('-01', if unseen). For wikification we again look-up in the training set and default to "-". There is certainly room for improvement in both stages. Our probability model predicts edges conditional independently and thus cannot guarantee the connectivity of AMR graph, also there are additional constraints which are useful to impose. We enforce three constraints: (1) specific concepts can have only one neighbor (e.g., 'number' and 'string'; see appendix for details); (2) each predicate concept can have at most one argument for each relation r ∈ R; (3) the graph should be connected. Constraint (1) is addressed by keeping only the highest scoring neighbor. In order to satisfy the last two constraints we use a simple greedy procedure. First, for each edge, we pick-up the highest scoring relation and edge (possibly NULL). If the constraint ( Finally, we need to select a root node. Similarly to relation identification, for each candidate concept c i , we concatenate its embedding with the corresponding LSTM state (h a i ) and use these scores in a softmax classifier over all the concepts. Data Smatch JAMR 62.1 Mul-BiLSTM We used Adam (Kingma and Ba, 2014) to optimize the loss (5) and to train the root classifier. Our best model is trained fully jointly, and we do early stopping on the development set scores. Training takes approximately 6 hours on a single GeForce GTX 1080 Ti with Intel Xeon CPU E5-2620 v4. We start by comparing our parser to previous work (see Table A the previous best model, multi-BiLSTM parser of In order to disentangle individual phenomena, we use the AMR-evaluation tools The spurious ambiguity will have a detrimental effect on the relation identification stage. It is interesting to see the contribution of other modeling decisions we made when modeling and relaxing alignments. First, instead of using Gumbel-Sinkhorn, which encourages mutuallyrepulsive alignments, we now use a factorized alignment model. Note that this model ('No Sinkhorn' in Table Alignment performance has been previously identified as a potential bottleneck affecting AMR parsing Treating alignment as discrete variables has been successful in some sequence transduction tasks with neural models The discrete alignment modeling framework has been developed in the context of traditional (i.e. non-neural) statistical machine translation For AMR parsing, another way to avoid using pre-trained aligners is to use seq2seq models We introduced a neural AMR parser trained by jointly modeling alignments, concepts and relations. We make such joint modeling computationally feasible by using the variational autoencoding framework and continuous relaxations. The parser achieves state-of-the-art results and ablation tests show that joint modeling is indeed beneficial. We believe that the proposed approach may be extended to other parsing tasks where alignments are latent (e.g., parsing to logical form
| 805 | 39 | 805 |
A Survey for Efficient Open Domain Question Answering
|
Open domain question answering (ODQA) is a longstanding task in natural language processing (NLP) aimed at answering factual questions from a large knowledge corpus without any explicit evidence. Recent works have predominantly focused on improving answering accuracy and have achieved promising progress. However, higher accuracy often comes at the cost of more memory consumption and higher inference latency, which might not be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption and processing speed is pursued. In this paper, we survey recent advancements in the efficiency of ODQA models and summarize the core techniques for achieving efficiency. Additionally, we provide a quantitative analysis of memory cost, query speed, and accuracy, along with an overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and to contribute to the further development of ODQA efficiency.
|
Open domain question answering However, most general-purpose ODQA models are computationally intensive, slow to infer, and expensive to train. One of the reasons is the huge index/document size. For example, Towards this challenge, there are various tradeoffs in building ODQA models that meet real-world application needs, such as the trade-offs among accuracy, memory consumption, inference speed, and so on In this survey, we provide a comprehensive introduction to the broad range of methods that aim to improve efficiency with a focus on the ODQA task. In Section 2, we overview general-purpose ODQA models and discuss their strategies and limitations in terms of efficiency. In Section 3, we first walk through the key ODQA models which concentrate on efficiency, then conclude the core techniques used. Section 4 gives a quantitative analysis with an overall comparison of different frameworks and three specific aspects, i.e., memory cost, processing speed, and accuracy. Finally, in Section 5, we discuss the challenges reminded followed by the conclusion given in Section 6.
|
In this section, we summarize ODQA models into three typical frameworks (see in Fig. Retriever-Reader ODQA methods generally obtain good performance. However, due to dense encoding for corpus passages and longer evidence for answer reasoning, they normally suffer from a larger index size and a slower processing speed. In addition, the dual-encoder retrievers like DPR, encoding for questions and documents independently, ignored interaction between them and limited the retrieval performance (2) high storage requirement in terms of indexes for fine-grained retrieval units such as phrases or QA pairs. For Generator-Only ODQA models, skipping evidence retrieving and reading makes low memory costs and short processing time than two-stage systems. However, the performances of Generator-Only ODQA methods have much room for improvement. Additionally, real-world knowledge is updated routinely, and the huge training cost of the generative language models makes it laborious and impractical to keep them always up-to-date or retrain them frequently. Billions of parameters also make them storage-unfriendly and hard to apply on resource-constrained devices In this section, we first walk through the key ODQA models which concentrate on efficiency, and discuss their strengths and weaknesses as well as their unique characteristics in Section 3.1. Then we conclude the core techniques used in these models for improving the efficiency of ODQA, from data and model perspectives, respectively, in Section 3.2. Before we start, we first take DPR on the Natural Questions (NQ) test dataset as an example to show the time each module needs during inference and their detailed memory costs in Fig. Based on these observations, how to improve the efficiency of ODQA models focuses on the reduction of processing time and memory cost. To reduce processing time, we can accelerate evidence searching and reading. To reduce the memory cost, we can reduce the size of the index and model. Besides, some emerging directions are also proposed, such as jumping the retrieval part to generate answers using questions directly or retrieving answers directly to omit evidence reading. We introduce the details below. In this subsection, we delve into the details of efficiency ODQA models. We categorize them into 2 The passages in the corpus are embedded offline. three classes regarding the different means of implementing efficiency, i.e., reducing processing time, reducing memory cost, and blazing new directions. When giving a question, the processing time for ODQA involves three stages: question embedding, evidence searching, and evidence reading. Whereas evidence searching and evidence reading occupy most of the processing time, researchers mainly focus on narrowing the time cost of the two stages. By Accelerating Evidence Searching. Other than the traditional brute search method By Accelerating Evidence Reading. Accelerating the evidence reading is another effective way to speed up the question processing of ODQA models. Actually, in the retrieved evidence, a high percentage of content is not pertinent to answers Adaptive computation (AC) For ODQA models, there are three kinds of memory cost: index, model, and raw corpus. Normally, reducing the sizes of the index and model are two ways to break through and to achieve storage efficiency, while reducing raw corpus size results in certain knowledge source loss and a significant drop in performance BPR By Reducing Model Size. 
Besides downsizing the index, compressing model is another way to cut the memory cost of ODQA systems. One way to accomplish this goal is building a comprehensive model to implement retrieval and reading simultaneously, instead of multiple models in traditional ODQA systems. YONO (You Only Need One model) Besides the methods which accelerate evidence searching and reading and the methods that reduce the size of the index and model, some one-stage frameworks are proposed as well, such as generating the answer using the input question directly or retrieving answers directly from a finer-grained knowledge base (ie., phrases or question-answer pairs). Directly Generate Answers. Some researchers blazed a brand new path that omits the whole evidence retrieval process, including corpus indexing and evidence searching, by leveraging generative language models (such as T5, BART, GPT) to tackle ODQA tasks RePAQ This section concludes the core techniques commonly used in existing ODQA models with respect to improving efficiency. It can be briefly divided into two categories: data-based and model-based techniques. Data-based techniques mainly focus on the reduction of the index, which can be downsized from different hierarchies such as the number of corpus passages, feature dimension, and storage unit per dimension. Model-based techniques try to reduce the model size while avoiding a significant drop in performance. Model pruning and knowledge distillation are commonly used techniques. Passage Filtering. Among the huge corpus ODQA models rely on, there are massive passages that contain little useful information and are unlikely to be evidence for answers. Thus, filtering unrelated passages is a way to reduce the memory cost of corpus without a large negative impact. For example, some researchers have designed a linear classifier to discriminate and discard unnecessary passages before evidence retrieval Dimension Reduction. Another way to reduce the memory cost is to reduce the dimension for dense passage representations. To achieve this goal, Izacard et al. ( The three techniques introduced above are adopted jointly in Fusion-in-Decoder with Knowledge Distillation (FiD-KD) Model Pruning. Most recent works on open domain question answering This section gives a quantitative analysis of the aforementioned ODQA models. We first give an overall comparison of different frameworks and further discuss the methods quantitatively from three specific aspects: memory cost, processing speed, and accuracy In Table Concerning comparison between different frameworks, we can see that two-stage methods (Retriever-Reader) generally obtain better ODQA performances than one-stage methods (i.e., Retriever-Only and Generator-Only). The best end-to-end EM performance on NQ (55.9%) and TriviaQA (74.8%) datasets are obtained by R2-D2+reranker and GAR_extractive respectively. They are both under the Retriever-Reader framework. The second-best ODQA performances on NQ (54.7%) and TriviaQA (72.1%) are obtained by UnitedQA and Fid-large+KD_DPR methods, which are also under the two-stage frameworks. In terms of total memory cost, i.e., the sum of model size and the index size, Generator-Only systems keep generally low memory overhead. Except GPT-3, the rest of the Generator-Only systems take less than 50GB of memory, and five methods out of the eight are less than 5GB. On the contrary, most Retriever-Only ODQA models require huge memory, normally greater than 200GB. The method DenSPI needs a 2002.69GB memory cost, which is enormous. 
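The index-reduction idea discussed above (storing compact codes instead of full dense passage vectors, as in BPR-style binary hashing) can be illustrated with the numpy sketch below, which also shows why such codes speed up evidence search via Hamming distance. This is a simple sign-binarization illustration, not the BPR implementation, which learns the hash function jointly with the retriever; the corpus sizes and candidate-pool size are arbitrary stand-ins.

```python
# Numpy sketch of the binary-hashing idea behind index compression: store
# sign bits instead of float vectors, retrieve candidates by Hamming
# distance, then rerank with the float query. Not the BPR code.
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Map each dimension to one bit: 32x smaller than float32 storage."""
    return np.packbits((x > 0).astype(np.uint8), axis=-1)

def hamming_search(query_bits, passage_bits, top_n=100):
    # XOR then popcount gives the Hamming distance to every passage.
    dist = np.unpackbits(query_bits ^ passage_bits, axis=-1).sum(axis=-1)
    return np.argsort(dist)[:top_n]

d = 768                                                    # e.g., BERT-base dimension
passage_vecs = np.random.randn(50_000, d).astype(np.float32)   # stand-in embeddings
passage_bits = binarize(passage_vecs)                      # ~96 bytes/passage vs 3072 bytes

query_vec = np.random.randn(d).astype(np.float32)
candidates = hamming_search(binarize(query_vec), passage_bits, top_n=100)
# Optional second stage: rerank the small candidate pool with the float query.
reranked = candidates[np.argsort(-passage_vecs[candidates] @ query_vec)]
```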
Retriever-Reader ODQA models have a wide range in terms of memory cost, from 0.31GB to 363.26GB. Overall speaking, Minimal R&R achieves the smallest memory overhead (0.31GB) while DenSPI keeps the largest one (2002.69GB). In terms of processing speed, which determines how fast one ODQA system can answer a given question, one-stage methods generally achieve higher processing speed than two-stage methods, especially Retriever-Only systems. Among the eight Retriever-Only methods, five of them can process more than 20 questions per second (Q/s) and RePAQ_XL and RePQA_base can answer 800 and 1400 questions per second respectively, which is impressive. For the methods with slow processing speed, Fig-large and RAG-seq from the Retriever-Reader framework are the two slowest systems, which process less than 1 question per second. To conclude, Fig. The total memory cost depends on the model size and the index size. Index Size. For the index size, the two kinds of one-stage frameworks are two extremes. Generator-Only methods do not require creating an index file while Retriever-Only methods generally need a huge storage space for index. Most two-stage methods have a moderate index of 65GB or less. For Retriever-Reader ODQA systems, the 65GB index set of dense passage embedding, developed by DPR For Retriever-Only ODQA systems, Den-SPI+Sparc The model size involves all modules present in one ODQA system, including the retriever and the reader. It has a great range, from 0.04GB to 700GB. Among all mentioned ODQA models, a quarter of ones have model sizes less than 1GB; the model sizes of 40% systems are between 1∼2GB and 12.5% ones have sizes between 2∼3GB; 7.5% systems have model sizes between 3∼4GB; the remaining 15% models weigh larger than 4GB. Specifically, GPT-3 Most ODQA models are implemented with PLMs that are less than 2GB. A few ODQA models keep the total model size more than 3GB to achieve higher performance, like FiD-large+KD_DPR In terms of latency, i.e., processing speed, most ODQA models answer less than 10 questions per second. Retriever-Only ODQA models bring faster processing speed than the other three frameworks. Compared to phrase-base systems, the QA-pairbased system RePAQ In this section, we summarize and illustrate the insights and future directions from the following aspects. We first summarize the key points to improve the effectiveness of ODQA systems, from the two aspects of index and model respectively. In terms of index size, it is worth exploring deeper on generative models and the techniques of compacting embedding. In terms of model size, knowledge distillation is a promising direction to reduce model size while another direction is the application of lightweight models. In addition, one-stage ODQA models are also worthy of research. Additionally, we provide some advice on model recommendations under different requirements. For example, if we pursue real-time feedback, Retriever-Only systems should be good choices; if we are limited by computing resources, Generator-Only systems are suitable candidates; and if we need to trade off performance, memory cost and processing time, Retriever-Reader systems are relatively more appropriate. In general, for researchers who are interested in improving the state-of-the-art efficiency methods on ODQA tasks, this survey can serve as an entry point to find opportunities for new research directions. However, some salient challenges need to be addressed in the way of ODQA efficiency research. 
One of the worrisome things is that most ODQA approaches are computation-heavy and energyexpensive. How can the ODQA system be deployed in low-power devices with limited computing resources and mobile devices is still very challenging. Another thing is that it seems to be inadequate to evaluate the efficiency of ODQA models only on accuracy, memory, and processing time, due to many other factors that should be considered and traded off. For example, it is also important to establish what resource, e.g., money, time, and data for model training, power consumption, carbon emissions, etc. In this survey, we retrospected the typical literature according to three different frameworks of open domain question answering (ODQA) systems. Further, we provided a broad overview of existing methods to increase efficiency for ODQA models and discussed their limitations. In addition, we performed a quantitative analysis in terms of efficiency and offered certain suggestions about method selections of open domain question answering. Finally, we discussed possible open challenges and potential future directions of efficient ODQA models. It seems to be difficult to evaluate the efficiency of ODQA models fairly and impartially due to multiple factors that should be considered and need to be traded off. On the one hand, it is not enough to only use accuracy, memory, and processing time to evaluate effectiveness. It is also important to establish what resource, e.g., money, power consumption, carbon emissions, etc., one attempt to constrain Our work focuses on summarizing and discussing the accuracy, inference speed, and memory cost of open domain question answering systems. We believe that our work is helpful for researchers who are interested in improving the state-of-the-art efficiency methods on ODQA tasks. We do not anticipate any ethical concerns arising from the research presented in this paper. 2.Retriever-only 3.Generator-only Phrased-based models Corpus. The most commonly used corpus for open domain question answering systems is the 2018-12-20 dump of Wikipedia corpus, which contains 21 million 100-word-long passages after removing semi-structured data (tables, information boxes, lists, and the disambiguation pages) A comprehensive introduction is illustrated in Table In terms of latency, training time In terms of memory, model parameter size, passage corpus size, index size, and training data size are important to influence factors of memory cost In terms of answering quality, EM (Exact Match accuracy) In this paper, we adopt metrics on latency, memory, and accuracy to evaluate ODQA models comprehensively. Specifically, we use Q/s to measure the processing speed, use total memory overhead to evaluate the memory cost, and use EM score to estimate the end-to-end answer prediction quality as shown in Table
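For reference, the three measurements used throughout this comparison can be computed as in the sketch below. SQuAD-style answer normalization is assumed, and `qa_system` is a placeholder callable rather than any specific model.

```python
import re
import string
import time

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles, squeeze spaces."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """EM = 1 if the normalized prediction equals any normalized gold answer."""
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))

def evaluate(qa_system, dataset):
    """dataset: list of (question, gold_answers) pairs.
    Returns (EM in %, processing speed in questions per second)."""
    start, hits = time.perf_counter(), 0
    for question, gold_answers in dataset:
        hits += exact_match(qa_system(question), gold_answers)
    elapsed = time.perf_counter() - start
    return 100.0 * hits / len(dataset), len(dataset) / elapsed
```

Total memory overhead is then simply the sum of the serialized model size and the index size on disk.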
| 1,015 | 1,084 | 1,015 |
DADA: Dialect Adaptation via Dynamic Aggregation of Linguistic Rules
|
Existing large language models (LLMs) that mainly focus on Standard American English (SAE) often lead to significantly worse performance when being applied to other English dialects. While existing mitigations tackle discrepancies for individual target dialects, they assume access to high-accuracy dialect identification systems. The boundaries between dialects are inherently flexible, making it difficult to categorize language into discrete predefined categories. In this work, we propose DADA (Dialect Adaptation via Dynamic Aggregation), a modular approach to imbue SAE-trained models with multi-dialectal robustness by composing adapters which handle specific linguistic features. The compositional architecture of DADA allows for both targeted adaptation to specific dialect variants and simultaneous adaptation to various dialects. We show that DADA is effective for both single task and instruction finetuned language models, offering an extensible and interpretable framework for adapting existing LLMs to different English dialects. 1
|
As Natural Language Processing (NLP) becomes even more impactful, the equitable distribution of its benefits becomes an increasing concern. Specifically, NLP tooling is often trained and evaluated on dominant language variants, such as Standard American English (SAE). This results in a significant decline in the performance when these tools are applied to non-SAE dialects. Studies have revealed that SAE models tested on African American Vernacular English (AAVE) encounter difficulties in language identification
|
Adapter Training Feature Adapter Frozen Layer Frozen Layer Figure 2018; Existing research to mitigate this disparity has mainly focused on dialectal adaptation targeting individual dialects of interest Previous linguistic works have developed a collection of lexical and morphosyntactic features that describe the differences between SAE and various other English dialects To this end, we develop a model which handles this reality by accommodating the diversity of English variants at a fine-grained level (linguistic features or linguistic rules). Concretely, we propose Dialect Adaptation via Dynamic Aggregation (DADA): a modular approach to adapt an established model trained on SAE to dialect variants by composing linguistic features. DADA captures and encapsulates each feature using adapters To sum up, our work contributes the following: • We propose a modular approach DADA to adapt the standard SAE model to dialect variants via a dynamic aggregation of different linguistic features. (Sec. 3) • We train nearly 200 feature adapters, which can be flexibly composed to target different dialects. Moreover, we demonstrate that DADA with all the trained feature adapters can consistently improve model performance across five English dialects. (Sec. 4) • DADA exhibits strong interpretability. Using AAVE as an example, we illustrate that DADA possesses the capability to detect the relevant linguistic features for a given input and subsequently activate the corresponding feature adapters. (Sec. 5) • We show that DADA improves dialectal robustness in task-agnostic instruction-tuned LLMs using FLAN-T5 (Chung et al., 2022) (Sec. 6), which highlights the capability of DADA in learning task-agnostic features that can be applied to newer general-purpose models. Dialect NLP research tends to focus primarily on dominant dialects represented in "textbook" grammar, such as Standard American English (SAE), over lower-resource dialects. The performance disparity in resulting models is pervasive Add & Norm For each linguistic feature, we train a feature adapter. African American Vernacular English (AAVE) Parameter-Efficient Learning To efficiently transfer pretrained language models to downstream tasks, several techniques We introduce Dialect Adaptation via Dynamic Aggregation (DADA), a modular method for adapting an existing model trained on the Standard American English (SAE) to accommodate dialect variants at a finer-grained level. Our proposed method deploys a dynamic aggregation of feature adapters, which characterize the divergence of linguistic features between SAE and its dialect variants. Specifically, DADA involves the creation of a synthetic training dataset for each individual feature using transformation rules Previous works have discerned a series of linguistic divergences and devised Multi-VALUE, a collection of lexical and morphosyntactic transformation rules Let T = {T 1 , T 2 , ...T N } denote the set of transformation rules between SAE and its dialect variants. For each transformation rule T i ∈ T , we can generate a corresponding synthetic dataset D i by applying the respective rule to each individual training example within the original training dataset D. Adapter tuning is known for its ability to adapt quickly to new tasks without catastrophic forgetting In Sec. 3.2, we described the process of training feature adapter A i for each linguistic transformation rule to capture a specific type of linguistic difference between SAE and its dialect variants. 
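The per-rule synthetic data construction described above can be sketched as follows; `transformation_rules` stands in for the Multi-VALUE perturbation functions (not reproduced here), and the example format is a simplified placeholder.

```python
def build_rule_datasets(train_examples, transformation_rules):
    """For each linguistic transformation rule T_i, build a synthetic dataset D_i by
    applying only that rule to every SAE training example and keeping only the
    examples that actually changed (untransformed examples carry no signal
    about the targeted linguistic feature)."""
    rule_datasets = {}
    for rule_name, apply_rule in transformation_rules.items():
        synthetic = []
        for example in train_examples:  # e.g. {"premise": ..., "hypothesis": ..., "label": ...}
            transformed = {
                key: apply_rule(value) if isinstance(value, str) else value
                for key, value in example.items()
            }
            if transformed != example:  # retain only examples the rule touched
                synthetic.append(transformed)
        rule_datasets[rule_name] = synthetic
    return rule_datasets
```

Each resulting dataset D_i is then used to train one feature adapter A_i on top of the frozen SAE backbone, so that A_i captures exactly the divergence introduced by rule T_i.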
However, it is common for multiple linguistic differences to co-occur within a single sentence in realworld scenarios, thereby necessitating the model to simultaneously consider these distinct linguistic features to varying degrees. Therefore, we propose to dynamically aggregate the N trained feature adapters, denoted as A = {A 1 , A 2 , ...A N }, into the SAE-trained backbone model M via an additional fusion layer Following where [•, •] indicates the concatenation of vectors and o l is the output of the l-th fusion layer. Through training on the super-synthetic dataset D, a parameterized compositional mixture of feature adapters can be learned to identify the applied linguistic features for a given input and activate the corresponding feature adapters, thereby facilitating the effective addressing of linguistic discrepancies between SAE and its dialect variants. To sum up, the compositionality of DADA enables targeted adaptation to specific dialect variants by selecting appropriate feature adapters. DADA uses modularity and compositionality to adapt a model to linguistic features present at test time since the pervasiveness of a feature can vary greatly based on its applicability and density In this section, we demonstrate how DADA can enable the adaptation of an existing SAE model to multiple dialect variants, taking Multi-Genre Natural Language Inference (MNLI; Williams et al. ( As described in Sec. 3.2, we train a feature adapter for each transformation rule from Ziems et al. ( For each transformation rule, we generate a synthetic dataset by applying only that specific transformation rule to each example in the original MNLI training dataset. We only retain examples that differ from the original example, i.e., examples that have been transformed. Afterward, we train feature adapters using these synthetic datasets, as described in Sec. 3.2. To aggregate trained feature adapters into the backbone model, we train a large fusion layer for 5 epochs on a synthetic dataset that applies all dialectal variations simultaneously, termed Multi. Additionally, we include a null adapter that remains as the identity function. This is kept for purely SAE inputs. In Appendix B, we report full hyperparameters along with the training details. We evaluate DADA on five English dialects: AppE, ChcE, CollSgE, IndE, AAVE and report the results in Table It is surprising to note that although single Adapter Tuning with Multi-Task AAVE demonstrates improvements in 4 out of 7 tasks, the overall average performance is even inferior to that of the SAE baseline. In contrast, DADA consistently outperforms both the SAE baseline and Adapter Tuning across all evaluated tasks, resulting in an overall improvement of 1.80/1.92 points on the AAVE GLUE benchmark, respectively. Specifically, on the relatively large datasets, DADA achieves a notable accuracy improvement of 2.0%/1.0% on MNLImm, 0.9%/1.2% on QNLI, and 1.5%/0.9% on QQP when compared to the SAE Baseline and Adapter Tuning, respectively. that our proposed approach, DADA, is not limited to single-task applications but can be easily scaled up to accommodate various tasks for use with the increasingly common multi-task instruction-tuning setup using in popular large-scale industrial systems In Table prompts "You are a native [DIALECT_NAME] English speaker, and here is your task:" However, ChatGPT + "Native Speaker" Prompt does not yield improved results and, in fact, performs even worse than the vanilla ChatGPT on all evaluated tasks. 
This highlights that dialect adaptation is not solved with trivial prompt-based interventions while being simultaneously less grounded in expert linguistic resources than DADA. We analyze the average performance of DADA on 5 evaluated English dialects, considering different numbers of feature adapters (k) ranging from 1 to all. For each k, we select the top k feature adapters with the best performance on the evaluation set. The results in Figure As discussed in Sec. 3, DADA can implicitly identify the relevant linguistic features for a given input and activate the corresponding feature adapters. We validate this by investigating the correlation between attention scores within each layer of DADA and the presence of linguistic features, to determine whether the contributing feature adapters are relevant to the features present. Here, we use the AAVE dialect and MNLI task as an example. We perform a correlation analysis of these 10 feature adapters for the linguistic features applied to the input data. For each transformation rule, we calculate the softmax activation for each adapter, for each input to which the specific linguistic feature applies, and average over all activations within the same layer calculated over all instances in the AAVE MNLI test set. For better clarity, our final metrics takes the average utilization score of each feature adapter for the entire dataset and then subtracts the average utilization score associated with each transformation rule. We plot the results for layers 1, 3, 7, 11 in Figure Using AAVE dialect as a case study, to demonstrate the effectiveness of our method in adapting the SAE model across multiple tasks, we include the tasks from the AAVE transformed version For each transformation rule of AAVE dialect, we construct synthetic training data following the procedure described in Sec. 3.1. However, in the case of a multi-task model, we construct a synthetic dataset for each task considered and utilize the mixture to train the corresponding feature adapter. Subsequently, we proceed to fuse these feature adapters by training a fusion layer on the super-synthetic dataset Multi-Task AAVE, which is constructed by applying all the AAVE transformation rules. In Appendix D, we provide the templates used to train the FLAN-T5 model. In Appendix B, we report full hyperparameters along with the training details. We assess the performance of DADA on AAVE transformed version of the GLUE Benchmark, and compare its results with the SAE baseline and Adapter Tuning with Multi-Task AAVE. In this paper, we present Dialect Adaptation via Dynamic Aggregation (DADA), a fine-grained and modular approach designed to adapt an established model trained on Standard American English to its dialect variants through the compositional aggregation of linguistic features. Our experiments demonstrate that the compositionality of DADA en-ables targeted adaptation to specific dialects, and demonstrated improved robustness across multiple evaluated dialects, including AppE, ChcE, CollSgE, IndE, and AAVE. Our analysis also highlights the interpretability of DADA, as shown through its capability to identify relevant linguistic features for a given input and trigger the corresponding adapters. Furthermore, our experiments on FLAN-T5 illustrate the potential of applying DADA to taskagnostic instruction-tuned large language models, showcasing its generalizability. 
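The dynamic aggregation underlying these results is an AdapterFusion-style attention over the outputs of the feature adapters at each transformer layer. The sketch below is a simplified PyTorch rendering (the exact query/key/value construction and dimensions are assumptions based on the AdapterFusion formulation); it also exposes the per-adapter attention scores used by the interpretability analysis in Sec. 5.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Attention over N feature-adapter outputs at one transformer layer."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)  # queries from the layer's hidden state
        self.key = nn.Linear(hidden_dim, hidden_dim)    # keys/values from each adapter output
        self.value = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden_state, adapter_outputs):
        # hidden_state: (batch, seq, hidden); adapter_outputs: (batch, seq, N, hidden)
        q = self.query(hidden_state).unsqueeze(2)        # (batch, seq, 1, hidden)
        k = self.key(adapter_outputs)                    # (batch, seq, N, hidden)
        v = self.value(adapter_outputs)
        scores = torch.softmax((q * k).sum(-1), dim=-1)  # (batch, seq, N) adapter weights
        fused = torch.einsum("bsn,bsnh->bsh", scores, v) # weighted mix of adapter outputs
        return fused, scores                             # scores feed the Sec. 5 analysis
```

Only the fusion parameters are trained on the combined synthetic dataset; the backbone and the feature adapters remain frozen.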
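Given those recorded attention scores, the adapter-utilization analysis reduces to simple averaging. The sketch below assumes per-instance softmax activations for one layer have already been collected, and reports each feature's mean activation relative to the dataset-wide mean; this sign convention is an assumption.

```python
from collections import defaultdict
import numpy as np

def utilization_scores(records):
    """records: list of (applied_features, scores) pairs, where `scores` is an array of
    shape (num_adapters,) holding one layer's softmax activations averaged over the
    tokens of a single test instance, and `applied_features` lists the linguistic
    transformation rules that were applied to that instance.
    Returns, per feature, each adapter's mean activation minus its dataset-wide mean,
    so the adapter matching the feature should stand out with a positive score."""
    dataset_mean = np.mean([scores for _, scores in records], axis=0)
    per_feature = defaultdict(list)
    for applied_features, scores in records:
        for feature in applied_features:
            per_feature[feature].append(scores)
    return {
        feature: np.mean(activations, axis=0) - dataset_mean
        for feature, activations in per_feature.items()
    }
```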
DADA involves the training for feature adapters and the fusion layer, which can make it computationally expensive, especially when dealing with a substantial number of linguistic rules. However, each training run only requires a small number of parameters to be learned, and parallelization is feasible for feature adapter training. More importantly, these trained feature adapters exhibit significant reusability; the same set of feature adapters can be reused and employed for multiple dialects, though the fusion layer would need to be retrained for these dialects. However, if a use case does not involve significant reuses, this aspect may indeed remain a limitation. We will release our trained feature adapters so that future studies will not need to reincur the up-front training cost. Furthermore, while DADA has the flexibility to utilize any linguistic rules, in our experiments, we specifically employed these linguistic transformation rules that are well-established in prior work for English While DADA mainly relies on Multi-VALUE DADA instead focuses on adapting to the lin-guistic features present in a given sentence. We learn a parameterized compositional mixture of the dialectal features automatically, rather than relying on static assumptions of density. This avoids what we view as a major issue: it is often difficult to determine the dialect of an input since dialects themselves vary depending on context and speaker. The density of a dialectal feature represents an approximate of density across the entire dialect, but may not be accurate to a specific speaker and context Previous linguistic works on dialectal features may not fully or accurately document the natural usage patterns of all existing dialects in terms of their linguistic rules. As a result, we acknowledge that our proposed method DADA, which relies on these dialectal features from prior literature, may not take some undocumented features associated with dialects into account. However, by curating more dialectal features, our method can be easily extended to a broader range of dialects. Additionally, as DADA is task-agnostic when applied to instruction-tuned models (Sec 6), malicious individuals might misuse it. To address this concern, we will release DADA with a license that explicitly prohibits its usage for purposes of deception, impersonation, mockery, discrimination, hate speech, targeted harassment, and cultural appropriation targeting dialect-speaking communities. Ziems Furthermore, we provide detailed statistics for the respective synthetic training datasets (for MNLI task) associated with each linguistic rule for the AAVE dialect in Table Multi-Dialect Adaptation We train feature adapters for each transformation rule using synthetic datasets, as described in Sec. 3.2, with learning rate 3e-4 and batch size 64 followed by Multi-Task Dialect Adaptation For feature adapter training, we set the learning rate to 1e-3 and fix the number of training steps as 50000. To fuse these feature adapters, we train a fusion layer for 5 epochs using a learning rate of 8e-5. Throughout the process of model training (including finetuning, adapter tuning, DADA training etc.), we consistently employ the standard training objectives specific to the tasks, such as crossentropy loss for classification tasks. In Sec. 5, we showcase the effectiveness of DADA in adapting the RoBERTa Base We provide here the templates used in Sec. 6 to train the FLAN-T5 model for each task. 
[Figure: per-layer feature adapter utilization scores; numeric heatmap values omitted.] The QNLI task evaluates the ability of models to perform sentence-level semantic matching and reasoning. In this task, given a question and a corresponding sentence, the objective is to determine whether the sentence contains the answer to the question, considering both linguistic and logical entailment. For the QNLI task, we adopt the following template: Does the sentence {sentence} answer the question {question} {answer}
| 1,046 | 516 | 1,046 |
Contrastive Learning with Keyword-based Data Augmentation for Code Search and Code Question Answering
|
Semantic code search finds code snippets from a collection of candidates with respect to a user query that describes the desired functionality. Recent work on code search proposes data augmentation of queries for contrastive learning by modifying random words in queries. When a user's web query for a code snippet is brief, however, the important words that represent the search intent of the query can be undesirably modified. A code snippet has informative components, such as its function name and documentation, that describe its functionality. We propose to utilize these code components to identify important words and preserve them in the data augmentation step. We present KeyDAC (Keyword-based Data Augmentation for Contrastive learning), which identifies important words for code search from queries and code components based on term matching. KeyDAC augments query-code pairs while preserving keywords, and then leverages the generated training instances for contrastive learning. We use KeyDAC to fine-tune various pre-trained language models and evaluate code search and code question answering performance on CoSQA and WebQueryTest. The experimental results confirm that KeyDAC substantially outperforms the current state of the art and sets new state-of-the-art results for both tasks.
|
Software developers or students who major in computer science often write natural language queries to search for code snippets with desired functionality from the web search engine. The retrieved code snippets are reused or referred to improve productivity of software development. Semantic code search is a well-known code-related downstream task that measures the semantic relevance between a given natural language query and a collection of code snippets to retrieve the most relevant code snippet. CodeXGLUE We present KeyDAC, Keyword-based Data Augmentation for Contrastive learning that identifies keywords from a given query-code pair based on term matching. KeyDAC applies data augmentation both on natural language (NL) sequences (i.e., query, function name, and documentation) and programming language (PL) sequences (i.e., code statements), and generates more training querycode pair instances while preserving keywords. Figure • We propose KeyDAC-data augmentation mechanism for contrastive learning, which identifies important words from training query-code pairs and augments them while preserving identified keywords. • We demonstrate that KeyDAC outperforms the current SOTA for the code search task on CoSQA benchmark. • We achieve a new record, with a substantial improvement, for the open challenge code QA, WebQueryTest. 2 Related Work
|
Some researchers proposed information retrievalbased approaches using term matching between queries and code snippets Contrastive learning encourages the distance between similar instances to be minimized and the distance between dissimilar instances to be maximized in the representation space. Recently, contrastive learning showed its effectiveness and became popular in self-supervised learning 3 Approach trastive learning to fine-tune pre-trained encoders for the code search task. A code function c i has the following three main components: • function name in the function header (NL) • function-level documentation (NL) • code statements in the function body (PL) The previous approaches consider these three components as PL sequences. On the other hand, we consider the function name and the documentation as NL sequences, since 1) those two code components describe the functionality of the code snippet; 2) modifying the function name or documentation does not produce any syntax errors. Identifying Keywords Since a user query demands certain functionality of the code snippet, KeyDAC utilizes two NL descriptions of code function, such as the function name and documentation, to identify keywords. Specifically, KeyDAC identifies common words from three NL sequences (i.e., query, function name, and documentation) based on term matching. Figure identified keywords are related to the functionality of code function. Using the identified keywords, KeyDAC applies data augmentation to NL and PL sequences in different ways. In the following, we demonstrate keyword-based data augmentation in detail. NL: Rewriting KeyDAC rewrites three NL sequences by modifying unimportant words while preserving keywords by choosing one of the following four ways: 1) deleting one randomly selected unimportant word (Delete); 2) switching the position of two randomly selected unimportant words (Switch); 3) copying one randomly selected unimportant word (Copy); 4) doing nothing (None). In Figure We adopt siamese network architecture, which has a shared encoder to map a query and code snippet to fixed-sized embeddings. Each query q i and code function c i are encoded by a shared encoder Encoder (e.g., CodeBERT). We take the representation of [CLS] token from the last hidden layer of Encoder. Then we compute the cosine similarity sim (q i ,c i ) between a query-code pair (q i , c i ) as: where 〈•〉 indicates cosine similarity operation. We use binary cross-entropy as the training objective: where y i is the ground truth label of (q i , c i ). KeyDAC uses contrastive learning to optimize the parameters of Encoder. This contrastive learning aims to maximize the similarity of the query q i and code function c i with label of y i = 1 while minimizing the similarity of the query with unrelated code snippets. Given the query-code pair (q i , c i ) in a batch of size N , we consider the other N -1 code snippets as unrelated. The contrastive loss with in-batch negative samples is defined as: The overall training objective is: Figure We evaluate the performance for two tasks, code search and code question answering. Python code function. Following CoSQA, we train the models using the CoSQA dataset, then use Web-QueryTest as the test dataset. Since WebQueryTest is an open challenge, we submit model predictions to the CodeXGLUE official leaderboard, and report the evaluated results. • In-batch: Contrastive learning method using in-batch negative (without data augmentation). 
• CoCLR: Contrastive learning method with query-rewritten data augmentation and inbatch negative. We set the batch size N as 32, the learning rate as 1e-5 and the fine-tuning epoch as 10. We use the Adam optimizer (Kingma and Ba, 2014) to train the models. We conduct all experiments on an NVIDIA RTX3090 GPU with 24GB memory. We use the following pre-trained models • RoBERTa • CodeBERT negative only (In-batch) and contrastive learning with query-rewritten data augmentation (Co-CLR). GraphCodeBERT fine-tuned using KeyDAC achieves the highest performance both on code search and code QA tasks. We notice that CoCLR drops code search performance of UniXcoder and code QA performance of GraphCodeBERT and UniXcoder. On the other hand, KeyDAC shows consistent performance improvement. We use GraphCodeBERT as base model for following analyses since among four pre-trained models, GraphCodeBERT achieves the highest performance on code search and code QA when finetuned using KeyDAC. From the experimental results, we observe that Key-DAC consistently outperforms CoCLR. We hypothesize that the main reason for the performance gain is preserving keywords in data augmentation. We study the effect of preserving keywords, especially in queries to directly compare KeyDAC and Co-CLR. We perform query-rewritten data augmentation in three ways: 1) Preserving keywords (same as applying keyword-based data augmentation to query only); 2) Deleting a random word (same as CoCLR with delete operation); 3) Deleting a keyword. We analyze this via code search task on CoSQA test set to avoid an excessive submission of code QA prediction results to the WebQueryTest leaderboard. Table The deletion of keywords in queries shows significant performance degradation. The results suggest that keywords determined through our proposed way (Figure We conduct experiments to investigate the effect of different NL rewriting operations on the code search task. Table We investigate the contributions of each component to keyword-based data augmentation. Table 5 shows the results for code search task on the CoSQA test set. All five rows determine keywords from pairs of query and code function with the same way. However, the last four rows only modify each component as follows: 1) deleting unimportant words in queries; 2) deleting unimportant words in function names; 3) deleting unimportant words in documentations; 4) renaming variables using keywords. The results show that KeyDAC leveraging all components (results in the first row) achieves the best result. Among the results of KeyDAC using a single component, documentation contributes the most to performance, and the function name contributes the least. Software developers typically use an abbreviated function name but write documentation in detail to describe the functionality of the +x) while the functionality of the code is to add the write privilege (chmod +w). KeyDAC fails to understand the difference between +x and +w, resulting in a wrong prediction. While software developers can easily understand the difference of Unix/Linux command chmod +x and chmod +w, it is not trivial for the language models and humans without programming knowledge. The potential research direction is to incorporate domain-specific knowledge, such as mathematical knowledge and programming knowledge, into the pre-training or fine-tuning process of language models to improve code search performance. 
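A minimal sketch of the keyword identification and keyword-preserving rewriting at the core of KeyDAC is given below. Whitespace tokenization, lowercasing, splitting the function name on underscores, and taking keywords to be the terms shared between the query and either NL code component are simplifying assumptions; the actual implementation may differ in these details.

```python
import random

def identify_keywords(query: str, func_name: str, documentation: str) -> set[str]:
    """Keywords = terms shared (by exact term matching) between the query and the
    code's NL components, i.e., the function name and the documentation."""
    query_terms = set(query.lower().split())
    code_terms = set(func_name.lower().replace("_", " ").split())
    code_terms |= set(documentation.lower().split())
    return query_terms & code_terms

def rewrite_nl(sentence: str, keywords: set[str]) -> str:
    """Rewrite an NL sequence with one of {delete, switch, copy, none}, applied only
    to unimportant (non-keyword) words so the search intent is preserved."""
    words = sentence.split()
    unimportant = [i for i, w in enumerate(words) if w.lower() not in keywords]
    op = random.choice(["delete", "switch", "copy", "none"])
    if op == "delete" and unimportant:
        del words[random.choice(unimportant)]
    elif op == "switch" and len(unimportant) >= 2:
        i, j = random.sample(unimportant, 2)
        words[i], words[j] = words[j], words[i]
    elif op == "copy" and unimportant:
        i = random.choice(unimportant)
        words.insert(i, words[i])
    return " ".join(words)
```

Because only unimportant positions are eligible for modification, the words that carry the search intent survive every augmented copy of the query.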
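The in-batch contrastive objective over query and code embeddings can likewise be sketched as a standard InfoNCE-style loss; the temperature and exact formulation are assumptions and may differ from the loss used in the paper.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, code_emb, temperature: float = 0.05):
    """query_emb, code_emb: (N, d) [CLS] embeddings of N paired queries and code snippets.
    Each query's positive is its paired code; the other N-1 codes in the batch act
    as negatives."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(code_emb, dim=-1)
    logits = q @ c.t() / temperature                    # (N, N) cosine similarities
    targets = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```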
We have presented KeyDAC-keyword-based data augmentation for contrastive learning, which generates more training query-code pairs while preserving important keywords for the code search task. First, KeyDAC utilizes term matching technique to identify important words from a query and code components (function name and documentation). Then, KeyDAC augments both a query and a code snippet while preserving the identified keywords. Finally, KeyDAC deploys contrastive learning using the augmented data to fine-tune the pre-trained language models. We have demonstrated that Key-DAC outperforms the current state-of-the-art performance on both the code search and an open challenge code question answering task. Given a query-code pair, KeyDAC identifies keywords which share the same surface form by term matching. In other words, KeyDAC identifies keywords at the lexical level. As a future work, Key-DAC can utilize external knowledge for keywordbased data augmentation. For example, KeyDAC can utilize WordNet to identify keywords based on not only surface form, but also synonyms.
| 1,356 | 1,355 | 1,356 |
Memory-Based Dependency Parsing
|
This paper reports the results of experiments using memory-based learning to guide a deterministic dependency parser for unrestricted natural language text. Using data from a small treebank of Swedish, memory-based classifiers for predicting the next action of the parser are constructed. The accuracy of a classifier as such is evaluated on held-out data derived from the treebank, and its performance as a parser guide is evaluated by parsing the held-out portion of the treebank. The evaluation shows that memory-based learning gives a significant improvement over a previous probabilistic model based on maximum conditional likelihood estimation and that the inclusion of lexical features improves the accuracy even further.
|
Deterministic dependency parsing has recently been proposed as a robust and efficient method for syntactic parsing of unrestricted natural language text. In this paper, we report experiments using memory-based learning to guide such a parser. The paper is structured as follows. Section 2 gives the necessary background definitions and introduces the idea of guided parsing as well as memory-based learning. Section 3 describes the data used in the experiments, the evaluation metrics, and the models and algorithms used in the learning process. Results from the experiments are given in section 4, while conclusions and suggestions for further research are presented in section 5.
|
The linguistic tradition of dependency grammar comprises a large and fairly diverse family of theories and formalisms that share certain basic assumptions about syntactic structure, in particular the assumption that syntactic structure consists of lexical nodes linked by binary relations called dependencies (see, e.g., In a dependency structure, every word token is dependent on at most one other word token, usually called its head or regent, which means that the structure can be represented as a directed graph, with nodes representing word tokens and arcs representing dependency relations. In addition, arcs may be labeled with specific dependency types. Figure Formally, we define dependency graphs in the following way: , where (a) W is the set of nodes, i.e. word tokens in the input string, (b) A is a set of labeled arcs (w i , r, w j ) (where w i , w j ∈ W and r ∈ R). We write w i < w j to express that w i precedes w j in the string W (i.e., i < j); we write w i r → w j to say that there is an arc from w i to w j labeled r, and w i → w j to say that there is an arc from w i to w j (regardless of the label); we use → * to denote the reflexive and transitive closure of the unlabeled arc relation; and we use ↔ and ↔ * for the corresponding undirected relations, i.e. w i ↔ w j iff w i → w j or w j → w i . 3. A dependency graph D = (W, A) is well-formed iff the five conditions given in Figure For a more detailed discussion of dependency graphs and well-formedness conditions, the reader is referred to The parsing algorithm presented in 1. The transition Left-Arc (LA) adds an arc w j r → w i from the next input token w j to the token w i on top of the stack and reduces (pops) w i from the stack. 2. The transition Right-Arc (RA) adds an arc w i r → w j from the token w i on top of the stack to the next input token w j , and shifts (pushes) w j onto the stack. 3. The transition Reduce (RE) reduces (pops) the token w i on top of the stack. 4. The transition Shift (SH) shifts (pushes) the next input token w i onto the stack. The transitions Left-Arc and Right-Arc are subject to conditions that ensure that the graph conditions Unique label and Single head are satisfied. By contrast, the Reduce transition can only be applied if the token on top of the stack already has a head. For Shift, the only condition is that the input list is non-empty. As it stands, this transition system is nondeterministic, since several transitions can often be applied to the same configuration. Thus, in order to get a deterministic parser, we need to introduce a mechanism for resolving transition conflicts. Regardless of which mechanism is used, the parser is guaranteed to terminate after at most 2n transitions, given an input string of length n Figure One way of turning a nondeterministic parser into a deterministic one is to use a guide (or oracle) that can inform the parser at each nondeterministic choice point; cf. In our case, we rather want to use the guide to improve the accuracy of a deterministic parser, starting from a baseline of randomized choice. One way of doing this is to use a treebank, i.e. a corpus of analyzed sentences, to train a classifier that can predict the next transition (and dependency type) given the current configuration of the parser. However, in order to maintain the efficiency of the parser, the classifier must also be implemented in such a way that each transition can still be performed in constant time. 
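A compact rendering of this guided deterministic parser is sketched below. The `guide` argument stands in for the trained classifier that predicts the next transition and dependency type from the current configuration; the permissibility checks mirror the conditions stated above, and falling back to Shift when a predicted transition is not permissible is a simplifying choice.

```python
def parse(words, guide):
    """Arc-eager deterministic parsing. `words` is the list of input tokens;
    `guide(config)` returns a (transition, label) pair for the current configuration.
    Returns head[i] and deprel[i] for every token index i."""
    stack, buffer = [], list(range(len(words)))
    head = {i: None for i in range(len(words))}
    deprel = {i: None for i in range(len(words))}

    while buffer:
        config = (stack, buffer, head, deprel, words)
        transition, label = guide(config)

        if transition == "LA" and stack and head[stack[-1]] is None:
            # Left-Arc: next input token becomes head of the token on top of the stack.
            head[stack[-1]], deprel[stack[-1]] = buffer[0], label
            stack.pop()
        elif transition == "RA" and stack:
            # Right-Arc: token on top of the stack becomes head of the next input token,
            # which is then pushed onto the stack.
            head[buffer[0]], deprel[buffer[0]] = stack[-1], label
            stack.append(buffer.pop(0))
        elif transition == "RE" and stack and head[stack[-1]] is not None:
            # Reduce: pop a token that already has a head.
            stack.pop()
        else:
            # Shift (also used as the fallback when a prediction is not permissible).
            stack.append(buffer.pop(0))
    return head, deprel
```

In the experiments that follow, the guide is a memory-based classifier trained on configurations derived from the treebank, so each call amounts to a nearest-neighbor lookup over stored parser states.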
Previous work in this area includes the use of memorybased learning to guide a standard shift-reduce parser Memory-based learning and problem solving is based on two fundamental principles: learning is the simple storage of experiences in memory, and solving a new problem is achieved by reusing solutions from similar previously solved problems Memory-based learning has been successfully applied to a number of problems in natural language processing, such as grapheme-to-phoneme conversion, partof-speech tagging, prepositional-phrase attachment, and base noun phrase chunking The main reason for using memory-based learning in the present context is the flexibility offered by similaritybased extrapolation when classifying previously unseen configurations, since previous experiments with a probabilistic model has shown that a fixed back-off sequence does not work well in this case For the experiments reported in this paper, we have used the software package TiMBL (Tilburg Memory Based Learner), which provides a variety of metrics, algorithms, and extra functions on top of the classical k nearest neighbor classification kernel, such as value distance metrics and distance weighted class voting The function we want to approximate is a mapping f from parser configurations to parser actions, where each action consists of a transition and (unless the transition is Shift or Reduce) a dependency type: Here Config is the set of all possible parser configurations and R is the set of dependency types as before. However, in order to make the problem tractable, we try to learn a function f whose domain is a finite space of parser states, which are abstractions over configurations. For this purpose we define a number of features that can be used to define different models of parser state. The features used in this study are listed in Table The first five features (TOP-TOP.RIGHT) deal with properties of the token on top of the stack. In addition to the word form itself (TOP), we consider its part-of-speech (as assigned by an automatic part-of-speech tagger in a preprocessing phase), the dependency type by which it is related to its head (which may or may not be available in a given configuration depending on whether the head is to the left or to the right of the token in question), and the dependency types by which it is related to its leftmost and rightmost dependent, respectively (where the current rightmost dependent may or may not be the rightmost dependent in the complete dependency tree). The following three features (NEXT-NEXT.LEFT) refer to properties of the next input token. In this case, there are no features corresponding to TOP.DEP and TOP.RIGHT, since the relevant dependencies can never be present at decision time. The final feature (LOOK) is a simple lookahead, using the part-of-speech of the next plus one input token. In the experiments reported below, we have used two different parser state models, one called the lexical model, which includes all nine features, and one called the non-lexical model, where the two lexical features TOP and NEXT are omitted. For both these models, we have used memory-based learning with different parameter settings, as implemented TiMBL. For comparison, we have included an earlier classifier that uses the same features as the non-lexical model, but where prediction is based on maximum conditional likelihood estimation. 
This classifier always predicts the most probable transition given the state and the most probable dependency type given the transition and the state, with conditional probabilities being estimated by the empirical distribution in the training data. Smoothing is performed only for zero frequency events, in which case the classifier backs off to more general models by omitting first the features TOP.LEFT and LOOK and then the features TOP.RIGHT and NEXT.LEFT; if even this does not help, the classifier predicts Reduce if permissible and Shift otherwise. This model, which we will refer to as the MCLE model, is described in more detail in It is standard practice in data-driven approaches to natural language parsing to use treebanks both for training and evaluation. Thus, the Penn Treebank of American English For the experiments reported in this paper we have used a manually annotated corpus of written Swedish, created at Lund University in the 1970's and consisting mainly of informative texts from official sources In the conversion process, we have reduced the original fine-grained classification of grammatical functions to a more restricted set of 16 dependency types, which are listed in Table Since the function we want to learn is a mapping from parser states to transitions (and dependency types), the treebank data cannot be used directly as training and test The part-of-speech of the next plus one input token Table The complete converted treebank contains 6316 sentences and 97623 word tokens, which gives a mean sentence length of 15.5 words. The treebank has been divided into three non-overlapping data sets: 80% for training 10% for development/validation, and 10% for final testing (random samples). The results presented below are all from the validation set. (The final test set has not been used at all in the experiments reported in this paper.) When talking about test and validation data, we make a distinction between the sentence data, which refers to the original annotated sentences in the treebank, and the transition data, which refers to the transitions derived by simulating the parser on these sentences. While the sentence data for validation consists of 631 sentences, the corresponding transition data contains 15913 instances. For training, only transition data is relevant and the training data set contains 371977 instances. The output of the memory-based learner is a classifier that predicts the next transition (including dependency type), given the current state of the parser. The quality of this classifier has been evaluated with respect to both prediction accuracy and parsing accuracy. Prediction accuracy refers to the quality of the classifier as such, i.e. how well it predicts the next transition given the correct parser state, and is measured by the classification accuracy on unseen transition data (using a 0-1 loss function). We use McNemar's test for statistical significance. Parsing accuracy refers to the quality of the classifier as a guide for the deterministic parser and is measured by the accuracy obtained when parsing unseen sentence data. More precisely, parsing accuracy is measured by the attachment score, which is a standard measure used in studies of dependency parsing Table • The IB1 classification algorithm • The overlap distance metric. • Features weighted by Gain Ratio • Overlap metric replaced by the modified value distance metric (MVDM) • No weighting of features. • k = 5, i.e. classification based on 5 nearest neighbors. 
• Distance weighted class voting with inverse distance weighting For more information about the different parameters and settings, the reader is referred to The results show that the lexical model performs consistently better than the non-lexical model, and that the difference increases with the optimization of the learning algorithm (all differences being significant at the .0001 level according to McNemar's test). This confirms previous results from statistical parsing indicating that lexical information is crucial for disambiguation Table Finally, it may be interesting to consider the accuracy for individual dependency types. Table In this paper we have shown that a combination of memory-based learning and deterministic dependency parsing can be used to construct a robust and efficient parser for unrestricted natural language text, achieving a parsing accuracy which is close to the state of the art even with relatively limited amounts of training data. Classifiers based on memory-based learning achieve higher parsing accuracy than previous probabilistic models, and the improvement increases if lexical information is added to the model. Suggestions for further research includes the further exploration of alternative models and parameter settings, but also the combination of inductive and analytical learning to impose high-level linguistic constraints, and the development of new parsing methods (e.g. involving multiple passes over the data). In addition, it is important to evaluate the approach with respect to other languages and corpora in order to increase the comparability with other approaches.
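The attachment score used above can be computed as in the sketch below, which reports both its unlabeled and labeled variants; whether punctuation tokens are excluded varies between studies and is not handled here.

```python
def attachment_scores(gold, predicted):
    """gold, predicted: lists of (head, deprel) per token, aligned by token index.
    Unlabeled attachment score (UAS): fraction of tokens with the correct head.
    Labeled attachment score (LAS): correct head and correct dependency type."""
    assert len(gold) == len(predicted)
    uas_hits = sum(g_head == p_head for (g_head, _), (p_head, _) in zip(gold, predicted))
    las_hits = sum(g == p for g, p in zip(gold, predicted))
    n = len(gold)
    return uas_hits / n, las_hits / n
```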
| 727 | 655 | 727 |
Language Models with Rationality
|
While large language models (LLMs) are proficient at question-answering (QA), it is not always clear how (or even if) an answer follows from their latent "beliefs". This lack of interpretability is a growing impediment to widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. Our approach, which we call REFLEX, is to add a rational, self-reflecting layer on top of the LLM. First, given a question, we construct a belief graph using a backward-chaining process to materialize relevant model beliefs (including beliefs about answer candidates) and their inferential relationships. Second, we identify and minimize contradictions in that graph using a formal constraint reasoner. We find that REFLEX significantly improves consistency (by 8%-11% absolute) without harming overall answer accuracy, resulting in answers supported by faithful chains of reasoning drawn from a more consistent belief system. This suggests a new style of system architecture in which an LLM extended with a rational layer can provide an interpretable window into system beliefs, add a systematic reasoning capability, and repair latent inconsistencies present in the LLM.
|
While large language models (LLMs) are impressive at question-answering (QA), it is not always clear how (or even if) an answer follows from their latent "beliefs" where properties of explainability, interpretability, and trust are paramount. Our goal is to help alleviate such opacity by constructing an explicit representation of system beliefs and their inferential relationships (including to answer candidates), so that answers are supported by interpretable chains of reasoning. These constructed belief graphs, e.g., Figures In addition, when we do this, we find such graphs expose latent inconsistencies in the model's beliefs. We show how such inconsistencies can be resolved using constraint satisfaction techniques. When we do this, the rational layer becomes not just a window onto the model, but an active reasoning component in its own right in a larger, overall system, comprising the (frozen) LLM plus rational layer (blue box, Figure Our approach, called REFLEX, introduces a rational layer consisting of two parts. First, to produce a belief graph, we recursively ask the LLM to explain why each candidate answer might be true, expressed as a set of sentences that entail the answer. This builds on earlier work on generating entailment-based and chain-of-thought explanations Second, we apply a formal constraint reasoner to this graph to resolve inconsistencies, by finding the optimal (minimal cost, Section 3.3) way of flipping T/F values. For example, on the left in Figure We evaluate our implementation of REFLEX on three datasets: EntailmentBank 1. A new style of system architecture in which an LLM is extended with a rational layer in which an explicit representation of system beliefs and relationships is constructed and which can be reasoned over. This layer provides an interpretable window into system beliefs, adds a systematic reasoning capablity, and allows latent inconsistencies present in the LLM to be repaired. 2. An implementation of this architecture demonstrating that the consistency of the overall system's network of beliefs can be significantly improved without harming answer accuracy. Answers are now supported by explicit, interpretable chains of reasoning drawn from a more consistent network of beliefs.
|
Materializing a Model's Internal Knowledge: It is now well recognized that LLMs contain extensive world knowledge with no guarantee that the generated sequence of tokens expresses the model's internal knowledge, nor entails the actual answer. Similarly, chain-ofthought (CoT) To add semantics to generations, several systems have used self-querying to verify that generations reflect model-believed facts (by self-querying "Is p true?") (e.g., We refer to the model's factual opinions as "beliefs" rather than "knowledge" because those opinions may be wrong. In general, an agent can be said to believe p if it acts as if p was true Reducing Inconsistency: LLMs are known to be inconsistent in their answers Our belief graphs are defined over a set of natural language true/false statements and represent a set of rules that constrain the truth values of these statements. We refer to statements that are factually true in the world as facts. The truth value assigned by a model M to a statement is referred to as M 's belief in that statement (cf. Footnote 1). A model's internal beliefs may not always align 2 REFLEX checks whether both the statements si, and the rules (si → h), are believed by the model via self-querying, e.g., by asking "Does si → h?", and also scores the strength of those beliefs. In maieutic prompting, the generated rules are not checked against the model, resulting in rules that the model itself may not believe, if queried about them. with facts. Our goal is to extract a model's initial beliefs about statements inferentially related to all top-level hypotheses of interest, and perform reasoning to update these beliefs so as to make them more consistent with respect to the rules, and ideally also factually more accurate. A belief graph is a type of factor graph commonly used in the probabilistic inference literature Edges E connect rule nodes to the statements they constrain, denoting their dependence. For legibility, we draw edges directionally to depict the way the rule reads: the statements in p point to r, which in turn points to h. Mathematically, the influence is bidirectional and the depicted directionality is irrelevant during reasoning (Section 3.3), just as in a standard factor graph. We adopt the standard probabilistic semantics of factor graphs, thereby associating a belief graph with a well-defined probability distribution over any set of statement beliefs. For a statement node (s, l, c s ), the cost cost s for setting it to l is 0, and that for setting it against l is c s ; the corresponding weight of this node is w s = exp(-cost s ). Costs and weights for a rule node (r, c r ) are defined similarly, based on whether the beliefs satisfy r or not. Finally, the overall weight of a T/F assignment to all statements is s w s • r w r , which, when normalized by the total weight across all possible assignments, yields a probability distribution over such assignments. We will be interested in finding the most consistent set of beliefs, i.e., a T/F assignment to statements with the minimum overall weight, which is equivalent to minimizing 4 14193 s cost s + r cost r . This is referred to as the MPE (most probable explanation) problem in the graphical models literature, which we later solve exactly using a MaxSAT constraint solver based on a standard translation of MPE into weighted Given an initial node (statement) s, a belief graph G is produced by a backward-chaining process described below, in which G is recursively expanded to add statements that together may entail s. 
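Before turning to graph construction, the reduction from MPE to weighted MaxSAT just described can be made concrete with a toy example using the RC2 solver from PySAT. The clause encoding of a conjunctive rule and the integer weights below are illustrative choices; the exact encoding, weight scaling, and the additional XOR and multiple-choice constraints used in the full system are not reproduced here.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Toy belief graph: statement id -> (initial truth value, cost c_s of flipping it).
beliefs = {1: (True, 4), 2: (True, 3), 3: (False, 2)}   # statement 3 is the hypothesis
# One conjunctive rule "1 AND 2 -> 3" with cost c_r for violating it.
rules = [((1, 2), 3, 10)]

wcnf = WCNF()
# Soft unit clauses: keeping a belief is free, flipping it costs c_s.
for var, (value, cost) in beliefs.items():
    wcnf.append([var if value else -var], weight=cost)
# Soft rule clauses: p1 AND p2 -> h becomes (-p1 OR -p2 OR h), violated at cost c_r.
for premises, hypothesis, cost in rules:
    wcnf.append([-p for p in premises] + [hypothesis], weight=cost)

with RC2(wcnf) as solver:
    model = solver.compute()                  # minimum-cost assignment (the MPE solution)
    updated = {abs(lit): lit > 0 for lit in model}
print(updated)  # {1: True, 2: True, 3: True}: flipping belief 3 (cost 2) is cheaper
                # than flipping a premise or violating the rule
```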
Let h denote a hypothesis (language statement s) of interest and p a premise-a set of statements {s 1 ,. . . ,s n } that together may entail h. Given these, there are three basic operations required to generate belief graphs: 1. h ⇒ p: Given h, generate a p that may entail h. 2. s ⇒ (l, c s ): Given a statement s, output a true/false value l and a confidence in the belief that s has truth value l (as assessed via yes/no question-answering). 3. (p, h) ⇒ c r : Given p and h, output a confidence that the candidate rule r = p → h holds. The most important of these is the first operation, in which the model self-generates conjunctive rules concluding h (i.e., reason p for believing h), thus adding new nodes to the graph. There are several ways of implementing these basic functions, and our algorithm is agnostic to the method used. In our work here, we use Entailer, an off-the-shelf T5-11B trained model with these functionalities One may use alternative ways to implement these operators, such as chain-of-thought prompting a model like GPT3 Given a question, we first generate a set H of hypothesis sentences (e.g., "Is the sky (A) blue (B) yellow" → { h 1 = "The sky is blue.", h 2 = "The sky is yellow."). The belief graph generation process is shown in Algorithm 1. An example of (part of) a generated belief graph is shown in Figure Belief graphs provide a window into the model's beliefs about some of the relevant statements and their (believed) inferential relationships to candidate answers to a question. As others have shown In a similar vein, and as discussed in Section 3.1, REFLEX performs inference over belief graphs in order to compute an updated set of beliefs that is as consistent as possible with the rules. To this end, it converts belief graphs into an equivalent weighted MaxSAT problem and uses an off-theshelf MaxSAT solver (RC2, Notably, the smaller updated belief graph produced by REFLEX provides a faithful explanation of the answer it predicts, in the sense that it accurately represents the reasoning process behind the overall system's prediction We note that the original belief graph (before reasoning) may reveal that the model's original explanation is, in fact, not faithful to its own beliefs. For example, in Figure The goal of our experiments is to evaluate the extent to which our overall system, namely an LLM plus a self-reflecting, rational layer, helps expose and resolve inconsistencies in the LLM's beliefs without harming accuracy. Importantly, REFLEX is evaluated in a zero-shot setting, without relying on training instances of the target datasets. Datasets. We use the test partitions of three existing multiple-choice datasets: EntailmentBank Models. The baseline LLM we use is an LLM that has been trained to perform QA and also supports the basic operations discussed in Sec. 3.2.1, enabling us to assess how much it can be improved by adding a REFLEX layer. To this end, we use a publicly available, frozen, off-the-shelf T5-11B LLM called Entailer REFLEX then adds a rational layer to this LLM, creating a new system that is also able to self-reflect and modify its beliefs. To ensure the different belief graph scores in REFLEX are appropriately calibrated, we use nine hyperparameters, tuned once on the dev partition of EntailmentBank Metrics. For measuring self-consistency, we follow where s = T denotes the system believes statement s to be true (similarly for s = F ). The numerator of τ thus captures the number of constraints the system violates. 
The denominator captures the number of applicable constraints. We then report the following metric: consistency = 1 − τ. For QA performance, we report standard multiple-choice accuracy: 1 point for predicting the correct answer, 1/N points for predicting N answers including the correct one, 1/k points for no prediction (k = # answer options), 0 otherwise. Consistency. Table Ablations. To study the impact of the three different types of rules on consistency improvement, we use the EntailmentBank dataset (dev partition). [Table: EntailmentBank / OBQA / Quartz: LLM 79.4 / 74.0 / 80.2; LLM + rational layer (REFLEX) 79.9 / 75.0 / 80.0.] To do this, given the belief graph for a question, we mask out (separately, rather than cumulatively) each type of rule in turn when providing the graph to the MaxSAT solver. We then run the constraint solver and measure the resulting self-consistency of beliefs on the original graph. [Table: consistency on EntailmentBank: REFLEX (our system) 96.1; without p → h rules 93.8; without XOR rules 90.4; without MC rule 95.8.] The results are shown in Table . We identify three classes of successful reasoning by the constraint reasoner: (a) latent model beliefs correct an initially wrong answer (Figure ). Reasoning can also make mistakes. From a manual analysis of 50 random questions from EntailmentBank that REFLEX answered incorrectly, we identified five main causes of failure and their approximate frequency (note that multiple categories can apply, hence the total is > 100%): 1. Missing Rules (≈30%): In some cases, the system generates irrelevant rules but misses an important one needed to support the correct answer, resulting in incorrect conclusions. While somewhat subjective, this is a notable error category that we observe. For example, for the question: A human cannot survive the loss of (A) The liver [correct] (B) A lung (C) A kidney, the system incorrectly concludes (B) is true, ignoring the commonsense rule that with two lungs, a person can survive without one of them. 2. Incorrect Beliefs (≈30%): Sometimes the reasoner fails to correct incorrect model beliefs, either because the model's confidence is high or evidence against them is weak or missing. In the example shown in Figure 3. Incorrect Rules (≈10%): Rule generation can produce bad rules, e.g., in Figure A final cause of "error" - at least with respect to the gold label - is that multiple answers may be valid, and the question is asking for the best answer; e.g., for "What could fill a beach ball? (A) Oxygen (B) Water ...", A is labeled correct, while B is also a valid answer. REFLEX (desirably) finds valid reasoning chains for both, but the notion of highest-scoring proof does not fully correlate with the notion of "best answer" intended by the question author. There are several impactful ways this work could be further extended. First, incorporating the question's context in the belief statements in our rational layer could make the semantics of the beliefs more precise, thus avoiding potential ambiguity in their truth value. Second, one could use the belief graph itself to identify the key reasoning pieces that the LLM is most uncertain about. This could then guide a human-in-the-loop mechanism to correct or validate uncertain pieces via user interaction. Third, maintaining a persistent belief graph over multiple questions could help make the system more consistent across questions. This, in turn, would make a user's conversational experience with the system more coherent in a longer dialog setting.
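A minimal sketch of the two evaluation metrics defined above: constraint-violation-based consistency (1 − τ) and multiple-choice accuracy with partial credit. The data structures (a dict of beliefs, a list of premise/hypothesis rules) are illustrative simplifications of the system's belief-graph bookkeeping.

```python
def consistency(beliefs, rules):
    """beliefs: dict statement -> bool; rules: list of (premises, hypothesis).
    A rule is applicable when all premises are believed true; it is violated when,
    in addition, the hypothesis is believed false. consistency = 1 - violated/applicable."""
    applicable = violated = 0
    for premises, hypothesis in rules:
        if all(beliefs.get(p, False) for p in premises):
            applicable += 1
            if not beliefs.get(hypothesis, False):
                violated += 1
    return 1.0 if applicable == 0 else 1.0 - violated / applicable

def mc_accuracy(predicted, gold, num_options):
    """Multiple-choice scoring: 1 for the single correct answer, 1/N when N
    predictions include the correct one, 1/k for abstaining, 0 otherwise."""
    if not predicted:                  # no prediction
        return 1.0 / num_options
    if gold in predicted:
        return 1.0 / len(predicted)
    return 0.0

if __name__ == "__main__":
    beliefs = {"A": True, "B": True, "C": False}
    rules = [(["A"], "B"), (["A", "B"], "C")]      # second rule is violated
    print(consistency(beliefs, rules))             # 0.5
    print(mc_accuracy({"(A)"}, "(A)", 4))          # 1.0
    print(mc_accuracy({"(A)", "(B)"}, "(A)", 4))   # 0.5
    print(mc_accuracy(set(), "(A)", 4))            # 0.25
```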
Lastly, after resolving inconsistencies in the rational layer, we could consider propagating information back to the LLM layer in order to update it (via fine-tuning, model editing, memory-based architectures, etc.), helping avoid similar inconsistencies in the future. While LLMs perform well, the interdependencies between their answers and their other beliefs are opaque, and may even be in conflict. This lack of interpretability is a significant impediment to widespread use of LLMs. To reduce this opacity, and reduce these conflicts, we have proposed REFLEX, a new system architecture in which an explicit, interpretable representation of beliefs - the belief graph - is added as a rational layer above the LLM. This layer provides a window into system beliefs, and allows latent inconsistencies in the LLM alone to be reasoned about and repaired. Our implementation shows that belief consistency of the overall system is significantly improved, without harming answer accuracy, resulting in answers supported by interpretable chains of reasoning drawn from a more consistent belief system. This new architecture is an important step towards improving confidence in system behavior, and towards trustable deployment of LLMs in practical applications. We have shown how an LLM can be extended with a self-reflective component, allowing latent model knowledge to be made explicit in the form of a belief graph, providing a window into the model's system of beliefs. While exciting, there are several limitations with the current work and opportunities for the future. First, the reasoning component in the rational layer can make mistakes, resulting in the overall system rejecting true statements or accepting false ones. A detailed analysis and classification of these failure modes was presented in Section 4.3. Second, for our experiments, we used the T5-11B based Entailer system as the baseline LLM. While there is every reason to expect our proposed architecture to be effective in reducing inconsistency with newer and larger LLMs such as ChatGPT and LLaMA, this is still to be evaluated. Doing so would require implementing the basic operations needed to construct belief graphs (Section 3.2.1) using instruction prompting and in-context learning. Other work has demonstrated such implementations. Lastly, we found consistency-minimized belief graphs to be highly valuable in understanding the system's successes and failures. We expect these graphs to be a valuable starting point for providing explanations and gaining a user's trust in the system. However, we have not conducted a formal user study to measure this. Like any other project using LLMs, despite the best intentions there is a risk of the model producing biased or offensive statements as part of its explanations, and so the system must be used with care and appropriate guards and warnings.
| 1,375 | 2,256 | 1,375 |
Enhancing the generalization for Intent Classification and Out-of-Domain Detection in SLU
|
Intent classification is a major task in spoken language understanding (SLU). Since most models are built with pre-collected in-domain (IND) training utterances, their ability to detect unsupported out-of-domain (OOD) utterances has a critical effect in practical use. Recent works have shown that using extra data and labels can improve the OOD detection performance, yet it could be costly to collect such data. This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection. Our method designs a novel domain-regularized module (DRM) to reduce the overconfident phenomenon of a vanilla classifier, achieving a better generalization in both cases. Besides, DRM can be used as a drop-in replacement for the last layer in any neural network-based intent classifier, providing a low-cost strategy for a significant improvement. The evaluation on four datasets shows that our method built on BERT and RoBERTa models achieves state-of-the-art performance against existing approaches and the strong baselines we created for the comparisons.
|
Spoken language understanding (SLU) systems play a crucial role in ubiquitous artificially intelligent voice-enabled personal assistants (PA). SLU needs to process a wide variety of user utterances and carry out user's intents, a.k.a. intent classification. Many deep neural network-based SLU models have recently been proposed and have demonstrated significant progress To address the challenges in open-world settings, previous works adopt varied strategies. A straightforward solution is to collect OOD data and train a supervised binary classifier on both IND data and OOD data This paper proposes a strategy based on neural networks to use only IND utterances and their labels to learn both the intent classifier and OOD detector. Our strategy modifies the structure of the classifier, introducing an extra branch as a regularization target. We call the structure a Domain-Regularized Module (DRM). This structure is probabilistically motivated and empirically leads to a better generalization in both intent classification and OOD detection. Our analysis focuses more on the latter task, finding that DRM not only outputs a class probability that is a better indicator for judging IND/OOD, but also leads to a feature representation with a less distribution overlap between IND and OOD data. More importantly, DRM is a simple drop-in replacement of the last linear layer, making it easy to plug into any off-the-shelf pretrained models (e.g. BERT
|
In the application of intent classification, a user utterance will be either an in-domain (IND) utterance (supported by the system) or an out-of-domain (OOD) utterance (not supported by the system). The classifier is expected to correctly (1) predict the intent of supported IND utterances; and (2) detect to reject the unsupported OOD utterances. The task is formally defined below. We are given a closed world IND training set (2) OOD Detection: detect an utterance x to be an abnormal/unsupported sample if x is drawn from a different distribution P OOD . Intent Classification is one of the major SLU components OOD Detection has been studied for many years Our method is inspired by the decomposed confidence of Generalized-ODIN The motivation begins with introducing the domain variable d (d = 1 means IND, while d = 0 means OOD) following the intuition in where the last step holds since p(y, d = 0|x) is close to 0 with the intrinsic conflict between IND classes y and random variable d = 0 for OOD. Motivated by the above Equation where ) is a probability between 0 and 1, Section 3.1.2 will describe the training details of domain loss via the sigmoid function. Classification Logits f c models the probability posterior p(y|x) before normalization. It follows the conventional linear projection from hidden state h to the number of classes: where At the end, we obtain the final logits f to represent p(y|d = 1, x) by putting f d and f c together following the dividend-divisor structure of Equation 1: where each element of f c is divided by the same scalar f d . We propose two training loss functions to train a model with DRM. The first training loss aims to minimize a cross-entropy between the predicted intent class and ground truth IND class labels. where p(f ) is the softmax of logits f : The second training loss aims to ensure that the domain component f d is close to 1 since all utterances in the training set are IND. We first restrict f d between 0 and 1 by using sigmoid activation function. Thus, we sum them up to optimize the model: Remarks: It is important to note that the design of L domain is to introduce extra regularization to mitigate the overconfidence in standard posterior probability p(f ). sigmoid(f d ) is not used to directly predict if an utterance is IND or OOD. Following Equation 1 and our DRM design, it is straightforward to use the confidence score of softmax(f ) to predict the IND intent class. There are two types of strategies to utilize the outputs of a classifier to perform OOD detection. One is based on the confidence which is computed from logits, the other is based on the features. In the below, we describe how to compute different OOD scores with our DRM. Recent works DRM Confidence Score: DRM ODIN Confidence Score: with large T = 1000 The OOD utterances have low Conf DRM , ODIN DRM scores and high ENT DRM score. While our DRM confidence already outperforms many existing methods (later shown in experiments), we further design the feature-based Mahalanobis distance score, inspired by the recent work We first recap the approach in where f (x) represents the output features at the th -layer of neural networks; µ i and Σ are the class mean representation and the covariance matrix. Thus, the overall score is their summation: In addition, the input preprocessing adds a small controlled noise to the test samples to enhance the performance. 
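The sketch below shows one way to realize DRM as a drop-in last layer in PyTorch, together with the two training losses and the confidence-based OOD scores described above. Because the excerpt omits the exact formulas, the placement of the sigmoid (dividing by a (0, 1)-valued domain component) and the exact form of the domain loss (a log-loss pushing the domain component toward 1) are assumptions; treat this as an illustrative reading rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DRM(nn.Module):
    """Drop-in replacement for the last linear layer of an intent classifier:
    f_c models class logits, f_d a scalar domain component, and the final
    logits follow the dividend-divisor structure f = f_c / f_d."""

    def __init__(self, hidden_size, num_classes):
        super().__init__()
        self.class_head = nn.Linear(hidden_size, num_classes)   # f_c
        self.domain_head = nn.Linear(hidden_size, 1)             # f_d (pre-sigmoid)

    def forward(self, h):
        f_c = self.class_head(h)                   # (batch, num_classes)
        f_d = torch.sigmoid(self.domain_head(h))   # (batch, 1), restricted to (0, 1)
        return f_c / f_d, f_d                      # broadcast divide

def drm_loss(f, f_d, labels):
    loss_class = F.cross_entropy(f, labels)                     # intent cross-entropy
    loss_domain = -torch.log(f_d.clamp_min(1e-8)).mean()        # push f_d toward 1 on IND data
    return loss_class + loss_domain

def confidence_scores(f, temperature=1000.0):
    """Confidence-based OOD scores from the DRM logits: max softmax,
    ODIN-style temperature-scaled max softmax, and predictive entropy."""
    probs = F.softmax(f, dim=-1)
    conf = probs.max(dim=-1).values
    odin = F.softmax(f / temperature, dim=-1).max(dim=-1).values
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    return conf, odin, entropy

if __name__ == "__main__":
    drm = DRM(hidden_size=768, num_classes=10)
    h = torch.randn(4, 768)                       # e.g. pooled BERT [CLS] states
    f, f_d = drm(h)
    print(drm_loss(f, f_d, torch.tensor([1, 3, 5, 7])))
    print(confidence_scores(f))
```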
Although Mahalanobis distance score can be applied only to the last feature layer without input preprocessing S last M aha (x), the analysis (Table Since BERT-based models showed significant performance improvement for intent classification in SLU where f and f n are the features of each layer and last layer n in a BERT-based intent classifier model. We refer to our proposed approach as L-Mahalanobis. 4 Experimental Evaluation We evaluate our proposed approach on three benchmark SLU datasets and one in-house SLU dataset. Table Among all these datasets, the recently released CLINC dataset serves as a benchmark for OOD detection in SLU. For the other three datasets, we treat them mutually OOD due to non-overlapping domains. We crowdsourced the in-house Movie dataset containing common questions that users may ask regarding movies. This dataset mainly consists of queries a user may ask in the movie domain. The dataset consists of 38 different intents (e.g. rating information, genre information, award information, show trailer) and 20 slots or entities (e.g., director, award, release year). This dataset was collected using crowdsourcing as follows. At first, some example template queries were generated by linguistic experts for each intent, along with intent and slot descriptions. Next, a generation crowdsourcing job was launched where a crowd worker was assigned a random intent, a combination of entities, and few slots generally associated with the intent. To better understand the intent and slots, the worker was asked to review the intent and slot descriptions, and example template utterances. The first task of the worker was to provide 3 different queries corresponding to the given intent, which also contains the provided entities. The second task of the worker was to provide additional entities corresponding to the same slot type. A subsequent validation crowdsourcing job was launched where these crowdsourced queries were rated by validation workers in terms of their accuracy with the provided intent and entities. Each query was rated by 5 different validation workers, and the final validated dataset contains a subset of crowdsourced queries with high accuracy score and high interrater agreement. We implemented our method using PyTorch on top of the Hugging Face transformer library Remarks: All experiments only use IND data for both training and validation. We use the same hyperparameters in all datasets and validate the generalizability of our method. We consider the strongest baseline BERT-Linear (the last layer is linear) fine-tuned on the pre-trained BERT-based models We consider the existing OOD detection methods: ConGAN Autoencoder (AE) ODIN Generalized-ODIN (G-ODIN) Mahalanobis For ConGAN and AE, we evaluate the model in the original paper as well as customized BERTbased backbone models as strong baselines. Specifically, we customize En-ConGAN and En-AE as follows: En-ConGAN uses BERT sentence representation as input; En-AE applies a BERT classi-fier model to train the sentence representation and then use them to further train an autoencoder. Thus, En-ConGAN and En-AE are not existing baselines. Note that ERAEPOG We evaluate IND performance using the classification accuracy metric as in literature we follow the evaluation metrics in literature EER (lower is better): (Equal Error Rate) measures the error rate when false positive rate (FPR) is equal to the false negative rate (FNR). Here, FPR=FP/(FP+TN) and FNR=FN/(TP+FN). 
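Below is a rough NumPy sketch of the layer-wise Mahalanobis score: fit class-conditional Gaussians with a tied covariance on IND training features of each encoder layer, score a test example by its distance to the closest class mean, and sum the per-layer scores. Input preprocessing and any learned weighting of layers are omitted; shapes and the synthetic data are illustrative.

```python
import numpy as np

def fit_gaussians(features, labels):
    """features: (n, d) for one layer; labels: (n,) IND class ids.
    Returns per-class means and the precision of a shared (tied) covariance."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    return means, np.linalg.pinv(cov)

def mahalanobis_score(x, means, precision):
    """Max negative Mahalanobis distance of x to the closest class-conditional Gaussian."""
    return max(-(x - mu) @ precision @ (x - mu) for mu in means.values())

def layer_mahalanobis(layer_feats_train, labels, layer_feats_test):
    """L-Mahalanobis: sum per-layer scores over all encoder layers.
    layer_feats_*: list over layers of (n, d) arrays (e.g. pooled BERT hidden states)."""
    total = np.zeros(len(layer_feats_test[0]))
    for train_f, test_f in zip(layer_feats_train, layer_feats_test):
        means, precision = fit_gaussians(train_f, labels)
        total += np.array([mahalanobis_score(x, means, precision) for x in test_f])
    return total   # higher = more IND-like; threshold chosen on a dev set

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=300)
    train_layers = [rng.normal(labels[:, None], 1.0, size=(300, 8)) for _ in range(4)]
    test_layers = [rng.normal(5.0, 1.0, size=(10, 8)) for _ in range(4)]   # OOD-like inputs
    print(layer_mahalanobis(train_layers, labels, test_layers))
```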
FPR95 (lower is better): (False Positive Rate (FPR) at 95% True Positive Rate (TPR)) can be interpreted as the probability that an OOD utterance is misclassified as IND when the true positive rate (TPR) is as high as 95%. Here, TPR=TP/(TP+FN). Detection Error (lower is better): measures the misclassification probability when TPR is 95%. Detection error is defined as follows: where s is a confidence score. We follow the same assumption that both IND and OOD examples have an equal probability of appearing in the testing set. AUROC (higher is better): (Area under the Receiver Operating Characteristic Curve) The ROC curve is a graph plotting TPR against the FPR=FP/(FP+TN) by varying a threshold. AUPR (higher is better): (Area under the Precision-Recall Curve (AUPR)) The PR curve is a graph plotting the precision against recall by varying a threshold. Here, precision=TP/(TP+FP) and recall=TP/(TP+FN). AUPR-IN and AUPR-OUT is AUPR where IND and OOD distribution samples are specified as positive, respectively. Note that EER, detection error, AUROC, and AUPR are threshold-independent metrics. We also evaluate the statistical significance between all baselines and our best result (DRM + L-Mahalanobis) on all the above metrics. We train each model 10 times with different PyTorch random seeds. We report the average results and t-test statistical significance results. Table Results on CLINC Dataset: Table For a given OOD detection method, we find that their combinations with DRM consistently perform better than those with standard models. The improvement is at least 1-2% for all metrics against our enhanced baselines. Among all OOD detection approaches, our proposed L-Mahalanobis OOD detection approach achieves the best performance for both linear and DRM combined BERT and RoBERTa models. It is not surprising to observe that our DRM method combined with a better pretrained RoBERTa model achieves larger OOD detection performance improvement. Note that our customized En-AE performs much better than most other methods since we incorporated the enhanced reconstruction capability with pre-trained BERT models. However, En-AE cannot utilize all BERT layers as our proposed L-Mahalanobis method, resulting in worse performance. In addition, DRM+L-Mahalanobis models are significantly better than existing methods and enhanced baselines with p-value < 0.01 on most metrics for both BERT and RoBERTa backbones. Ablation Study on CLINC Dataset: We analyze how our two novel components, DRM model and L-Mahalanobis, impact the performance. The rows with "DRM" in "Last Layer" column of Table The rows with "L-Mahalanobis" in "OOD Method" column of When taking Snip as IND and ATIS as OOD, it is interesting to see that our method achieves better performance than En-AE. This is because that Snips contains a large number of entities such that the reconstruction error will be lower and become less separable than that in ATIS OOD utterances. For both Snips and Movie IND datasets, DRM+L-Mahalanobis are significantly better than baseline methods with p-value < 0.01 in most cases for all OOD datasets. For ATIS IND dataset, DRM+L-Mahalanobis shows similar behavior except En-AE since it is easier to train an autoencoder model for ATIS IND dataset due to its carefully collected clean training utterances. We provide a quantitative analysis by visualizing our two methods, DRM and L-Mahalanobis. Our proposed method in this paper has been deployed in the domain classification SLU model for Samsung Bixby voice assistant. 
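The threshold-independent metrics above can be computed with scikit-learn as sketched below. IND is treated as the positive class and higher scores are assumed to mean "more in-domain"; the synthetic scores are only for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, average_precision_score

def ood_metrics(scores_ind, scores_ood):
    y = np.concatenate([np.ones(len(scores_ind), dtype=int),
                        np.zeros(len(scores_ood), dtype=int)])
    s = np.concatenate([scores_ind, scores_ood])

    auroc = roc_auc_score(y, s)
    aupr_in = average_precision_score(y, s)          # IND as positive class
    aupr_out = average_precision_score(1 - y, -s)    # OOD as positive class

    fpr, tpr, _ = roc_curve(y, s)
    idx = np.argmax(tpr >= 0.95)                     # first threshold with TPR >= 95%
    fpr95 = fpr[idx]
    detection_error = 0.5 * (1 - tpr[idx]) + 0.5 * fpr95   # equal IND/OOD priors
    eer_idx = np.argmin(np.abs(fpr - (1 - tpr)))            # FPR == FNR point
    eer = 0.5 * (fpr[eer_idx] + 1 - tpr[eer_idx])
    return {"AUROC": auroc, "AUPR-IN": aupr_in, "AUPR-OUT": aupr_out,
            "FPR95": fpr95, "DetErr": detection_error, "EER": eer}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ind = rng.normal(2.0, 1.0, 1000)    # IND utterances score higher on average
    ood = rng.normal(0.0, 1.0, 1000)
    for name, value in ood_metrics(ind, ood).items():
        print(f"{name:9s} {value:.3f}")
```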
In addition to SLU, our work could have a broader impact on other applications that can benefit from a more robust classification system. For example, our method can help a robot detect objects more accurately or stop safely by correctly identifying unknown objects, classify environmental sounds or detect anomalous sounds, and so on. Moreover, by better detecting OOD samples that differ from the training data distribution, our method can help handle distributional shifts between training data and practical usage data.
| 1,095 | 1,452 | 1,095 |
AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning
|
NLP has advanced greatly together with the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, the pre-trained language models need to be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the required amount of annotated examples. However, existing Adapterbased methods still fail to yield meaningful results in the few-shot regime where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.
|
Since Transformer-based In this paper, we present a cost-effective method for language model fine-tuning that is applicable, without customization, to a variety of language models and Adapter types. We focus on small to mid-sized language models such as BERT In this paper, we propose a meta-knowledgedriven few-shot adapter learning method, called AMAL (Adapter-by-MetA-Learning), based on a novel meta-learning framework, through which meta-level layer-wise adaptation kernels are derived in an end-to-end manner. Our design takes inspiration from AMAL includes two key ideas: (1) construction of language model adapters' intrinsic kernels from tasks and (2) inference of the optimal task-specific language model adapter for a given task, by referring to a meta-level latent embedding space over all tasks.
|
Few-shot Text Classification: DS AMAL can be seen as similar with LoRA in terms of using the low-rank decomposition technique. However as a meta learning-based approach, AMAL can be applied to a broad range of language models and all existing adapter-based methods, including LoRA. We deal with the few-shot text classification problem to demonstrate AMAL's few-shot language model adaptation performance. As usual, C-way K-shot indicates that K-annotated examples are only given for each of the C number of classes for a task (denoted as τ i ), leading to the total number of examples as K τ i = K × |C|. We experiment with BERT In the meta-learning setting, tasks are divided into a meta-training set (S tr ), meta-validation set (S val ), and meta-test set (S test ) as disjoint sets of classes. Our meta-learning strategy follows the overall procedure of optimization-based meta-learning In this section, we present the implementation of AMAL. The design implies the hypothesis that the language model adaptation can be performed on a low intrinsic rank. Here, we describe AMAL by employing the original Adapter As shown in Figure where l is the layer's index, and given the PLM's original dimension d, the adapter's bottleneck dimension m and the rank r (r ≪ min(d, m)). E τ i l is a diagonal matrix. For notational simplicity, we drop the distinction for the two different adapters (i.e., lower and upper) and likewise the distinction between up and down-projections. Importantly, E τ i l is the l-th layer's low-rank adapter pooler for the task τ i , U l the l-th layer's left adapter kernels, and V l the right adapter kernels. The aim of the pooling is to derive the taskspecific composition from the established adapterkernels, U and V, which are obtained in the metaoptimization process. To obtain the optimal adapter for a task τ i , there are two important steps in the pooling process: (1) encoding the task τ i into a low-dimensional latent embedding space Z and (2) producing the taskspecific adapter pooler from the latent embedding z τ i . The encoding pipeline is taken from where z τ i n denotes the latent space embedding for the particular class n under a given task τ i , N indicates the total number of classes under the task, K denotes the total number of examples under each class, f θr indicates the relation network where E τ i denotes the low rank adapter pooler for the task τ i , f θr indicates the decoder neural net- for number of tasks in batch do Sample task instance τ i ∼ S tr 6: for number of adaptation steps do 9: Encode [CLS] to z τ i ′ using f θe and f θr 10: Generate document embeddings using H τ i 12: Compute Task-Adaptation loss L tr τ i Perform gradient step w.r.t. z τ i ′ and θ ′ τ i 14: end for Generate document embeddings using H τ i 18: Compute Meta-Optimization loss L val τ i end for Perform gradient step w.r.t ϕ 21: 22: end while work, and z τ i is the task's latent embedding. To sum up, a new task is eventually converted into the task-specific low-rank adapter pooler via modulation on the low-dimensional latent space. As noted in Algorithm 1, AMAL updates three neural network blocks (i.e., θ e , θ r , θ d ) as well as the left adapter kernels U and the right adapter kernels V, by minimizing the following objective function in the meta-optimization process: where Ω indicates a weighted KL-divergence term, i.e., D KL (q(z τ i |D tr n )||p(z τ i )) where p(z τ i ) = N (0, I), to regularize the latent space with the aim to learn a disentangled embedding. 
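The sketch below illustrates the layer-wise low-rank adapter pooling described above: shared kernels U and V are combined with a task-specific diagonal pooler E decoded from the latent task embedding z. The dimensions, the single-linear decoder, and the ReLU down-projection are illustrative choices rather than the paper's exact architecture, and the full meta-learning loop is omitted.

```python
import torch
import torch.nn as nn

class LowRankAdapterPool(nn.Module):
    """One layer's adapter projection built from shared adapter kernels U, V
    and a task-specific diagonal pooler E derived from the task embedding z."""

    def __init__(self, d_model=768, bottleneck=64, rank=8, z_dim=32):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_model, rank) * 0.02)      # left kernels U_l
        self.V = nn.Parameter(torch.randn(rank, bottleneck) * 0.02)   # right kernels V_l
        self.decoder = nn.Linear(z_dim, rank)   # maps z to the diagonal entries of E

    def task_weight(self, z):
        E = torch.diag(self.decoder(z))          # diagonal low-rank adapter pooler E^tau
        return self.U @ E @ self.V                # (d_model, bottleneck) task-specific weight

    def forward(self, h, z):
        W_down = self.task_weight(z)
        return torch.relu(h @ W_down)             # adapter down-projection (up-projection analogous)

if __name__ == "__main__":
    layer = LowRankAdapterPool()
    z = torch.randn(32)              # latent task embedding from the task encoder
    h = torch.randn(4, 16, 768)      # (batch, seq_len, hidden) activations from one PLM layer
    print(layer(h, z).shape)         # torch.Size([4, 16, 64])
```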
R denotes a penalty term to attain near-orthogonality in the construction of U and V, and is formulated as follows: where F denotes the frobenius norm, and both U and V are randomly initialized. All the hyperparameters are equivalently kept all over the layers. 5 Experimental Results Here, we briefly explain how we generated document embeddings for our experiments. For a text input with length L, we utilize the embedding vectors for the individual tokens from the last layer of the given PLM, which are denoted as ] for the jth text example of the task τ i . For text classification, we average H τ i j column-wise and then feed it into a fully connected neural network with the parameters θ ′ τ i , which are optimized for the inner-update. We evaluate AMAL on five text classification datasets: 20 Newsgroups We evaluate AMAL in both 5-way 1-shot and 5way 5-shot settings and the results are shown in Table As shown in Table We explore the effect of the number of layers equipped with AMAL. Here, the BERT base is employed as the base PLM. We monitored performance while incrementally extending the number of AMAL-equipped layers, starting from the last layer and proceeding towards the input layer. As shown in Figure According to this empirical analysis, we can maximize the efficiency in a fine-grained manner by adjusting the number of AMAL-equipped layers. We plot the initial document embeddings and the corresponding fine-tuned embeddings obtained by AMAL for 20 Newsgroups dataset (Figure We hypothesized that language model adaptation can be performed on a low intrinsic rank, especially when only a few examples are offered. We designed a novel meta-learning-based low-rank adaptation method for leveraging small to mid-sized pretrained language models, allowing a new task to be cost-effectively learned in the few-shot regime. We demonstrated that the combination of low-rank matrix decomposition and meta learning is so effective, that we can reap the benefits of small to mid-sized pre-trained language models in practical scenarios with scarce annotated data. AMAL may be difficult to apply to unidirectional language models such as GPT2 We summarize the details of the model training and evaluation in Table For example, for the 20 newsgroups dataset, the 20 classes of news topics are split into 8 classes for meta-training, 5 classes for meta-validation, and 7 for meta-testing. When composing a batch for meta-training, since # of tasks is 4, the following is repeated 4 times: the 5 classes (since our few-shot setup is 5-way) for a task are randomly selected from the given 8 classes. In table Early-stopping was employed during model training: model training was stopped if the validation loss did not improve for 20 steps. For both the validation and testing, we sample 15 tasks with 15 queries from S val and S test . We used the Adam optimizer with learning rates of 0.1 and 0.001 in the inner and outer updates, and the inner update is repeated 40 times. During the metaoptimization process (outer loop), we apply weight decay scheduling. In addition, the coefficient λ of the KL-Divergence term in eq. 4 was set to 0.001 and the coefficient γ of the penalty term in eq. 4 was set to 0.1. We performed all the experiments on a single NVIDIA A100 80GB GPU. A.4 Few-Shot Performance of Adapter-based Fine-tuned Methods In addition, to verify the validity of our assumption that is introduced in section 1, we find the performance of parameter efficient fine-tuned methods, i.e., As shown in Table The Table
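A small sketch of the two regularizers in the meta-objective: a KL term pulling the diagonal-Gaussian task posterior toward N(0, I) and a near-orthogonality penalty on the kernels. Since the excerpt omits the exact formula for R, the Frobenius-norm distance to the identity used here is one standard choice and should be read as an assumption; the coefficients follow the values reported above (λ = 0.001, γ = 0.1).

```python
import torch

def orthogonality_penalty(U, V):
    """R: push the shared kernels toward near-orthogonality
    (Frobenius-norm distance of U^T U and V V^T to the identity)."""
    I_u = torch.eye(U.shape[1])
    I_v = torch.eye(V.shape[0])
    return torch.norm(U.T @ U - I_u, p="fro") + torch.norm(V @ V.T - I_v, p="fro")

def kl_to_standard_normal(mu, logvar):
    """Omega: KL(q(z) || N(0, I)) for a diagonal-Gaussian task posterior,
    regularizing the latent task-embedding space."""
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar)

def meta_objective(task_loss, mu, logvar, U, V, lam=1e-3, gamma=0.1):
    return task_loss + lam * kl_to_standard_normal(mu, logvar) + gamma * orthogonality_penalty(U, V)

if __name__ == "__main__":
    U, V = torch.randn(768, 8), torch.randn(8, 64)
    mu, logvar = torch.zeros(32), torch.zeros(32)
    print(meta_objective(torch.tensor(1.25), mu, logvar, U, V))
```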
| 941 | 808 | 941 |
Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via Compositional Uncertainty Quantification
|
Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples. In comparison, symbolic parsers under-perform on populationlevel metrics, but exhibit unique strength in OOD and tail generalization. In this work, we study compositionality-aware approach to neural-symbolic inference informed by model confidence, performing fine-grained neuralsymbolic reasoning at subgraph level (i.e., nodes and edges) and precisely targeting subgraph components with high uncertainty in the neural parser. As a result, the method combines the distinct strength of the neural and symbolic approaches in capturing different aspects of the graph prediction, leading to well-rounded generalization performance both across domains and in the tail. We empirically investigate the approach in the English Resource Grammar (ERG) parsing problem on a diverse suite of standard in-domain and seven OOD corpora. Our approach leads to 35.26% and 35.60% error reduction in aggregated SMATCH score over neural and symbolic approaches respectively, and 14% absolute accuracy gain in key tail linguistic categories over the neural model, outperforming prior state-of-art methods that do not account for compositionality or uncertainty.
|
A structured account of compositional meaning has become a longstanding goal for Natural Language Processing. To this end, a number of efforts have focused on encoding semantic relationships and attributes into graph-based meaning representations (MRs, see Appendix A for details). In particular, graph semantic parsing has been an important task in almost every Semantic Evaluation (SemEval) exercise since 2014. In recent years, we have witnessed the burgeoning of applying neural networks † Co-senior authors. ‡ Work done at Google. to semantic parsing. Pre-trained language modelbased approaches have led to significant improvements across different MRs In this paper, we propose a novel compositional neural-symbolic inference for graph semantic parsing, which takes advantage of both uncertainty quantification from a seq2seq parser and prior knowledge from a symbolic parser at the subgraph level (i.e., nodes and edges). We take graph semantic parsing for English Resource Grammar (ERG) as our case study. ERG is a compositional semantic representation explicitly coupled with the syntactic structure. Compared to other graph-based meaning representations like Abstract Meaning Representation (AMR), ERG has high coverage of English text and strong transferability across domains, rendering itself as an attractive target formalism for automated semantic parsing. Furthermore, many years of ERG research has led to well-established symbolic parser and a rich set of carefully constructed corpus across different application domains and fine-grained linguistic phenomena, making it an ideal candidate for studying cross-domain generalization of neural-symbolic methods We start with a novel investigation of the uncertainty calibration behaviour of a T5-based state-ofthe-art neural ERG parser We then propose a decision-theoretic criteria to allow for neural-symbolic inference at subgraph level (i.e., nodes and edges) and incorporates the neural parser's fine-grained uncertainty for each graph component prediction The core challenge here is how to properly quantify compositional uncertainty using a seq2seq model, i.e., assigning model probability for a node or edge prediction. For example, our interest is to express the conditional probability of a graph node v with respect to its parent p(v|pa(v), x), rather than the likelihood of v conditioning on the previous tokens in the linearized string. As a result, it cannot be achieved by relying on the naive tokenlevel autoregressive probabilities from the beam search. To address this issue, we introduce a simple probabilistic formalism termed Graph Autoregressive Process (GAP) (Section 4.2). GAP adopts a dual representation of an autoregressive process and a probabilistic graphical model, and can serve as a powerful medium for expressing compositional uncertainty for seq2seq graph parsing. We demonstrate the effectiveness of our approach in experiments across a diverse suite of eight in-domain and OOD evaluation datasets encompassing domains including Wikipedia entries, news articles, email communications, etc (Section 5). We achieve the best results on the overall performance across the eight domains, attaining 35.26% and 35.60% error reduction in the aggre-gated SMATCH score over the neural and symbolic parser, respectively. Our approach also exhibits significantly stronger robustness in generalization to OOD datasets and long-tail linguistic phenomena than previous work, while maintaining the stateof-the-art performance on in-domain test. 
Further study also shows that the compositionality aspect of neural-symbolic inference helps the model assemble novel graph solutions that the original inference process (e.g., beam search or symbolic parse) fails to provide (Section 5.4). In summary, our contributions are four-fold: • We present a novel investigation of the neural graph parser's uncertainty calibration performance at the subgraph level (Section 3). Our study confirms that the seq2seq uncertainty is effective for detecting model error even out-of-distribution, establishing the first empirical basis for the utility of compositional uncertainty in seq2seq graph parsing. • We propose a practical and principled framework for neural-symbolic graph parsing that utilizes model uncertainty and exploits compositionality (Section 4.1). The method is fully compatible with modern large pre-trained seq2seq networks using beam decoding, and is general-purpose and applicable to any graph semantic parsing task. • […] improves the model's OOD and tail performance. Reproducibility. Our code is available on Github:
|
In this work, we take the representations from English Resource Grammar (ERG; ERG can be presented into different types of annotation formalism In this section, we review the state-of-the-art symbolic and neural parsers utilized in our work, i.e., the ACE parser We hypothesize that when the neural seq2seq model is uncertain at the subgraph level, it is more likely to make mistakes. Assuming the symbolic parser performs more robustly in these situations, we can then design a procedure to ask the symbolic parser for help when the model is uncertain. To validate this hypothesis, we conduct experiments to empirically explore the following two questions: (1) how does the model perform when it is uncertain at the subgraph level? and (2) how does the symbolic parser perform when the model is uncertain? First, we compute model probabilities for each graph element (i.e., node and edge) prediction (see Section 4.2 for how to compute these quanitities), and identify the corresponding ACE parser prediction using the graph matching algorithm from SMATCH In Figure Notation & Problem Statement. For graph semantic parsing, the input is a natural language utterance x, and the output is a directed acyclic graph (DAG) G = ⟨N, E⟩, where N is the set of nodes and E ∈ N × N is the set of edges (e.g., Figure To this end, our goal is to produce a principled inference procedure for graph prediction accounting for model uncertainty on predicting graph elements v ∈ G. In the sequel, Section 4.1 presents a decision-theoretic criterion that leverages the graphical model likelihood p(G|x) to conduct compositional neural-symbolic inference for graph prediction. To properly express the graphic model likelihood p(G|x) = v∈G p(v|pa(v), x) using a learned seq2seq model, Section 4.2 introduces a simple probabilistic formalism termed Graph Autoregressive Process (GAP) to translate the autoregressive sequence probability from the seq2seq model to graphical model probability. Appendix E discusses some additional extensions. Previously, an uncertainty-aware decision criteria was proposed for neural-symbolic inference based on the Hurwicz pessimism-optimism criteria R(G|x) where R(G|x) =log p(G|x) is the neural model likelihood, R 0 (G) = log p 0 (G) is the symbolic prior likelihood, and α(x) is a the uncertaintydriven trade-off coefficient to balance between the optimistic MLE criteria R p (G|x) and the pessimistic, prior-centered criteria R 0 (G|x) centered around symbolic prediction G 0 . A key drawback of this approach is the lack of accounting for the compositionality. This motivates us to consider synthesizing the multiple graph predictions {G k } K k=1 from the neural parser to form a meta graph G and the overall criteria is written as R(G|x) = v∈G R(v|x). Here pa(v) refers to the parents of v in G, and α(v|x) = sigmoid(-1 T H(v|x) + b) is the component-specific trade-off parameter driven by model uncertainty H(v|x) = log p(v| pa(v), x), and (T, b) are scalar calibration hyperparameters that can be tuned on the dev set. Following previous work Algorithm 1 summarizes the full algorithm. As shown, during inference, the method proceeds by starting from the root node v 0 and selects the optimal prediction v0 = arg max c 0 ∈Candidate(v 0 ) R(c 0 |x), where c 0 are different candidates for v 0 given by the meta graph G. The algorithm then recursively performs the same neural-symbolic inference procedure for the children of v 0 (i.e., ch(v)). 
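The sketch below illustrates the greedy, uncertainty-weighted inference just described on a toy meta graph. The dictionaries standing in for the meta graph, the neural conditionals, and the symbolic parse are illustrative; conditioning of the neural probabilities on the chosen parent value is elided for brevity, and the sign convention inside the trade-off (higher neural confidence gives more weight to the neural term) is our assumption, since the excerpt's formula is ambiguous.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neural_symbolic_score(candidate, neural_probs, symbolic_choice,
                          temperature=0.1, bias=0.25):
    """Per-component criterion R(v|x): an uncertainty-weighted mix of the neural
    conditional log-probability and a symbolic prior preferring the ACE prediction."""
    logp = math.log(max(neural_probs.get(candidate, 1e-8), 1e-8))
    a = sigmoid(logp / temperature + bias)            # trade-off grows with confidence (assumed)
    prior_logp = 0.0 if candidate == symbolic_choice else -1.0   # log p0 up to a constant
    return a * logp + (1.0 - a) * prior_logp

def infer(node, meta_graph, neural_probs, symbolic_graph, result=None):
    """Greedy top-down inference: pick the best candidate for `node`, then recurse."""
    if result is None:
        result = {}
    result[node] = max(
        meta_graph[node]["candidates"],
        key=lambda c: neural_symbolic_score(c, neural_probs[node], symbolic_graph.get(node)))
    for child in meta_graph[node]["children"]:
        infer(child, meta_graph, neural_probs, symbolic_graph, result)
    return result

if __name__ == "__main__":
    meta_graph = {"root": {"candidates": ["_look_v_1", "_look_v_up"], "children": ["arg1"]},
                  "arg1": {"candidates": ["_the_q", "_a_q"], "children": []}}
    neural_probs = {"root": {"_look_v_1": 0.55, "_look_v_up": 0.45},
                    "arg1": {"_the_q": 0.51, "_a_q": 0.49}}
    symbolic_graph = {"root": "_look_v_up", "arg1": "_the_q"}
    print(infer("root", meta_graph, neural_probs, symbolic_graph))
```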
The algorithm terminates when the optimal candidates for all graph variables v ∈ G are determined. As a result, the algorithm is able to adaptively combine subgraph predictions across multiple beam candidates thanks to the meta graph G, and appropriately weight between the local neural and symbolic information thanks to the uncertaintyaware decision criteria R(v|x). Empirically, this also gives the algorithm the ability to synthesize novel graph predictions that are distinct from its base models (Section 5.4). Meta graph G Graphical model likelihood log p(G|x) Symbolic prior p0 Output: Neural-symbolic graph prediction G Initialize: To properly model the uncertainty p(G|x) from a seq2seq model, we need an intermediate probabilistic representation to translate the raw token-level probability to the distribution over graph elements. To this end, we introduce a simple probabilistic formalism termed Graph Autoregressive Process (GAP), which is a probability distribution assigning seq2seq learned probability to the graph elements v ∈ G. Specifically, as the seq2seq-predicted graph adopts both a sequence-based representation g = s 1 , ..., s L and a graph representation G = ⟨N, E⟩, the GAP model adopts both an autoregressive representation p(g|x) = i p(s i |s <i , x) (Section 4.2.1), and also a probabilistic graphical model representation p(G|x) = v∈G p(v| pa(v), x) (Section 4.2.2). Both representations share the same set of underlying probability measures (i.e., the graphicalmodel likelihood p(G|x) can be derived from the autoregressive probabilities p(s i |s <i , x)) (Figure Linearized Sequence g Given an input sequence x and output sequence In the context of graph parsing, the output sequence describes a linearized N or an edge e ∈ E of the graph and corresponds to a collection of beam-decoded tokens g., the node _the_q in Figure Marginal and Conditional Probability. Importantly, GAP allows us to compute the marginal and (non-local) conditional probabilities for graph elements s i . Given the input x, the marginal probability of s i is computed as by integrating over the space of all possible subsequences s <i prior to the symbol s i . Then, the (non-local) conditional probability between two graph elements (s i , s j ) with i < j is computed as by integrating over the space of subsequences s i→j between (s i , s j ) and the subsequence s <i before s i . Higher order conditional (e.g., p(s j |(s i , s l ), x)) can be computed analogously. Notice this gives us the ability to reason about long-range dependencies between non-adjacent symbols on the sequence. Furthermore, the conditional probability on the reverse direction can also be computed using the Bayes' rule: . Efficient Estimation Using Beam Outputs. In practice, we can estimate p(s i |x) and p(s j |s i , x) efficiently via importance sampling using the output from the beam decoding {g k } K k=1 , where K is the beam size where is the importance weight proportional to the beam candidate g k 's log likelihoods, and t > 0 is the temperature parameter fixed to a small constant (e.g., t = 0.1, see Appendix C.1 further discussion) Then, for two symbols (s i , s j ) with i < j, we can estimate the joint probability as where is the importance weight among beam candidates that contains s i . Notice this is different from Equation So far, we have focused on probability computation based on the graph's linearized representation p(g|x) = i p(s i |s <i , x). 
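A simplified sketch of the beam-based importance-sampling estimates above. Each beam candidate is reduced to the set of node/edge symbols it contains, which drops the position- and parent-aware bookkeeping of the full method; the weights follow exp(log p(g_k | x) / t) with t = 0.1, and the log-likelihoods and symbols are illustrative.

```python
import math

def importance_weights(beam_logps, t=0.1):
    """Normalized weights w_k proportional to exp(log p(g_k | x) / t)."""
    m = max(lp / t for lp in beam_logps)
    unnorm = [math.exp(lp / t - m) for lp in beam_logps]   # subtract max for stability
    z = sum(unnorm)
    return [u / z for u in unnorm]

def marginal(symbol, beams, weights):
    """p(s | x): weighted fraction of beam candidates that contain the symbol."""
    return sum(w for g, w in zip(beams, weights) if symbol in g)

def conditional(target, given, beams, weights):
    """p(target | given, x): re-normalize over the beams containing `given`."""
    mass_given = sum(w for g, w in zip(beams, weights) if given in g)
    if mass_given == 0.0:
        return 0.0
    mass_joint = sum(w for g, w in zip(beams, weights) if given in g and target in g)
    return mass_joint / mass_given

if __name__ == "__main__":
    # Each beam candidate is represented by the set of node/edge symbols it contains.
    beams = [{"_the_q", "_dog_n_1", "_bark_v_1"},
             {"_the_q", "_dog_n_1", "_park_v_1"},
             {"_a_q", "_dog_n_1", "_bark_v_1"}]
    logps = [-1.2, -2.5, -3.0]          # sequence log-likelihoods from beam search
    w = importance_weights(logps)
    print("p(_the_q | x)            =", round(marginal("_the_q", beams, w), 3))
    print("p(_bark_v_1 | _the_q, x) =", round(conditional("_bark_v_1", "_the_q", beams, w), 3))
```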
To conduct the compositional neural-symbolic inference (Section 4.1), we also need to consider GAP's graphical model representation p(G|x) = v∈G p(v| pa(v), x). GAP's graphical model representation G depends on the meta graph G constructed from K candidate graphs {G k } K k=1 (Section 4.1). Figure To this end, GAP assigns proper graph-level probability p(G|x) to graphs G sampled from the meta graph G via the graphical model likelihood: where p(v| pa(v), x) is the conditional probability for v with respect to their parents pa(v) in G. Given the candidates graphs {G k } K k=1 , we can express the likelihood for p(v| pa(v), x) by writing down a multinomial likelihood enumerating over different values of pa(v) where c k is the value of pa(n) in k th beam sequence, and the conditional probabilities are computed using Equation (3). See Appendix D for a detailed derivation. Algorithm 2 Graph Autoregressive Process Inputs: In summary, for each graph element variable v ∈ G, GAP allows us to compute the graphicalmodel conditional likelihood p(v|pa(v), x) via its graphical model representation, and also to compute the marginal probability p(v|x) via its autoregressive presentation. The conditional likelihood is crucial for neural-symbolic inference (Section 4.1), and the marginal probability is useful for sparsity regularization in global graph structure inference (Appendix E). Algorithm 2 summarizes the full GAP computation. Datasets. Consistent with previous ERG works, we train the neural model on DeepBank v1.1 annotation of the Wall Stree Journal (WSJ), sections 00-21 (the same text annotated in the Penn Tree Bank) that correspond to ERG version 1214. For OOD evaluation, we select 7 diverse datasets from the Redwoods Treebank corpus: Wikipedia (Wiki), the Brown Corpus (Brown), the Eric Raymond Essay (Essay), customer emails (Ecommerce), meeting/hotel scheduling (Verbmobil), Norwegian tourism (LOGON) and the Tanaka Corpus (Tanaka) (See Appendix G for more details). Model. Following The results are shown in We now compare with the previous state-of-theart methods. Though in-domain performance is not the focus of this work, our approach is still comparable to Collab, i.e., the neural-symbolic method from We also notice that the voting-based ensemble method Vote ERG provides different levels of linguistic information that can benefit many NLP tasks, e.g., named entity recognition and semantic role labeling. This rich linguistic annotation provides an oppurtunity to evaluate model performance in meaningful population subgroups. Detailed description of those linguistic phenomena is in Appendix J. Result is in Table To test if our methods can generate optimal graph solution which the base models fail to obtain, we further explore the percentage of novel graphs (graphs that are not identical to any of the candidate predictions of the neural or symbolic model) for each dataset, and compare the corresponding SMATCH scores on those novel cases. The results are shown in Table In this section we introduce related work for neuralsymbolic and ensemble learning for graph semantic Compare to the previous ensemble work, our work differ in three ways: (1) Our decision rule is based on neural model confidence, so the decision is driven not by model consensus, but by model confidence which indicates when the main (neural) result is untrustworthy and needs to be complemented by symbolic result. Model consensus is effective when there exists a large number of candidate models. 
However, in the neural-symbolic setting when there are only two models, the ability of quantifying model uncertainty becomes important. (2) A secondary contribution of our work is to produce an parsing approach for the ERG community that not only exhibits strong average-case performance on in-domain and OOD environments, but also generalizes robustly in important categories of tail linguistic phenomena. Therefore, our investigation goes beyond average-case performance and evaluates in tail generalization as well. (3) We reveal a more nuance picture of neural models' OOD performance: a neural model's top K parses in fact often contains subgraphs that generalize well to OOD scenarios, but the vanilla MLE-based inference fails to select them (see Section 5.4 for more details). We have shown how to perform accurate and robust semantic parsing across a diverse range of genres and linguistic categories for English Resource Grammar. We achieve this by taking the advantage of both the symbolic parser (ACE) and the neural parser (T5) at a fine-grained subgraph level using compositional uncertainty, an aspect missing in the previous neural-symbolic or ensemble parsing work. Our approach attains the best known result on the aggregated SMATCH score across eight evaluation corpus from Redwoods Treebank, attaining 35.26% and 35.60% error reduction over the neural and symbolic parser, respectively. Our work is sponsored in part by National Science Foundation Convergence Accelerator under award OIA-2040727 as well as generous gifts from Google, Adobe, and Teradata. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes not withstanding any copyright annotation hereon. We thank Du Phan, Panupong Pasupat, Jie Ren, Balaji Lakshminarayanan and Deepak Ramachandran for helpful discussion. Here we discuss a potential limitations of the current study: Problem domain In this work, we have selected English Resource Grammar as the target formalism. This is a deliberate choice based on the availability of (1) realistic out-of-distribution evaluation corpus, and (2) well-established, high-quality symbolic parser. This is a common setting in industrial applications, where an practitioner is tempted to combine large pre-trained neural model with expert-developed symbolic rules to improve performance for a new domain. Unfortunately, we are not aware of another popular meaning representation for which both resources are available. To overcome this challenge, we may consider studying collaborative inference between a standard seq2seq model and some indirect symbolic supervision, e.g., syntactic parser or CCG parser The vanilla seq2seq model is known to under-estimate the true probability of the high-likelihood output sequences, wasting a considerable amount of probability mass towards the space of improbable outputs The GAP model presented in this work considers a classical graphical model likelihood p(G|x) = v∈G p(v| pa(v), x) , which leads to a clean factorization between graph elements v and fast probability computation. However, it also assumes a local Markov property that v is conditional independent to its ancestors given the parent pa(v). 
In theory, the probability learned by a seq2seq model is capable of modeling higher order conditionals between arbitrary elements on the graph. Therefore it is interesting to ask if a more sophisticated graphical model with higher-order dependency structure can lead to better performance in practice while maintaining reasonable computational complexity. There exists many different types of uncertainties occur in a machine learning system This paper focused on neural-symbolic semantic parsing for the English Resource Grammar (ERG). Our architecture are built based on open-source models and datasets (all available online). We do not anticipate any major ethical concerns. Considerable NLP research has been devoted to the transformation of natural language utterances into a desired linguistically motivated semantic representation. Such a representation can be understood as a class of discrete structures that describe lexical, syntactic, semantic, pragmatic, as well as many other aspects of the phenomenon of human language. In this domain, graph-based representations provide a light-weight yet effective way to encode rich semantic information of natural language sentences and have been receiving heightened attention in recent years. Popular frameworks under this umbrella includes Bi-lexical Semantic Dependency Graphs (SDG; Graph-based Representations for English Resource Grammar (ERG; In this section, we present a summary of different parsing technologies for graph-based meaning representations in addition to the ones discussed in 2.2, with a focus on English Resource Grammar (ERG). Grammar-based approach In this type of approach, a semantic graph is derived according to a set of lexical and syntactico-semantic rules. For ERG parsing, sentences are parsed to HPSG derivations consistent with ERG. The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. The parser has a default parse ranking procedure trained on a treebank, where maximum entropy models are used to score the derivations in order to find the most likely parse. However, this approach fails to parse sentences for which no valid derivation is found Factorization-based approach This type of approach is inspired by graph-based dependency tree parsing (2019) presented a four-stage pipeline to incrementally construct an ERG graph, whose core idea is similar to previous work. Transition-based approach In these parsing systems, the meaning representations graph is generated via a series of actions, in a process that is very similar to dependency tree parsing Composition-based approach Following a principle of compositionality, a semantic graph can be viewed as the result of a derivation process, in which a set of lexical and syntactico-semantic rules are iteratively applied and evaluated. For ERG parsing, based on Chen et al. ( Translation-based approach This type of approach is inspired by the success of seq2seq models which are the heart of modern Neural Machine Translation. A translation-based parser encodes and views a target semantic graph as a string from another language. 
In a broader context of graph semantic parsing, simply applying seq2seq models is not successful, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges Given the candidates graphs {G k } K k=1 , we can express the likelihood for p(v| pa(v), x) by writing down a multinomial likelihood enumerating over different values of pa(v) p(n|(e 1 , e 2 ) = c, x) Kc where Candidate(e) is the collection of possible symbols s the variable e can take, and K c is the number of times (e 1 , e 2 ) takes a particular value c ∈ Candidate(e 1 , e 2 ) = Candidate(e 1 ) × Candidate(e 2 ). Then, the log likelihood becomes: To simplify this above expression, we notice that log p(n| pa(n), x) can be divided by the constant beam size K without impacting the inference. As a result, the log probability can be computed by simplify averaging the values of log p(v| pa(v) = c k ) across the beam candidates: where c k is the value of (e 1 , e 2 ) in k th beam candidate. E.1 Infer Sparse Global Structure via Likelihood-based Pruning In practice, the meta graphG can contain spurious elements v that have a high local likelihoods log p(v| pa(v), x) but very low global probabilities p(v|x). This happens when the element v only appears in a few low-probability beam sequences. These spurious nodes and edges often adds redundancy to the generated graph (i.e., hurting precision), and cannot be eliminated by the neural-symbolic inference procedure, due to their high local conditional probability p(v| pa(v), x). Consequently, we find it empirically effective to perform sparse structure inference forG based on global probabilities p(v|x) before diving into local neural-symbolic prediction for graph components. In this work, we carry out this global structure inference by considering a simple threshold-andproject procedure, i.e., pruning out all the graph elements whose global probability ||p(v|x)|| ∞ = max s∈Candidate(v) p(v = s|x) is lower than a threshold t, but will keep v if its removal will lead to an invalid graph with disconnected subcomponents. Here ||p(v|x)|| ∞ is the total variation metric that returns the maximum probability. Algorithm 3 summarizes this procedure. From a theoretical perspective, this is equivalent to finding the most sparse solution with respect to threshold t within the space of valid (i.e., connected) subgraphs ofG. In some rare cases where the input sentence is fragmented or ill-formed, the neural model may output multiple beam sequences with drastically different high-level structures, creating difficulty for the graph merging procedure (See Figure We can handle this multi-modality in observed graph structure by extending p(G|x) to be a mixture of GAP distributions, so that the graphical model likelihood becomes: where p(m|x) is a categorical distribution over the mixture components m ∈ M . Here each component m induce a meta graph G m for graph Given beam sequences {g k } K k=1 , the mixture components can be estimated using a standard clustering algorithm based on an edit distance between beam candidate g k . Based on our experiments, hierarchical agglomerative clustering (HAC) combined with the longest common subsequence (LCS) distance often leads to the best result. After clustering, p(m|x) is computed as the empirical probability of beam sequences belonging the m th cluster, and the meta graph G m is computed by applying the graph merging procedure to the beam sequences in the m th cluster. 
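A sketch of the clustering step described above: beam candidates are grouped into mixture components with hierarchical agglomerative clustering over a longest-common-subsequence distance. The average-linkage criterion, the distance threshold, and the toy token sequences are illustrative choices, not values taken from the paper.

```python
import itertools
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def lcs_length(a, b):
    """Longest common subsequence length via standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_distance(a, b):
    return 1.0 - lcs_length(a, b) / max(len(a), len(b), 1)

def cluster_beams(beam_token_seqs, threshold=0.5):
    """Group beam candidates into mixture components with average-linkage HAC."""
    n = len(beam_token_seqs)
    dist = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        dist[i, j] = dist[j, i] = lcs_distance(beam_token_seqs[i], beam_token_seqs[j])
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=threshold, criterion="distance")

if __name__ == "__main__":
    beams = [["_the_q", "ARG1", "_dog_n_1"],
             ["_the_q", "ARG1", "_cat_n_1"],
             ["unknown", "_frag_x"],                     # structurally different parse
             ["_the_q", "ARG1", "_dog_n_1", "ARG2"]]
    print(cluster_beams(beams))   # e.g. two components: {0, 1, 3} and {2}
```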
To conduct neural symbolic inference, we also need to define the symbolic prior p 0 for the mixture distribution: where p 0 (v = s) ∝ exp(I(s ∈ G 0 )) as define previously, and we define p 0 (m) = exp(-SMATCH(G m , G 0 )) following the previous work As a result, the decision criteria for neuralsymbolic inference under the mixture model becomes: where v∈Gm R(v|x) is the component-wise decision criteria as defined in the main text, and R(m|x) is the additional term for the mixture components: where α(m|x) = σ(-1 T H(m|x) + b) is the tradeoff parameter driven by the average log likelihood of beam sequences in the m th cluster C m , i.e., H(m|x) = 1 |Cm| g k ∈Cmlog(g k |x). During inference, we can again proceed in a greedy fashion, first select the optimal m based on R(m|x), and then perform compositional neuralsymbolic inference with respect to G m using v∈G m R(v|x). As a result, the complete precedure with all optional extensions are shown in Algorithm 4. In general, finding the largest common subgraph is a well-known computationally intractable problem in graph theory. However, for graph parsing problems where graphs have labels and a simple tree-like structure, some efficient heuristics are proposed to approximate the best match by a hillclimbing algorithm Local node / edge prediction via compostitional neural-symbolic inference (Algorithm 1) G Details for OOD Datasets Wikipedia (Wiki) The DeepBank team constructed a treebank for 100 Wikipedia articles on Computational Linguistics and closely related topics. The treebank of 11,558 sentences comprises 16 sets of articles. The corpus contains mostly declarative, relatively long sentences, along with some fragments. The Brown Corpus (Brown) The Brown Corpus was a carefully compiled selection of current American English, totalling about a million words drawn from a wide variety of sources. The Eric Raymond Essay (Essay) The treebank is based on translations of the essay "The Cathedral and the Bazaar" by Eric Raymond. The average length and the linguistic complexity of these sentences is markedly higher than the other treebanked corpora. E-commerce While the ERG was being used in a commercial software product developed by the YY Software Corporation for automated response to customer emails, a corpus of training and test data was constructed and made freely available, consisting of email messages composed by people pretending to be customers of a fictional consumer products online store. The messages in the corpus fall into four roughly equal-sized categories: Product Availability, Order Status, Order Cancellation, and Product Return. Meeting/hotel scheduling (Verbmobil) This dataset is a collection of transcriptions of spoken dialogues, each of which reflected a negotiation either to schedule a meeting, or to plan a hotel stay. One dialogue usually consists of 20-30 turns, with most of the utterances relatively short, including greetings and closings, and not surprisingly with a high frequency of time and date expressions as well as questions and sentence fragments. Norwegian tourism (LOGON) The Norwegian/English machine translation research project LOGON acquired for its development and evaluation corpus a set of tourism brochures originally written in Norwegian and then professionally translated into English. The corpus consists almost entirely of declarative sentences and many sentence fragments, where the average number of tokens per item is higher than in the Verbmobil and Ecommerce data. 
The Tanaka Corpus (Tanaka) This treebank is based on parallel Japanese-English sentences, which was adopted to be used with in the WWWJDIC dictionary server as a set of example sentences associated within words in the dictionary. We use the open-sourced T5X 4 , which is a new and improved implementation of T5 codebase in JAX and Flax. Specifically, we use the official pretrained T5-Large (770 million parameters), which is the same size as the one used in Hyperparameters For the trade-off parameter α(v|x) = σ(-1 T H(v|x) + b), we set temperature T = 0.1 and bias b = 0.25. ACE parser and other data-driven parsers. The baseline models also include a similar practice with From the table we can see that our methods outperforms the base model (T5-based) and most of the previous work. Specifically, we achieves a SMATCH score of 96.77, which is a 6.11% error reduction compared to the base T5 parser. Lexical construction ERG uses the abstract node compound to denote compound words. The edge labeled with ARG1 refers to the root of the compound word, and thus can help to further distinguish the type of the compound into (1) nominal with normalization, e.g., "flag burning"; (2) nominal with noun, e.g., "pilot union"; (3) verbal, e.g., "state-owned"; (4) named entities, e.g., "West Germany". Argument structure In ERG, there are different types of core predicates in argument structures, specifically, verbs, nouns and adjectives. We also categorize verb in to basic verb (e.g., _look_v_1) and verb particle constructions (e.g., _look_v_up). The verb particle construction is handled semantically by having the verb contribute a relation particular to the combination. Coreference ERG resolves sentence-level coreference, i.e., if the sentence referring to the same entity, the entity will be an argument for all the nodes that it is an argument of, e.g., in the sen-
CREST: A Joint Framework for Rationalization and Counterfactual Text Generation
|
Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model's predictions.
|
As NLP models have become larger and less transparent, there has been a growing interest in developing methods for finer-grained interpretation and control of their predictions. One class of methods leverages selective rationalization as improved robustness to input perturbations This paper is motivated by the observation that selective rationales and counterfactual examples allow for interpreting and controlling model behavior through different means: selective rationalization improves model transparency by weaving interpretability into a model's internal decisionmaking process, while counterfactual examples provide external signal more closely aligned with human causal reasoning We propose to combine both methods to leverage their complementary advantages. We introduce CREST (ContRastive Edits with Sparse raTionalization), a joint framework for rationalization and Figure • We present CREST-Generation (Figure • We introduce CREST-Rationalization (Figure • We show that CREST-generated counterfactuals can be effectively used to increase model robustness, leading to larger improvements on contrast and out-of-domain datasets than using manual counterfactuals ( §6.2, Tables • We find that rationales trained with CREST-Rationalization not only are more plausible, but also achieve higher forward and counterfactual simulabilities ( §6.3, Table Overall, our experiments show that CREST successfully combines the benefits of counterfactual examples and selective rationales to improve the quality of each, resulting in a more interpretable and robust learned model.
|
The traditional framework of rationalization involves training two components cooperatively: the generator-which consists of an encoder and an explainer-and the predictor. The generator encodes the input and produces a "rationale" (e.g., word highlights), while the predictor classifies the text given only the rationale as input Assume a document x with n tokens as input. The encoder module (enc) converts the input tokens into d-dimensional hidden state vectors H ∈ R n×d , which are passed to the explainer (expl) to generate a latent mask z ∈ {0, 1} n . The latent mask serves as the rationale since it is used to select a subset of the input x ⊙ z, which is then passed to the predictor module (pred) to produce a final prediction ŷ ∈ Y, where Y = {1, ..., k} for k-class classification. The full process can be summarized as follows: where ϕ, γ, θ are trainable parameters. To ensure that the explainer does not select all tokens (i.e., z i = 1, ∀i), sparsity is usually encouraged in the rationale extraction. Moreover, explainers can also be encouraged to select contiguous words, as there is some evidence that it improves readibility In this work, we will focus specifically on the SPECTRA rationalizer In NLP, counterfactuals refer to alternative texts that describe a different outcome than what is encoded in a given factual text. Prior works • Validity: the generated counterfactuals should encode a different label from the original text. • Closeness: the changes made to the text should be small, not involving large-scale rewriting of the input. • Fluency: the generated counterfactuals should be coherent and grammatically correct. • Diversity: the method should generate a wide range of counterfactuals with diverse characteristics, rather than only a limited set of variations. While many methods for automatic counterfactual generation exist We now introduce CREST (ContRastive Edits with Sparse raTionalization), a framework that combines selective rationalization and counterfactual text generation. CREST has two key components: (i) CREST-Generation offers a controlled approach to generating counterfactuals, which we show are valid, fluent, and diverse ( §4.2); and (ii) CREST-Rationalization leverages these counterfactuals through a novel regularization technique encouraging agreement between rationales for original and counterfactual examples. We demonstrate that combining these two components leads to models that are more robust ( §6.2) and interpretable ( §6.3). We describe CREST-Generation below and CREST-Rationalization in §5. Formally, let x = ⟨x 1 , ..., x n ⟩ represent a factual input text with a label y f . We define a counterfactual as an input x = ⟨x 1 , ..., x m ⟩ labeled with y c such that y f ̸ = y c . A counterfactual generator is a mapping that transforms the original text x to a counterfactual x. Like MiCE, our approach for generating counterfactuals consists of two stages, as depicted in Figure Mask stage. We aim to find a mask vector z ∈ {0, 1} n such that tokens x i associated with z i = 1 are relevant for the factual prediction ŷf of a particular classifier C. To this end, we employ a SPECTRA rationalizer as the masker. Concretely, we pre-train a SPECTRA rationalizer on the task at hand with a budget constraint B, and define the mask as the rationale vector z ∈ {0, 1} n (see §2.1). Edit stage. Here, we create edits by infilling the masked positions using an editor module G, such as a masked language model: x ∼ G LM (x ⊙ z). 
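As a concrete (and simplified) illustration of the rationalization pipeline from §2.1 and its role as the masker here, the sketch below wires an encoder, explainer, and predictor together and then uses the resulting binary rationale z to build the masked input x ⊙ z that is handed to the editor. The module internals (in particular SPECTRA's budget-constrained explainer) are abstracted behind placeholder modules; all names and shapes are hypothetical.

```python
# Minimal sketch of the generator (encoder + explainer) / predictor pipeline and
# of turning the rationale z into a masked input for the editor.
# Placeholder nn.Modules stand in for the actual SPECTRA components.
import torch
import torch.nn as nn

class Rationalizer(nn.Module):
    """Mirrors the enc -> expl -> pred pipeline described in Section 2.1."""
    def __init__(self, encoder: nn.Module, explainer: nn.Module, predictor: nn.Module):
        super().__init__()
        self.encoder, self.explainer, self.predictor = encoder, explainer, predictor

    def forward(self, x: torch.Tensor):
        # x: token representations, shape (n, d_emb)
        H = self.encoder(x)                          # hidden states H, shape (n, d)
        z = self.explainer(H)                        # (approximately) binary rationale, shape (n,)
        y_hat = self.predictor(x * z.unsqueeze(-1))  # predict from the rationale x ⊙ z
        return y_hat, z

def mask_for_editor(tokens, z, mask_token="<mask>"):
    """Build the editor input: rationale positions (z_i = 1) become mask symbols
    that the editor will later infill to produce the counterfactual."""
    return [mask_token if float(zi) > 0.5 else tok for tok, zi in zip(tokens, z)]
```

In practice the three sub-modules would be the pre-trained SPECTRA encoder, explainer, and predictor; the sketch only fixes the interfaces between them.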
In order to infill spans rather than single tokens, we follow MiCE and use a T5-based model to infill In order to generate counterfactual edits at test time, we prepend a counterfactual label y c instead, and sample counterfactuals using beam search. Overall, our procedure differs from that of MiCE in the mask stage: instead of extracting a mask via gradient-based attributions and subsequent binary search, we leverage SPECTRA to find an optimal mask. Interestingly, by doing so, we not only avoid the computationally expensive binary search procedure, but we also open up new opportunities: as our masking process is differentiable, we can optimize our masker to enhance the quality of both the counterfactuals ( §4.2) and the selected rationales ( §6.3). We will demonstrate the latter with our proposed CREST-Rationalization setup ( §5). All implementation details for the masker and the editor can be found in §B. This section presents an extensive comparison of counterfactuals generated by different methods. We use the IMDB and SNLI datasets to train SPECTRA rationalizers with and without counterfactual examples, and further evaluate on in-domain, contrast and out-of-domain (OOD) datasets. For IMDB, we evaluate on the revised IMDB, contrast IMDB, RottenTomatoes, SST-2, Amazon Polarity, and Yelp. For SNLI, we evaluate on the Hard SNLI, revised SNLI, break, MultiNLI, and Adversarial NLI. Dataset details can be found in §A. To produce CREST counterfactuals, which we refer to as "synthetic", we use a 30% masking budget as it provides a good balance between validity, fluency, and closeness (cf. Figure Results are presented in Table For SNLI, this modification allows MiCE to achieve the best overall scores, closely followed by CREST. However, when controlling for closeness, we observe that CREST outperforms MiCE: at closeness of ∼0.30, CREST (30% mask) outperforms MiCE with binary search in terms of fluency and diversity. Similarly, at a closeness of ∼0.40, CREST (50% mask) surpasses MiCE (30% mask) across the board. As detailed in §C, CREST's counterfactuals are more valid than MiCE's for all closeness bins lower than 38%. We provide examples of counterfactuals produced by CREST and MiCE in Appendix G. Finally, we note that CREST is highly affected by the masking budget, which we explore further next. Sparsity analysis. We investigate how the number of edits affects counterfactual quality by training maskers with increasing budget constraints (as described in §2.1). The results in Figure Validity filtering. As previously demonstrated by We conduct a small-scale human study to evaluate the quality of counterfactuals produced by MiCE and CREST with 50% masking percentage. Annotators were tasked with rating counterfactuals' validity and naturalness (e.g., based on style, tone, and grammar), each using a 5-point Likert scale. Two fluent English annotators rated 50 examples from the IMDB dataset, and two others rated 50 examples from SNLI. We also evaluate manually created counterfactuals to establish a reliable baseline. More annotation details can be found in §D. The study results, depicted in Figure Now that we have a method that generates highquality counterfactual examples, a natural step is to use these examples for data augmentation. However, vanilla data augmentation does not take advantage of the paired structure of original/contrastive examples and instead just treats them as individual datapoints. 
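For illustration, the snippet below shows one way the label-prepended, sentinel-masked editor input described above could be constructed and decoded with beam search using Hugging Face's T5. The prompt format, checkpoint, and decoding settings here are placeholders rather than CREST's exact configuration.

```python
# Hypothetical sketch of the edit stage: convert contiguous masked spans to T5
# sentinel tokens, prepend the (counterfactual) target label, and decode with
# beam search. Prompt format and hyperparameters are illustrative only.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def build_editor_input(tokens, z, label):
    """Replace each contiguous masked span (z_i = 1) with a T5 sentinel token."""
    pieces, sentinel_id, i = [], 0, 0
    while i < len(tokens):
        if z[i] == 1:
            pieces.append(f"<extra_id_{sentinel_id}>")
            sentinel_id += 1
            while i < len(tokens) and z[i] == 1:
                i += 1
        else:
            pieces.append(tokens[i])
            i += 1
    return f"label: {label}. text: " + " ".join(pieces)   # assumed prompt format

def generate_edit(tokens, z, counterfactual_label, num_beams=4):
    prompt = build_editor_input(tokens, z, counterfactual_label)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, num_beams=num_beams, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Stitching the generated span fills back into the masked text, and the validity filtering discussed above, are omitted here for brevity.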
In this section, we present CREST's second component, CREST-Rationalization (illustrated in Figure We propose to incorporate counterfactuals into a model's functionality by taking advantage of the fully differentiable rationalization setup. Concretely, we decompose a rationalizer into two flows, as depicted in Figure Training. Let Θ = {ϕ, γ, θ} represent the trainable parameters of a rationalizer (defined in §2.1). We propose the following loss function: where L f (•) and L c (•) represent cross-entropy losses for the factual and counterfactual flows, respectively, and Ω(•) is a novel penalty term to encourage factual and counterfactual rationales to focus on the same positions, as defined next. α ∈ R and λ ∈ R are hyperparameters. Agreement regularization. To produce paired rationales for both the factual and counterfactual flows, we incorporate regularization terms into the training of a rationalizer to encourage the factual explainer to produce rationales similar to those originally generated by the masker z ⋆ , and the counterfactual explainer to produce rationales that focus on the tokens modified by the editor z⋆ . We derive the ground truth counterfactual rationale z⋆ by aligning x to x and marking tokens that were inserted or substituted as 1, and others as 0. The regularization terms are defined as: (4) To allow the counterfactual rationale z to focus on all important positions in the input, we adjust the budget for the counterfactual flow based on the length of the synthetic example produced by the counterfactual generator. Specifically, we multiply the budget by a factor of In this section, we evaluate the effects of incorporating CREST-generated counterfactuals into training by comparing a vanilla data augmentation approach with our CREST-Rationalization approach. We compare how each affects model robustness ( §6.2) and interpretability ( §6.3). Tables Examining the results for NLI in Table Overall, these observations imply that CREST-Rationalization is a viable alternative to data augmentation for improving model robustness, especially for learning contrastive behavior for sentiment classification. In the next section, we explore the advantages of CREST-Rationalization for improving model interpretability. In our final experiments, we assess the benefits of our proposed regularization method on model inter- pretability. We evaluate effects on rationale quality along three dimensions: plausibility, forward simulability, and counterfactual simulability. Plausibility. We use the MovieReviews Forward simulability. Simulability measures how often a human agrees with a given classifier when presented with explanations, and many works propose different variants to compute simulability scores in an automatic way Counterfactual simulability. Building on the manual simulability setup proposed by We define counterfactual simulability as follows: where Results. The results of our analysis are shown in Table Generating counterfactuals. Existing approaches to generating counterfactuals for NLP use heuristics Training with counterfactuals. Existing approaches to training with counterfactuals predominantly leverage data augmentation. Priors works have explored how augmenting with both manual Rationalization. There have been many modifications to the rationalization setup to improve task accuracy and rationale quality. 
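Stepping back to the CREST-Rationalization objective defined earlier in this section, the sketch below illustrates one way the combined loss could be assembled: factual and counterfactual cross-entropy terms plus an agreement penalty pulling the factual rationale toward the masker's rationale z⋆ and the counterfactual rationale toward the edited positions. The distance used inside Ω (a mean absolute difference here) is an assumption, since the exact regularizer is not reproduced in this excerpt; the α and λ values are those reported for IMDB in the appendix.

```python
# Illustrative sketch of the CREST-Rationalization loss
# L = L_f + alpha * L_c + lambda * Omega. The agreement penalty below uses a
# mean absolute difference between rationales as a stand-in for the paper's
# exact regularizer, which is not reproduced in this excerpt.
import torch
import torch.nn.functional as F

def crest_rationalization_loss(
    logits_f, y_f,          # factual predictions and labels
    logits_c, y_c,          # counterfactual predictions and labels
    z_f, z_star,            # factual rationale and masker rationale z*
    z_c, z_edit,            # counterfactual rationale and edited-position mask
    alpha=0.01, lam=0.001,  # hyperparameters reported for IMDB in the appendix
):
    loss_f = F.cross_entropy(logits_f, y_f)
    loss_c = F.cross_entropy(logits_c, y_c)
    omega = (z_f - z_star).abs().mean() + (z_c - z_edit).abs().mean()
    return loss_f + alpha * loss_c + lam * omega
```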
Some examples include conditioning the rationalization on pre-specified labels We proposed CREST, a joint framework for selective rationalization and counterfactual text generation that is capable of producing valid, fluent, and diverse counterfactuals, while being flexible for controlling the amount of perturbations. We have shown that counterfactuals can be successfully incorporated into a rationalizer, either via counterfactual data augmentation or agreement regularization, to improve model robustness and rationale quality. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model's predictions. Our work shows that CREST is a suitable framework for generating high-quality counterfactuals and producing plausible rationales, and we hope that CREST motivates new research to develop more robust and interpretable models. We note, however, two main limitations in our framework. First, our counterfactuals are the result of a large language model (T5), and as such, they may carry all the limitations within these models. Therefore, caution should be exercised when making statements about the quality of counterfactuals beyond the metrics reported in this paper, especially if these statements might have societal impacts. Second, CREST relies on a rationalizer to produce highlights-based explanations, and therefore it is limited in its ability to answer interpretability questions that go beyond the tokens of the factual or counterfactual input. The revised IMDB and SNLI datasets, which we refer to as rIMDB and rSNLI respectively, were created by For SNLI, counterfactuals were created either by revising the premise or the hypothesis. We refer to For all datasets, the masker consists of a SPEC-TRA rationalizer that uses a T5-small encoder as the backbone for the encoder and predictor (see §2.1). Our implementation is derived directly from its original source For all datasets, CREST and MiCE editors consist of a full T5-small model All of our SPECTRA rationalizers share the same setup and training hyperparameters as the one used by the masker in §4, but were trained with distinct random seeds. We tuned the counterfactual loss weight α within {1.0, 0.1, 0.01, 0.001, 0.0001}, and λ within {1.0, 0.1, 0.01, 0.001} for models trained with agreement rationalization. More specifically, we performed hyperparameter tuning on the validation set, with the goal of maximizing in-domain accuracy. As a result, we obtained α = 0.01 and λ = 0.001 for IMDB, and α = 0.01 and λ = 0.1 for SNLI. To better assess the performance of CREST and MiCE by varying closeness, we plot in Figure The annotation task was conducted by four distinct individuals, all of whom are English-fluent PhD students. Two annotators were employed for IMDB and two for SNLI. The annotators were not given any information regarding the methods used to create each counterfactual, and the documents were presented in a random order to maintain source anonymity. The annotators were presented with the reference text and its corresponding gold label. Subsequently, for each method, they were asked to assess both the validity and the naturalness of the resulting counterfactuals using a 5-point Likert scale. We provided a guide page to calibrate the annotators' understating of validity and naturalness prior the annotation process. 
We presented hypothetical examples with different levels of validity and naturalness and provided the following instructions regarding both aspects: • "If every phrase in the text unequivocally suggests a counterfactual label, the example is deemed fully valid and should receive a top score of 5/5." • "If the counterfactual text aligns with the style, tone, and grammar of real-world examples, it's considered highly natural and deserves a score of 5/5." We measure inter-annotator agreement with a normalized and inverted Mean Absolute Difference (MAD), which computes a "soft" accuracy by averaging absolute difference ratings and normalizing them to a 0-1 range. We present the annotation results in annotators assigned similar scores across all methods. In terms of overall metrics, including validity, naturalness, and agreement, the scores were lower for IMDB than for SNLI, highlighting the difficulty associated with the generation of counterfactuals for long movie reviews. Annotation interface. Figure Previous studies on counterfactual data augmentation have found that model performance highly depends on the number and diversity of augmented samples Discussion. We find that incorporating humancrafted counterfactuals (F + C H ) improves SPEC-TRA performance on all OOD datasets. On top of that, we note that using a small proportion (4% of the full IMDB) of valid CREST counterfactuals (F + C S,V ) through data augmentation also leads to improvements on all datasets and outweighs the benefits of manual counterfactuals. This finding confirms that, as found by PolyJuice Our infrastructure consists of four machines with the specifications shown in Table Table
Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task
|
Pretraining and multitask learning are widely used to improve the speech to text translation performance. In this study, we are interested in training a speech to text translation model along with an auxiliary text to text translation task. We conduct a detailed analysis to understand the impact of the auxiliary task on the primary task within the multitask learning framework. Our analysis confirms that multitask learning tends to generate similar decoder representations from different modalities and preserve more information from the pretrained text translation modules. We observe a minimal negative transfer effect between the two tasks, and sharing more parameters is helpful to transfer knowledge from the text task to the speech task. The analysis also reveals that the modality representation difference at the top decoder layers is still not negligible, and those layers are critical for the translation quality. Inspired by these findings, we propose three methods to improve translation quality. First, a parameter sharing and initialization strategy is proposed to enhance information sharing between the tasks. Second, a novel attention-based regularization is proposed for the encoders and pulls the representations from different modalities closer. Third, online knowledge distillation is proposed to enhance the knowledge transfer from the text to the speech task. Our experiments show that the proposed approach improves translation performance by more than 2 BLEU over a strong baseline and achieves state-of-the-art results on the MUST-C English-German, English-French and English-Spanish language pairs.
|
End-to-end methods have achieved significant progress in speech to text translation (ST) and even surpassed the traditional pipeline-based methods in some applications In this study, we focus on training the ST model along with an auxiliary text to text machine translation (MT) task. We are interested in the task interactions with different modalities and in improving the primary ST task with the help from the auxiliary MT task. The model is initialized with pretrained modules from automatic speech recognition (ASR) and MT. Two types of analysis are conducted on the fine-tuned multitask learned models. The first focuses on the model variation by comparing fine-tuned models with pretrained models for different tasks. The second aims to measure internal representation differences due to different modalities. The analysis leads to three main findings. First, the analysis confirms that MTL tends to generate similar model representations for different input modalities and preserves more information from the pretrained MT modules. Second, we do not observe significant negative transfer effect from the MT task to the corresponding ST task. Sharing more parameters is helpful to transfer knowledge to the primary ST task. Finally, the top layers in the ST decoder are more critical to the translation performance and they are also more sensitive to the modality difference. The model representations from different modalities demonstrate larger difference for the top layers in our analysis. Inspired by these findings, we propose three techniques to enhance the performance of the primary ST task. First, we propose to maximize parameter sharing between the ST and MT tasks, i.e. the entire decoder and the top encoder layers. Those shared parameters are initialized with the corresponding MT models. Second, a cross-attentive regularization is introduced for the encoders. It minimizes the L2 distance between two reconstructed encoder output sequences and encourages the encoder outputs from different modalities to be closer to each other. Finally, an online knowledge distillation learning is introduced for MTL in order to enhance knowledge transfer from the MT to the ST task. Our contributions are summarized as follows: 1. A detailed analysis is conducted on the interaction between the primary ST task and the auxiliary MT task. 2. A parameter sharing and initialization strategy are proposed to encourage information sharing between tasks. 3. Cross-attentive regularization and online knowledge distillation are proposed to reduce the model representation difference between different modalities and enhance the knowledge transfer from the MT task to the ST task. 4. Our system achieves state of the art results on the MUST-C English-German (EN-DE), English-French (EN-FR) and English-Spanish (EN-ES) language pairs, with 2 or more BLEU gains over strong baselines.
|
The proposed ST system is co-trained with the MT task as depicted in Figure . The model has two encoders, a text encoder and a speech encoder, to take text and speech input respectively. The decoder is shared between the two tasks. To encourage knowledge sharing between the two tasks, the top encoder layers are also shared. The parameters of the shared modules are initialized with a pretrained MT model. A novel cross-attentive regularization is proposed to reduce the distance between encoder outputs from different input modalities. We also introduce a novel online knowledge distillation method where the output from the auxiliary MT task is used to guide the ST model training. The cross-attentive regularization and online knowledge distillation are illustrated as orange modules in Figure .

The cross-attentive regularization (CAR) is proposed to increase the similarity between the text encoder outputs and their corresponding speech encoder outputs. Hence, the performance of the more difficult ST task can be improved by learning from the relatively easier MT task. Encoder output sequences from different modalities cannot be compared directly since they have different lengths. In CAR, the two reconstructed sequences are calculated either from the text output sequence via self-attention or from the speech output sequence via cross-attention over the text output sequence. The two reconstructed sequences have the same length, and the distance is simply measured as the L2 distance between the two sequences. Formally, we denote a speech to text translation training sample as a triplet o = (X s , x t , y). X s ∈ R ds×N , x t ∈ R M , and y ∈ R K are the speech feature input, text token input and target text output respectively. N , M and K are the corresponding sequence lengths. Assume H s and H t are the output sequences from the speech encoder and text encoder respectively, where d h is the dimension of the output states. A similarity matrix S ∈ R N ×M is defined via the cosine distance between the vectors of the two sequences, where s i,j , the i-th row and j-th column component of S, is the cosine similarity between the i-th speech encoder output state and the j-th text encoder output state. The text encoder outputs H t are reconstructed from the speech encoder outputs H s using attention weights derived from the similarity matrix S, yielding the cross-modal reconstruction H s→t . H t→t , the reconstruction of H t from itself, can be computed similarly via self-attention. CAR is then defined as the L2 distance between the two reconstructed encoder output sequences, ||H s→t − sg[H t→t ]|| 2 , where sg[•] is the stop-gradient operator and θ s are the ST model parameters. By optimizing the model with CAR, the speech encoder is encouraged to learn from the more accurate text encoder and to generate similar encoder outputs after reconstruction. CAR is inspired by the attention mechanism between the encoder and decoder, where the decoder states are reconstructed through encoder output states via the attention mechanism.

Knowledge distillation (KD) is widely used for model compression. The accuracy of the MT model is usually much higher than that of the corresponding ST model. Knowledge distillation from a well-trained MT model to an ST model has been proved to be an effective way to improve the ST performance. The ST model is trained with the standard negative log likelihood loss − Σ k Σ v δ(y k = v) log p(y k = v | y <k , X s , θ s ), where δ(•) is the indicator function and p the distribution from the ST model (parameterized by θ s ).
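Before turning to the distillation loss, here is a minimal PyTorch sketch of CAR as described above: a cosine-similarity matrix between speech and text encoder states provides attention weights, the text sequence is reconstructed once from the speech states and once from itself, and the L2 distance between the two reconstructions (with a stop-gradient on the text-side reconstruction) is the penalty. Using a softmax over the cosine similarities as the attention normalization is an assumption of this sketch.

```python
# Illustrative sketch of cross-attentive regularization (CAR).
# H_s: speech encoder outputs (N, d_h); H_t: text encoder outputs (M, d_h).
# The softmax over cosine similarities is an assumption made for this sketch.
import torch
import torch.nn.functional as F

def car_loss(H_s: torch.Tensor, H_t: torch.Tensor) -> torch.Tensor:
    # Similarity matrix S (N x M): cosine similarity between speech and text states.
    S = F.normalize(H_s, dim=-1) @ F.normalize(H_t, dim=-1).T
    # Reconstruct the text sequence from the speech states (cross-modal), shape (M, d_h).
    H_s_to_t = torch.softmax(S, dim=0).T @ H_s
    # Reconstruct the text sequence from itself via self-attention, shape (M, d_h).
    S_tt = F.normalize(H_t, dim=-1) @ F.normalize(H_t, dim=-1).T
    H_t_to_t = torch.softmax(S_tt, dim=0).T @ H_t
    # L2 distance with a stop-gradient on the text-side reconstruction, so the
    # speech encoder is the one pulled toward the text representation.
    return F.mse_loss(H_s_to_t, H_t_to_t.detach())
```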
Assume the probability distribution for y k given text input x t and MT model θ t is q(y k = v|y <k , x t , θ t ), the knowledge distillation loss is defined as minimizing the cross-entropy with the MT's probability distribution The overall loss is the combination of crossattentive regularization, knowledge distillation loss, negative log likelihood loss for both ST and MT, as follows: where α and λ are predefined hyper-parameters. Experiments are conducted on three MUST-C (Gangi et al., 2019a) language pairs: EN-DE, EN-ES and EN-FR. The models are developed and analyzed on the dev set and the final results are reported on the tst-COMMON set. We use WMT parallel data from different years, 2013 for Spanish, 2014 for German, and 2016 for French, as extra text training corpus for MTL. Case-sensitive detokenized BLEU is reported by SACREBLEU with default options We use the "T-Md" configuration from The Adam optimizer (Kingma and Ba, 2014) with a learning rate 0.002 is employed in the experiments. Label smoothing and dropout rate are both set to 0.1. We choose α = 0.8 and λ = 0.02 in Equation 6 through grid search ([0.1, 1.0] for α and [0.01, 0.05] for λ). Input speech is represented as 80D log melfilterbank coefficients computed every 10ms with a 25ms window. Global channel mean and variance normalization is applied. The SpecAugment All ST or jointly trained models are initialized with pretrained ASR and MT modules. The ASR model is trained on the same English speech training data from MUST-C with the "T-Md" configuration too. The pretrained MT models are trained for each language pair with the aforementioned WMT data. The MT encoder and decoder configurations are the same as the text encoder and decoder in the MTL model mentioned above. The models are fine-tuned to 100 epochs using 8 V100 GPUs for approximate one day. The batch size is 10,000 frames for speech to text translation samples and 10,000 tokens for parallel text samples per GPU. The models are trained with FAIRSEQ We extend Chatterji et al. ( Table The BLEU difference for the top encoder layer is down from 20.2 to 17.6 when the parameters are replaced with the ones in the pretrained ASR encoder. It is further reduced to 10.0 if the shared layers are initialized with MT encoder layers. The BLEU differences in the decoder layers are mixed. The performance of "JT-S-ASR" degrades quickly in the criticality test for the top decoder layer, while "JT-S-MT performs similarly in the test as "JT" decoder. We argue that the top layers in the finetuned ST encoder might be closer to the MT encoder than the ASR encoder. It preserves more information from the MT task by sharing more parameters between two tasks and initializing them with pretrained MT modules. This is a desirable property since we want to transfer more knowledge from the text corpus to the ST task. The jointly trained model takes input from two modalities, i.e. text or speech, and we are interested in the model internal representation difference for paired inputs. Given text target y, we extract the decoder hidden state representations for the corresponding text input x t and speech input X s . The decoder representation difference solely comes from different input modalities. The difference is quantified by the correlation coefficient over all samples evaluated between two input modalities: where σ z (l, d), z ∈ [s, t] is the standard deviations of decoder hidden states at layer l for component d in all samples, and σ st (l, d) is the corresponding covariance. 
The layer-wise correlation coefficient is the average of all components: Figure 6 Experimental Results The main ST results are presented in Table The second column ("pars(m)") lists the number of parameters used during inference. From Table Overall, sharing top encoder layers can increase BLEU by 0.2∼0.7 ("JT-S-MT" v.s. "JT"). CAR further improves the translation by another 0.3∼0.9 BLEU. The best results are achieved by applying the shared top encoder layers, CAR and online KD together. They are about 2.9+ BLEU better than the single task based system ("ST") and achieve 2+ BLEU increase on top of the strong vanilla joint training system("JT"). Figure In MLT, many works The results show the extra adapter layer doesn't bring gain while the task dependent attention module actually makes the performance worse. It indicates that the negative transfer effect is not significant in this study and adding extra task-dependent components might not be necessary. As shown in Table "MT (JT)" and "MT (JT Proposed)" are results from the co-trained MT models in "JT" and "JT Proposed" respectively. After fine-tuning using both MuST-C (speech and text) and WMT (text only) training data, the auxiliary MT models perform better than the corresponding ST models. The proposed techniques further improve the co-trained MT models by 0.7∼1.6 BLEU. While this is a surprising result, we note that the dedicated MT models may be improved with better hyperparameter tuning. In conclusion, the results show the proposed methods are effective to unify two tasks into one model with minimal negative transfer effect. In this study, we focus on understanding the interactions between the ST and MT tasks under the MTL framework, and on boosting the performance of the primary ST model with the auxiliary MT task. Two types of analysis on model variation and modality variation, are conducted on the MTL models. The analysis demonstrates MTL helps to preserve information from the MT task and generates similar model representations for different modalities. We observe a minimal negative transfer effect between the two tasks. Sharing more parameters can further boost the information transfer from the MT task to the ST model. The analysis also reveals that the model representation difference due to modality difference is nontrivial, especially for the top decoder layers, which are critical for the translation performance. Inspired by the findings, we propose three techniques to increase knowledge transfer from the MT task to the ST task. These techniques include parameter sharing and initialization strategy to improve the information sharing between tasks, CAR and online KD to encourage the ST system to learn more from the auxiliary MT task and then generate similar model representations from different modalities. Our results show that the proposed methods improve translation performance and achieve state-of-the-art results on three MUST-C language pairs.
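As a concrete illustration of the modality-similarity analysis used in this study (the layer-wise correlation between decoder states for paired text and speech inputs), the following NumPy sketch computes, for one decoder layer, the per-component Pearson correlation across samples and averages it over components. Variable names and the data layout are assumptions for illustration.

```python
# Sketch of the layer-wise correlation coefficient between decoder hidden states
# obtained from paired speech and text inputs. Both arrays have shape
# (num_samples, d): one d-dimensional decoder state per sample at layer l.
import numpy as np

def layerwise_correlation(states_speech: np.ndarray, states_text: np.ndarray) -> float:
    mu_s, mu_t = states_speech.mean(axis=0), states_text.mean(axis=0)
    sigma_s = states_speech.std(axis=0)
    sigma_t = states_text.std(axis=0)
    cov_st = ((states_speech - mu_s) * (states_text - mu_t)).mean(axis=0)
    r_per_component = cov_st / (sigma_s * sigma_t + 1e-8)   # r(l, d) for each component d
    return float(r_per_component.mean())                    # average over components

# Toy example with random paired states for a single layer:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(1000, 512))
    noisy = base + 0.5 * rng.normal(size=(1000, 512))
    print(layerwise_correlation(base, noisy))
```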
Learning the Visualness of Text Using Large Vision-Language Models
|
Visual text evokes an image in a person's mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images. This is particularly challenging with long-form text as text-to-image generation and retrieval models are often triggered for text that is designed to be explicitly visual in nature, whereas long-form text could contain many non-visual sentences. To this end, we curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. We also propose a fine-tuning strategy that adapts large vision-language models like CLIP by modifying the model's contrastive learning objective to map text identified as nonvisual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to (i) classify visual and non-visual text accurately, and (ii) attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of text-to-image generation systems like DALL-E.
|
People typically communicate knowledge and information textually, but most prefer to consume visually rich content. Text-to-image generation/retrieval models could augment text with appropriate images, aiding the creation of appealing and easy-to-understand documents. Models like DALL-E. Prior approaches for quantifying the visualness of text operate on a word or phrase level. To this end, in this work, we curate a corpus of 3,620 sentences in English paired with their human ratings for visualness, as well as a noisy but large corpus of 48,077 automatic alignments between text and visual assets in long-form documents. The textual part of the resulting alignment pairs can be used as examples of visual and non-visual sentences. We propose a strategy to fine-tune vision-language models like CLIP, allowing classification inferences over text-only inputs. Our objective also ensures that the learned embeddings remain usable for downstream text-to-image retrieval. We compare the performance of our proposed approach against several heuristic and model-based baselines. Our extensive evaluation suggests that our fine-tuning strategy leads to the most accurate visual and non-visual text classifier. Finally, we conduct several analyses to glean insights into the model's learned attention mechanism, text-to-image retrieval abilities, and downstream text-to-image generation capabilities.
• We propose the task of identifying the visualness of a sentence and curate a dataset by crowdsourcing annotations for English sentences.
• We develop a training objective that fine-tunes large vision-language models for the task of text visualness identification.
• Quantitative and qualitative experiments demonstrate the effectiveness of our fine-tuning approach in identifying visual text over several competitive baselines, while preserving downstream text-to-image retrieval performance.
|
Fine-tuning vision-language models for downstream tasks: Large vision-language models like CLIP. Our proposed fine-tuning approach follows multistage training of a large vision-language model CLIP. The formulation of the training objective (discussed later) requires positive examples comprising visual text and paired images as well as negative examples that comprise non-visual text.

Example text from TIMED (µ / σ of annotator ratings):
• now the snow has melted and the grass not only looks dreary, but it is soggy. (µ = 6.88)
• The operation left a six-inch zipper scar on his chest. (µ = 6.55)
• When the gardens open, just after dawn, the first to appear are the joggers and the silent figures performing the intricate maneuvers of tai chi. (µ = 6.44)
• He removed the box, placed it next to the garbage can, and put his garbage inside the can. (µ = 5.88)
• But, after running only the first 500 meters, he realized that the injury that seemed so insignificant would not only prevent him from winning the race, but also from finishing it. (µ = 5.00)
• There's only one way to prove them wrong. (µ = 1.22)
• For more information or to schedule an outreach, please call (999) 123-4567 or email [email protected]. (µ = 1.55)
• In case of your failure to answer, judgment will be taken against you by default for the relief demanded in the complaint. (µ = 1.67)
• A 25% quorum of member votes in each district is needed to conduct district delegate elections in October. (µ = 1.77)
• Colliers International makes no guarantees, representations or warranties of any kind, expressed or implied, regarding the information including, but not limited to, warranties of content, accuracy and reliability. (µ = 2.00)
• Ambiguous: J. Roman discusses his book Ohio State Football: The Forgotten Dawn which draws on extensive archival research to tell the untold story of the early days of football at Ohio as flagship public university. (σ = 2.34)

To create a corpus like this, we: (i) leverage image-text co-occurrences in documents to develop a self-supervised approach, and (ii) use image-text similarity scores obtained using CLIP as priors to construct a large training corpus. We start with 450,000 publicly available PDFs referenced in the Common Crawl corpus and identify pages within those PDFs that include images. We do sentence segmentation for the identified paragraphs using the NLTK Tokenizer. For the human-annotated visual and non-visual examples, we start with another 200,000 PDFs distinct from those used for the automated assignment of labels. To focus on natural images rather than infographics and academic figures, we filtered these documents to only include brochures, flyers, and magazines. For the resulting 35,432 documents, we adopted the same policy as that for curating the automatically-labeled dataset (selecting the top 1% and bottom 5% of sentences based on similarity values). We then recruited annotators to rate the visualness of the resulting 3,620 sentences after manually anonymizing any personal information.

We recruited annotators on Amazon Mechanical Turk (AMT). We randomly ordered the 3,620 examples and, for each example, we asked nine annotators to provide a response on a 7-point Likert scale for the following question: "Do you agree that the sentence below evokes an image or picture in your mind?" A response of 1 indicated strong disagreement, while 7 indicated strong agreement. We also inserted some attention-check examples (5%; n = 181) to ensure the annotators read the text carefully before responding. Appendix A.3 provides more details about the filters used for recruiting the annotators and the annotation interface.
These checks explicitly asked the annotators to mark a randomly chosen score on the Likert scale regardless of the actual content. We discarded the annotations from annotators who did not correctly respond to all the attention-check examples and re-collected more responses iteratively. Appendix A.3 provides more details about the filters used for recruiting the annotators and the annotation interface. If a majority of annotations (i.e., at least 5 out of 9) were 1, 2, or 3, we considered the example to be non-visual (n = 2108). Similarly, visual examples had a majority of 5, 6, or 7 responses (n = 1132). We considered examples that did not have a clear majority or majority of responses of 4 (i.e., 'Neutral' on the Likert scale) as ambiguous and neutral, respectively. Table For 27.1% of the examples only at most 1 of the 9 annotators disagreed with the labels decided based on the process described above. 10.5% of the sentences were assigned a neutral or ambiguous class. Inter-annotator agreement measured by Krippendorff's α was 0.446. This inter-annotator agreement value is in a similar range to what is observed for other language-related tasks that involve assessment of text by experts on dimensions like coherence, likability, relevance, and even grammar Background: The CLIP model (1) Here, N denotes the number of examples in a batch, I e m and T e m denote the embeddings of the m-th pair of image and text that are normalized to have unit ℓ 2 -norm, respectively, such that m ∈ {1, . . . , N }. ⟨...⟩ represents the inner product, and τ is the trainable temperature parameter. V and V are the set of examples in the current batch that belong to non-visual and visual categories, respectively. Finally, I e null denotes the embedding of the NULL image. During inference, we compute the cosine similarity between the representation of a given text with the representation of the NULL image; non-visual texts will have a high similarity with the NULL image. Conversely, the visualness score S of any text with embedding T e can be obtained using For the NULL image, we create an RGB image of size An alternative formulation for adapting the CLIP training objective could have been to match visual text with a single image while matching non-visual text with a single NULL image. However, this formulation of the training objective is similar to binary classification and does not enforce a contrastive objective for the positive examples. Matching visual text with its corresponding image instead of a common image for all visual text affords text embeddings that can be used for downstream tasks like text-to-image retrieval; we provide empirical evidence for worse text-to-image retrieval performance with the alternative formulation in Results. Train, test, & validation splits: Recall that our fine-tuning approach requires paired images for visual sentences only during training time and not during inference time; the model needs only text as input during inference. Of the 1132 visual sentences in the human-annotated set of TIMED, we assign 515 examples that had an automatically determined corresponding image to the training set, and the remaining were randomly assigned to the test set (n = 517) and validation set (n = 100). The 2108 non-visual sentences were randomly split into the training (n = 980), test (n = 928), and validation set (200). All three sets maintain positive:negative class ratio of ∼ 0.5. For the first stage of training, we fine-tune the CLIP model (ViT/B-32) on the proposed objective (see Eq. 
1) using the 48,077 examples with automatic labels. This training is done on Tesla T4 GPUs, for 5 epochs, and a learning rate initialized at 5 × 10 -5 and optimized using Adam optimizer We investigate the performance of TIP-CLIP against several heuristics and baseline models. Random: The random baseline generates predictions via prior class probabilities in the training set. Average MRC-I score: We consider the imageability scores of 3,769 words in the MRC lexicon and normalize them to be ∈ [0, 1]. For each example, we take the average of the imageability scores of the unique words; out-of-vocabulary words are assigned a score of 0. We lowercase the words in the MRC lexicon as well as the input text. Based on this average score, we categorize an example as visual or non-visual by setting the decision boundary as 0.17. The threshold is chosen to optimize performance on the validation set of TIMED. Concentration of Visual Genome objects (VG-Objects): The Visual Genome dataset comprises 75,729 objects, along with annotations for their attributes and object-object relations Evaluation on held-out test set of TIMED: We first evaluate the baselines and our approach on the test set of the human-annotated TIMED, computing macro-averaged F 1 , precision, recall scores, Correlation of attention Weights with MRC imageability scores: Attention mechanisms could be taken as proxies for explainability We conduct ablations to isolate the effect of two-stage training. In Table Effect on text-to-image retrieval: We aim to analyze the re-usability of learned embeddings by the TIP-CLIP model for the text-to-image retrieval task. To this end, we consider the 515 visual examples from the test set of TIMED and, for each visual example, we rank the 515 corresponding images based on the cosine similarity between the image and text embeddings obtained from the TIP-CLIP model. We compute the Mean Reciprocal Rank (MRR) and contrast it with the MRR obtained using the pre-trained CLIP embeddings. As expected, CLIP achieves a near-perfect MRR of 0.989. The proposed fine-tuning objective does not severely impact the reusability of embeddings obtained from TIP-CLIP for retrieval, and results in an MRR of 0.937. This comparison evaluates the retrieval capabilities of TIP-CLIP against that of the CLIP model because the correspondence between visual text and images was established using similarities between CLIP embeddings. The downside of an alternate training objective: Recall that our fine-tuning strategy involves matching visual text with its corresponding image and matching non-visual text with the NULL image. With only the classification of visual and non-visual text in mind, an alternate fine-tuning strategy would have been to match all the visual examples with one common image while matching all the non-visual text with the common NULL image. The major downside of this approach is that while it leads to an effective classifier after two-stage fine-tuning, demonstrating a comparable F 1 score of 0.842 as the TIP-CLIP model, it performs poorly on the text-to-image retrieval task with an MRR of 0.014. Overall, while the alternate entirely classification-based training objective performs at par with the proposed TIP-CLIP model on the classification task, the resultant embeddings demonstrate poor reusability for downstream tasks like text-to-image retrieval. 
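As a concrete reference point, the average MRC-I baseline described above can be written in a few lines: normalize the lexicon's imageability scores to [0, 1], average the scores of the unique lowercased words in a sentence with out-of-vocabulary words scored as 0, and compare the result to the tuned decision threshold of 0.17. The lexicon is represented here as a plain Python dictionary; loading the MRC database itself is outside the scope of this sketch.

```python
# The average MRC-I heuristic: mean normalized imageability of unique words,
# with out-of-vocabulary words contributing a score of 0.
def mrc_visualness_score(sentence: str, mrc_imageability: dict) -> float:
    """mrc_imageability maps lowercased words to imageability scores in [0, 1]."""
    words = set(sentence.lower().split())
    if not words:
        return 0.0
    return sum(mrc_imageability.get(w, 0.0) for w in words) / len(words)

def is_visual(sentence: str, mrc_imageability: dict, threshold: float = 0.17) -> bool:
    return mrc_visualness_score(sentence, mrc_imageability) >= threshold

# Example with a toy lexicon:
lexicon = {"snow": 0.92, "grass": 0.88, "information": 0.21}
print(is_visual("The snow has melted and the grass is soggy", lexicon))
```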
Properties of the new embedding space: In Figure However, our proposed objective in TIP-CLIP preserves reusability for downstream tasks by maintaining semantic relevance between learned image and text embeddings.

In this section, we conduct two qualitative analyses: (i) contrasting the attention mechanisms for CLIP and TIP-CLIP, and (ii) the role of distinguishing visual and non-visual text in downstream text-to-image generation using systems like DALL-E.

Attention map visualization: To contrast the mechanism by which the CLIP and TIP-CLIP models match input text with their corresponding image, we visualize and contrast the attention maps for both models. We adopt the state-of-the-art approach to explain multimodal Transformers.

Downstream text-to-image generation: In Fig. Triggering text-to-image generation models like DALL-E for visual text is crucial to effectively use such systems in a passive setting. For instance, authors should only be recommended to add visual assets in relevant places (i.e., for visual sentences) while working with long-form documents; triggering image generations for non-visual sentences could cause sub-optimal experiences. Thus, our contributions focus on distinguishing visual text from non-visual text as the necessary first step.

We propose the task of predicting the visualness of text and curate a human-annotated dataset of sentence-level visualness scores. Additionally, we propose a two-stage fine-tuning objective for the task that involves training on a distantly supervised corpus followed by a smaller human-annotated corpus. Comparisons with several baselines demonstrate the effectiveness of our approach in distinguishing visual and non-visual text. We analyze the attention and downstream text-to-image retrieval capabilities of the model. Qualitative analysis of attention weights over the input reinforces that our model attends to visual words to a greater extent. In closing, we show qualitative examples of how predicting text visualness can make text-to-image generation more effective. In the future, we will study alternate objectives for learning text visualness while ensuring the learned representations are transferable to related downstream tasks. We are also interested in using measures relating to the quality of the images generated from text-to-image generation systems to decipher signals about the visualness of input text, enabling the creation of auto-labeled examples. As the aggregation of word-level visualness scores leads to poor predictability of sentence-level visualness, future work could aim to understand what linguistic factors (like compositionality) precipitate sentence-level visualness.

Limitations: As the first study on predicting sentence-level visualness, we focus on representative vision-and-language (CLIP) and language-only (BERT) encoders. Future studies can extend our experiments to explore the benefits of using other encoders to model text visualness. Our curated TIMED dataset only covers the English language. The notion of visualness can vary across languages, and we encourage future research to contrast visualness in the context of the English language with that in other non-English languages. Additionally, since a US-based crowd provided our ground-truth annotations for visualness, the dataset reflects a predominantly Western-centric view of text visualness. It is unclear how visualness in text is perceived across different cultures.
To this end, we acknowledge that our work and artifacts reflect West-centric views of visualness in the English language and encourage cross-lingual and cross-cultural extensions.

Broader Social Impact, Annotations, and Datasets: The authors do not foresee any negative social impacts of this work. However, our model can inherit the known biases in underlying models like CLIP and BERT.

Table (example words):
• reassure, militancy, inhumanly, catalyses, industrial, peacefulness, handwoven, neurosurgery, overwashed, whooper, snails, preeminence, recluse, entrepreneur, character, insufficient, paladin, impersonal, deviously, recover
• Low imageability: politologist, psycholinguistic, requirements, confirmatory, terseness, preformulation, offender, controversial, unhealable, monoculturalism, miserable, reprogrammability, this, participate, attractive, determinant, disestablishment

We randomly selected 500 words from the MRC lexicon and 500 words from the word2vec vocabulary that did not occur in the MRC lexicon. Each word was shown to 9 annotators using Amazon Mechanical Turk to seek responses to the following question: "Do you agree that the word below evokes an image or picture in your mind?" The annotators were instructed to respond on a 7-point Likert scale, where 1 denoted strong disagreement and 7 denoted strong agreement. Please see Appendix A.3 for details about the instructions, demographic filters, and compensation. We average the ratings for all the annotated words and normalize them to be ∈ [0, 1]. We compute Pearson's correlation coefficient between (a) the average ratings for MRC words and the normalized imageability scores, and (b) the average ratings for word2vec words and the imageability scores assigned via embedding-based propagation. The correlation between MRC imageability scores and average annotators' ratings is 0.870 (p < 0.001) and the correlation between scores assigned via our propagation method and average annotators' ratings is 0.735 (p < 0.001). These high positive correlations between assigned imageability scores and human-perceived ratings demonstrate the effectiveness of our adopted propagation method. We also note that the inter-annotator agreements for the ratings for MRC words and word2vec words, as computed using Krippendorff's α (ordinal measure), were 0.626 and 0.584, respectively. Overall, this assessment illustrates the validity of propagating word-level imageability scores using embedding-based semantic similarities. More broadly, the aim of adopting this approach is to expand the coverage of the MRC lexicon. Qualitatively, we observe that words like 'gotcha' (0.33) and 'presbyterian' (0.61) are assigned meaningful imageability scores, demonstrating expansion along time and domains. As a point of difference between human ratings and assigned scores, we notice that the propagation approach assigned a high imageability score to words like 'qawwali' (0.60) while the human annotators did not, possibly due to a lack of sociocultural context.

For all our annotation tasks, we recruited annotators using Amazon Mechanical Turk. We set the criteria to 'Master' annotators who had at least a 99% approval rate and were located in the United States. To further ensure the quality of annotations, we required the annotators to have at least 5000 accepted annotations in the past. The rewards were set by assuming an hourly rate of 12 USD for all the annotators.
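To illustrate what embedding-based propagation of imageability scores can look like, the sketch below assigns an out-of-lexicon word the similarity-weighted average of the MRC scores of its nearest neighbors in a word2vec space (via gensim). The choice of k neighbors and similarity weighting is an assumption made for illustration; the text above validates the propagated scores but does not spell out the exact propagation rule.

```python
# Hypothetical sketch of propagating MRC imageability scores to new words via
# embedding similarity. The k-NN similarity-weighted average is an assumption.
from gensim.models import KeyedVectors

def propagate_imageability(word, kv: KeyedVectors, mrc_scores: dict, k: int = 10):
    """Return an imageability score in [0, 1] for `word` using its nearest
    MRC-covered neighbors in the embedding space."""
    if word in mrc_scores:
        return mrc_scores[word]
    if word not in kv.key_to_index:
        return None  # no embedding available for this word
    weights, scores = [], []
    for neighbor, sim in kv.most_similar(word, topn=200):
        if neighbor in mrc_scores:
            weights.append(sim)
            scores.append(mrc_scores[neighbor])
        if len(scores) >= k:
            break
    if not scores:
        return None
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Usage sketch (embedding path is a placeholder):
# kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
# score = propagate_imageability("presbyterian", kv, mrc_scores)
```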
We show the annotation interfaces in Figure We also inserted some "attention-check" examples during the annotation tasks to ensure the annotators read the text carefully before responding. This was done by asking the annotators to mark a randomly chosen score on the Likert scale regardless of the actual content. We discard the annotations from annotators who did not correctly respond to all the attention-check examples and re-collect annotations for the affected samples. We compute the Pearson's correlation coefficient between a model's average attention scores over words and the visualness score assigned using our propagation method. However, unlike Table To analyze the alignment between learned attention scores for various models, we compute the correlation between average attention scores across different models. Pearson's correlation coefficients in Table Robustness of vision-language models has been the subject of investigation in several prior works We now ask the question: how well the models considered in our work categorize Twitter text with images as visual and Twitter text without images as non-visual? We first adapt the thresholds used to classify text using various methods by running an evaluation on a randomly sampled validation set of 100 Twitter examples, 50 from each category. The thresholds are set as follows: MRC-I: 0.19; VG-Objects: 0.52; MRC-I + w2v: 0.17; MRC-I + GloVe: 0.32 6 ; CLIP: 0.87; TIP-CLIP: 0.74. Using these threshold values, we categorize the rest of the Twitter dataset (n = 14, 992) into visual and non-visual categories. The random baseline uses uniform sampling. Table Recall that while curating TIMED, we combined examples without a clear majority from the annotators (n = 378) and those with majority votes for the 'Neutral' category (n = 2) into a single category called ambiguous. We revisit these examples to analyze how the most competitive baselines 6 Since we are operating with the Twitter domain, we design a version of the propagation method where MRC Imageability scores are propagated in the GloVe-embedding space, where the GloVe embeddings are learned on Twitter corpus
| 1,366 | 1,885 | 1,366 |
The lack of theory is painful: Modeling Harshness in Peer Review Comments
|
The peer-review system has largely remained the central process of scientific communication. However, research has shown that the process manifests a power imbalance: the reviewer occupies a position in which their comments can be overly critical and wilfully obtuse without being held accountable. This calls into question the sanctity of the peer-review process and turns it into a fraught and traumatic experience for authors. A little more effort to remain critical yet constructive in the feedback would help foster a more productive outcome from the peer-review process. In this paper, we argue for intervening at the very step where this power imbalance begins in the system. To this end, we develop the first dataset of peer-review comments annotated with real-valued harshness scores. We build our dataset using the popular Best-Worst-Scaling mechanism. We show the utility of our dataset for text moderation in peer reviews, making review reports less hurtful and more welcoming. We release our dataset and associated code in
|
The peer-review system has largely remained the central and universal quality control system in all scientific fields. "The very act of evaluating another's work is a thinly disguised instructional relationship of authority; an inherently unequal interaction because the power to criticise is non-reciprocal and lies exclusively with the reviewer. This is perhaps made more threatening by the fact that reviewers are "mysterious and intimidating figures" Towards the overarching goal of improving the review quality standards and making the peerreviewing process more inclusive, an interesting direction would be to intervene at the very step where this power imbalance actually begins. Present-day scientific progress is critically dependent on the peer-review process. Hence an inclusive and constructive environment is critical to foster a progressive scientific temperament. Here in this work, we intend to make the review reports more welcoming so that they do not seem hurtful and actually focus on their intended objective, i.e., to provide helpful feedback to the authors on their submitted manuscript. Given the scale of the peer-review process, an automatic system for this intervention would be of high value. Here, we model the various facets of how review comments can be perceived as hurtful, a quality we henceforth call as harshness. We build upon the reviewer guidelines in major Artificial Intelligence (AI) conferences to categorize how this harshness is expressed in the peer-review reports. We use a comparative annotation scheme, called Best-Worst-Scaling, to map review sentences into real-valued harshness scores and make this dataset publicly available. We envision that our research and accompanying dataset will be helpful in automatic peer-review text moderation. Let us study a recent example from a metareview in NeurIPS 2021, which was rather harsh and unnecessary "I do have experience with social science research, and this paper lacks insightfulness or originality from that perspective, so I recommend rejection," and "This paper will eventually be published somewhere, but it won't have great impact." On gaining visibility and criticism in social media on these open access reviews Our dataset can be used to filter out review sentences based on different thresholds to detect im-polite review comments. A system to predict a harshness score of review sentences would help (senior) area chairs or editors to not allow such comments to go out in public or to the authors. Similarly, a reviewer-assistant tool could use such a predictor to flag/alert reviewers when they write such harsh comments (or are repeated offenders). We understand that the peer-review process and harshness is inherently a subjective phenomenon. However, we should strive to make the peer-review process more welcoming so that the fundamental process of scrutinizing science remains objective. Our current work is a step in that direction.
|
There is a growing body of literature on Natural Language Processing (NLP) for peer reviews and scientific literature in general. For example, datasets like PeerRead We define review harshness as a metric encompassing two orthogonal dimensions. The first dimension concerns the evaluative focus of the comment, and the second dimension deals with the comment's critical stance. Peer reviews evaluate the submitted research work across several criteria, such as novelty, correctness/soundness, impact, appropriateness, etc. As such, review texts can be (and are expected to be) critical in their expression. By harshness in review texts, we not only mean the presence of criticality or the negative sentiment in them but how these attributes are expressed. This dimension deals with the actual content of the review comments. Building upon the reviewer guidelines for the IEEE Conference on Computer Vision and Pattern Recognition (IEEE CVPR), we identify several facets of review texts that are unwelcoming and demonstrate bad reviewing practices. Some of these practices are also mentioned in 1. Blank Assertions and Pure Opinions These are ungrounded statements with no evidence to support the reasoning. Peer reviews are supposed to be the objective evaluation of the submitted work and should provide actionable comments to the authors. These ungrounded statements can sometimes take a very disparaging tone and blatantly attack authors, and the overall research Intellectual Laziness refers to narrow-minded reviewing practices. Instead of focusing on a comprehensive evaluation of the submitted research, reviewers can sometimes choose to overemphasize certain factors. For example, if the paper surpassed the state-of-theart (SOTA) results, 3. Policy Entrepreneurism stands for reviewers imposing their own policies in review comments which are against sound scientific reviewing practices. For example, sometimes reviewers ask the authors to compare with a recent arXiv preprint (not peer-reviewed or a contemporaneous article), reviewers in some venues show bias against resource papers We note that the boundaries across the above categories are ill-defined, making the categorical annotation challenging. We further assert that both the dimensions of our definition are orthogonal to each other, and the harshness score is a monotonically increasing function of both these two dimensions. Access to peer reviews is still restricted since much of the peer-review system operates behind closed doors. Fortunately, many venues in Artificial Intelligence research have adopted an open-access peer review platform called OpenReview As a seed dataset, we crawl 1093 review sentences using the Twitter API In this work, we use the Cartography Active Learning (CAL) algorithm (Zhang and Plank, 2021) for sampling. CAL is a model-agnostic active learning sampling procedure based on datamaps As stated before, we aim to model review comment harshness on a real-valued scale. Our choice is motivated by the fact that a review text can be hurtful/harsh to a varying degree and by the downstream application of more fine-grained review text moderation. Contrary to the categorical annotation of marking whether a review comment is hurtful or not For N samples, a naive comparative annotation mechanism would need to compare N 2 pairs. This is obviously expensive in practice. BWS is an efficient comparative annotation mechanism where we need only 2N comparisons. 
However, instead of comparing in a pair, we ask our annotators to mark a Best Item and a Worst Item according to some quality of interest in a set of four comments (4-tuple). We follow For our task, Best Item stands for the most harsh review comment, and Worst Item means the least harsh comment. In simple terms, our annotation task refers to showing each annotator a 4-tuple of review comments and asking them to select which is the most harsh comment and which is the least harsh comment. Since harshness is a subjective perceptual quality, crowdsourcing annotations would have been ideal. However, we are working with specific scientific data which requires As can be seen, the distribution of the sample scores is moderately left-skewed and has "thinner" tails. some training to get acquainted with. Therefore, we deliberately hire annotators from diverse academic backgrounds. We hire six annotators; four hold graduate degrees in Linguistic and English Literature, one holds a bachelor's degree in Computer Science and Engineering (CSE), and another is an undergraduate student in CSE. The annotators are duly paid according to the annotation payment standards in India. Each annotator underwent an exposition and training session about the Evaluative Focus dimension in our definition of harshness. We asked each annotator to read In order to cover the entire range of harshness scale, we use 500 samples randomly selected from the ShitMyReviewersSay set, and 500 samples as procured from the process described in section 4.1. Thus, we have N = 1000, resulting in 2000 tuples for BWS. We have six annotators, and since each review comment is seen in eight different 4-tuples, we get 48 judgments per review comment. To calculate the reliability of our annotations obtained through BWS, we use split-half-reliability (SHR) values over 10 trials. SHR is a commonly used metric to calculate internal consistency, a desirable quantity for the annotations to be reliable. We follow the methodology in Our final dataset contains 1000 review sentences annotated for their harshness value on a scale of -1.0 (most harsh) to 1.0 (least harsh). In this section, we study the distribution of the harshness score and qualitatively examine the samples on varying positions in the harshness scale. We visualize the histogram of the harshness scores in our sample dataset in Figure ) . The calculated test statistic for our test is t = -4.699, which suggests that the population harshness scores are skewed negatively with high confidence. This observation is not surprising, as most of the academic writing is formal, and very harsh (overly sentimental/caustic, etc.) texts are a rare class in an academic context. However, this observation also asserts the challenges in modeling the harshness of peer-review comments. Our methodology of using Active Learning and comparative annotations through BWS efficiently circumvents these issues and closely models a statistic of harshness scores in peer-review Bin Review comment Score 1 a). An article like this is just a waste of peer-reviewing resources. -0.708 b). This paper reads like a woman's diary, not like a scientific piece of work. -0.625 c). The manuscript is a collection of fragmented and disconnected descriptive observations. -0.667 d). What were you thinking? -0.625 2 a). The lack of theory is painful at times. -0.521 b). The author should abandon the premise that his work can be considered research. -0.583 c). A failing course paper written by an undergrad. -0.438 d). 
Overall, I think this manuscript is a waste of time. -0.562 3 a). I don't see much science in this manuscript. -0.333 b). Many questions on the text, for example, cause embarrassment in understanding the text. -0.250 c). Most part of methodology is useless, most of the paragraphs are irrelevant to the main topics. -0.333 d). The authors use a log transformation, which is statistical machination, intended to deceive. -0.396 4 a). None of these results beat state-of-the-art deep NNs. -0.188 b). Your proposed method should be compared with another method that introduced in a prestigious paper. -0.001 c). That can hardly be true (if it is, then it puts the entire paper into question! If trivial uncertainty is almost as good as this method, isn't the method trivial, too?). -0.021 d). I don't believe in simulations. -0.188 5 a). They do not really provide any substantial theoretical justification why these heuristics work in practice even though they observe it empirically. 0.083 b). The results look like a smorgasbord of data 0.021 c). Unfortunately, in your Figure 0.083 6 a). Since the adaptions to DTP are rather small, the work does not contain much novelty. 0.208 b). RBMs are not state-of-the-art in topic modeling, therefore it's difficult to assess whether this is helpful. 0.375 c). there is not much innovation in the model architecture. 0.208 d). From a novelty standpoint though, the paper is not especially strong given that it represents a fairly straightforward application of comments. We further analyze our dataset to gauge the patterns along the continuous harshness scale. For this, we split the scale into 8 bins, Bin 1: score ≤ -0.6, Bin 2: -0.6 ≤ score ≤ -0.4, Bin 3: -0.4 ≤ score ≤ -0.2, Bin 4: -0.2 ≤ score ≤ 0.0, Bin 5: 0.0 ≤ score ≤ 0.2, Bin 6: 0.2 ≤ score ≤ 0.4, Bin 7: 0.4 ≤ score ≤ 0.5, and Bin 8: score ≥ 0.5. We list representative samples from each bin along with the associated score in Table In this section, we use common computational models to predict the harshness scores for review comments. Our problem is a regression task; for each review sentence s, predict the real-valued score. Since we have a relatively smaller size dataset, we use 5-fold cross-validation to evaluate the predictive models. Furthermore, to account for outliers in the dataset, we use smooth L1-loss instead of the regular mean squared error (MSE) loss for the regression task. Besides the regression task, we We construct the review comment representation using the average of the word embeddings. We use 300 dimensional GoogleNews word2vec vectors for this and pass the sentence representation to the feedforward linear layers for prediction. We use the LSTM (Hochreiter and Schmidhuber, 1997) networks using word2vec word vectors We finetune the pre-trained BERT model Our task of predicting harshness score for review comments somewhat resembles the task of abusive language and toxicity prediction in NLP. Therefore, we also use a standard benchmark for our dataset. We finetune the HateBERT model For all our models, we use a learning rate of 1e -3 and a batch size of 32. For ASE and BiLSTM models, we use the Adam optimizer with a weight decay of 1e -3. For the BERT model, we use the AdamW optimizer. Since the harshness score lies between -1 to 1, we use tanh non-linearity function at the final prediction layer in all our regression task models. We use Pytorch to implement the models. The results for our benchmark models are shown in Table The peer-review process is central to all science research dissemination. 
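The regression setup just described (averaged 300-d word2vec sentence vectors for the ASE model, a tanh output so predictions stay in [-1, 1], smooth L1 loss, learning rate 1e-3, batch size 32, and Adam with weight decay 1e-3) can be sketched in PyTorch as below. The hidden-layer size and the class name are assumptions; this is a minimal illustration of the ASE-style regressor rather than the exact benchmarked model.

```python
import torch
import torch.nn as nn

class HarshnessRegressor(nn.Module):
    """Feed-forward regressor over 300-d averaged word vectors (ASE-style):
    linear layers followed by tanh so the prediction lies in [-1, 1]."""
    def __init__(self, emb_dim=300, hidden=128):    # hidden size is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Tanh(),
        )

    def forward(self, sentence_vectors):            # (batch, emb_dim)
        return self.net(sentence_vectors).squeeze(-1)

model = HarshnessRegressor()
criterion = nn.SmoothL1Loss()                       # robust to outlier scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)

def train_step(sentence_vectors, harshness_scores):
    """One optimisation step over a batch."""
    optimizer.zero_grad()
    loss = criterion(model(sentence_vectors), harshness_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Smooth L1 behaves like MSE for small residuals and like L1 for large ones, which is why it is preferred here over plain MSE given the few very harsh outlier comments.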
However, it also exhibits a power-imbalance situation where the review comments can be overly critical and sometimes cross the boundaries to disparage while also demonstrating bad reviewing practices. This makes this process traumatic, especially for young researchers. The responsibility to moderate these review comments lies in the hands of (senior) area chairs and editors. However, it is not easy to manually moderate review comments with ever-increasing submissions in major AI conferences. In this work, we present a first-of-its-kind dataset of 1000 peerreview comments annotated for their harshness value. We define harshness in this paper based on two dimensions, critical stance and the evaluative focus of the review comment. We then use a comparative annotation technique, Best-Worst-Scaling (BWS), to elicit a continuous real-valued harshness scale. Our analysis shows that the different regions of this scale represent different facets of harshness with comments going from disparaging at one end to standard evaluative comments at another. We then benchmark common predictive models on our dataset. We show scope for improvement in building computational predictive models. We believe our dataset will be useful in automatic review comments moderation. In the future, we would like to extend the dataset and investigate the impact of reviewer confidence
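As a companion to the annotation procedure above, the counting-based aggregation commonly used with Best-Worst-Scaling can be sketched as follows: each comment's raw score is the proportion of times it was chosen as Best (most harsh) minus the proportion of times it was chosen as Worst (least harsh), over the 4-tuples it appeared in. The final sign flip, which maps the released scale so that -1.0 is most harsh and 1.0 is least harsh, is an assumption about the exact convention used.

```python
from collections import defaultdict

def bws_scores(judgments):
    """judgments: list of (four_comments, best_comment, worst_comment),
    where 'best' means most harsh and 'worst' means least harsh."""
    best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
    for comments, b, w in judgments:
        for c in comments:
            seen[c] += 1
        best[b] += 1
        worst[w] += 1
    # raw counting score in [-1, 1]: +1 means always chosen as most harsh
    raw = {c: (best[c] - worst[c]) / seen[c] for c in seen}
    # flip so that -1.0 = most harsh and +1.0 = least harsh (assumed mapping)
    return {c: -score for c, score in raw.items()}
```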
| 1,054 | 2,950 | 1,054 |
Exact Hard Monotonic Attention for Character-Level Transduction
|
Many common character-level, string-to-string transduction tasks, e.g. grapheme-to-phoneme conversion and morphological inflection, consist almost exclusively of monotonic transduction. Neural sequence-to-sequence models with soft attention, which are non-monotonic, often outperform popular monotonic models. In this work, we ask the following question: Is monotonicity really a helpful inductive bias in these tasks? We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns a latent alignment jointly while learning to transduce. With the help of dynamic programming, we are able to compute the exact marginalization over all monotonic alignments. Our models achieve state-of-the-art performance on morphological inflection. Furthermore, we find strong performance on two other character-level transduction tasks. Code is available at
|
Many tasks in natural language can be treated as character-level, string-to-string transduction. The current dominant method is the neural sequence-to-sequence model with soft attention The standard versions of both soft and hard attention are non-monotonic. However, if we look at the data in grapheme-to-phoneme conversion, named-entity transliteration, and morphological inflection - examples are shown in Fig. In this paper, we hypothesize that the underperformance of monotonic models stems from the lack of joint training of the alignments with the transduction. Generalizing the model of
|
We assume the source string x ∈ Σ * x and the target string y ∈ Σ * y have finite vocabularies Σ x = {x 1 , . . . , x |Σx| } and Σ y = {y 1 , . . . , y |Σy| }, respectively. In tasks where the tag is provided, i.e., labeled transduction Hard attention was first introduced to the literature by where we show how one can rearrange the terms to compute the function in polynomial time. prediction tasks. We present the new best individual system. 2 Zero in the sense of non-character like BOS or EOS The model above is exactly an 0 th -order neuralized hidden Markov model (HMM). Specifically, p(y i | a i , y <i , x) can be regarded as an emission distribution and p(a i | y <i , x) can be regarded as a transition distribution, which does not condition on the previous alignment. Hence, we will refer to this model as 0 th -order hard attention. The likelihood can be computed in O(|x| • |y| • |Σ y |) time. To enforce monotonicity, hard attention with conditionally independent alignment decisions is not enough: The model needs to know the previous alignment position when determining the current alignment position. Thus, we allow the transition distribution to condition on previous one alignment p(a i | a i-1 , y <i , x) and it becomes a 1 st -order neuralized HMM. We display this model as a graphical model in Fig. where α(a i-1 ) is the forward probability, calculated using the forward algorithm Thus, computation of the likelihood in our Decoding at test time, however, is hard and we resort to a greedy scheme, described in Alg. 1. To see why it is hard, note that the dependence on y <i means that we have a neural language model scoring the target string as it is being transduced. Because the dependence is unbounded, there will be no dynamic program that allows for efficient computation. The goal of this section is to take the 1 st -order model of §2 and show how we can straightforwardly enforce monotonic alignments. We will achieve this by adding structural zeros to the distribution, which will still allow us to perform efficient inference with dynamic programming. We follow the neural parameterization of where e d encodes target characters into character embeddings. The tag embedding h t is produced by where e t maps the tag t k into tag embedding h t k ∈ R dt or zero vector 0 ∈ R dt , depends on whether the tag t k is presented. Note that Y ∈ R dt×|Σt| dt is a learned parameter. Also h e j ∈ R 2d h , h d i ∈ R d h and h t ∈ R dt are hidden states. The Emission Distributon. All of our hardattention models employ the same emission distribution parameterization, which we define below where V ∈ R 3d h ×3d h and W ∈ R |Σy|×3d h are learned parameters. 0 th -order Hard Attention. In the case of the 0 thorder model, the distribution is computed by a bilinear attention function with eq. ( where T ∈ R d h ×2d h is a learned parameter. 0 th -order Hard Monotonic Attention. We may enforce string monotonicity by zeroing out any non-monotonic alignment without adding any additional parameters, which can be done through adding structural zeros to the distribution as follows These structural zeros prevent the alignments from jumping backwards during transduction and, thus, enforce monotonicity. The parameterization is identical to the 0 th -order model up to the enforcement of the hard constraint with eq. ( 1 st -order Hard Monotonic Attention. We may also generalize the 0 th -order case by adding more parameters. This will equip the model with a more expressive transition function. 
In this case, we take Algorithm 1 Greedy decoding. (N is the maximum length of target string.) Forward probability 5: else 6: return y * the 1 st -order hard attention to be an offset-based transition distribution similar to where ∆ = a i -a i-1 is relative distance to previous attention position and U ∈ R (w+1)×2d h , a learned parameter. Note that, as before, we also enforce monotonicity as a hard constraint in this parameterization. There have been previous attempts to look at monotonicity in neural transduction. Tasks. We consider three character-level transduction tasks: grapheme-to-phoneme conversion Empirical Comparison. We compare (i) soft attention without input-feeding (SOFT) Finding #1: Morphological Inflection. The first empirical finding in our study is that we achieve single-model, state-of-the-art performance on the CoNLL-SIGMORPHON 2017 shared task dataset. The results are shown in Tab. 2. We find that the 1-MONO ties with the 0-MONO system, indicating the additional parameters do not add much. Both of these monotonic systems surpass the non-monotonic system 0-HARD and SOFT. We also report comparison to other top systems at the task in Tab. 1. The previous state-of-the-art model, Finding #2: Effect of Strict Monotonicity. The second finding is that by comparing SOFT, 0-HARD, 0-MONO in Tab. 2, we observe 0-MONO outperforms 0-HARD and 0-HARD in turns outperforms SOFT in all three tasks. This shows that monotonicity should be enforced strictly since strict monotonicity does not hurt the model. We contrast this to the findings of Finding #3: Do Additional Parameters Help? The third finding is that 1-MONO has a more expressive transition distribution and, thus, outperforms 0-MONO and 0-HARD in G2P. However, it performs as well as or worse on the other tasks. This tells us that the additional parameters are not always necessary for improved performance. Rather, it is the hard constraint that matters-not the more expressive distribution. However, we remark that enforcing the monotonic constraint does come at an additional computational cost: an additional factor O(|x|). We expand the hard-attention neural sequenceto-sequence model of
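Because the exact marginalisation over monotonic alignments is the core of the approach, the forward pass with structural zeros can be sketched in plain numpy as below. The tensors are assumed to come from the neural emission and transition parameterisations; the row renormalisation after masking, the shapes, and the initial alignment distribution are assumptions about how those pieces are wired together.

```python
import numpy as np
from scipy.special import logsumexp

def monotonic_forward_loglik(log_emit, log_trans, log_init):
    """Exact marginalisation over monotonic alignments via the forward
    algorithm (all quantities in log space).

    log_emit[i, j]      ~ log p(y_i | a_i = j, y_<i, x)          shape (T, S)
    log_trans[i, jp, j] ~ log p(a_i = j | a_{i-1} = jp, y_<i, x)  shape (T, S, S)
    log_init[j]         ~ log p(a_1 = j | x)                      shape (S,)

    Monotonicity is enforced as a hard constraint: any transition that moves
    the alignment backwards (j < jp) is a structural zero (-inf), and the
    remaining mass is renormalised.
    """
    T, S = log_emit.shape
    backward_move = np.tril(np.ones((S, S), dtype=bool), k=-1)   # True iff j < jp

    alpha = log_init + log_emit[0]                    # forward probs, shape (S,)
    for i in range(1, T):
        lt = np.where(backward_move, -np.inf, log_trans[i])
        lt = lt - logsumexp(lt, axis=1, keepdims=True)            # renormalise rows
        alpha = log_emit[i] + logsumexp(alpha[:, None] + lt, axis=0)
    return logsumexp(alpha)                           # log p(y | x)
```

In this sketch, each target position costs one S x S log-sum-exp, so the whole pass runs in O(T * S^2) time.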
| 876 | 591 | 876 |
How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning
|
Our investigation into the Affective Reasoning in Conversation (ARC) task highlights the challenge of causal discrimination. Almost all existing models, including large language models (LLMs), excel at capturing semantic correlations within utterance embeddings but fall short in determining the specific causal relationships. To overcome this limitation, we propose the incorporation of i.i.d. noise terms into the conversation process, thereby constructing a structural causal model (SCM). It explores how distinct causal relationships of fitted embeddings can be discerned through independent conditions. To facilitate the implementation of deep learning, we introduce the cogn frameworks to handle unstructured conversation data, and employ an autoencoder architecture to regard the unobservable noise as learnable "implicit causes." Moreover, we curate a synthetic dataset that includes i.i.d. noise. Through comprehensive experiments, we validate the effectiveness and interpretability of our approach. Our code is available in
|
Nowadays, numerous conversation recognition tasks (such as Emotion Recognition in Conversation (ERC) task However, when it comes to the relationship between two utterances, denoted as A and B, wherein their embeddings can be fitted, various possible relationships exist: A acts as the cause of B (A → B), A acts as the outcome of B (A ← B), or more complex, A and B are both influenced by a common cause (A ← C → B), and so on. Particularly in reasoning tasks To specifically investigate the causal discrimination capability of existing methods in conversation, we narrow down our research to a particular task: Affective Reasoning in Conversation (ARC), which has included Emotion-Cause Pair Extraction (ECPE) We begin with conducting tests to evaluate the causal discrimination of existing methods including the large language models (LLMs) In order to discriminate different causal relationships between two similar embeddings, we construct the dialogue process as a Structural Causal Model (SCM). Many endeavors Furthermore, to enable the learnability of such causal discrimination within embeddings, we propose a common skeleton, named centering one graph node (cogn) skeleton for each utterance derived from some broadly accepted prior hypotheses. It can address the challenges arising from variable-length and unstructured dialogue samples. Subsequently, we develop an autoencoder architecture to learn the unobservable implicit causes. Specifically, we consider the implicit causes as latent variables and utilize a graph attention network (GAT) Finally, we conduct extensive experimental evaluations: 1) our approach significantly outperforms existing methods including prominent LLMs (GPT-3.5 and GPT-4) in two affective reasoning tasks (ECPE and ECSR) and one emotion recognition task (ERC), demonstrating its effectiveness in affective reasoning. 2) our method exhibits a significant reduction in false predictions for negative samples across three causal discrimination scenarios. 3) we curate a synthetic dataset with implicit causes to visualize the latent variable in our implementation. Our contribution is four-fold: • We formulated the dialogue process as an SCM and analyzed the causal relationships represented by different independent conditions. • We devised the cogn skeleton to address the problems of variable-length and unstructured dialogue samples. • We adopted an autoencoder architecture to overcome the unobservability of implicit causes and make it learnable. • We constructed a synthetic dataset with implicit causes and conducted extensive evaluations of our proposed method. 2 Related Works and Challenges
|
For notational consistency, we use the following terminology. The target utterance U t is the t th utterances of a conversation D = (U 1 , U 2 , U 3 , . . . , U N ) where N is the maximum number of utterances in this conversation and 0 < t ⩽ N . The emotion label Emo t denotes the emotion type of U t . The emotion-cause pair (ECP) is a pair (U t , U i ), where U i is the i th utterance of this conversation. In the ECP, U t represents the emotion utterance and U i is the corresponding cause utterance. Moreover, the cause label C t,i denotes the cause span type of the ECP (U t , U i ). Thus, in a given text, ERC is the task of identifying all Emo t . Moreover, ECPE aims to extract a set of ECPs and ECSR aims to identify all C t,i . Chen et al. ( We examined the performance of a range of methods for addressing affective reasoning in conversations, including both unsupervised approaches (large language models (LLMs), BERT-based pretrained models) and supervised approaches (taskrelated approaches). Overall, all the methods demonstrated a lack of discriminability on two types of challenges: • Samples where emotional utterances and causal utterances are interchanged. For a dialogue instance, if the ECP is (U 1 , U 2 ) (U 2 is the cause of U 1 ), the prediction results obtained by the existing methods tend to include both (U 1 , U 2 ) and (U 2 , U 1 ). Table • Samples with indirect connections. For example, if the ECPs in a conversation are (U 1 , U 2 ) and (U 2 , U 3 ), the prediction results obtained by the methods often include an additional pair (U 1 , U 3 ). We evaluated the performance of existing methods on these two challenges, and the detailed results are shown in Table In the area of causal discovery, Causal Markov and Faithfulness Assumptions [U1]It's my bad for not calling. [Sad] [U2]Don't bother coming home. [Angry] [U3]You're going to kick me out? [Surprized] [U4]Exactly. [Angry] [U5]I won't have to listen to you. [Angry] [E1]Implicit cause i.e., Another speaker came home late and forgot to call. i.e., This speaker wanted his rules to be respected but it is broken now. U 1 Figure The noise terms (also called exogenous variables) for each variable, enables methods such as Independent Component Analysis (ICA) to identify more comprehensive causal relationships between the two fitted variables. In this section, we begin by outlining incorporating i.i.d. noise terms into a dialogue model to construct an SCM in Section 3.1, demonstrating independent residual allowing for the identification of more specific causal relations within pairs of fitted utterances. Next, to mitigate conflicts between SCM models and dialogue data, we designed cogn skeletons with six instantiations in Section 3.2. Finally, we propose a deep learning implementation to tackle the issue of noise being unknown in dialogue data in Section 3.3. In order to imbue causal discriminability into the fitting process of two relevant utterances, we algebraically construct the conversation model as a Structural Causal Model (SCM). Definition 1: An SCM of a dialogue is a 3 tuple ⟨U, E, F⟩, where U is the set of utterances where rel Ut denotes a set of utterances that point to the U t . Definition 1 establishes the construction of a novel computational model for dialogue process, as exemplified in Figure Definition 2: The relationship of two utterances X and Y in a dialogue is causal discriminable, from the independent conditions: where Σ represents the residual terms in fitting process. (The proof is shown in Appendix A.) 
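Definition 2 can be illustrated with a small bivariate sketch: regress each variable on the other and test which residual is (more nearly) independent of the putative cause, here with a kernel HSIC statistic. The paper applies the independence conditions to fitted utterance embeddings and learned implicit causes rather than to raw scalars, so the linear fit, the RBF/median-heuristic kernel, and the function names below are assumptions made purely for illustration.

```python
import numpy as np

def _rbf_gram(x, sigma=None):
    """RBF kernel Gram matrix for a 1-D sample (median-heuristic bandwidth)."""
    d = (x[:, None] - x[None, :]) ** 2
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(d[d > 0]))
    return np.exp(-d / (2 * sigma ** 2))

def hsic(x, y):
    """Biased HSIC estimate trace(KHLH)/n^2; larger means more dependent."""
    n = len(x)
    h = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(_rbf_gram(x) @ h @ _rbf_gram(y) @ h)) / n ** 2

def residual(target, regressor):
    """Least-squares residual of `target` fitted on `regressor` plus a bias."""
    a = np.stack([regressor, np.ones_like(regressor)], axis=1)
    coef, *_ = np.linalg.lstsq(a, target, rcond=None)
    return target - a @ coef

def causal_direction(x, y):
    """Prefer the direction whose residual is more independent of the input,
    mirroring the independence conditions of Definition 2."""
    dep_x_to_y = hsic(residual(y, x), x)   # residual of y given x vs. x
    dep_y_to_x = hsic(residual(x, y), y)   # residual of x given y vs. y
    return "x -> y" if dep_x_to_y < dep_y_to_x else "y -> x"
```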
Example 1: In Example 1, it is observed that any two utterances can be fitted together as they are mutually dependent. However, causal discriminability can be employed to differentiate their distinct causal structures. For instance, the residual term and Σ U 2 is not independent of U 3 , implying the presence of common cause (U 1 ) between U 2 and U 3 . Establishing a skeleton is the first step in causal discovery, as different skeletons provide distinct learning strategies for recovering the relationships between variables. However, utterances differ from the variables that causal discovery often uses. Specifically, each conversation has a different amount (N ) of utterances, and different inter-utterances relationships related to the context. Hence, it is intractable to build a general causal skeleton with fixed nodes and edges to describe all conversation samples. Fortunately, several published GNN-based approaches From a given causal skeleton, a linear SCM can be equivalently represented as: where rel t denotes a set of utterances that point to the U t (7-th utterance) in Figure Hence, we treat A T as an autoregression matrix of the G, and then E can be yielded by an autoencoder model. The whole process reads: where f (•) and g( The details of this process are shown in Figure where W ℓ row ∈ R N ×1 and W ℓ col ∈ R N ×1 are the learnable parameters in the graph attention. Moreover, the GNN aggregates the information from the neighbor utterances as following: where W ℓ stands for parameters in the corresponding layer. From the final layer of the evaluation process, by extracting A L-1 computed in Equation Decoder. We utilize the A and E computed from Encoder to generate the causal representation H. With a fixed adjacency matrix A, the GNN aggregates the information of implicit causes from neighbor nodes as follows: where M ℓ is parameters in the corresponding layer. As the same architecture as the encoder, H = M LP (E L ). Additionally, the plug-in RNN is integrated with GNN to address the appetite of Hypothesis 6: where p ℓ is the state of GRU model, with p computed by self-attention proposed by Thost and Chen (2021). In where e is any emotion type in Emo t , p e denotes the probability labeled with emotion e. In the whole process of ARC tasks, we followed In this section, we conduct extensive experiments to answer the 3 research questions: RQ1: How effective is our method in affective reasoning tasks? RQ2: How do we justify the causal discriminability of our method? RQ3: How do we gauge the difference between the latent variable E and designed implicit causes? According to the hypotheses of these baselines, for each cogn skeleton, we choose one recent SOTA work: II: DialogXL Table We further conducted six sets of ablation experiments to study the effects of different modules. In As shown in Table We are also concerned about the causal discriminability for similar utterances. Table Additionally, we show the adjacent matrices of our model and current SOTA methods in Appendix F. which indicates that our model can more freely explore the relationship between different utterances via adjacent matrices shifting rather than being limited to a fixed structure (e.g., attention module). The latent variable E is intended to represent the mentioned implicit causes. Therefore, the global distribution of the latent variable E should be approximately equal to the one of implicit causes. 
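Setting aside the GAT parameterisation and treating the maps f and g as identities, the algebra that links the autoregression matrix, the utterance representations, and the implicit causes can be sketched as below. Whether A or its transpose enters depends on the orientation convention, so the use of A^T here (with U = A^T U + E) is an assumption.

```python
import numpy as np

def encode_implicit_causes(U, A):
    """Linear-SCM view of the encoder: with U = A^T U + E, the implicit
    causes are recovered as E = (I - A^T) U.
    U: (N, d) utterance representations; A: (N, N) autoregression matrix."""
    n = A.shape[0]
    return (np.eye(n) - A.T) @ U

def decode_utterances(E, A):
    """Decoder counterpart: reconstruct U = (I - A^T)^{-1} E, assuming
    I - A^T is invertible (A has zero diagonal in the acyclic case)."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, E)

# Round trip on random inputs; a strictly lower-triangular A is acyclic.
rng = np.random.default_rng(0)
A = np.tril(rng.normal(size=(5, 5)), k=-1)
U = rng.normal(size=(5, 16))
assert np.allclose(decode_utterances(encode_implicit_causes(U, A), A), U)
```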
Although human evaluation labels are better for proving reasonable performance, it is intractable to annotate implicit causes due to their unobservability. We thus trained our model in a synthetic dataset given a set of fixed i.i.d. implicit causes to observe how the E is similar to the ground truth implicit causes distributions. Figure Moreover, in Appendix G, we first prove the approximate emotion consistency between utterance U t and its implicit causes when U t and U i in the emotion-cause pair (U t , U i ) do not belong to the same emotion category. Then, we demonstrate through the ERC task that by replacing Ĥ with E, the emotion consistency provided by implicit causes is preserved. In our model, our method can distinguish between U i → U j and U i ← U k → U j . However, our method is unable to distinguish between U i → U j and U i ← L → U j , where L represents a unobserved variable, called common causes or con-founders. In Tables Therefore, we proposed a theoretical design for testifying the existing of latent confounders: Confounding between Non-adjacent Nodes: Consider two utterances U i and U j being nonadjacent utterances. Let P a be the union of the parents of U i and U j : P a = U i ∪ U j . If we perform an intervention on P a (i.e., do(P a = pa)), we thus have Confounding between Adjacent Nodes: Consider two utterances U i and U j being adjacent utterances: U i → U j . If there are no latent confounders, we have Indeed, implementing intervention operations on conversation data poses a significant challenge. Therefore, in our new work, we have proposed general intervention writing: do(X) := P a(X) = ∅ where P a(X) denotes the parent set. Moreover, the most significant obstacle to further research is the lack of a high-quality dataset with complete causal relationship labels. Hence, we have constructed a simulated dialogue dataset via GPT-4 and plan to make it open soon. The results of testing prevalent approaches on the ARC task have demonstrated that almost all approaches are unable to determine the specific causal relationship that leads to the association of two well-fitted embeddings. In order to enhance the causal discrimination of existing methods, we constructed a SCM with i.i.d. noise terms, and analyzed the independent conditions that can identify the causal relationships between two fitted utterances. Moreover, we proposed the cogn framework to address the unstructured nature of conversation data, designed an autoencoder implementation to make implicit cause learnable, and created a synthetic dataset with noise labels for comprehensive experimental evaluation. While our method still has some limitations, such as confounders and the inability to scale to all methods, we hope that our theory, design, and model can provide valuable insights for the broader exploration of this problem to demonstrate that our work is de facto need for identifying causal relationships. Let X and Y be two variables in an SCM, with their respective noise terms denoted as E X and E Y (where E X and E Y are mutually independent). Let X and Ŷ represent the fitted values of X and Y w.r.t. each other: X = λY and Ŷ = 1 λ X. The residual terms between the fitted values and the true values are denoted as Σ X = X -X and Hence, if the SCM only contains two variables writing: The residual terms could write: Then, if the true causal relationship is from Y to X, λ = k. Σ X does not contain the term of E Y while Σ Y contains the term of E X . 
We could obtain the independence of residual terms writting: and vice versa. Therefore, we could obtain the independence condition: Furthermore, there may exist a set of independence: We would like to assume that there is a latent variable L, for this situation, constructing two relationships L → X and L → Y . Then we obtain: Σ L ̸⊥ ⊥ X, Σ L ̸⊥ ⊥ Y . By utilizing the transitivity of conditional independence, we can establish X ̸⊥ ⊥ Y , and finally acheive the situation Σ X ̸⊥ ⊥ Y , Σ Y ̸⊥ ⊥ X. We likewise assume a latent variable L establishing X → L and Y → L for the opposite situation where Σ X ⊥ ⊥ Y , Σ Y ⊥ ⊥ X, and X, Y are two isolated variables in SCM. From the above independence conditions, we could obtain: Due to the graph structure of SCM, we could obtain: Considering the residual terms, we finally obtain: Hence, we could obtain additional two independence conditions: Based on the independence conditions of 2variables SCM, we could extend it to the general SCM including more than 2 variables. Given any two variables in a SCM, we could testify to the independence condition and finally orientate via the whole SCM. Hypothesis 0. ∀U i ∈ D, it has the same causal skeleton as other utterances. By regarding Hypothesis 0 as the prior knowledge, a common causal skeleton containing a target variable and a fixed number of related variables can reason about the relations between the target utterance and other considered utterances. We denote this skeleton of U t by S(U t ). There are ∀U i , U j ∈ D, S(U i ) = S(U j ). Additionally, there are some other empirical hypotheses from the above approaches. These hypotheses can be divided into two categories: one is about the "order" of utterances (Hypotheses 1, 2, 3), and the other is about intermingling dynamics among the interlocutors Hypothesis 1. Hypothesis 2. Hypothesis 4. Hypothesis 5. Hypothesis 6. A cogn skeleton is denoted by H = (V, E, M). The V = U 1 , U 2 , U 3 , ..., U N represents a set of utterances in a conversation, and the edge (i, j, m i,j ) ∈ E denotes the influence from U i to U j , where m i,j ∈ M is the type of the edge depending on whether U i and U j belong to one and the same speaker. Thus M = 0, 1, where 1 for that they are the same speaker and 0 for different. Then we denote the speaker type of U i by a function p(U i ). At last, we show the process of building 6 cogn skeletons in Algorithms 1 -6. Finally, in Figure note that adjacency can not indicate all the differences among these skeletons, for example, Hypothesis 6 takes effect when the model learns the relationship based on the VI skeleton. DailyDialog We follow EmoryNLP (Zahiri and Choi, 2018): A TV show scripts dataset with 7 emotion labels IEMOCAP RECCON Synthetic dataset: We create a synthetic dataset by following the benchmark of the causal discovery field In the word embedding, we adopt the affect-based pre-trained features 1 proposed by Although there are different pre-trained models in these skeleton baselines, the SOTA work DAG-ERC and EGAT have investigated their performances in a consistent pre-trained model. Therefore, for a fair and direct comparison, we continue this benchmark using the pre-trained embedding published by DAG-ERC for three tasks. In the hyper-parameters, we follow the setting of Finally, we adopted downstream task modules consistent with the SOTA baselines: For evaluation metrics, we follow In Table Note that W = (I -A) and A i,i = 0. 
So in W , the value of the elements on the diagonal is constant at 1 and is a constant maximum of each column. Naturally, f (E) is an approximate estimate of f (U ) especially U t and U i in the ECP (U t , U i ) do not belong to the same emotion category, which is why we think implicit causes are reasonable when the F1 score of Table In Table
| 1,033 | 2,641 | 1,033 |
Character-Level Translation with Self-attention
|
We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments.
|
Most existing Neural Machine Translation (NMT) models operate at the word or subword level, which tends to make these models memory inefficient because of large vocabulary sizes. Character-level models In this work, we perform an in-depth investigation of the suitability of self-attention models for character-level translation. We consider two models: the standard transformer from We evaluate these models on both bilingual and multilingual translation to English, using up to three input languages: French (FR), Spanish (ES), and Chinese (ZH). We compare the performance when translating from close (e.g., FR and ES) and from distant (e.g., FR and ZH) input languages (Section 5.1), and we analyze the learned character alignments (Section 5.2). We find that self-attention models work surprisingly well for character-level translation, achieving performance competitive with equivalent subword-level models while requiring up to 60% fewer parameters (under the same model configuration). At the character level, the convtransformer outperforms the standard transformer, converges faster, and produces more robust alignments.
|
Fully character-level translation was first tackled in Multilingual training of character-level models is possible not only for languages that have almost identical character vocabularies, such as French and Spanish, but even for distant languages that can be mapped to a common character-level vocabulary, for example, through latinizing Russian More recently, The transformer The original transformer where √ d k is a scaling factor. For the encoder, Q, K and V are equivalent, thus, given an input sequence with length N , Attention performs N 2 comparisons, relating each word position with the rest of the words in the input sequence. In practice, Q, K, and V are projected into different representation subspaces (called heads), to perform Multi-Head Attention, with each head learning different word relations, some of which might be interpretable Intuitively, attention as an operation might not be as meaningful for encoding individual characters as it is for words, because individual character representations might provide limited semantic information for learning meaningful relations on the sentence level. However, recent work on language modeling To facilitate character-level interactions in the transformer, we propose a modification of the standard architecture, which we call the convtransformer. In this architecture, we use the same decoder as the standard transformer, but we adapt each encoder block to include an additional subblock. The sub-block (Figure For all convolutional layers, we set the number of filters to be equal to the embedding dimension size d model , which results in an output of equal dimension as the input M . Therefore, in contrast to Datasets. We conduct experiments on two datasets. First, we use the WMT15 DE→EN dataset, on which we test different model configurations and compare our results to previous work on character-level translation. We follow the preprocessing in (ii) all sentences in the corpus are from the same domain. We construct our training corpora by randomly sampling one million sentence pairs from the FR, ES, and ZH parts of the UN dataset, targeting translation to English. To construct multilingual datasets, we combine the respective bilingual datasets (e.g., FR→EN, and ES→EN) and shuffle them. To ensure all languages share the same character vocabulary, we latinize the Chinese dataset using the Wubi encoding method, following Tasks. Our experiments are designed as follows: (i) bilingual scenario, in which we train a model with a single input language; (ii) multilingual scenario, in which we input two or three languages at the same time without providing any language identifiers to the models and without increasing the number of parameters. We test combining input languages that can be considered as more similar in terms of syntax and vocabulary (e.g. FR and ES) as well as more distant (e.g., ES and ZH). Model comparison. In Table We find character-level training to be 3 to 5 times slower than subword-level training due to much longer sequence lengths. However, the standard transformer trained at the character level already achieves very good performance, outperforming the recurrent model from Multilingual experiments. 
In Table Although multilingual translation can be realized using subword-level models through extracting a joint segmentation for all input languages (e.g., as in The convtransformer consistently outperforms the character-level transformer on this dataset, with a gap of up to 2.3 BLEU on bilingual translation (ZH→EN) and up to 2.6 BLEU on multilingual translation The convtransformer is about 30% slower to train than the transformer (see Figure To gain a better understanding of the multilingual models, we analyze their learned character alignments as inferred from the model attention probabilities. For each input language (e.g., FR), we compare the alignments learned by each of our multilingual models (e.g., FR + ES → EN model) to the alignments learned by the corresponding bilingual model (e.g., FR → EN). Our intuition is that the bilingual models have the greatest flexibility to learn high-quality alignments because they are not distracted by other input languages. Multilingual models, by contrast, might learn lower quality alignments because either (i) the architecture is not robust enough for multilingual training; or (ii) the languages are too dissimilar to allow for effective joint training, prompting the model to learn alternative alignment strategies to accommodate for all languages. We quantify the alignments using canonical correlation analysis (CCA) For similar source and target languages (e.g., the FR+ES→EN model), we observe a strong pos- itive correlation to the bilingual models, indicating that alignments can be simultaneously learned. When introducing a distant source language (ZH) in the training, we observe a drop in correlation, for FR and ES, and an even larger drop for ZH. This result is in line with our BLEU results from Section 5.1, suggesting that multilingual training on distant input languages is more challenging than multilingual training on similar input languages. The convtransformer is more robust to the introduction of a distant language than the transformer (p < 0.005 for FR and ES inputs, according to a one-way ANOVA test). Our results also suggest that more sophisticated attention architectures might need to be developed when training multilingual models on several distant input languages. We performed a detailed investigation of the utility of self-attention models for character-level translation. We test the standard transformer architecture, as well as introduce a novel variant which augments the transformer encoder with convolutions, to facilitate information propagation across nearby characters. Our experiments show that self-attention performs very well on characterlevel translation, with character-level architectures performing competitively when compared to equivalent subword-level architectures while requiring fewer parameters. Training on multiple input languages is also effective and leads to improvements across all languages when the source and target languages are similar. When the languages are different, we observe a drop in performance, in particular for the distant language. In future work, we will extend our analysis to include additional source and target languages from different language families, such as more Asian languages. We will also work towards improving the training efficiency of character-level models, which is one of their main bottlenecks, as well as towards improving their effectiveness in multilingual training. ment appears to be better preserved. 
This is another indication that the convtransformer is more robust for multilingual translation of distant languages. (iv) for multilingual translation with three inputs, where two of the three languages are close (FR+ES+ZH→EN, Figure Pour que ce cadre institutionnel soit efficace, il devra remédier aux lacunes en matière de réglementation et de mise en oeuvre qui caractérisent à ce jour la gouvernance dans le domaine du développement durable. reference For this institutional framework to be effective, it will need to fill the regulatory and implementation deficit that has thus far characterized governance in the area of sustainable development. To ensure that this institutional framework is effective, it will need to address regulatory and implementation gaps that characterize governance in sustainable development. convtransformer In order to ensure that this institutional framework is effective, it will have to address regulatory and implementation gaps that characterize governance in the area of sustainable development. We are convinced that the future of mankind under security, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be reinforced by the recognition of the facts of the past. To ensure that this institutional framework is effective, gaps in regulatory and implementation that have characterized governance in sustainable development to date. convtransformer For this institutional framework to be effective, it will need to address gaps in regulatory and implementation that characterize governance in the area of sustainable development. We are convinced that the future of mankind in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of mankind in security, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. The use of expert farm management is also important to maximize land productivity and efficiency in the use of irrigation water. ZH→EN transformer The use of expert management farms is also important for maximizing productivity and irrigation use. convtransformer The use of experts to manage farms is also important for maximizing efficiency in productivity and irrigation water use. We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. The use of expert management farms is also important for maximizing productivity and efficiency in irrigation water use. convtransformer The use of expert management farms is also important for maximizing productivity and irrigation water efficiency. The use of expert farm management is also important for maximizing productivity and irrigation water use efficiency. convtransformer The use of expert management farms to maximize efficiency in productivity and irrigation water use is also important. 
The use of expert management farms is also important for maximizing productivity and irrigation water use. convtransformer It is also important that expert management farms be used to maximize efficiency in productivity and irrigation use.
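Returning to the encoder modification described earlier, the convolutional sub-block that lets each character mix information with its neighbours before self-attention can be sketched in PyTorch as follows. The kernel sizes, the concatenate-then-project wiring, and the residual/LayerNorm placement are assumptions about details not fully visible in the text; only the constraint that the filter count equals d_model (so the output keeps the input dimensionality) is taken from the description above.

```python
import torch
import torch.nn as nn

class CharConvSubBlock(nn.Module):
    """1-D convolutions over the character sequence, concatenated across
    kernel sizes and projected back to d_model, with a residual connection."""
    def __init__(self, d_model, kernel_sizes=(3, 5, 7)):   # kernel sizes assumed
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Linear(len(kernel_sizes) * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        h = x.transpose(1, 2)                 # -> (batch, d_model, seq_len)
        feats = [torch.relu(conv(h)) for conv in self.convs]
        h = torch.cat(feats, dim=1).transpose(1, 2)
        return self.norm(x + self.proj(h))    # back to (batch, seq_len, d_model)
```

In an encoder layer this sub-block would sit before the usual self-attention and feed-forward sub-layers; the decoder is left unchanged.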
| 618 | 1,123 | 618 |
Which Melbourne? Augmenting Geocoding with Maps
|
The purpose of text geolocation is to associate geographic information contained in a document with a set (or sets) of coordinates, either implicitly by using linguistic features and/or explicitly by using geographic metadata combined with heuristics. We introduce a geocoder (location mention disambiguator) that achieves state-of-the-art (SOTA) results on three diverse datasets by exploiting the implicit lexical clues. Moreover, we propose a new method for systematic encoding of geographic metadata to generate two distinct views of the same text. To that end, we introduce the Map Vector (MapVec), a sparse representation obtained by plotting prior geographic probabilities, derived from population figures, on a World Map. We then integrate the implicit (language) and explicit (map) features to significantly improve a range of metrics. We also introduce an open-source dataset for geoparsing of news events covering global disease outbreaks and epidemics to help future evaluation in geoparsing.
|
Geocoding
|
Depending on the task objective, geocoding methodologies can be divided into two distinct categories: (1) document geocoding, which aims at locating a piece of text as a whole, for example geolocating Twitter users Computational methods in geocoding broadly divide into rule-based, statistical and machine learning-based. Edinburgh Geoparser The statistical geocoder Topocluster Among the recent machine learning methods, bag-of-words representations combined with a Support Vector Machine Figure We used separate layers, convolutional and/or dense (fully-connected), with ReLu activations FD for each candidate is computed by reducing the prediction error (the distance from predicted coordinates to candidate coordinates) by the value of error multiplied by the estimated prior probability (candidate population divided by maximum population) multiplied by the Bias parameter. The value of Bias = 0.9 was determined to be optimal for highest development data scores and is identical for all highly diverse test datasets. Equation Word embeddings and/or distributional vectors encode a word's meaning in terms of its linguistic context. However, location (named) entities also carry explicit topological semantic knowledge such as a coordinate position and a population count for all places with an identical name. Until now, this knowledge was only used as part of simple disparate heuristics and manual disambiguation procedures. However, it is possible to plot this spatial data on a world map, which can then be reshaped into a 1D feature vector, or a Map Vector, the geographic representation of location mentions. MapVec is a novel standardised method for generating geographic features from text documents beyond lexical features. This enables a strong geocoding classification performance gain by extracting additional spatial knowledge that would normally be ignored. Geographic semantics cannot be inferred from language alone (too imprecise and incomplete). Word embeddings and distributional vectors use language/words as an implicit container of geographic information. Map Vector uses a lowresolution, probabilistic world map as an explicit container of geographic information, giving us two types of semantic features from the same text. In related papers on the generation of location representations, MapVec initially begins as a 180x360 world map of geodesic tiles. There are other ways of representing the surface of the Earth such as using nested hierarchies Training data was generated from geographically annotated Wikipedia pages (dumped February 2017). Each page provided up to 30 training instances, limited to avoid bias from large pages. This resulted in collecting approximately 1.4M training instances, which were uniformly subsampled down to 400K to shorten training cycles as further increases offer diminishing returns. We used the Python-based NLP toolkit Spacy Our evaluation compares the geocoding performance of six systems from Section 2, our geocoder (CamCoder) and the population baseline. Among these, our CNN-based model is the only neural approach. We have included all open-source/free geocoders in working order we were able to find and they are the most up-to-date versions. Tables 1 and 2 feature several machine learning algorithms including Long-Short Term Memory (LSTM) We use the three standard and comprehensive metrics, each measuring an important aspect of geocoding, giving an accurate, holistic evaluation of performance. 
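Before turning to the metric definitions, the Map Vector construction and the candidate-ranking heuristic described above can be sketched as follows: population-derived priors for the candidate coordinates of the context locations are plotted on a 180x360 grid and flattened into a feature vector, and each candidate's prediction error is shrunk by error times prior times Bias, with Bias = 0.9. Aggregation by summation, the grid-cell indexing, and the function names are assumptions for illustration.

```python
import numpy as np

def map_vector(candidates, n_lat=180, n_lon=360):
    """Plot population-based prior probabilities of candidate coordinates
    on a coarse world grid and flatten it into a 1-D MapVec feature.
    candidates: iterable of (lat, lon, population) triples."""
    grid = np.zeros((n_lat, n_lon), dtype=np.float32)
    max_pop = max(pop for _, _, pop in candidates) or 1.0
    for lat, lon, pop in candidates:
        row = min(max(int(90 - lat), 0), n_lat - 1)    # 90N maps to row 0
        col = min(max(int(lon + 180), 0), n_lon - 1)   # 180W maps to col 0
        grid[row, col] += pop / max_pop                # population prior
    if grid.max() > 0:
        grid /= grid.max()                             # keep values in [0, 1]
    return grid.reshape(-1)

def adjusted_error(error_km, population, max_population, bias=0.9):
    """Shrink the predicted-to-candidate distance by error * prior * bias,
    where the prior is population / maximum candidate population; the
    candidate with the smallest adjusted error is selected."""
    prior = population / max_population
    return error_km - error_km * prior * bias
```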
A more detailed cost-benefit analysis of geocoding metrics is available in prior work. (1) Average (Mean) Error is the sum of all geocoding errors per dataset divided by the number of errors. It is an informative metric as it also indicates the total error, but it treats all errors as equivalent and is sensitive to outliers; (2) Accuracy@161km is the percentage of errors that are smaller than 161km (100 miles). While it is easy to interpret, giving a fast and intuitive understanding of geocoding performance in percentage terms, it ignores all errors greater than 161km; (3) Area Under the Curve (AUC) is a comprehensive metric, initially introduced for geocoding in prior work. News Corpus: The Local Global Corpus (LGL). We now introduce GeoVirus, an open-source test dataset for the evaluation of geoparsing of news events covering global disease outbreaks and epidemics. It was constructed from freely available WikiNews articles as follows. (1) The WikiNews contributor(s) who wrote the article annotated most, but not all, location references. The first author checked those annotations and identified further references, then proceeded to extract the place name and the indices of its start and end characters in the text. All tested models (except CamCoder) operate as end-to-end systems; therefore, it is not possible to perform geocoding separately. Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in the results tables. We note that no single computational paradigm dominates the results. The Pearson correlation coefficient of the target entity ambiguity and the error size was only r ≈ 0.2, suggesting that CamCoder's geocoding errors do not simply rise with location ambiguity. Errors were also not correlated (r ≈ 0.0) with population size, with all types of locations geocoded to various degrees of accuracy. All error curves follow a power-law distribution, with between 89% and 96% of errors less than 1500km and the rest rapidly increasing into thousands of kilometers. Errors also appear to be uniformly geographically distributed across the world. The strong lexical component is shown in the results tables. Geocoding methods commonly employ lexical features, which have proved to be very effective. Our lexical model was the best language-only geocoder in extensive tests. It is possible, however, to go beyond lexical semantics. Locations also have a rich topological meaning, which has not yet been successfully isolated and deployed. We need a means of extracting and encoding this additional knowledge. To that end, we introduced MapVec, an algorithm and a container for encoding context locations in geodesic vector space. We showed how CamCoder, using lexical and MapVec features, outperformed both approaches, achieving a new SOTA. MapVec remains effective with various machine learning frameworks (Random Forest, CNN and MLP) and substantially improves accuracy when combined with other neural models (LSTMs). Finally, we introduced GeoVirus, an open-source dataset that helps facilitate geoparsing evaluation across more diverse domains with different lexical-geographic distributions.
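To make the MapVec representation described above more concrete, the following is a minimal sketch in Python. It assumes that each location mention in the context comes with gazetteer candidates given as (latitude, longitude, population) triples; the function name, the normalisation and the population-based prior are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def mapvec(context_candidates, rows=180, cols=360):
    """Build a MapVec-style feature vector.

    context_candidates: list with one entry per location mention in the
        context; each entry is a list of (lat, lon, population) candidate
        triples taken from a gazetteer.
    Returns a flattened 1D vector of length rows * cols (64,800).
    """
    grid = np.zeros((rows, cols), dtype=np.float32)
    for candidates in context_candidates:
        total_pop = sum(max(pop, 1) for _, _, pop in candidates)
        for lat, lon, pop in candidates:
            # Map latitude [-90, 90] and longitude [-180, 180] to grid cells
            # (1 degree per tile for a 180 x 360 world map).
            r = min(int((90.0 - lat) * rows / 180.0), rows - 1)
            c = min(int((lon + 180.0) * cols / 360.0), cols - 1)
            # Population-derived prior probability of this candidate.
            grid[r, c] += max(pop, 1) / total_pop
    if grid.max() > 0:
        grid /= grid.max()           # keep values in [0, 1]
    return grid.reshape(-1)          # 1D Map Vector
```

The flattened vector can then be passed to dense or convolutional layers alongside the lexical features.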
| 1,004 | 9 | 1,004 |
Every word counts: A multilingual analysis of individual human alignment with model attention
|
Human fixation patterns have been shown to correlate strongly with Transformer-based attention. Those correlation analyses are usually carried out without taking into account individual differences between participants and are mostly done on monolingual datasets, making it difficult to generalise findings. In this paper, we analyse eye-tracking data from speakers of 13 different languages reading both in their native language (L1) and in English as language learners (L2). We find considerable differences between languages, but also that individual reading behaviour such as skipping rate, total reading time and vocabulary knowledge (LexTALE) influences the alignment between humans and models to an extent that should be considered in future studies.
|
Recent research has shown that relative importance metrics in neural language models correlate strongly with human attention, i.e., fixation durations extracted from eye-tracking recordings during reading. In this short paper, we approach this by analysing (i) differences in correlation between machine attention and human relative fixation duration across languages, (ii) differences within the same language across datasets, text domains and native speakers of different languages, (iii) differences between native speakers (L1) and second language learners (L2), (iv) the influence of syntactic properties such as part-of-speech tags, and (v) the influence of individual differences in demographics, i.e., age, vocabulary knowledge, depth of processing. Taking into account individual and subgroup differences in future research will encourage single-subject and cross-subject evaluation scenarios, which will not only improve the generalization capabilities of ML models but also allow for adaptable and personalized technologies, including applications in language learning, reading development or assistive communication technology. Additionally, understanding computational language models from the perspectives of different user groups can lead to increased fairness and transparency in NLP applications.
|
We quantify the individual differences in human alignment with Transformer-based attention in a correlation study where we compare relative fixation duration from native speakers of 13 different languages on the MECO corpus. The results show that (i) the correlation varies greatly across languages, (ii) L1 reading data correlates less with neural attention than L2 data, and (iii) generally, in-depth reading leads to higher correlation than shallow processing. Our code is available at github.com/stephaniebrandl/eyetracking-subgroups. Multilingual eye-tracking Brysbaert (2019) found differences in words-per-minute rates during reading across different languages and proficiency levels. That eye-tracking data contains language-specific information is also a conclusion of earlier work. The neglect of individual differences is a well-known issue in cognitive science, which leads to theories that support a misleading picture of an idealised human cognition that is largely invariant across individuals (Levinson, 2012). Along the same lines, when using cognitive signals in NLP, the data is most often aggregated across all participants. State-of-the-art word embeddings are highly correlated with eye-tracking metrics. We analyse the Spearman correlation coefficients between first-layer attention in a multilingual language model and relative fixation durations extracted from a large multilingual eye-tracking corpus including 13 languages. Total fixation time (TRT) per word is divided by the sum over all TRTs in the respective sentence to compute relative fixation duration for individual participants, similar to previous work. We extract first-layer attention for each word from mBERT. Eye-tracking Data The L1 part of the MECO corpus contains data from native speakers reading 12 short encyclopedic-style texts (89-120 sentences) in their own languages. For comparison, we also run the experiments on the GECO corpus. In the following, we show results for the correlation analysis across languages and an in-depth analysis of different influences on those correlations. Languages We compute the Spearman correlation between relative fixation and first-layer attention per sentence and average across sentences for all individual participants. We show correlation values averaged across participants for each language (L1) and the corresponding data for English L2. Correlations for XLM-R are about 0.1 higher and for mT5 0.1-0.2 lower compared to mBERT. The correlations for English L2 are very similar across languages (0.3-0.34, mBERT) and lowest for the English L1 participants (0.26, mBERT). Correlation values for GECO are slightly lower for the Dutch experiments but in the same range for the English part. Processing depth To further analyse the different correlation values, particularly the low correlation in the L2 experiment for English native speakers, we look into skipping rates and total reading times, and hereby focus on mBERT to make results more comparable to prior work. POS We look deeper into cross-lingual differences and show correlation values on token level for 6 frequent POS tags. LexTALE We show LexTALE scores for English L2 and fi, en, nl for L1 versus correlation values. Our results show that the correlation between relative fixation duration and first-layer attention varies greatly across languages when read by native speakers.
These differences can be attributed in part to the depth of processing: Languages such as Finnish and Greek, which show high total reading times, show a more evenly distributed correlation pattern across the most frequent parts of speech. Moreover, L1 English shows a high skipping rate and the lowest correlations. We find that more careful in-depth reading -processing more words for a longer time -correlates more strongly with attention than fast shallow reading. This is in line with previous research showing that attention patterns in BERT carry high entropy values, i.e., are broadly distributed, particularly in the first layers The differences in skipping rate have various origins. On one hand, skipping rate is regulated by word length We furthermore looked at the influence of age and gender but could not find any meaningful differences. This might be due to the fact that all participants were university students, most of them under the age of 30, thus representing a very specific group of the overall population. It is also important to note that most of the languages in MECO are Indo-European and only 4 are not using the Latin script. In summary, we have shown the impact of various subgroup characteristics reflected in reading and how they affect the correlation to neural attention. We argue that these differences should be taken into account when leveraging human language processing signals for NLP.
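To make the quantities used in this analysis concrete, here is a minimal sketch of the per-participant computation of relative fixation duration, its sentence-level Spearman correlation with word-level attention, and the skipping rate. The word-to-token alignment and the way first-layer attention is aggregated to one value per word are left to the caller and are assumptions rather than the exact setup above.

```python
import numpy as np
from scipy.stats import spearmanr

def relative_fixation(trts):
    """Relative fixation duration: each word's total reading time (TRT)
    divided by the sum of all TRTs in the sentence."""
    trts = np.asarray(trts, dtype=float)
    total = trts.sum()
    return trts / total if total > 0 else trts

def participant_scores(sentences):
    """sentences: list of (trts, attention) pairs for one participant, where
    both are word-level sequences of equal length.
    Returns (mean Spearman correlation, skipping rate, total reading time)."""
    rhos, all_trts = [], []
    for trts, attention in sentences:
        rho, _ = spearmanr(relative_fixation(trts), attention)
        if not np.isnan(rho):
            rhos.append(rho)
        all_trts.extend(trts)
    all_trts = np.asarray(all_trts, dtype=float)
    skipping_rate = float((all_trts == 0).mean())   # words never fixated
    return float(np.mean(rhos)), skipping_rate, float(all_trts.sum())
```

Language-level scores are then obtained by averaging the per-participant correlations over all participants of a language.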
| 754 | 1,312 | 754 |
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
|
Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents. 1
|
Performing complex reasoning over long input documents often requires forming high-level abstractions of the text (e.g., plots and themes in a narrative) and then conducting a variety of inferences on top of those abstractions. [Overview figure: mine helpful actions from training-set questions, e.g., DEFINE(X), COMPARE(X,Y), FIND_EMOTION(X), ...]
|
[Overview figure, continued: the plan is executed step by step, e.g., open_conv = FIND_ELEMENT(CTX, "opening conver..") and final_scene = SUMMARIZE_X(CTX, "final_scene"), where open_conv resolves to "In the initial conversation, Phil Conover is excited about his upcoming mission to be the first man to see the other side of the moon ....".]
What part of the final scene best connects to the story's opening conversation? To answer this question, we need to gather and synthesize information from across the story, which motivates decomposing the question into a plan of actions, as in:
1. Identify all participants in the initial conversation.
2. Summarize the initial conversation.
3. Summarize events and themes of the final scene.
4. Summarize roles of conversation participants in the final scene.
5. Identify and rank connections between the conversation and the final scene.
Each action in the above plan varies in complexity, from simple lookup-style actions (Step 1) to more challenging query-focused summarization (Steps 2-4) and conceptual linking (Step 5) actions that require deep narrative understanding. Given the rapidly advancing capabilities of large language models (LLMs), how can we use them to answer questions like these? While we could directly prompt LLMs to generate the answer, prior work on simpler reasoning-based tasks shows that this method is inferior to chain-of-thought prompting. Given the difficulty of obtaining plans and intermediate explanations for long documents, one potential solution is to delegate this task to smaller executable modules instead of forcing the LLM to come up with all of them at once. In this work, we introduce PEARL, a framework that combines Planning with Executable Actions for Reasoning over Long documents. Each stage of PEARL (action mining, plan decomposition, and plan execution) is implemented by applying zero-shot or few-shot prompting to an LLM. The stages are as follows.
1. Action mining: An LLM is prompted to come up with simple actions that can help solve questions from an input training dataset, unlike the predefined "toolboxes" in methods such as Toolformer.
2. Plan generation: Given an input test question, an LLM generates an executable plan consisting of a series of actions selected from the action set produced in the previous stage. The plan is formatted as a simple program in which the execution result of one action can serve as an argument to future actions, which enables complex composition (a short illustrative sketch of this format appears at the end of this section).
3. Plan execution: The LLM executes the plan action-by-action via a prompt template that includes an action and the long-form input document. Note that this is the only stage that includes the document, as the other stages operate over just questions.
We demonstrate PEARL's effectiveness on a challenging subset of QuALITY. Prompting LLMs with PEARL yields more accurate and comprehensive answers than those generated by directly prompting the LLM to answer the question, particularly for questions that require reasoning over the full long document. This result is particularly impressive given the potential for error propagation in the PEARL framework: as each stage is implemented via an LLM, errors in plan formulation or execution can significantly affect the output answer. To further verify the integrity of the plans, we perform human evaluation by asking annotators to provide feedback and ratings; annotators generally find the plans to be reasonable, although a small percentage contain unnecessary actions or omit critical actions.
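The sketch below illustrates the plan format and the step-by-step execution referred to in stage 2 above. The plan-line syntax, the action names and the llm() callable are illustrative assumptions; they are not PEARL's actual prompts, parser or action set.

```python
import re

PLAN_LINE = re.compile(r"^(\w+)\s*=\s*(\w+)\((.*)\)$")

def parse_plan(plan_text):
    """Parse lines of the form `output = ACTION(arg1, arg2, ...)`."""
    steps = []
    for line in plan_text.strip().splitlines():
        m = PLAN_LINE.match(line.strip().lstrip("0123456789. "))
        if not m:
            continue
        out, action, raw_args = m.group(1), m.group(2), m.group(3)
        args = [a.strip().strip('"') for a in raw_args.split(",") if a.strip()]
        steps.append((out, action, args))
    return steps

def execute_plan(steps, document, llm):
    """Run the plan step by step; llm(prompt) -> str is any LLM call.
    Arguments that name earlier outputs are replaced by their results;
    the special argument CTX stands for the full document."""
    results = {"CTX": document}
    for out, action, args in steps:
        resolved = [results.get(a, a) for a in args]
        prompt = (f"Action: {action}\nArguments: {resolved}\n"
                  f"Document:\n{document}\n"
                  f"Carry out the action and return the result.")
        results[out] = llm(prompt)
    return results

# Example plan in the style of the overview figure (illustrative):
plan = """
1. open_conv = FIND_ELEMENT(CTX, "opening conversation")
2. final_scene = SUMMARIZE_X(CTX, "final scene")
3. answer = COMPARE(open_conv, final_scene)
"""
```

Because the result of each step is stored under its output variable, later actions such as COMPARE(open_conv, final_scene) can compose earlier results, which is the property the plan format is designed to enable.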
Overall, we hope PEARL further opens the door towards using LLMs for complex reasoning over long documents. Our work builds on recent LLM prompting research and also connects to work on reasoning over long documents. Before describing PEARL, we first survey related papers to contextualize our work within this fast-moving field. Prompting methods: Recently, the capabilities of large language models Does not rely on external tools Chain-of-Thought Table Reasoning over long documents: Large language models have showcased remarkable reasoning capabilities We are interested in using LLMs to solve tasks that require complex reasoning over long documents. 2 In this paper, we focus on the task of answering questions about long-form narratives. Most prompting strategies that aim to improve the reasoning 2 As there is no consensus on what is "long", we consider it to mean documents of several thousands of tokens in length. Given a question about a long document and the seed action set, come up with new actions that could help to answer the question... FIND_MISSION(CTX, X) : Find the mission of character X from the input context CTX... What is the alien's mission? Figure In many prior prompting techniques such as Re-ACT and Toolformer, the LLM is able to query external APIs (e.g., Wikipedia search or a calculator) to solve a given task. Unlike these works, which assume a predefined action space, PEARL mines actions directly from data of similar distribution (in our case, training set questions of QuAL-ITY). As shown by prior research A plan serves as the guiding framework or outline for answering complex questions that may involve multi-step reasoning and/or global understanding of long documents. Given a question, as shown in Figure where the output variable stores the result of the current ACTION , and the arguments can be (1) the input document, (2) a string, or (3) an output variable from previous steps of the plan. When generating the plan, we do not show the LLM the entire document as input, which provides ample space for incorporating few-shot in-context examples. Similar to the seed actions in the previous stage, we provide a seed set of plans and allow the model to generate more demonstrations automatically, which we provide more details in Section 3.4. In the previous stage, the LLM generates a plan that serves as a blueprint for producing a response. To execute each step in the plan, we prompt the LLM with a template filled with output from previous stages. Concretely, as shown in Figure 3.4 Self-correction and self-refinement LLM-generated plans can have two major issues: (1) they can be syntactically-invalid, which prevents execution; and (2) they can semantically irrelevant to the question. To address these issues, we prompt the LLM to "debug" its own generated plans via self-correction and self-refinement, inspired by Self-correction of syntax errors: Given a heldout question, we first generate a plan via an LLM and then pass it into a simple parser In total, we extract a dataset of 1K examples from QuALITY divided into two splits, one of which requires long context understanding to answer and the other of which doesn't. Each QuAL-ITY question contains a human-annotated score of how much context is required to answer it, which ranges from 1 (only a sentence or two of context is needed) to 4 (most or all of the passage for context is needed). 
The two splits are (1) Long, which consists of 330 examples from the QuALITY dev set and 368 examples from training set marked with a context score ≥ 3, and (2) Short, which has 302 examples from the dev set that do not require long contexts to answer (context score < 3). The latter is a control dataset to make sure our methods do not overly worsen performance on simpler questions. Evaluation: While QuALITY is a multiplechoice dataset, we reframe it into a generative task in which an LLM does not have access to the choices and must instead generate a long-form Long denotes the split where the questions require reasoning over long contexts to answer accurately. As we only evaluate on a subset, we also provide p-values to verify statistical significance against the zero-shot GPT-4 baseline. answer. We do this for two reasons: (1) transforming the task to a novel setting reduces the risk of data leakage, and (2) the generative task better resembles the usage of LLMs in real world. In our generative setup, we automatically map the longform answer generated by the models back to one of the choices with an LLM to evaluate the accuracy. We provide a generic illustration of the evaluation process in Figure As each of the stages in PEARL has critical hyperparameters and implementation details, we describe our specific configurations here. We provide an LLM with seven seed actions and two in-context examples to demonstrate the required format for generating new actions. 6 We collect new actions by passing all training set questions into the model, excluding those questions in our evaluation set. Ultimately, we obtain 407 actions and corresponding definitions, of which several are duplicates or overly specific, and in total exceeds GPT-4's maximum context window of 8K tokens. We thus instruct GPT-4 to simplify and abstract over existing actions to reduce the total number of actions. After repeating this pro- 6 We present the prompt template in Appendix E cess twice, As existing sophisticated prompting methods require few-shot examples in-context, which is not feasible when long document is involved, we compare PEARL with simple zero-shot baselines (GPT-4 (OpenAI, 2023) and GPT-3.5 We discover that PEARL significantly outperforms competing prompting methods on questions that require reasoning over long contexts, which demonstrates the utility of the planning module. We also observe a small drop in accuracy on questions that require only short contexts, possibly because the plans end up over-complicating what is a simple reasoning process. In this section, we dig deeper into the main results of our experiments, which are presented in Table PEARL improves accuracy on long-document QA: Overall, PEARL's accuracy is higher than that of all competing methods, particularly for the QuALITY split annotated by humans as requiring long contexts to answer In Figure Action execution is necessary: Do we actually need to execute the generated plans to answer these questions? Feeding just the generated plan to the model along with the question (minus any execution results) may still encourage the LLM to follow the plan's reasoning steps and generate a better answer. However, we observe that removing the execution results from the model's input reduces absolute accuracy by around 3 points, which suggests that it is important to perform multiple passes over the document to execute each action before answering the original question. 
With that said, we do observe a modest improvement over the GPT-4 zero-shot and CoT baselines (∼ 2 absolute points), which suggests that the plan itself is also valuable. To reduce human input, the majority of the plan generation demonstrations are generated by the LLM with self-refinement. We observe that self-refinement is critical to performance: without it, the overall accuracy drops nearly 3 absolute points (ablations in Table In this section, we analyze the behavior of PEARL by diving into the composition of its generated plans, its most preferred actions, and what types of questions it improves most on. We also offer a qualitative error analysis as well as a human evaluation on the correctness of the generated plans. Plan statistics: Plans are roughly 4 actions long on average, with around 3.4 unique actions per plan. The most commonly used actions are shown in Figure Accuracy by reasoning types: Since QuALITY questions require different reasoning strategies to solve, what types of reasoning does PEARL help improve the most? To this end, we further evaluate questions based on the type of reasoning required to answer them. 10 Table PEARL is significantly slower than zeroshot prompting: The improved performance of PEARL comes at the cost of longer running time and cost: PEARL requires 4.4 times more tokens in the prompt, and it needs to generate 1.3 times more tokens owing to the intermediate steps. 11 Specific examples where PEARL helps: To better understand PEARL, we qualitatively analyze 40 examples for which zero-shot GPT-4 generates incorrect answers while PEARL answers correctly. This analysis reveals two key advantages of PEARL. First, while zero-shot prompting is reasonably good at finding salient information from the 10 We prompt GPT-4 with the definition of each reasoning type presented in the Appendix 11 These multiples were estimated from a small run of 30 examples. input document, its generative answers tend to be based only on local context around this information. For instance, when asked about the number of wives the character "Dan Merrol" has, the baseline successfully identifies six names that appear to be Dan's wives. However, PEARL takes into account the revelation that these names "were actually memories from the brain donors whose parts were used to reconstruct his brain" and thus correctly reasons that Dan only has one wife. Second, PEARL generates more detailed and thorough answers. For instance, given the question "Why is Kumaon a good region for potential forest preservation?", the zero-shot answer considers only one aspect of the reason, whereas PEARL elaborates on multiple aspects, allowing PEARL's answer to be mapped to the correct option ("All other choices"), while the zero-shot answer maps to the option that describes the single aspect. Where does PEARL go wrong? We additionally examine 40 examples for which PEARL answers incorrectly, and group the errors into three categories (detailed examples in Appendix A Table • True negatives: Questions for which PEARL's generative answer is mapped to the wrong option. This category can be further divided into two subcategories: (1) cases where the plan has critical issues, and (2) cases where the plan is satisfactory but the intermediate execution produces incorrect output. Out of the 40 examples, 29 are true negatives, with 7 plan errors and 22 execution errors. • False negatives: Questions for which PEARL's generative answers are correct but incorrectly mapped to the wrong option. 
This kind of error is unavoidable as we use LLM for automatic answer mapping. Out of the 40 examples, 5 are false negatives. • Other: Some QuALITY questions are heavily dependent on the options; that is, the correct answer can only be determined after examining all the options. For instance, Table The quality of plans generated by PEARL is critical, as they serve as the basis for the plan execution stage. To gain further insight on the quality of these plans, we perform a human evaluation by hiring annotators on Upwork In this work, we introduce PEARL, a framework for tackling complex reasoning over long documents. To answer a question, PEARL first proposes a plan based on a set of actions mined from a training set, and then it executes the plan step by step via prompting itself with a template filled with output from previous stages. We demonstrate the effectiveness of PEARL on a challenging subset of QuAL-ITY. Experiments and analysis show that prompting GPT-4 with PEARL yields more accurate and comprehensive answers than zero-shot and chainof-thought prompting, and human annotators judge the generated plans to be reasonable. While PEARL shows promising results for long document reasoning, there are several limitations to our approach. Like other prompting methods, PEARL is susceptible to generating misinformation or hallucinations. It is also more time-consuming and computationally costly than the baseline approach of directly prompting an LLM to answer the question. Moreover, PEARL may over-complicate simple questions that only need superficial reasoning over long-form narratives. Due to our limited budget and the cost of API access to proprietary LLMs, we did not stress test the framework with extensive variations in the prompt aside from the ablations in the paper. Finally, PEARL is still bounded by the maximum context window size of the LLMs, and we have not tested it on less powerful LLMs. Overall, prompting on document-level with continuous dependencies is still an under-explored area, and we hope our work spur future research in this space (e.g., new datasets, modules, stage refinements). PEARL relies heavily on closed-source large language models, which while tuned to align with human preferences, are still susceptible to generating hallucination and misinformation. The documentation of these models is opaque, and it is difficult to know to what extent the copyrighted data is used during pre-training. We use these models for purely research purposes. We hope our method can shed light on mitigating similar issues when an LLM needs to process long document. Finally, human annotators are paid hourly, and the evaluation process was deemed exempt from IRB review. A Supplementary details of analysis B GPT-4 Multiple-choice setup performance While our primary focus is on the generative QA setup in the main text, we provide GPT-4's performance under the standard multiple-choice setup here in the Appendix. On the entire QuALITY dev set, GPT-4 achieves an accuracy of 84.4%. For the 1000 challenging question set, GPT-4 reaches an accuracy of 78.7%, nearly 10 points higher than the GPT-4 zero-shot generative baseline. This result suggests that there is still room for improvement in GPT-4's generative answers. We also observe that GPT-4 is sensitive to the ordering of the provided options. We further evaluate GPT-4 with three shuffled versions of the options (swap A and D, B and C; swap A and C, B and D; swap A and B, C and D). 
While the overall accuracy of these versions remains similar, the proportion of questions that are consistently answered correctly across all four option orderings drops to 68.7%. This result raises the question of whether GPT-4 truly "understands" the question and further motivates the generative QA setup. As demonstrated in Section 6, the mapping stage is not always reliable. To understand the frequency of mapping errors, we conduct a small-scale human answer mapping study. We recruit three professionals on Upwork. We randomly select 50 questions and ask annotators to read PEARL output and then map it to one of the provided options. On average, annotators agree with ∼83% of GPT-4 mappings, with inter-annotator agreement on the four-class setting of κ = 0.677. Questions where annotators disagree with each other or do not concur with GPT-4 tend to be those that can be mapped to more than one option or to none of the options. We believe this level of accuracy is decent enough to let GPT-4 perform the mapping step for evaluation.
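As an illustration of the option-shuffling robustness check described above, here is a small sketch; the evaluate() callable standing in for a GPT-4 query, and the representation of questions as dictionaries with an "options" field, are assumptions.

```python
def swapped_orderings(options):
    """options: list [A, B, C, D]. Returns the original ordering plus the
    three swapped versions described above."""
    a, b, c, d = options
    return [
        [a, b, c, d],   # original
        [d, c, b, a],   # swap A and D, B and C
        [c, d, a, b],   # swap A and C, B and D
        [b, a, d, c],   # swap A and B, C and D
    ]

def consistent_accuracy(questions, evaluate):
    """evaluate(question, options) -> True if the model picks the gold
    answer under this ordering."""
    consistent = sum(
        all(evaluate(q, order) for order in swapped_orderings(q["options"]))
        for q in questions
    )
    return consistent / len(questions)
```

A question is counted as consistently correct only if the model picks the gold answer under the original ordering and all three swapped orderings.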
| 1,305 | 320 | 1,305 |
Unsupervised Joint Training of Bilingual Word Embeddings
|
State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that jointly trained unsupervised BWE significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks. [Footnote 13: We used the News Commentary corpora provided by WMT for en→de and en→fr to train SMT systems performing at 15.4 and 20.1 BLEU points on Newstest2016 en-de and Newstest2014 en-fr, respectively.]
|
Bilingual word embeddings (BWE) represent the vocabulary of two languages in one common continuous vector space. They are known to be useful in a wide range of cross-lingual NLP tasks. The most prevalent methods for training BWE are so-called mapping methods In spite of their success, unsupervised mapping methods are inherently limited by the dissimilarity between the original word embedding spaces to be mapped. The feasibility of aligning two embedding spaces relies on the assumption that they are isomorphic. However, On the other hand, supervised methods that jointly train BWE from scratch In this paper, we propose unsupervised joint training of BWE. Our method is an extension of previous work on unsupervised BWE: we propose to generate, without supervision, synthetic parallel sentences that can be directly exploited to jointly train BWE with existing algorithms. We empirically show that this method learns better BWE for several cross-lingual NLP tasks.
|
On the strong assumption that existing algorithms for joint training of BWE are robust enough even with very noisy parallel training data, we formulate the following research question: Do synthetic sentence pairs supply useful bilingual contextual information for learning better BWE? Previous work on joint training of BWE hypothesizes that exploiting both monolingual and bilingual contextual information yields better word embeddings, monolingually and bilingually. Among several existing algorithms for joint training of BWE, in this work we use bilingual skipgram (BIVEC). For unsupervised training of BWE, the training data must also be generated in an unsupervised way. To this end, we chose unsupervised machine translation (MT). Recent work has shown significant progress in unsupervised MT. Given an initial BWE, for instance learned with unsupervised mapping methods, our method works as follows (see also the overview figure):
• Synthetic parallel data are generated by translating monolingual data using the USMT. Both L1-to-L2 and L2-to-L1 translations can be considered.
• A new phrase table is trained on the synthetic parallel data to form a new USMT.
Finally, on the synthetic parallel data generated by our USMT after N refinement steps, we jointly train new BWE as described in Section 2.1. Although this approach can efficiently generate parallel data of a reasonable quality, the resulting translations remain noisy. More importantly, we use USMT assuming that BIVEC is robust enough to learn from very noisy parallel data. Our intuition comes from the fact that SMT generates less diverse translations, with a significantly different word frequency distribution than in translations naturally produced by humans. SMT is limited by the vocabulary of its phrase table and will favor the generation of frequent n-grams thanks to its language model. The same words appear more frequently in similar contexts, facilitating the training of word embeddings and compensating, to some extent, for the noisiness of the translations. In Appendix A, we provide results of our preliminary experiments supporting this assumption.
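A high-level sketch of the refinement loop just described. The build_usmt and train_bivec callables are placeholders to be supplied by the caller (e.g., wrapping an SMT toolkit and a BIVEC implementation); they are not existing APIs.

```python
def unsupervised_joint_bwe(mono_l1, build_usmt, train_bivec, n_refinements=3):
    """Sketch of the pipeline: unsupervised SMT (USMT) produces synthetic
    parallel data, which is then used to jointly train new BWE.

    mono_l1: monolingual L1 sentences used as translation input.
    build_usmt(parallel_or_none) -> object with .translate(sentence); when
        given None it should induce the initial system from mapped BWE.
    train_bivec(parallel) -> jointly trained bilingual embeddings.
    """
    usmt = build_usmt(None)       # initial USMT induced from the mapped BWE
    synthetic = []
    for _ in range(n_refinements):
        # Translate monolingual data to obtain synthetic parallel sentences
        # (L1-to-L2 shown here; L2-to-L1 can be added symmetrically).
        synthetic = [(src, usmt.translate(src)) for src in mono_l1]
        # Retrain the phrase table on the synthetic parallel data to form
        # a new USMT system for the next refinement step.
        usmt = build_usmt(synthetic)
    # Finally, jointly train new BWE (e.g. with BIVEC) on the synthetic
    # parallel data produced after the last refinement step.
    return train_bivec(synthetic)
```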
This unintuitive observation comes from the use of USMT to generate synthetic data: L1 words not covered by the phrase table are directly copied into the translations. As a result, such L1 words are introduced into the L2 vocabulary even if they do not appear in the L2 monolingual data used to train VECMAP, artificially increasing the coverage ratio. In the phrase table induction for USMT, both the geometry of the space (when retrieving the k-closest translations for a given source phrase) and the embeddings themselves (when computing cosine similarity for the translation probability) play an important role. Better BWE should lead to better phrase tables and consequently translations of better quality. We thus regard USMT as an extrinsic evaluation task for BWE. In the literature, VECMAP and BIVEC BWE have been shown to perform as well as, or better than, word embeddings trained exclusively on monolingual data in monolingual tasks. Since we use significantly less and noisier data for training BIVEC than VECMAP, we assume that this observation may not hold in our configuration. We tested our assumption with a standard English word analogy task. We show in several cross-lingual NLP tasks that unsupervised joint BWE achieved better results than unsupervised mapped BWE. Our experiments also highlight the robustness of joint training, which can take advantage of bilingual contexts even from very noisy synthetic parallel data. Since our approach works on top of unsupervised mapping for BWE and uses synthetic data generated by unsupervised MT, it will directly benefit from any future advances in these two types of techniques. Our approach has, however, a higher computational cost due to the need to generate synthetic parallel data, while generating more data would also improve the vocabulary coverage. As future work, we would like to study, for training BWE, the impact of using synthetic parallel data generated by unsupervised NMT, or of a different nature, such as translation pairs extracted from monolingual corpora without supervision. Such translation pairs are, in general, more fluent but potentially much less accurate.
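To illustrate the phrase-table induction step discussed above, here is a sketch that retrieves the k-closest target phrases for each source phrase and derives translation probabilities from cosine similarities. The softmax normalisation over the k-best similarities is an assumption (a common choice in unsupervised SMT) rather than the exact scoring used here.

```python
import numpy as np

def induce_phrase_table(src_phrases, tgt_phrases, src_emb, tgt_emb, k=300):
    """src_emb, tgt_emb: dicts mapping phrases to unit-normalised vectors in
    the shared bilingual space. Returns, for each source phrase, the k-best
    target phrases with translation probabilities."""
    tgt_list = list(tgt_phrases)
    tgt_matrix = np.stack([tgt_emb[t] for t in tgt_list])    # (T, d)
    table = {}
    for s in src_phrases:
        cos = tgt_matrix @ src_emb[s]        # cosine similarities (unit vectors)
        top = np.argsort(-cos)[:k]           # k-closest target phrases
        # Assumed scoring: softmax over the cosine similarities of the k-best.
        scores = np.exp(cos[top] - cos[top].max())
        probs = scores / scores.sum()
        table[s] = [(tgt_list[i], float(p)) for i, p in zip(top, probs)]
    return table
```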
| 922 | 969 | 922 |
CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning
|
Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context. Moreover, it can be expensive to retrain well-established retrievers such as search engines that are originally developed for nonconversational queries. To facilitate their use, we develop a query rewriting model CONQRR that rewrites a conversational question in the context into a standalone question. It is trained with a novel reward function to directly optimize towards retrieval using reinforcement learning and can be adapted to any off-theshelf retriever. CONQRR achieves state-ofthe-art results on a recent open-domain CQA dataset containing conversations from three different sources, and is effective for two different off-the-shelf retrievers. Our extensive analysis also shows the robustness of CON-QRR to out-of-domain dialogues as well as to zero query rewriting supervision.
|
Passage retrieval in an open-domain conversational question answering (CQA) system Therefore, in this paper, we focus on query rewriting for the task of conversational passage retrieval in a CQA dialogue with any off-the-shelf retrieval system that can only be used as a black box. Specifically, we seek to build a QR model that rewrites a user query into the input of the retriever, in such a way that optimizes for passage retrieval performance. Figure Recent work that leverages QR for conversational passage retrieval We propose a reinforcement learning (RL)based model CONQRR (Conversational Query Rewriting for Retrieval). It directly optimizes the rewritten query towards retrieval performance, using only weak supervision from retrieval. We adopt a novel reward function that computes an approximate but effective retrieval performance metric on in-batch passages at each training step. Our reward function does not assume any specific retriever model design, and is generic enough for CONQRR to adapt to any off-the-shelf retriever. We show CONQRR outperforms existing QR models on a recent large-scale open-domain CQA dataset QReCC To conclude, our contributions are as follows. 1) We introduce CONQRR as the first RL-based QR model that can be adapted to and optimized towards any off-the-shelf retriever for conversational retrieval. 2) We demonstrate that CONQRR achieves state-of-the-art results with off-the-shelf retrievers on QReCC with conversations from three sources, and is effective for two retrievers including BM25 and a dual encoder model. 3) Our analysis shows CONQRR trained with no human rewrite supervision provides better retrieval results than strong baselines trained with full supervision, and is robust to out-of-domain dialogues, topic shifts and long dialogue contexts. 4) We conduct a novel quantitative study to analyze the limitations and utility of human rewrites in retrieval performance, which are largely unexplored in prior work.
|
Most existing CQA datasets In contrast, QReCC before generating an answer to the question. A few recent works Most existing conversational retrieval models require fine-tuning a retriever of a specific type (Table Query Rewriting (QR) In order to directly use an off-the-shelf retriever as we aim to do, conversational QR RL-based QR for Retrieval Other Applications Prior work also applies RL approaches to address text generation tasks like machine translation Problem Definition We focus on the task of query rewriting (QR) for conversational passage retrieval in a CQA dialogue, with an off-the-shelf retriever. The task inputs include a dialogue context x consisting of a sequence of previous utterances (u_1, u_2, ..., u_{n-1}), the current user question u_n, a passage corpus P and an off-the-shelf retriever R. In this section, we first describe a supervised QR model based on T5 (T5QR). T5 is an encoder-decoder model that is pre-trained on large textual corpora. QR models trained with a standard CE loss are agnostic to the retriever. In addition, human rewrites are not necessarily the most effective ones for passage retrieval (see Section 4.3 for an exploration). This motivates us to design our RL-based framework CONQRR. Here, the RL environment includes the retriever model, dialogue context and passage candidates, in which the QR model takes actions by generating rewritten queries and obtains rewards accordingly. To be comparable with supervised QR models that do not use gold passages in training, we first describe how we obtain weak retrieval supervision for the RL reward calculation in CONQRR. Then we introduce the RL training details of CONQRR. Weak Retrieval Supervision In a CQA dialogue, each question naturally comes with an answer in its following conversational utterance. For each x, we mark its weak passage label p as the one containing a string span s with the highest token-overlap F1-score with the following answer string u_{n+1}, i.e., the passage maximising sim(s, u_{n+1}), where sim() calculates the token overlap score between two strings. RL Training CONQRR also has T5 as the base model architecture. It can be initialized with either T5 or T5QR. Our analysis in Section 4 shows that both setups generally work well. For each training example with the dialogue context x, we use the concatenated utterances in x as the model input. For each input, we generate m sampled rewritten queries (q_{s_1}, ..., q_{s_m}) as well as a baseline rewrite q̄. To generate each sampled rewrite q_s, at time step t of the decoding process, a token q_s^t is drawn from the decoder probability distribution Pr(w | x, q_s^{1:t-1}). The baseline rewrite q̄ is the output of greedy decoding. To compute score(q) for a rewrite q, we first use q to do retrieval from the in-batch passage candidates P_X defined as follows, instead of from the full passage corpus P. We pre-compute one positive and one hard negative passage (p and p_n) for each training example x, where p_n is a randomly selected passage that is different from p, drawn 50% of the time from the top 100 BM25-retrieved candidates (with the BM25 input being the human rewrite) and the remaining 50% of the time from P. We define the set of all such positive and negative passages of the input examples in a batch X as the in-batch passage candidates P_X. Formally, P_X = {p^i, p_n^i | x^i ∈ X}.
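A sketch of the weak retrieval supervision described above: for each question, select the passage whose best span has the highest token-overlap F1 with the following answer. Restricting candidate spans to windows of roughly the answer's length, and whitespace tokenisation, are simplifying assumptions.

```python
from collections import Counter

def token_f1(span_tokens, answer_tokens):
    """Token-overlap F1 between two token lists (SQuAD-style)."""
    common = Counter(span_tokens) & Counter(answer_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(span_tokens)
    recall = overlap / len(answer_tokens)
    return 2 * precision * recall / (precision + recall)

def weak_passage_label(passages, answer):
    """Pick the passage whose best span has the highest token-overlap F1
    with the answer; candidate spans are windows of the answer's length."""
    answer_tokens = answer.lower().split()
    n = max(len(answer_tokens), 1)
    best_passage, best_f1 = None, -1.0
    for passage in passages:
        tokens = passage.lower().split()
        windows = [tokens[i:i + n] for i in range(max(len(tokens) - n + 1, 1))]
        f1 = max(token_f1(w, answer_tokens) for w in windows)
        if f1 > best_f1:
            best_passage, best_f1 = passage, f1
    return best_passage
```

The selected passage serves as the positive passage p used in the reward computation below.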
Then for a generated rewritten query q of x ∈ X, we calculate score(q) as a binary indicator of whether the retriever R ranks the assigned positive passage p highest from P X . We denote R(q, P X , k) as the k-th most relevant passage retrieved by R from the candidate pool P X , and define: Then the RL training loss for x becomes: Following prior work where α ∈ [0, 1] is a tunable parameter. Inference At inference time, both T5QR and CONQRR work in the same way. The trained QR model greedily generates the rewritten query given a dialogue context. Then, the predicted rewrite is given to the provided retriever to perform retrieval. We evaluate the effectiveness of CONQRR in experiments with two general-domain retrieval systems, with more details in Appendix A.1. Evaluation Metrics Following See more details in Appendix A.3. Implementation Details Following prior work on RL for text generation For QR models, we compare three supervised models including GPT2 with weak supervision (WS) To have a direct comparison with the original QR baseline Transformer++, which has the retrieval performance reported on the overall QReCC test set by using BM25 as the off-the-shelf retriever, we first compare all QR models in the same setting in Table Zero or Few QR Supervision We investigate how sensitive CONQRR and T5QR are to the availability of QR labels. We experiment with training T5QR with 0%, 1%, 10% or 100% of QR labels in the QReCC train set. For the case of 0% examples, we simply use the original T5 checkpoint without fine-tuning. When training CONQRR, we mask out the CE loss in Eq. ( The slight difference in performance for the 100% QR label case with respect to Table We hypothesize that a context involving a topic shift will present the greatest challenges for conversational passage retrieval. To explore this factor, we split the QReCC data into topic-concentrated and topic-shifted subsets as follows. A test example (with at least one previous turn) is considered topicconcentrated if the gold passage of the current question comes from a document that was used in at least one previous turn. In contrast, a test example (with at least one previous turn) is considered topicshifted if the gold passage of the current question comes from a document that was never used in any previous turn. There are about 4.7k and 1.1k examples in the topic-concentrated and topic-shifted subsets, respectively. We compare the retrieval performance of different retriever inputs: dialogue context (which uses the concatenated dialogue history without QR), the predicted rewrite from T5QR and CONQRR with two loss alternatives, and the human rewrite. Table However, for the topic-shifted set, the human rewrite outperforms the dialogue context by 52% and 61%, averaging over three metrics, on BM25 and DE, respectively. The predicted rewrite by CONQRR (mix) outperforms the dialogue context by 30% and 44% on BM25 and DE, respectively. Therefore, compared with dialogue context, QR has great value in the aspect of robustness to topic shifts. When comparing with human rewrites, we also see room for improvement for QR models. These observations are largely unexplored in previous work, and they motivate our work on the task of QR for conversational passage retrieval in general, and optimizing directly towards retrieval. vious utterances (including the current question). For topic-concentrated conversations, all compared models have similar robustness to the dialogue context length and CONQRR (mix) is slightly more robust than T5QR. 
For topic-shifted conversations, both QR models and human rewrites show little drop or even an increase in performance as the context length gets longer. In contrast, the robustness of the dialogue context worsens with longer contexts, which confirms the importance of QR discussed above. We have similar observations for other metrics as well as for the BM25 retriever. Quantitative Attributes of Rewrites Table In order to understand why rewrites generated by CON-QRR lead to better retrieval performance and even sometimes outperform human rewrites, To summarize, we introduce CONQRR to address query rewriting for conversational passage retrieval with an off-the-shelf retriever. Motivated by our analysis showing both the limitations and utility of human rewrites, which are unexplored by prior work, we adopt RL with a novel reward to train CONQRR directly towards retrieval. As shown, CONQRR is the first QR model that can be trained adaptively to any off-the-shelf retriever, and achieves state-of-the-art retrieval performance on QReCC with conversations from 3 different sources. It shows better performance with zero QR supervision when compared with strong supervised baselines trained with full QR supervision. A direction for future work includes leveraging QR to facilitate other tasks like question answering and response generation in a full CQA system, as well as sentence rewriting in a document We show in Section 4.3 (Table The training time of CONQRR is longer than fine-tuning a DE retriever of a similar model size (9 vs 2 hours) because for each training step of CONQRR, CONQRR needs to do autoregressive decoding to get greedily decoded and sampled q and q s . However, re-indexing passages after finetuning the retriever can be very time-consuming (about 24 hours) and memory-consuming. In addition, unlike DE, CONQRR can also be used for any blackbox retriever such as search engines that are infeasible to fine-tune or be replaced. Another downside of QR is that for out-ofdomain and topic-shifted scenarios, QR may still require additional labels to achieve robust performance. Although we show that CONQRR (RL) initialized with T5 does not require QR labels and can work well on the overall QReCC test set, CONQRR (RL) does show worse robustness to out-of-domain and topic-shifted examples when compared with CONQRR (mix). Therefore, training a more robust CONQRR model may still require additional annotation efforts to collect human rewrites. CONQRR has only been tested on the standard CQA dialogue format of alternating questions and answers. To facilitate more practical use cases with more diverse dialogue acts or discourse relations (e.g., the agent asks a clarification question to the user), further investigation is needed. Our work is primarily intended to leverage query rewriting (QR) models to facilitate the task of conversational passage retrieval in an open-domain CQA system. Retrieving the most relevant passage(s) to the current user query in a conversation would help to generate a more appropriate agent response. Predicted rewrites from our QR model are mainly intended to be used as intermediate results (e.g., the inputs to the downstream retrieval system). They may also be useful for interpretability purposes when a final response does not make sense to the user in a full CQA system, but that introduces a potential risk of offensive text generation. 
In addition, to prevent the retriever from retrieving passages from unreliable resources, filtering of such passages in the corpus should be performed before any practical use. only. During the RL training of CONQRR, due to the complexity of applying Pyserini to calculate rewards on-the-fly, we instead use a Pyserini approximate called BM25-light. The only differences between them are that BM25-light (1) uses T5's subword tokenization instead of whole word tokenization and (2) does not use special operations (e.g., stemming) as applied in Pyserini. After training, we still run inference and report retrieval performance on BM25. Pyserini simply encodes the whole query input and each passage without truncating. We set maximum query and passage length as 128 and 2000 for BM25-light, but only less than 0.1% cases require truncation with these thresholds. For the dual encoder, the maximum query or passage length is 384. The average passage length is 378, but we observe performance drop by further increasing the maximum length for the dual encoder. QReCC reuses questions in QuAC and TREC conversations and re-annotates answers. For each NQbased conversation, they only use one randomly chosen question from NQ to be the starting question and then annotate the remaining conversation. In total, there are 63k, 16k and 748 question and answer pairs in the three subsets QuAC-Conv, NQ-Conv, TREC-Conv respectively, where TREC-Conv only appears in the test set. The original data is only divided into train and test sets. We randomly choose 5% examples from the train set to be our validation set. In some conversations from QuAC-Conv, the first user query is ambiguous as it depends on some topical information from the original QuAC dataset. Therefore, in order to fix this issue, we follow QReCC is a publicly available dataset that was released under the Apache License 2.0 and we use the same task set-up proposed by the original QReCC authors. Some agent turns in QReCC do not have valid gold passage labels, 13 and the (provided) original evalu- 13 Missing gold labels for certain examples in the dataset has no effect on the training of CONQRR as we induce weak labels without using the provided labels. ation script assigns a score of 0 to all such examples. Their updated evaluation script calculates the scores by removing those examples from the evaluation set (roughly 50%), which results in 6396, 1442 and 371 test instances for QuAC-Conv, NQ-Conv and TREC-Conv, respectively. This leads to a total of 8209 test instances in QReCC. We use the updated evaluation script for most of our experiments, except that we also use the original version for calculating scores in Table Lower Recall@100 with DE Previous work We hypothesize that simply generating a longer rewritten query is not the only factor that contributes to better retrieval performance. We investigate this by applying a brevity penalty Fine-tuned Retriever Although our work focuses on the off-the-shelf retriever setting, we also conduct an experiment of fine-tuning the DE retriever with the concatenated dialogue context, the predicted rewrite from CONQRR (mix) or the human rewrite as the query input, with results in Table Additional Data Efficiency Figure Figure Additional Rewrite Examples In addition to Table 5, we put more examples in Table
| 1,025 | 1,973 | 1,025 |
The (Non-)Utility of Structural Features in BiLSTM-based Dependency Parsers
|
Classical non-neural dependency parsers put considerable effort on the design of feature functions. Especially, they benefit from information coming from structural features, such as features drawn from neighboring tokens in the dependency tree. In contrast, their BiLSTM-based successors achieve state-ofthe-art performance without explicit information about the structural context. In this paper we aim to answer the question: How much structural context are the BiLSTM representations able to capture implicitly? We show that features drawn from partial subtrees become redundant when the BiLSTMs are used. We provide a deep insight into information flow in transition-and graph-based neural architectures to demonstrate where the implicit information comes from when the parsers make their decisions. Finally, with model ablations we demonstrate that the structural context is not only present in the models, but it significantly influences their performance.
|
When designing a conventional non-neural parser substantial effort is required to design a powerful feature extraction function. Such a function Recently, Since the introduction of the K&G architecture BiLSTM-based parsers have become standard in the field. Inspired by recent work
|
Our graph-and transition-based parsers are based on the K&G architecture (see Figure In both transition-and graph-based architectures input tokens are represented in the same way (see level The embeddings are initialized randomly at training time and trained together with the model. The representations x i encode words in isolation and do not contain information about their context. For that reason they are passed to the Bi-LSTM feature extractors Transition-based parsers gradually build a tree by applying a sequence of transitions. During training they learn a scoring function for transitions. While decoding they search for the best action given the current state and the parsing history. Figure Our implementation (denoted TBPARS) uses the arc-standard decoding algorithm TBEXT: is the extended architecture. We use the original extended feature set from K&G: { s 0 , where . L and . R denote left-and right-most child. The K&G graph-based parser follows the structured prediction paradigm: while training it learns a scoring function which scores the correct tree higher than all the other possible ones. While decoding it searches for the highest scoring tree for a given sentence. The parser employs an arcfactored approach Figure 3 Experimental Setup Data sets and preprocessing. We perform experiments on a selection of nine treebanks from Universal Dependencies We use automatically predicted universal POS tags in all the experiments. The tags are assigned using a CRF tagger Evaluation. We evaluate the experiments using Labeled Attachment Score (LAS). Analysis is carried out on the development sets in order not to compromise the test sets. We present the results on the concatenation of all the development sets (one model per language). While the absolute numbers vary across languages, the general trends are consistent with the concatenation. Implementation details. All the described parsers were implemented with the DyNet library We start by evaluating the performance of our four models. The purpose is to verify that the simple architectures will compensate for the lack of additional structural features and achieve comparable accuracy to the extended ones. Table In the case of graph-based models (GBMIN vs. GBSIBL) adding the second-order features to a BiLSTM-based parser improves the average performance slightly. However, the difference between those two models is significant only for two out of ten treebanks. For the transition-based parser (TBMIN vs. TBEXT) a different effect can be noticed -additional features cause a significant loss in accuracy for seven out of ten treebanks. One possible explanation might be that TBEXT suffers more from error propagation than TBMIN. The parser is greedy and after making the first mistake it starts drawing features from configurations which were not observed during training. Since the extended architecture uses more features than the simple one the impact of the error propagation might be stronger. This effect can be noticed in Figure We now investigate whether BiLSTMs are the reason for models being able to compensate for lack of features drawn from partial subtrees. Transition-based parser. We train TBPARS in two settings: with and without BiLSTMs (when no BiLSTMs are used we pass vectors x i directly to the MLP layer following Chen and Manning (2014)) and with different feature sets. We start with a feature set {s 0 } and consecutively add more until we reach the full feature model of TBEXT. 
Figure Adding the BiLSTM representations changes the picture (dark bars). First of all, as in the case of arc-standard system Graph-based parser. We train two models: GBMIN and GBSIBL with and without BiLSTMs. To ensure a fairer comparison with the models without BiLSTMs we expand the basic feature sets ({ h , d } and { h , d , s }) with additional surface features known from classic graph-based parsers, such as distance between head and dependent (dist), words at distance of 1 from heads and dependents (h ±1 , d ±1 ) and at distance ±2. We follow Figure As expected, adding BiLSTMs changes the picture. Since the representations capture surface context, they already contain a lot of information about words around heads and dependents and adding features h ±1 , d ±1 and h ±2 , d ±2 does not influence the performance. Interestingly, introducing dist is also redundant which suggests that either BiLSTMs are aware of the distance between tokens or they are not able to use this information in a meaningful way. Finally, even after adding all the surface features the models which do not employ BiLSTMs are considerably behind the ones which do. Comparing GBMIN (blue) with GBSIBL (red) we see that adding information about structural context through second-order features is beneficial when the BiLSTM are not used (light bars): the second-order GBSIBL has an advantage over GBMIN of 0.81 LAS even when both of the models use all the additional surface information (last group of bars on the plot). But this advantage drops down to insignificant 0.07 LAS when the BiLSTMs are incorporated. We conclude that, for both transition-and graph-based parsers, BiLSTMs not only compensate for absence of structural features but they also encode more information than provided by the manually designed feature sets. Now that we have established that structural features are indeed redundant for models which employ BiLSTMs we examine the ways in which the simple parsing models (TBMIN and GBMIN) implicitly encode information about partial subtrees. We start by looking at the BiLSTM representations. We know that the representations are capable of capturing syntactic relations when they are trained on a syntactically related task, e.g, number prediction task To do so, we follow For every sentence from the development set and every vector x i we calculate the impact of every representation x j from the sentence on the vector x i . We bucket those impact values according to the distance between the representation and the word. We then use the gold-standard trees to divide every bucket into five groups: correct heads of x i , children (i.e., dependents) of x i , grandparents (i.e., heads of heads), siblings, and other. Figure We conclude that the BiLSTMs are indeed influenced by the distance, but when trained with a dependency parser they also capture a significant amount of non-trivial syntactic relations. Now that we know that the representations encode structural information we ask how this information influences the decisions of the parser. First, we investigate how much structural information flows into the final layer of the network. When we look back at the architecture in Figure We extend the definition of impact to capture how a specific word representation x i influences the final MLP score sc (we calculate the derivative of sc with respect to x i ). We parse every development sentence. For every predicted transition/arc we calculate how much its score sc was affected by every word from the sentence. 
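A minimal sketch of this derivative-based probe, assuming the scorer sketched earlier: the impact of token j on a score is taken here to be the norm of the gradient of that score with respect to the input vector x_j.

```python
# Sketch of the derivative-based "impact" probe (assumes the scorer sketched above).
import torch

def impacts_on_arc_score(model, words, tags, head, dep):
    # Recompute the input vectors explicitly so their gradients can be retained.
    x = torch.cat([model.word_emb(words), model.tag_emb(tags)], dim=-1)
    x.retain_grad()                         # keep d(score)/d(x_j) for every position j
    v, _ = model.bilstm(x)
    score = model.arc_mlp(torch.cat([v[:, head], v[:, dep]], dim=-1)).sum()
    score.backward()
    # One scalar per token: the gradient norm of its input vector; these values can
    # then be bucketed by surface distance or by the gold relation to head/dependent.
    return x.grad.norm(dim=-1).squeeze(0)
```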
We group impacts of words depending on their positions. Transition-based parser. For the transitionbased parser we group tokens according to their positions in the configuration. For example, for the decision in Figure In Figure Graph-based parser. For the graph-based parser we group tokens according to their position in the predicted tree. We then bucket the impacts into: heads (h), dependents (d), children (i.e., dependents of dependents) (c), siblings (s), and grandparents (i.e., heads of heads) (g). Words which do not fall into any of those categories are grouped according to their surface distance from heads and dependents. For example, h ±2 are tokens two positions away from the head which do not act as dependent, child, sibling, or grandparent. Figure The results above show that the implicit structural context is not only present in the models, but also more diverse than when incorporated through conventional explicit structural features. Finally, we investigate if the implicit structural context is important for the performance of the parsers. To do so, we take tokens at structural positions with the highest impact and train new ablated models in which the information about those tokens is dropped from the BiLSTM layer. For example, while training an ablated model without s 0L , for every configuration we re-calculate all the BiLSTM vectors as if s 0L was not in the sentence. When there is more than one token at a specific position, for example s 0L or c (i.e., children of the dependent), we pick a random one to drop. That way every ablated model looses information about at most one word. We note that several factors can be responsible for drops in performance of the ablated models. For example, the proposed augmentation distorts distance between tokens which might have an adverse impact on the trained representations. Therefore, in the following comparative analysis we interpret the obtained drops as an approximation of how much particular tokens influence the performance of the models. Transition-based parser. Figure Graph-based parser. Corresponding results for the graph-based parser are presented in Figure We conclude that information about partial subtrees is not only present when the parser makes 9 It is worth noting that not all of the models suffer from the ablation. For example, dropping vectors s2R causes almost no harm. This suggests that re-calculating the representations multiple times does not have a strong negative effect on training. final decisions but also strongly influences those decisions. Additionally, the deteriorated accuracy of the ablated models shows that the implicit structural context can not be easily compensated for. Feature extraction. None of the above mentioned efforts address the question how dependency parsers are able to compensate for the lack of structural features. The very recent work by RNNs and syntax. Recurrent neural networks, which BiLSTMs are a variant of, have been repeatedly analyzed to understand whether they can learn syntactic relations. Such analyses differ in terms of: (1) methodology they employ to probe what type of knowledge the representations learned and (2) tasks on which the representations are trained on. 
Our work contributes to this line of research in two ways: (1) from the angle of methodology, we show how to employ derivatives to pinpoint what syntactic relations the representations learn; (2) from the perspective of tasks, we demonstrate how BiLSTM-based dependency parsers take advantage of structural information encoded in the representations. In the case of constituency parsing We examined how the application of BiLSTMs influences the modern transition-and graph-based parsing architectures. The BiLSTM-based parsers can compensate for the lack of traditional structural features. Specifically, the features drawn from partial subtrees become redundant because the parsing models encode them implicitly. The main advantage of BiLSTMs comes with their ability to capture not only surface but also syntactic relations. When the representations are trained together with a parser they encode structurally-advanced relations such as heads, children, or even siblings and grandparents. This structural information is then passed directly (through feature vectors) and indirectly (through BiLSTMs encoding) to MLP and is used for scoring transitions and arcs. Finally, the implicit structural information is important for the final parsing decisions: dropping it in ablated models causes their performance to deteriorate. The introduction of BiLSTMs into dependency parsers has an additional interesting consequence. The classical transition-and graph-based dependency parsers have their strengths and limitations due to the trade-off between the richness of feature functions and the inference algorithm
| 963 | 281 | 963 |
Accelerating Sparse Matrix Operations in Neural Networks on Graphics Processing Units
|
Graphics Processing Units (GPUs) are commonly used to train and evaluate neural networks efficiently. While previous work in deep learning has focused on accelerating operations on dense matrices/tensors on GPUs, far fewer efforts have concentrated on operations involving sparse data structures. Operations using sparse structures are common in natural language models at the input and output layers, because these models operate on sequences over discrete alphabets. We present two new GPU algorithms: one at the input layer, for multiplying a matrix by a few-hot vector (generalizing the more common operation of multiplication by a one-hot vector), and one at the output layer, for a fused softmax and top-N selection (commonly used in beam search). Our methods achieve speedups over state-of-the-art parallel GPU baselines of up to 7× and 50×, respectively. We also illustrate how our methods scale on different GPU architectures.
|
The speedups introduced by parallel architectures inspired the development of accelerators tailored towards specialized functions. Graphics Processing Units (GPUs) are now a standard platform for deep learning. GPUs provide faster model training and inference times compared to serial processors, because they can parallelize the linear algebra operations used so heavily in neural networks Currently, major open source toolkits Adapting parallel neural operations to a specific hardware platform is required to obtain optimal speed. Since matrix operations are used heavily in deep learning, much research has been done on optimizing them on GPUs Much recent work in High Performance Computing (HPC) and Natural Language Processing (NLP) focuses on an expensive step of a model or models and optimizes it for a specific architecture. The lookup operation used in the input layer and the softmax function used in the output are two examples seen in machine translation, language modeling, and other tasks. Previous work has accelerated the softmax step by skipping it entirely Another strategy is to fuse multiple tasks into a single step. This approach increases the room for parallelism. Recent efforts have fused the softmax and top-N operations to accelerate beam search on the GPU using similar approaches NMT uses beam search during inference to limit the full set of potential output translations explored during decoding Our work uses ideas from previous work to accelerate two different operations. We focus on operations that manipulate sparse structures
|
GPUs are widely used to accelerate a variety of non-neural tasks such as search CPUs call special functions, also called kernels, to execute a set of instructions in parallel using multiple threads on the GPU. Kernels can be configured to create and execute an arbitrary number of threads. The threads in a kernel are grouped into different thread blocks (also called cooper-ative thread arrays). Threads in the same block can collaborate by sharing the same memory cache or similar operations. The maximum number of threads per block and number of blocks varies across GPU architectures. All threads running in the same block are assigned to a single Streaming Multiprocessor (SM) on the GPU. A SM contains the CUDA cores that execute the instructions for each thread in a single block. The number of CUDA cores per SM varies depending on the architecture. For example, Volta V100 contain 64 cores per SM, while GeForce GTX 1080s contain 128 cores per SM. Multiple thread blocks can be assigned to a SM if the number of blocks in the grid is larger than the number of physical SMs. Execution time will increase when more than one block is assigned to all SMs on the device (assuming all blocks run the same instruction). Regardless of the number of threads per block, all SMs can only run a total of 32 threads, called a warp, asynchronously at a time. Warp schedulers select in a round-robin fashion a warp from an assigned block to execute in parallel. The SMs finish execution when all blocks assigned to them complete their tasks. Each thread running on the SM can access multiple levels of memory on the graphics card, and an efficient use of all levels significantly improves the overall execution time on the device. GPUs contain different levels of memory designed to read and write data stored on the device. There are advantages and disadvantages associated with each memory type. The fastest memory on the device is the register memory. The amount of registers available per SM is limited and the access scope is limited to a single thread during execution. This memory is useful to hold a small amount of variables used at the thread-level. The next type of memory is shared memory. Shared memory is accessible by all threads running on the same block. While slower than registers, shared memory provides fast read and write access times. Shared memory also allows fast operations at the block level such as reductions, usermanaged caches, etc. The amount of shared memory per SM can range from 49KB (K40) up to 96KB (V100). The last (and slowest) type of memory is the global memory. Global memory latency is 100x slower than shared memory. The main use of this memory is to store all the data copied from and to the host CPU. The amount of global memory varies depending on the GPU model (e.g. 12GB on the K40 and 16GB on the V100). An efficient use of the memory hierarchy provides the best performance. A parallel application must be designed to minimize the total amount of calls to global memory while maximizing the use of registers and shared memory. An exclusive use of main memory will produce the worst execution times. Our methods focus on the efficient use of shared and register memory for scenarios where the data is small enough to fit. Currently, state-of-the-art methods use a treebased reduction operation The top-N operation can be accelerated with an improved sorting algorithm for the beam search task on the GPU. 
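For reference, the serial operation that the remainder of this section sets out to accelerate, a numerically stable softmax over each mini-batch row followed by unsorted top-N selection, can be sketched in a few lines of NumPy (a reading aid only, not the paper's CUDA implementation):

```python
# Serial reference for numerically stable softmax + unsorted top-N selection
# (a reading aid in NumPy, not the paper's CUDA kernels).
import numpy as np

def softmax_topn(logits, n):
    """logits: (L, K) mini-batch of scores; returns top-n probs and indices per row."""
    z = logits - logits.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    probs = e / e.sum(axis=1, keepdims=True)
    # Beam search only needs the top-n entries per row, in any order.
    idx = np.argpartition(-probs, n - 1, axis=1)[:, :n]
    top = np.take_along_axis(probs, idx, axis=1)
    return top, idx
```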
Beam search only requires the top-N entries for each mini-batch, and the entries do not need to be sorted in a specific order (ascending or descending). Storing the irrelevant elements for beam search back into global memory is not required for this task and should be avoided. A clear optimization is to obtain the top elements in each minibatch using a faster sorting algorithm. Distinct sorting algorithms can be used to obtain the top elements from a set of candidates. Previous work introduced custom sorting algorithms for specific tasks using multi-core CPU In this section, we describe two sparse operations commonly used in deep learning, especially for NLP: at the input layer, multiplication by a sparse matrix, and at the output layer, softmax and selection of the top-N elements. In models whose inputs are words, the input layer typically looks up a learned word embedding for each word. Equivalently, it represents each word as a one-hot vector (whose dimensionality is equal to the vocabulary size, K) and multiplies it (as a row vector) by a K × M matrix B whose rows are word embeddings. Then, a minibatch of L words can be represented as a L× K matrix A whose rows are one-hot vectors, so that the product C = AB is a matrix whose rows are the embeddings of the words in the minibatch. Deep learning toolkits A problem arises, however, when the input vector is not a one-hot vector, but an "N-hot" vector. For example, we might use additional dimensions of the vector to represent subword or partof-speech tag information The softmax function (Equation For better numerical stability, all deep learning toolkits actually compute the softmax as follows: This alternative requires different optimizations on the GPU given the max operation. Recent work Some applications in deep learning require additional computations after the softmax function. During NMT decoding, the top-N probabilities from softmax(z) are chosen at every time-step t and used as an input to the next search step t + 1. It is common practice to obtain the top-N elements after the softmax operation. Naively, we can do this by sorting the probabilities and then taking the first N elements, as shown in Algorithm 1. This operation is sparse in nature given the fact that several hypotheses are discarded during search. The Algorithm 1 Serial minibatched softmax and top-N algorithm. for ← 1, . . . , L do 5: for ← 1, . . . , L do 12: return D retrieval of non-zero elements in a sparse input parallels the top-N scenario. (Beam search also requires that we keep track of the original column indices (i.e., the word IDs) of the selected columns; this is not shown in Algorithm 1 for simplicity.) In NMT, the top-N operation consumes a significant fraction of time during decoding. In this section, we present our algorithms for Nhot lookup ( §4.1) and fused softmax and top-N selection ( §4.2). Our sparse N-hot lookup method, shown in Algorithm 2, multiplies a sparse matrix A in Compressed Sparse Row (CSR) format by a row-major matrix B to yield a dense matrix C. CSR is widely used to store and process sparse matrices. This format stores all non-zero elements of a sparse matrix A contiguously into a new structure A v . Two additional vectors A r and A c are required to access the values in A v . An example of the CSR format is illustrated in Figure The beam size, or top-N, used in NMT is usually small, with the most commonly used values ranging from 1 to 75 parfor ← 1, . . . , L do Block level 3: x ← 0 4: for k ← k start , . . . 
, k end -1 do 7: sertion sort, it maintains separate buffers for the sorted portion (D) and the unsorted portion (C); it also performs an insertion by repeating swapping instead of shifting. The key to our method is that we can parallelize the loop over k (line 3) while maintaining correctness, as long as the comparison and swap can be done atomically. To see this, note that no swap can ever decrease the value of one of the D[n]. Furthermore, because for each k, we compare C[k] with every element of D, it must be the case that after looping over all n (line 4), we have C[k] ≤ D[n] for all n. Therefore, when the algorithm finishes, D contains the top-N values. Fusing this algorithm with the softmax algorithm, we obtain Algorithm 4. It takes an input array C containing a minibatch of logits and returns an array D with the top-N probabilities and an array E with their original indices. The comparisons in our method are carried out by the CUDA atomicMax operation (line 12). This function reads a value D [ ][n] and computes the max-Algorithm 4 Parallel fused batched softmax, and top-N algorithm. The comment "kernel-level" means a loop over blocks, and the comment "block-level" means a loop over threads in a block. for n ← 1, . . . N do 5: parfor k ← 1, . . . , K do block-level 8: x for n ← 1, . . . , N do Our algorithm recovers the original column indices (m) with a simple extension following Argueta and Chiang (2017). We pack each probability as well as its original column index into a single 64-bit integer before the sorting step (line 5), with the probability in the upper 32 bits and the column index in the lower 32 bits. This representation preserves the ordering of probabilities, so a single atomicMax operation on the packed representation will atomically update both the probability and the index. The final aspect to consider is the configuration of the kernel calls from the host CPU. The grid layout must be configured correctly to use this method. The top-N routine relies on specific ker-(a) Tesla V100 0.12 0.12 0.12 0.13 0.13 0.16 0.21 Table nel and memory configurations to obtain the best performance. The number of kernel blocks must be equal to the number of elements in the minibatch. This means that batch sizes smaller than or equal to the number of SMs on the GPU will run more efficiently given only one block, or less, will run on all SMs in parallel. The overall performance will be affected if multiple blocks are assigned to all SMs. The number of SMs on the GPU varies depending on the architecture. For example, the Tesla V100 GPU contains 80 SMs, while the Pascal TITAN X contains 30 SMs. This means that our method will perform better on newer GPU architectures with a large amount of SMs. The number of threads in the block is an additional aspect to consider for our method. The block size used for our experiments is fixed to 256 for all the experiments. This number can be adapted if the expected number of hypotheses to sort is smaller than 256 (the number of threads must be divisible by 32). The amount of shared memory allocated per block depends on the size of N. The auxiliary memory used to store the top-N elements must fit in shared memory to obtain the best performance. A large N will use a combination of shared and global memory affecting the overall execution of our method. We run experiments on two different GPU configurations. 
The first setup is a 16 core Intel(R) Xeon(R) Silver 4110 CPU connected to a Tesla V100 CPU, and the second set is a 16-core Intel(R) Xeon(R) CPU E5-2630 connected to a GeForce GTX TITAN X. The dense matrices we use are randomly generated with different floating point values. We assume the dense representations contain no values equal to zero. The sparse minibatches used for the top-N experiments are randomly generated to contain a specific amount of non-zero values per element. The indices for all non-zero values are selected at random. For the N-hot lookup task, we compared against the cuBLAS The highest speedups are obtained when the amount of non-zero elements is low, and the lowest speedups are seen when the amount of nonzero elements increase. On the V100, our method starts performing worse than the cuBLAS baseline when the amount of non-zero elements per batch element is larger than 100. On the other side, the performance of our method is worse than cuSPARSE when the sparsity is larger than 10 on the TITAN X architecture. Our method performs well on newer GPU models with a larger amount of SMs. We also compare the performance of our method against a one-hot lookup (i.e., N = 1) implementation used in DyNet We compared our fused softmax operation against the current state-of-the art method from NVIDIA The speedups against the baseline decrease as N grows. Our execution time still outperforms the baseline on most sizes of N used in NMT scenarios. This makes our method suitable for tasks requiring a small amount of elements from an output list. If the size of N exceeds 300, different methods should be used to obtain the most optimal performance. The baseline scales better than our implementation when N increases. Table The batch size affects the performance in a different manner on both architectures. The performance scales in a different manner when the batch size changes. On our largest experiments, the performance for N = 400 does not degrade significantly on the V100 architecture, while the speedups on the TITAN X change significantly from 1.19 to 0.32. This shows that our method runs best on the TITAN X architecture when the batch size is small, and the amount of top-N elements required does not exceed 400. For larger batches, the V100 architecture performs best for all values of N. The TITAN X provides better speedups against the baseline when the number of elements in the mini-batch is small, and both our method and baseline run on the same GPU device. In this work, we introduce two parallel methods for sparse computations found in NMT. The first operation is the sparse multiplication found in the input layer, and the second one is a fused softmax and top-N. Both implementations outperform different parallel baselines. We obtained speedups of up to 7× for the sparse affine transformation, and 50× for the fused softmax and top-N task.
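As a closing illustration, the order-preserving packing of (probability, column index) pairs described in Section 4 can be sketched conceptually in NumPy: the bit pattern of a non-negative float32, read as an unsigned integer, sorts in the same order as the float, so a single max over the packed 64-bit values recovers both the best probability and its index. The actual implementation performs this inside the CUDA kernel with atomicMax; the snippet below only demonstrates the packing idea.

```python
# Conceptual sketch of packing (probability, column index) into one 64-bit value
# so that comparing packed values compares probabilities first (NumPy, not CUDA).
import numpy as np

def pack(prob, index):
    # For non-negative IEEE-754 floats (e.g. probabilities), the uint32 view of the
    # bit pattern preserves ordering.
    upper = np.array(prob, dtype=np.float32).view(np.uint32)
    return int(upper) << 32 | int(index)

def unpack(packed):
    prob = np.array((packed >> 32) & 0xFFFFFFFF, dtype=np.uint32).view(np.float32)
    return float(prob), packed & 0xFFFFFFFF

packed = [pack(p, i) for i, p in enumerate([0.1, 0.7, 0.2])]
prob, idx = unpack(max(packed))   # one max yields both the best prob and its index
print(prob, idx)                  # -> ~0.7 1
```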
| 923 | 1,564 | 923 |
Enhancing Air Quality Prediction with Social Media and Natural Language Processing
|
Alongside modern industrial development, air pollution has become a major concern for human health. Hence, air quality measures, such as the concentration of PM 2.5, have attracted increasing attention. Even though some studies apply historical measurements to air quality forecasting, changes in air quality conditions are still hard to monitor. In this paper, we propose to exploit social media and natural language processing techniques to enhance air quality prediction. Social media users are treated as social sensors that contribute their observations and locations. After filtering noisy tweets using word selection and topic modeling, a deep learning model based on convolutional neural networks and over-tweet pooling is proposed to enhance air quality prediction. We conduct experiments on 7-month real-world Twitter datasets in the five most heavily polluted states in the USA. The results show that our approach significantly improves air quality prediction over a baseline that does not use social media, by 6.9% to 17.7% in macro-F1 scores.
|
In recent centuries, industrialization has considerably changed human society by providing a stimulus to economic growth and improved life quality. However, the advancement is accompanied by the increase in air pollutant emissions and risks to public health. As a consequence, predicting real-time air quality information (AQI), such as the concentration of PM 2.5 , has attracted more and more attention. Air quality prediction may help the government and society to better protect their citizens from potentially harmful effects of poor air quality. To forecast AQI, one of the most conventional approaches is to exploit historical air quality and treat the task as a time series prediction problem To learn additional knowledge without physical sensors, one of the most effective approaches is to leverage the wisdom of the crowd on the internet. For example, 81% of the adults in the USA spend on average two hours on social media and collectively publish 170 million tweets In this paper, we aim to leverage social media for air quality prediction. Our approach consists of three stages, including (1) tweet filtering, (2) feature extraction, and (3) air quality prediction. In the first stage, all of the incoming tweets are filtered by geographical locations and keywords extracted from statistical and topical modeling. After filtering the tweets, a convolutional neural network is applied to extract the individual feature vector for each tweet with a max-over-time pooling layer. A max-over-tweet layer is then proposed to aggregate the feature vectors of all tweets as the social media features for predicting air quality using a fully-connected hidden layer to combine with historical measurements. Finally, experiments conducted on 7-month large-scale Twitter datasets show that our approach significantly outperforms all comparative baselines.
|
Following the previous studies In most of the cities, the majority of tweets should be irrelevant to air quality because users are less likely to discuss air quality situations unless there is a dramatic change. Hence, we need to filter tweets before using them for air quality prediction. Following the previous work (Shike Mei and R.Dyer, 2014), we use three groups of keywords for filtering tweets, including (1) environmentrelated terms like smog released by EPA, (2) health-related terms like choke provided by the National Library of Medicine The incoming tweets are filtered by the aforementioned keywords in the three groups. The tweets containing at least one of these keywords are likely to be relevant to the topics about air quality. We denote the corpus of relevant tweets as D (l, t). The features extracted from relevant tweets are expected to be more robust. To extract features from text data, the effectiveness of convolutional neural networks (CNNs) has been demonstrated in many studies Tweet Representation. A tweet w i can be repby a matrix W i ∈ R d×|w i | , where d is the dimension of word embeddings; and |w i | is the number of words in the tweet. As shown in Figure Corpus Representation. Since relevant tweets in the corpus can be myriad and not fixed, we need to aggregate various representations into an ultimate representation for the whole corpus. Here we propose max-over-tweet pooling to derive the corpus representation. The layer of max-over-tweet pooling reads all tweet representations and aggregates them by deriving the maximum value for each representation dimension. More precisely, a dimension of the representation can be treated as the sensor about a particular topic while the max-overtweet pooling layer attempts to find the maximum sensor value among the sensor values of all relevant tweets. Finally, the max-over-tweet pooling layer can derive the corpus representation m all by considering all tweet representations. After determining the corpus representation m all , the final features x(l, t) for air quality prediction can be constructed by concatenating m all and the historical measurements H(l, t). As a consequence, the final features incorporate the knowl-edge of existing observations and the crowd power on social media. To address the air quality prediction, we apply a fully-connected hidden layer to estimate the logits of all classes. More precisely, the logits z(l, t) can be computed as z(l, t) = F (x(l, t)), where F (•) is a fully-connected hidden layer with L hidden units; the dimension of z(l, t) is identical to the number of classes in air quality categorization. Then the probabilistic score for each class can be obtained with a softmax function 3.1 Experimental Settings. Data Collection. For social media data, we exploit the Twitter developer API Baseline Methods. Because we are the first study using social media to predict air quality situation, there are much few available methods. Even though some studies (1) Prediction with only AQIs (PAQI): To under-stand the base performance, PAQI predicts the air quality conditions with only historical measurements. The knowledge of social media is ignored for this baseline method. (2) Bag-of-words Features (BOW): To demonstrate the effectiveness of extracted features, we replace the extracted features with conventional bag-of-words features as a baseline method. Note that all baselines apply a neural network with a hidden layer for prediction. 
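A minimal PyTorch sketch of the pipeline described above, a CNN over each relevant tweet, max-over-time pooling, max-over-tweet pooling across the filtered corpus D(l, t), concatenation with the historical measurements H(l, t), and a fully connected classifier. The layer sizes, filter width, historical-feature dimension, and number of classes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of CNN + max-over-time + max-over-tweet pooling (illustrative sizes).
import torch
import torch.nn as nn

class AirQualityNet(nn.Module):
    def __init__(self, vocab, d_emb=100, n_filters=64, width=3, d_hist=24, n_classes=6):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_emb)
        self.conv = nn.Conv1d(d_emb, n_filters, kernel_size=width, padding=1)
        self.fc = nn.Linear(n_filters + d_hist, n_classes)

    def tweet_vec(self, tweet_ids):                  # (n_tweets, max_len)
        e = self.emb(tweet_ids).transpose(1, 2)      # (n_tweets, d_emb, max_len)
        h = torch.relu(self.conv(e))                 # (n_tweets, n_filters, max_len)
        return h.max(dim=2).values                   # max-over-time pooling

    def forward(self, tweet_ids, hist):
        m = self.tweet_vec(tweet_ids)                # one vector per relevant tweet
        m_all = m.max(dim=0).values                  # max-over-tweet pooling
        x = torch.cat([m_all, hist])                 # append historical measurements
        return self.fc(x)                            # logits z(l, t)
```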
For evaluation, micro-and macro-F1 scores are selected the evaluation metrics. Table Micro-F1 scores are generally better than macro-F1 scores because the trivial cases like the class of good air quality are the majority of datasets with higher weights in micro-F1 scores. PAQI is better than BOW although BOW uses the knowledge of social media. It is because BOW features involve all irrelevant words so that the actual essential knowledge cannot be recognized. Our approach significantly outperforms all baseline methods in almost all metrics. More precisely, our approach improves the air quality prediction over PAQI from 6.92% to 17.71% in macro-F1 scores. The results demonstrate that social media and NLP can benefit air quality prediction. In addition to the unbalanced datasets based on the categorization of EPA, we also conduct the experiments with relatively balanced datasets to show the robustness of our proposed approach. More specifically, the categorization is refined to four classes with finer windows of AQIs, including: [0, 25), In this paper, we propose a novel framework for leveraging social media and NLP to air quality prediction. After filtering irrelevant tweets, a CNN derives a feature vector for each tweet with max-over-time pooling. We also propose the novel max-over-tweet pooling to aggregate the feature vectors of all tweets over numerous hid- Figure den topics. Finally, the corpus representation can be taken into account to predict air quality with historical measurements. The results of extensive experiments show that our proposed approach significantly outperforms two comparative baseline methods across both balanced and unbalanced datasets for different locations in the USA. This is because: (1) Most noisy and irrelevant tweets are effectively filtered in the stage of tweet filtering; (2) The convolutional neural network and the proposed max-over-tweets are able to extract essential knowledge about air quality prediction from myriad tweets in social media; (3) There are some All of the improvements of our approach over the baseline method are significant with a paired t-test at a 99% significance level. limitations on only using historical measurements, such as the capability of recognizing real-world events.
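For reference, the micro- and macro-averaged F1 scores used as evaluation metrics above are available directly in scikit-learn:

```python
# Computing the evaluation metrics used above with scikit-learn (toy labels).
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 2, 2, 3]   # gold air-quality classes
y_pred = [0, 1, 1, 2, 2, 2]   # predicted classes
print(f1_score(y_true, y_pred, average="micro"))
print(f1_score(y_true, y_pred, average="macro"))
```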
| 1,050 | 1,857 | 1,050 |
AUTONLU: An On-demand Cloud-based Natural Language Understanding System for Enterprises
|
With the renaissance of deep learning, neural networks have achieved promising results on many natural language understanding (NLU) tasks. Even though the source code of many neural network models is publicly available, there is still a large gap between open-sourced models and solving real-world problems in enterprises. Therefore, to fill this gap, we introduce AUTONLU, an on-demand cloud-based system with an easy-to-use interface that covers all common use-cases and steps in developing an NLU model. AUTONLU has supported many product teams within Adobe with different use-cases and datasets, quickly delivering them working models. To demonstrate the effectiveness of AUTONLU, we present two case studies. i) We build a practical NLU model for handling various image-editing requests in Photoshop. ii) We build powerful keyphrase extraction models that achieve state-of-the-art results on two public benchmarks. In both cases, end users only need to write a small amount of code to convert their datasets into a common format used by AUTONLU.
|
In recent years, many deep learning methods have achieved impressive results on a wide range of tasks, including question answering. Even so, there is still a large gap between open-sourced models and systems that solve real-world problems in enterprises. AUTONLU is designed to close this gap, guided by three core principles. • Ease of use. AUTONLU aims to help users with limited technical knowledge to train and test models on their datasets. We provide GUI modules to accommodate the most common use-cases, from creating/cleaning a dataset to training/evaluating/debugging a model. • State-of-the-art models. Users should not sacrifice performance for ease of use. Our built-in models provide state-of-the-art performance on multiple public datasets. AUTONLU also supports hyperparameter tuning using grid search, allowing users to fine-tune the models even further. • Scalability. AUTONLU aims to be deployed in enterprises where computing costs could be a limiting factor. We provide an on-demand architecture so that the system can be utilized as much as possible. At Adobe, AUTONLU has been used to train NLU models for different product teams, ranging from Photoshop to Document Cloud. To demonstrate the effectiveness of AUTONLU, we present two case studies. i) We build a practical NLU model for handling various image-editing requests in Photoshop. ii) We build powerful keyphrase extraction models that achieve state-of-the-art results on two public benchmarks. In both cases, end users only need to write a small amount of code to convert their datasets into a common format used by AUTONLU.
|
Closely related branches of work to ours are toolkits and frameworks designed to provide a suite of state-of-the-art NLP models to users In 2018, Google introduced AutoML Natural Language 1 , a platform that enables users to build and deploy machine learning models for various NLP tasks. Our system is different from AutoML in the following aspects. First, AutoML uses neural architecture search (NAS) Figure • A web application that serves as the frontend to the users. The most important component of the application is a Scheduler that moni- tors the status of the cluster, then assigns jobs to the most appropriate instances, as well as spawns more/shuts off instances based on the workload to minimize the computing costs. The user interface is discussed in more detail in Section 3.3. • A cloud storage system that stores datasets, large pre-trained language models (e.g., BERT Regardless of the underlying model, in each prebuilt image, an included webserver is configured to serve the following endpoints: • /train that connects to the training code of the underlying model. • /is free that returns various information about the utilization of the instance (e.g, GPU memory usage). • /test that connects to the testing code of the underlying model. • /notebook that connects to the Jupyter Lab notebook's URL packaged in the image. Each image also exposes an SSH connection, authenticated using LDAP. Experienced users can also make use of the packaged TensorBoard to monitor the training process. Public and internal datasets come in many different formats, as they may have been collected for many years and annotated in different ways. To mitigate that, we develop an intermediate representation (IR) that is suitable for many NLU tasks and write frontends to convert common dataset formats to said IR. We also provide a converter that converts this IR back into other dataset formats, making converting a dataset from one format to another trivial. In our setting (an enterprise environment), a dataset frontend converter is the only part that may need to be written by an end-user, and we believe that it is significantly simpler than building the whole NLU pipeline. Figure We include TensorBoard in our prebuilt images to display common training metrics. However, since our main users are typically product teams with limited experience in machine learning, we also develop interactive views to analyze the trained results. For example, Figure In most use-cases, AUTONLU automatically handles resource management for the users. However, if an advanced user wants to manually manage instances' life cycle, assign a task to a specific instance, or to debug an instance, we provide a GUI to do so as well. Concretely, we provide the following functionalities: • Create an instance with a desired hardware configuration and docker image. By default, AUTONLU creates an instance with 4 CPU cores, 8 GBs of RAM, and 1 NVIDIA V100 GPU, which are all configurable to the user's desire. The default docker image is the one containing all the supported models, but users can choose from one of the prebuilt images that contains just a single model if that's their use-case. • Assign a task to an instance. During training and testing, users can choose whether to let AUTONLU to distribute the task or to assign the task to a specific instance: it is common for a product team to reserve a few instances for themselves and want to use just those instances. • Access an instance's shell and files. 
Since Ease-of-use is one of our core design principles, we package in all of our prebuilt images a Jupyter Lab server, with the intention of using it as a lightweight IDE/shell environment. While we also expose SSH connection to each instance, we expect users to find the Jupyter Lab a more friendly approach. 4 Case studies One of the first clients of AUTONLU was the Photoshop team, as we want to build a chatbot using their image-editing requests dataset The dataset was collected in many years, annotated both using Amazon Mechanical Turk and by our in-house annotators. Cleaning this dataset is a challenge in itself, and in this case study, we aim to create an effective workflow to train a state-of-theart model and clean the dataset at the same time. We first convert the dataset into our IR, and train a simple model using the fastest algorithm provided by AUTONLU. This initial model provides us with a rough confusion matrix, and we manually inspect cells with the biggest values. Those cells give us an insight into some systematic labeling errors, such as in Figure Once the fast model performance is comparable to its performance on some public datasets, such as ATIS Keyphrase extraction is the task of automatically extracting a small set of phrases that best describe a document. As keyphrases provide a high-level summarization of the considered document and they give the reader some clues about its contents, keyphrase extraction is a problem of great interest to the Document Cloud team of Adobe. In this case study, we aim to develop an effective keyphrase extraction system for the team. Similar to recent works on keyphrase extraction After that, we simply use AUTONLU to train and tune models. We employ the BiLSTM-CRF archi- tecture In this work, we introduce AUTONLU, an ondemand cloud-based platform that is easy-to-use and has enabled many product teams within Adobe to create powerful NLU models. Our design principles make it an ideal candidate for enterprises who want to have an NLU system for themselves, with minimal deep learning expertise. AUTONLU 's code is in the process to be open-sourced, and we invite contributors to contribute. In future work, we will implement more advanced features such as transfer learning, knowledge distillation and neural architecture search, which have been shown to be useful in building real-world NLP systems
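As an illustration of the small amount of glue code end users write, the sketch below converts a toy list of intent/slot examples into a simple JSON structure. The schema and field names here are invented for illustration only and are not AUTONLU's actual intermediate representation.

```python
# Hypothetical example of a dataset frontend converter; the schema shown here is
# invented for illustration and is NOT AutoNLU's actual internal format.
import json

def to_ir(examples):
    """examples: list of (utterance, intent, {slot_name: value}) tuples."""
    records = []
    for utterance, intent, slots in examples:
        records.append({
            "text": utterance,
            "intent": intent,
            "slots": [{"name": k, "value": v} for k, v in slots.items()],
        })
    return records

examples = [("crop the image to 300 by 200", "crop", {"width": "300", "height": "200"})]
print(json.dumps(to_ir(examples), indent=2))
```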
| 1,047 | 1,416 | 1,047 |
Inference Rules and their Application to Recognizing Textual Entailment
|
In this paper, we explore ways of improving an inference rule collection and its application to the task of recognizing textual entailment. For this purpose, we start with an automatically acquired collection and we propose methods to refine it and obtain more rules using a hand-crafted lexical resource. Following this, we derive a dependency-based structure representation from texts, which aims to provide a proper base for the inference rule application. The evaluation of our approach on the recognizing textual entailment data shows promising results on precision and the error analysis suggests possible improvements.
|
Textual inference plays an important role in many natural language processing (NLP) tasks. In recent years, the recognizing textual entailment (RTE) Studies such as A typical example is the following RTE pair in which accelerate to in H is used as an alternative formulation for reach speed of in T. T: The high-speed train, scheduled for a trial run on Tuesday, is able to reach a maximum speed of up to 430 kilometers per hour, or 119 meters per second. H: The train accelerates to 430 kilometers per hour. One way to deal with textual inference is through rule representation, for example X wrote Y ≈ X is author of Y. However, manually building collections of inference rules is time-consuming and it is unlikely that humans can exhaustively enumerate all the rules encoding the knowledge needed in reasoning with natural language. Instead, an alternative is to acquire these rules automatically from large corpora. Given such a rule collection, the next step to focus on is how to successfully use it in NLP applications. This paper tackles both aspects, acquiring inference rules and using them for the task of recognizing textual entailment. For the first aspect, we extend and refine an existing collection of inference rules acquired based on the Distributional Hypothesis (DH). One of the main advantages of using the DH is that the only input needed is a large corpus of (parsed) text For the second aspect, we focus on applying these rules to the RTE task. In particular, we use a structure representation derived from the dependency parse trees of T and H, which aims to capture the essential information they convey. The rest of the paper is organized as follows: Section 2 introduces the inference rule collection we use, based on the Discovery of Inference Rules from Text (henceforth DIRT) algorithm and discusses previous work on applying it to the RTE task. Section 3 focuses on the rule collection itself and on the methods in which we use an external lexical resource to extend and refine it. Section 4 discusses the application of the rules for the RTE data, describing the structure representation we use to identify the appropriate context for the rule application. The experimental results will be presented in Section 5, followed by an error analysis and discussions in Section 6. Finally Section 7 will conclude the paper and point out future work directions.
|
A number of automatically acquired inference rule/paraphrase collections are available, such as The DIRT algorithm has been introduced by An inference rule in DIRT is a pair of binary relations pattern 1 (X, Y ), pattern 2 (X, Y ) which stand in an inference relation. pattern 1 and pattern 2 are chains in dependency trees 3 while X and Y are placeholders for nouns at the end of this chain. The two patterns will constitute a candidate paraphrase if the sets of X and Y values exhibit relevant overlap. In the following example, the two patterns are prevent and provide protection against. 3 obtained with the Minipar parser The algorithm does not extract directional inference rules, it can only identify candidate paraphrases; many of the rules are however unidirectional. Besides syntactic rewriting or lexical rules, rules in which the patterns are rather complex phrases are also extracted. Some of the rules encode lexical relations which can also be found in resources such as WordNet while others are lexical-syntactic variations that are unlikely to occur in hand-crafted resources Current work on inference rules focuses on making such resources more precise. Intuitively such inference rules should be effective for recognizing textual entailment. However, only a small number of systems have used DIRT as a resource in the RTE-3 challenge, and the experimental results have not fully shown it has an important contribution. In ( In Based on observations of using the inference rule collection on the real data, we discover that 1) some of the needed rules still lack even in a very large collection such as DIRT and 2) some systematic errors in the collection can be excluded. On both aspects, we use WordNet as additional lexical resource. Missing Rules A closer look into the RTE data reveals that DIRT lacks many of the rules that entailment pairs require. Table The more complex rules are even more difficult to capture with a DIRT-like algorithm. Some of these do not occur frequently enough even in large amounts of text to permit acquiring them via the DH. Combining WordNet and DIRT In order to address the issue of missing rules, we investigate the effects of combining DIRT with an exact hand-coded lexical resource in order to create new rules. For this we extended the DIRT rules by adding rules in which any of the lexical items involved in the patterns can be replaced by WordNet synonyms. In the example above, we consider the DIRT rule X face threat of Y → X, at risk of Y (Table The idea behind this is that a combination of various lexical resources is needed in order to cover the vast variety of phrases which humans can judge to be in an inference relation. The method just described allows us to identify the first four rules listed in Table Our extension is application-oriented therefore it is not intended to be evaluated as an independent rule collection, but in an application scenario such as RTE (Section 6). In our experiments we also made a step towards removing the most systematic errors present in DIRT. DH algorithms have the main disadvantage that not only phrases with the same meaning are extracted but also phrases with opposite meaning. In order to overcome this problem and since such errors are relatively easy to detect, we applied a filter to the DIRT rules. This eliminates inference rules which contain WordNet antonyms. 
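The two WordNet-based refinements just described, extending rules by substituting WordNet synonyms for lexical items and filtering rules whose two patterns differ only in a pair of antonyms, can be sketched with NLTK's WordNet interface. The sketch below is a simplification that treats patterns as plain word sequences and ignores the dependency edge labels of real DIRT rules.

```python
# Simplified sketch of the two WordNet-based steps (NLTK); patterns are treated as
# plain word sequences here, ignoring the edge labels of real DIRT rules.
from nltk.corpus import wordnet as wn

def synonym_variants(pattern):
    """Yield copies of the pattern with one word replaced by a WordNet synonym."""
    for i, word in enumerate(pattern):
        for syn in wn.synsets(word):
            for lemma in syn.lemmas():
                name = lemma.name().replace("_", " ")
                if name.lower() != word.lower():
                    yield pattern[:i] + [name] + pattern[i + 1:]

def are_antonyms(w1, w2):
    return any(w2 in {a.name() for lem in syn.lemmas() for a in lem.antonyms()}
               for syn in wn.synsets(w1))

def antonym_filter(pattern1, pattern2):
    """True if the rule should be removed: patterns identical except for antonyms."""
    if len(pattern1) != len(pattern2):
        return False
    diff = [(a, b) for a, b in zip(pattern1, pattern2) if a != b]
    return len(diff) == 1 and are_antonyms(*diff[0])

# Expected True for the example rule from the text, provided WordNet lists
# have/lack as antonyms.
print(antonym_filter(["X", "have", "confidence", "in", "Y"],
                     ["X", "lack", "confidence", "in", "Y"]))
```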
For such a rule to be eliminated the two patterns have to be identical (with respect to edge labels and content words) except from the antonymous words; an example of a rule eliminated this way is X have confidence in Y ≈ X lack confidence in Y. As pointed out by In this section we point out two issues that are encountered when applying inference rules for textual entailment. The first issue is concerned with correctly identifying the pairs in which the knowledge encoded in these rules is needed. Following this, another non-trivial task is to determine the way this knowledge interacts with the rest of information conveyed in an entailment pair. In order to further investigate these issues, we apply the rule collection on a dependency-based representation of text and hypothesis, namely Tree Skeleton. A straightforward experiment can reveal the number of pairs in the RTE data which contain rules present in DIRT. For all the experiments in this paper, we use the DIRT collection provided by T: The sale was made to pay Yukos US$ 27.5 billion tax bill, Yuganskneftegaz was originally sold for US$ 9.4 billion to a little known company Baikalfinansgroup which was later bought by the Russian state-owned oil company Rosneft. H: Baikalfinansgroup was sold to Rosneft. On average, only 2% of the pairs in the RTE data is subject to the application of such inference rules. Out of these, approximately 50% are lexical rules (one verb entailing the other). Out of these lexical rules, around 50% are present in WordNet in a synonym, hypernym or sister relation. At a manual analysis, close to 80% of these are correct rules; this is higher than the estimated accuracy of DIRT, probably due to the bias of the data which consists of pairs which are entailment candidates. However, given the small number of inference rules identified this way, we performed another analysis. This aims at determining an upper bound of the number of pairs featuring entailment phrases present in a collection. Given DIRT and the RTE data, we compute in how many pairs the two patterns of a paraphrase can be matched irrespective of their anchor values. An example is the following pair, T: Libya's case against Britain and the US concerns the dispute over their demand for extradition of Libyans charged with blowing up a Pan Am jet over Lockerbie in 1988. H: One case involved the extradition of Libyan suspects in the Pan Am Lockerbie bombing. This is a case in which the rule is correct and the entailment is positive. In order to determine this, a system will have to know that Libya's case against Britain and the US in T entails one case in H. Similarly, in this context, the dispute over their demand for extradition of Libyans charged with blowing up a Pan Am jet over Lockerbie in 1988 in T can be replaced with the extradition of Libyan suspects in the Pan Am Lockerbie bombing preserving the meaning. Altogether in around 20% of the pairs, patterns of a rule can be found this way, many times with more than one rule found in a pair. However, in many of these pairs, finding the patterns of an inference rule does not imply that the rule is truly present in that pair. Considering a system is capable of correctly identifying the cases in which an inference rule is needed, subsequent issues arise from the way these fragments of text interact with the surrounding context. 
Assuming we have a correct rule present in an entailment pair, the cases in which the pair is still not a positive case of entailment can be summarized as follows: • The entailment rule is present in parts of the text which are not relevant to the entailment value of the pair. • The rule is relevant, however the sentences in which the patterns are embedded block the entailment (e.g. through negative markers, modifiers, embedding verbs not preserving entailment) • The rule is correct in a limited number of contexts, but the current context is not the correct one. To sum up, making use of the knowledge encoded with such rules is not a trivial task. If rules are used strictly in concordance with their definition, their utility is limited to a very small number of entailment pairs. For this reason, 1) instead of forcing the anchor values to be identical as most previous work, we allow more flexible rule matching (similar to The Tree Skeleton (TS) structure was proposed by Following their algorithm, we first preprocess the data using a dependency parser H Robin Warren was awarded a Nobel Prize. Notice that, in order to match the inference rules with two anchors, the number of the dependency Applying DIRT on a TS Dependency representations like the tree skeleton have been explored by many researchers, e.g. In the example above, the rule ---→ Y satisfies this criterion, as it is matched at the root nodes. Notice that the rule is correct only in restricted contexts, in which the object of receive is something which is conferred on the basis of merit. However in this pair, the context is indeed the correct one. Our experiments consist in predicting positive entailment in a very straightforward rule-based manner (Table In the first two columns (Dirt T S and Dirt+WN T S ) we consider DIRT in its original state and DIRT with rules generated with WordNet as described in Section 3; all precisions are higher than 67% In the third column we report the results of using a set of rules containing only the trivial identity ones (Id T S ). For our current system, this can be seen as a precision upper bound for all the other collections, in concordance with the fact that identical rules are nothing but inference rules of highest possible confidence. The fourth column (Dirt+Id+WN T S ) contains what can be considered our best setting. In this setting considerably more pairs are covered using a collection containing DIRT and identity rules with WordNet extension. Although the precision results with this setting are encouraging (65% for RTE2 data and 72% for RTE3 data), the coverage is still low, 8% for RTE2 and 6% for RTE3. This aspect together with an error analysis we performed are the focus of Section 7. The last column (Dirt+Id+WN) gives the precision we obtain if we simply decide a pair is true entailment if we have an inference rule matched in it (irrespective of the values of the anchors or of the existence of tree skeletons). As expected, only identifying the patterns of a rule in a pair irrespective of tree skeletons does not give any indication of the entailment value of the pair. At last, we also integrate our method with a bag of words baseline, which calculates the ratio of overlapping words in T and H. For the pairs that our method covers, we overrule the baseline's decision. The results are shown in Table In this section we take a closer look at the data in order to better understand how does our method of combining tree skeletons and inference rules work. 
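The bag-of-words baseline described above, and the way the rule-based decision overrules it on the pairs the method covers, can be sketched as follows; the overlap threshold and whitespace tokenization are simplifications.

```python
# Sketch of the bag-of-words overlap baseline, overruled by the rule-based method
# on covered pairs (threshold and tokenization are simplified/illustrative).
def overlap_ratio(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / len(h) if h else 0.0

def predict(text, hypothesis, rule_decision=None, threshold=0.6):
    """rule_decision: True/False when a matched inference rule covers the pair, else None."""
    if rule_decision is not None:      # overrule the baseline on covered pairs
        return rule_decision
    return overlap_ratio(text, hypothesis) >= threshold
```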
We will first perform error analysis on what we have considered our best setting so far. Following this, we analyze data to identify the main reasons which cause the low coverage. For error analysis we consider the pairs incorrectly classified in the RTE3 test data set, consisting of a total of 25 pairs. We classify the errors into three main categories: rule application errors, inference rule errors, and other errors (Table In the first category, the tree skeleton fails to match the corresponding anchors of the inference rules. For instance, if someone founded the Institute of Mathematics (Instituto di Matematica) at the University of Milan, it does not follow that they founded The University of Milan. The Institute of Mathematics should be aligned with the University of Milan, which should avoid applying the in-ference rule for this pair. A rather small portion of the errors (16%) are caused by incorrect inference rules. Out of these, two are correct in some contexts but not in the entailment pairs in which they are found. For example, the following rule X generate Y ≈ X earn Y is used incorrectly, however in the restricted context of money or income, the two verbs have similar meaning. An example of an incorrect rule is X issue Y ≈ X hit Y since it is difficult to find a context in which this holds. The last category contains all the other errors. In all these cases, the additional information conveyed by the text or the hypothesis which cannot be captured by our current approach, affects the entailment. For example an imitation diamond is not a diamond, and more than 1,000 members of the Russian and foreign media does not entail more than 1,000 members from Russia; these are not trivial, since lexical semantics and fine-grained analysis of the restrictors are needed. For the second part of our analysis we discuss the coverage issue, based on an analysis of uncovered pairs. A main factor in failing to detect pairs in which entailment rules should be applied is the fact that the tree skeleton does not find the corresponding lexical items of two rule patterns. Issues will occur even if the tree skeleton structure is modified to align all the corresponding fragments together. Consider cases such as threaten to boycott and boycott or similar constructions with other embedding verbs such as manage, forget, attempt. Our method can detect if the two embedded verbs convey a similar meaning, however not how the embedding verbs affect the implication. Independent of the shortcomings of our tree skeleton structure, a second factor in failing to detect true entailment still lies in lack of rules. For instance, the last two examples in Throughout the paper we have identified important issues encountered in using inference rules for textual entailment and proposed methods to solve them. We explored the possibility of combining a collection obtained in a statistical, unsupervised manner, DIRT, with a hand-crafted lexical resource in order to make inference rules have a larger contribution to applications. We also investigated ways of effectively applying these rules. The experiment results show that although coverage is still not satisfying, the precision is promising. Therefore our method has the potential to be successfully integrated in a larger entailment detection framework. The error analysis points out several possible future directions. The tree skeleton representation we used needs to be enhanced in order to capture more accurately the relevant fragments of the text. 
A different issue is that many of the rules we could use for textual entailment detection are still lacking. A proper study of the limitations of the DH, as well as a classification of the knowledge we want to encode as inference rules, would be a step forward towards solving this problem. Furthermore, although all the inference rules we used aim at recognizing positive entailment cases, it is natural to use them for detecting negative cases of entailment as well. In general, we can identify pairs in which the patterns of an inference rule are present but the anchors are mismatched, or do not stand in the correct hypernym/hyponym relation. This can be the basis of a principled method for detecting structural contradictions (de
| 625 | 2,386 | 625 |
Type Level Clustering Evaluation: New Measures and a POS Induction Case Study
|
Clustering is a central technique in NLP. Consequently, clustering evaluation is of great importance. Many clustering algorithms are evaluated by their success in tagging corpus tokens. In this paper we discuss type level evaluation, which reflects class membership only and is independent of the token statistics of a particular reference corpus. Type level evaluation casts light on the merits of algorithms, and for some applications is a more natural measure of the algorithm's quality.
|
Clustering is a central machine learning technique. In NLP, clustering has been used for virtually every semi- and unsupervised task, including POS tagging. In this paper we discuss type level evaluation, which evaluates the set membership structure created by the clustering, independently of the token statistics of the gold standard corpus. Many clustering algorithms are evaluated by their success in tagging corpus tokens. Clustering evaluation has been extensively investigated (Section 3). However, the discussion centers around the monosemous case, where each item belongs to exactly one cluster, although polysemy is the common case in NLP. The contributions of the present paper are as follows. First, we discuss the issue of type level evaluation and explain why even in the monosemous case a token level evaluation presents a skewed picture (Section 2). Second, we show for the common polysemous case why adapting existing information-theoretic measures to type level evaluation is not natural (Section 3). Third, we propose new mapping-based measures and algorithms to compute them (Section 4). Finally, we perform a detailed case study with part-of-speech (POS) induction (Section 5). We compare seven leading algorithms, showing that token and type level measures can weakly or even negatively correlate. This shows that type level evaluation indeed reveals aspects of a clustering solution that are not revealed by the common tagging-based evaluation. Clustering is a vast research area. As far as we know, this is the first NLP paper to propose type level measures for the polysemous case.
|
This section motivates why both type and token level external evaluations should be done, even in the monosemous case. Clustering algorithms compute a set of induced clusters (a clustering). Some algorithms directly compute a clustering, while some others produce a tagging of corpus tokens from which a clustering can be easily derived. A clustering is monosemous if each item is allowed to belong to a single cluster only, and polysemous otherwise. An external evaluation is one which is based on a comparison of an algorithm's result to a gold standard. In this paper we focus solely on external evaluation, which is the most common evaluation approach in NLP. Token and type level evaluations reflect different aspects of a clustering. External token level evaluation assesses clustering quality according to the clustering's accuracy on a given manually annotated corpus. This is certainly a useful evaluation measure, e.g. when the purpose of the clustering algorithm is to annotate a corpus to serve as input to another application. External type level evaluation views the computed clustering as a set membership structure and evalutes it independently of the token statistics in the gold standard corpus. There are two main cases in which this is useful. First, a type level evaluation can be the natural one in light of the problem itself. For example, if the purpose of the clustering algorithm is to automatically build a lexicon (e.g., VerbNet To motivate type level evaluation, consider POS induction, which exemplifies both cases above. Clearly, a word form may belong to several parts of speech (e.g., 'contrast' is both a noun and a verb, 'fast' is both an adjective and an adverb, 'that' can be a determiner, conjunction and adverb, etc.). As an evaluation of a POS induction algorithm, it is natural to evaluate the lexicon it generates, even if the main goal is to annotate a corpus. The lexicon lists the possible POS tags for each word, and thus its evaluation is a polysemous type level one. Even if we ignore polysemy, type level evaluation is useful for a POS induction algorithm used to tag a corpus. There are POS classes whose members are very frequent, e.g., determiners and prepositions. Here, a very small number of word types usually accounts for a large portion of corpus tokens. For example, in the WSJ Penn Treebank The type and token behavior differences result from the Zipfian distribution of word tokens to word types Other natural language entities also demonstrate Zipfian distribution of tokens to types. For example, the distribution of syntactic categories in parse tree constituents is Zipfian, as shown in It may be argued that a token level evaluation is sufficient since it already reflects type information. In this paper we demonstrate that this is not the case, by showing that they correlate weakly or even negatively in an important NLP task. Clustering evaluation is challenging. Many measures have been proposed in the past decades Mapping based measures are based on a postprocessing step in which each induced cluster is mapped to a gold class (or vice versa). The standard mappings are greedy many-to-one (M-1) and greedy one-to-one (1-1). Several measures which rely on these mappings were proposed. The most common and perhaps the simplest one is accuracy, which computes the fraction of items correctly clustered under the mapping. 
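As a concrete reference point for the mapping-based accuracy just mentioned, the sketch below computes monosemous accuracy under a greedy many-to-one mapping; the function and variable names are illustrative and not taken from any released implementation.

```python
from collections import Counter, defaultdict

def greedy_m1_accuracy(induced, gold):
    """Greedy many-to-one accuracy for the monosemous case: each induced
    cluster is mapped to the gold class it overlaps most, and we count the
    fraction of items whose gold class matches the class their induced
    cluster was mapped to. `induced` and `gold` are parallel lists of ids."""
    overlap = defaultdict(Counter)
    for k, c in zip(induced, gold):
        overlap[k][c] += 1
    mapping = {k: counts.most_common(1)[0][0] for k, counts in overlap.items()}
    correct = sum(mapping[k] == c for k, c in zip(induced, gold))
    return correct / len(gold)

# Toy example: three induced clusters against gold POS labels.
print(greedy_m1_accuracy([0, 0, 1, 1, 2], ["N", "N", "V", "N", "V"]))  # 0.8
```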
Other measures include: L Counting pairs measures are based on a combinatorial approach which examines the number of data element pairs that are clustered similarly in the reference and proposed clustering. Among these are Rand Index Information-theoretic (IT) measures. IT measures assume that the items in the dataset are taken from a known distribution (usually the uniform distribution), and thus the gold and induced clusters can be treated as random variables. These measures utilize a co-occurrence matrix I between the gold and induced clusters. We denote the induced clustering by K and the gold clustering by C. I ij contains the number of items in the intersection of the i-th gold class and the j-th induced cluster. When assuming the uniform distribution, the probability of an event (a gold class c or an induced cluster k) is its relative size, so Under this assumption we define the entropies and the conditional entropies: H(K) and H(K|C) are defined similarly. In Section 5 we use two IT measures for token level evaluation, V In the monosemous case (type or token), the application of the measures described in this section to type level evaluation is straightforward. In the polysemous case, however, they suffer from serious shortcomings. Consider a case in which each item is assigned exactly r gold clusters and each gold cluster has the exact same number of items (i.e., each has a size of l•r |C| , where l is the number of items). Now, consider an induced clustering where there are |C| induced clusters (|K| = |C|) and each item is assigned to all induced clusters. The co-occurrence matrix in this case should have identical values in all its entries. Even if we allow the weight each item contributes to the matrix to depend on its gold and induced entry sizes, the situation will remain the same. This is because all items have the exact same entry size and both gold and induced clusterings have uniform cluster sizes. In this case, the random variables defined by the induced and gold clustering assignments are independent (this easily follows from the definition of independent events, since the joint probability is the multiplication of the marginals). Hence, H(K|C) = H(K) and H(C|K) = H(C), and both V and NVI obtain their worst possible values The problem can in theory be solved by providing the number of clusters per item as an input to the algorithm. However, in NLP this is unrealistic (even if the total number of clusters can be provided) and the number should be determined by the algorithm. We therefore do not consider IT-based measures in this paper, deferring them to future work. In this section we present new type level evaluation measures for the polysemous case. As we show below, these measures do not suffer from the problems discussed for IT measures in Section 3. All measures are mapping-based: first, a mapping between the induced and gold clusters is performed, and then a measure E is computed. As is common in the clustering evaluation literature (Section 3), we use M-1 and 1-1 greedy mappings, defined to be those that maximize the corresponding measure E. Let C = {c 1 , ..., c n } be the set of gold classes and K = {k 1 , ..., k m } be the set of induced clusters. Denote the number of words types by l. Let A i ⊂ C, B i ⊂ K, i = 1...l be the set of gold classes and set of induced clusters for each word. The polysemous nature of task is reflected by the fact that A i and B i are subsets, rather than members, of C and K respectively. 
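The sketch below computes the entropies and conditional entropies defined above from a gold-by-induced co-occurrence matrix, under the uniform-item assumption, and combines them into a V-measure-style score. It is an illustration of the quantities discussed, not code from the paper, and the combination step follows the standard homogeneity/completeness formulation.

```python
import numpy as np

def conditional_entropies(I):
    """I[i, j] = number of items in gold class i and induced cluster j,
    assuming item probabilities are uniform (p = count / total)."""
    I = np.asarray(I, dtype=float)
    p_joint = I / I.sum()
    p_c = p_joint.sum(axis=1, keepdims=True)   # gold class marginals
    p_k = p_joint.sum(axis=0, keepdims=True)   # induced cluster marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        h_c_given_k = -np.nansum(p_joint * np.log(p_joint / p_k))
        h_k_given_c = -np.nansum(p_joint * np.log(p_joint / p_c))
        h_c = -np.nansum(p_c * np.log(p_c))
        h_k = -np.nansum(p_k * np.log(p_k))
    return h_c, h_k, h_c_given_k, h_k_given_c

def v_style_score(I):
    """Harmonic mean of homogeneity (1 - H(C|K)/H(C)) and completeness
    (1 - H(K|C)/H(K)), i.e. a V-measure-style combination."""
    h_c, h_k, h_ck, h_kc = conditional_entropies(I)
    homogeneity = 1.0 if h_c == 0 else 1.0 - h_ck / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - h_kc / h_k
    if homogeneity + completeness == 0:
        return 0.0
    return 2 * homogeneity * completeness / (homogeneity + completeness)

print(v_style_score([[10, 0], [0, 10]]))  # 1.0 for a perfect clustering
```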
Our measures address quality from two perspectives, that of the individual items clustered (Section 4.1) and that of the clusters (Section 4.2). Item-based measures especially suit evaluation of clustering quality for the purpose of lexicon induction, and have no counterpart in the monosemous case. Cluster-based measures are a direct generalization of existing mapping-based measures to the polysemous case. The difficulty in designing item-based and cluster-based measures is that the number of clusters assigned to each item is determined by the clustering algorithm. Below we show how to overcome this. For a given mapping h: K → C, a fundamental quantity for item-based evaluation is the number of correct clusters for each item (word type) under this mapping, denoted by IM_i (IM stands for 'item match'): The total item match IM is defined to be: In the monosemous case, IM is normalized by the number of items, yielding an accuracy score. Applying a similar definition in the polysemous case, normalizing instead by the total number of gold clusters assigned to the items, can be easily manipulated. Even a clustering which has the correct number of induced clusters (equal to the number of gold classes) but which assigns each item to all induced clusters receives a perfect score under both greedy M-1 and 1-1 mappings. This holds for any induced clustering for which ∀i, A_i ⊂ h(B_i). Note that using a mapping from C to K (or a combination of both directions) would exhibit the same problem. To overcome the problem, we use the harmonic average of two normalized terms (F-score). We use two average variants, micro and macro. Macro average computes the total number of matches over all words and normalizes in the end. Recall (R), Precision (P) and their harmonic average (F-score) are accordingly defined: F(h) is a constant depending on h. As all items are equally weighted, those with larger gold and induced entries have more impact on the measure. The micro average, aiming to give all items an equal status, first computes an F-score for each item and then averages over them. Hence, each item contributes at most 1 to the measure. This MicroI measure is given by: where w_i(h) is a weight depending on h but also on i. For both measures, the maximum score is 1. It is obtained if and only if A_i = h(B_i) for every i. In 1-1 mapping, when the number of induced clusters is larger than the number of gold clusters, some of the induced clusters are not mapped. To preserve the nature of 1-1 mapping that punishes for excessive clusters[2], we define |h(B_i)| to be equal to |B_i| even for these unmapped clusters. Recall that any induced clustering in which ∀i, A_i ⊂ h(B_i) gets the best score under a greedy mapping with the accuracy measure. In MacroI and MicroI the obtained recalls are perfect, but the precision terms reflect deviation from the correct solution. [2] And to allow us to compute it accurately, see below. In the example in Section 3 showing an unreasonable behavior of IT-based measures, the score depends on r for both MacroI and MicroI. With our new measures, recall is always 1, but precision is r/n. This is true both for 1-1 and M-1 mappings. Hence, the new measures show reasonable behavior in this example for all r values. MicroI was used in In the following we discuss how to compute the 1-1 and M-1 greedy mappings for each measure. 1-1 Mapping. We compute h by finding the maximal weighted matching in a bipartite graph.
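Since the exact formulas did not survive extraction, the sketch below should be read as an assumption-laden reconstruction: macro recall divides the total item match by the total gold entry size and precision by the total induced entry size (valid for a 1-1 mapping, where |h(B_i)| = |B_i|), and the micro variant takes a plain unweighted average of per-item F-scores in place of the paper's w_i(h) weighting.

```python
def item_match(A_i, B_i, h):
    """IM_i: number of gold clusters of item i recovered via mapping h.
    A_i is the item's set of gold classes, B_i its set of induced clusters."""
    mapped = {h[b] for b in B_i if b in h}
    return len(A_i & mapped)

def macro_i(A, B, h):
    """MacroI sketch (assumed normalizers; 1-1 mapping assumed, so |h(B_i)| = |B_i|)."""
    im = sum(item_match(a, b, h) for a, b in zip(A, B))
    if im == 0:
        return 0.0
    recall = im / sum(len(a) for a in A)
    precision = im / sum(len(b) for b in B)
    return 2 * precision * recall / (precision + recall)

def micro_i(A, B, h):
    """MicroI sketch: per-item F-score, then an unweighted average
    (the paper's w_i(h) weights were lost in extraction)."""
    scores = []
    for a, b in zip(A, B):
        im = item_match(a, b, h)
        if im == 0:
            scores.append(0.0)
            continue
        r, p = im / len(a), im / len(b)
        scores.append(2 * p * r / (p + r))
    return sum(scores) / len(scores)

# Toy example: two word types, 1-1 mapping {induced 0 -> "N", induced 1 -> "V"}.
A = [{"N"}, {"N", "V"}]
B = [{0}, {0, 1}]
h = {0: "N", 1: "V"}
print(macro_i(A, B, h), micro_i(A, B, h))  # both 1.0 here
```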
In this graph one side represents the induced clusters, the other represents the gold classes and the matchings correspond to 1-1 mappings. The problem can be efficiently solved by the Kuhn-Munkres algorithm To be able to use this technique, edge weights must not depend upon h. In 1-1 mapping, |h(B i )| = |B i |, and therefore F (h) = F and w i (h) = w i . That is, both quantities are independent of h There are two problems in applying the bipartite graph technique to finding an M-1 mapping. First, under such mapping w i (h) and F (h) do depend on h. The problem may be solved by selecting some constant weighting scheme. However, a more serious problem also arises. Consider a case in which an item x has a gold entry {C 1 } and an induced entry {K 1 , K 2 }. Say the chosen mapping mapped both K 1 and K 2 to C 1 . By summing over the graph's edges selected by the mapping, we add weight (F (h) for MacroI and w i (h) for MicroI) both to the edge between K 1 and C 1 and to the edge between K 2 and C 1 . However, the item's IM i is only 1. This prohibits the use of the bipartite graph method for the M-1 case. Since we are not aware of any exact method for solving this problem, we use a hill-climbing algorithm. We start with a random mapping and a random order on the induced clusters. Then we iterate over the induced clusters and map each of them to the gold class which maximizes the measure given that the rest of the mapping remains constant. We repeat the process until no improvement to the measure can be obtained by changing the assignment of a single induced cluster. Since the score depends on the initial random mapping and random order, we repeat this process several times and choose the maximum between the obtained scores. The cluster-based evaluation measures we propose are a direct generalization of existing monosemous mapping based measures to the polysemous type case. For a given mapping h : K → C, we define h : K h → C. K h is defined to be a clustering which is obtained by performing set union between every two clusters in K that are mapped to the same gold cluster. The resulting h is always 1-1. We denote Our motivation for using h in the definition of the measures instead of h is to stay as close as possible to accuracy, the most common mappingbased measure in the monosemous case. M-1 (monosemous) accuracy does not punish for spliting classes. For instance, in a case where there is a gold cluster c i and two induced clusters k 1 and k 2 such that c i = k 1 ∪ k 2 , the M-1 accuracy is the same as in the case where there is one cluster k 1 such that c i = k 1 . M-1 accuracy, despite its indifference to splitting, was shown to reflect better than 1-1 accuracy the clustering's applicability for subsequent applications (at least in some contexts) Recall that in item-based evaluation, IM i measures the intersection between the induced and gold entries of each item. Therefore, the set union operation is not needed for that case, since when an item appears in two induced clusters that are mapped to the same gold cluster, its IM i is increased only by 1. A fundamental quantity for cluster-based evaluation is the intersection between each induced cluster and the gold class to which it is mapped. We denote this value by CM j (CM stands for 'cluster match'): The total intersection (CM ) is accordingly defined to be: As with the item-based evaluation (Section 4.1), using CM or a derived accuracy as a measure is problematic. 
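A sketch of both mapping computations discussed above: the 1-1 mapping via the Hungarian algorithm on a benefit matrix, and the hill-climbing search for a greedy M-1 mapping given a generic score function (e.g., MacroI, MicroI or MicroC evaluated under a candidate mapping). The restart count and the `score` interface are assumptions for illustration.

```python
import random
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_mapping(weights):
    """weights[j, i]: benefit of mapping induced cluster j to gold class i.
    Returns a dict j -> i maximizing the total weight (Kuhn-Munkres)."""
    rows, cols = linear_sum_assignment(-np.asarray(weights, dtype=float))
    return dict(zip(rows, cols))

def hill_climb_m1(n_induced, n_gold, score, restarts=10, seed=0):
    """Greedy M-1 mapping by local search: repeatedly re-map one induced
    cluster to the gold class that maximizes score(mapping) until no single
    change helps; keep the best mapping over several random restarts."""
    rng = random.Random(seed)
    best_map, best_score = None, float("-inf")
    for _ in range(restarts):
        mapping = {j: rng.randrange(n_gold) for j in range(n_induced)}
        improved = True
        while improved:
            improved = False
            for j in rng.sample(range(n_induced), n_induced):
                best_c = max(range(n_gold), key=lambda c: score({**mapping, j: c}))
                if best_c != mapping[j] and score({**mapping, j: best_c}) > score(mapping):
                    mapping[j] = best_c
                    improved = True
        s = score(mapping)
        if s > best_score:
            best_map, best_score = dict(mapping), s
    return best_map, best_score
```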
A clustering that assigns n induced classes to each word (n is the number of gold classes) will get the highest possible score under every greedy mapping (1-1 or M-1), as will any clustering in which ∀i, As in the item-based evaluation, a possible solution is based on defining recall, precision and Fscore measures, computed either in the micro or in the macro level. The macro cluster-based measure turns out to be identical to the macro item-based measure MacroI The following derivation shows the equivalence for the 1-1 case. The M-1 case is similar. We note that h = h in the 1-1 case and we therefore exchange them in the definition of CM . It is enough to show that CM = IM , since the denominator is the same in both cases: The micro cluster-based measures are defined: The micro cluster measure MicroC is obtained by taking a weighted average over the F j 's: Where N * = z∈K h |z| is the number of clustered items after performing the set union and including repetitions. If, in the 1-1 case where m > n, an induced cluster is not mapped, we define F k = 0. A definition of the measure using a reverse mapping (i.e., from C to K) would have used a weighted average with weights proportional to the gold classes' sizes. The definition of h causes a similar computational difficulty as in the M-1 item-based measures. Consequently, we apply a hill climbing algorithm similar to the one described in Section 4.1. The 1-1 mapping is computed using the same bipartite graph method described in Section 4.1. The graph's vertices correspond to gold and induced clusters and an edge's weight is the F-score between the class and cluster corresponding to its vertices times the cluster's weight (|k|/N * ). As a detailed case study for the ideas presented in this paper, we apply the various measures for the POS induction task, using seven leading POS induction algorithms. POS Induction Algorithms. We experimented with the following models: ARR10 Clark03 and ARR10 are monosemous algorithms, allowing a single cluster for each word type. The other algorithms are polysemous. They perform sequence labeling where each token is tagged in its context, and different tokens (instances) of the same type (word form) may receive different tags. Data Set. All models were tested on sections 2-21 of the PTB-WSJ, which consists of 39832 sentences, 950028 tokens and 39546 unique types. Of the tokens, 832629 (87.6%) are not punctuation marks. Evaluation Measures. Type level evaluation used the measures MacroI (which is equal to MacroC), MicroI and MicroC both with greedy 1-1 and M-1 mappings as described in Section 4. The type level gold (induced) entry is defined to be the set of all gold (induced) clusters with which it appears. For the token level evaluation, six measures are used (see Section 3): accuracy with M-1 and 1-1 mappings, NVI, V, H(C|K) and H(K|C), using e as the logarithm's base. We use the full WSJ POS tags set excluding punctuation Punctuation. Punctuation marks occupy a large volume of the corpus tokens (12.4% in our experimental corpus), and are easy to cluster. Clustering punctuation marks thus greatly inflates token level results. To study the relationship between type and token level evaluations in a focused manner, we excluded punctuation from the evaluation (they are still used during training, so algorithms that rely on them are not harmed). Number of Induced Clusters. The number of gold POS tags in WSJ is 45, of which 11 are punctuation marks. 
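A sketch of the MicroC computation described above: induced clusters mapped to the same gold class are merged by set union, each merged cluster receives an F-score against its gold class, and the scores are averaged with weights proportional to the merged cluster sizes. Unmapped induced clusters are simply skipped here (the paper assigns them F = 0), and the data layout is an assumption.

```python
from collections import defaultdict

def micro_c(gold_classes, induced_clusters, h):
    """MicroC sketch. gold_classes[c] and induced_clusters[k] are sets of item
    ids; h maps induced cluster ids to gold class ids. Induced clusters mapped
    to the same gold class are first merged (the set-union step above)."""
    merged = defaultdict(set)
    for k, items in induced_clusters.items():
        if k in h:
            merged[h[k]] |= items
    n_star = sum(len(items) for items in merged.values())
    if n_star == 0:
        return 0.0
    total = 0.0
    for c, items in merged.items():
        inter = len(items & gold_classes[c])
        if inter:
            p, r = inter / len(items), inter / len(gold_classes[c])
            total += (len(items) / n_star) * (2 * p * r / (p + r))
    return total
```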
Therefore, for the ARR10 and Clark03 models, 34 clusters were induced. For GJ08 we received the output with 45 clusters. The iHMM models of GVG09 determine the number of clusters automatically (resulting in 47, 91 and 192 clusters, see below). For GG07, our computing resources did not enable us to induce 45 clusters and we therefore used 17 Configurations. We ran the ARR10 tagger with the configuration detailed in We obtained the code of Goldwater and Griffiths' BHMM model and ran it for 10K iterations with an annealing technique for parameter estimation. That was the best parameter estimation technique available to us. This is the first time that this model is evaluated on such a large experimental corpus, and it performed well under these conditions. The output of the model of GJ08 was sent to us by the authors. The model was run on sec-tions 2-21 of the WSJ-PTB using significantly inferior computing resources compared to those used for producing the results reported in their paper. While this model cannot be compared to the aforementioned six models due to the suboptimal configuration, we evaluate its output using our measures to get a broader variety of experimental results Table Note that the table should not be used to deduce which algorithm is the 'best' for the task, even according to a single evaluation type. This is because, as explained above, the algorithms do not induce the same number of clusters and this affects their results. Results indicate that type level evaluation reveals aspects of the clustering quality that are not expressed in the token level. For the Clark03 model the disparity is most apparent. While in the token level it performs very well (better than the polysemous algorithms for the 1-1, V and NVI token level measures), in the type level it is the second worst in the item-based 1-1 scores and the worst in the M-1 scores. Here we have a clear demonstration of the value of type level evaluation. The Clark03 algorithm is assessed as excellent using token level evaluation (second only to ARR10 in M-1, 1-1, V and NVI), and only a type level one shows its relatively poor type performance. Although readers may think that this is natural due to the algorithm's monosemous nature, this is not the case, since the monosemous ARR10 generally ranked first in the type level measures (more on this below). The disparity is also observed for polysemous algorithms. The GG07 model's token level scores are mediocre, while in the type level MicroC 1-1 measure this model is the best and in the type level MicroI and MacroI 1-1 measures it is the second best. The table shows that the ARR10 model achieves the best results in most type and token level evaluation measures. The fact that this monosemous algorithm outperforms the polysemous ones, even in a type level evaluation, may seem strange at first sight but can be explained as follows. Polysemous tokens account for almost 60% of the corpus (565K out of 950K), so we could expect that a monosemous algorithm should do badly in a token-level evaluation. However, for most of the polysemous tokens the polysemy is only weakly present in the corpus Hence, monosemous POS induction algorithms are not at such a great disadvantage relative to polysemous ones. This observation, which was fully motivated by our type level case study, might be used to guide future work on POS induction, and it thus serves as another demonstration for the utility of type level evaluation. 
For the type level measures with greedy M-1 mapping, we used the hill-climbing algorithm described in Section 4. Recall that the mapping to which our algorithm converges depends on its random initialization. We therefore ran the algorithm with 10 different random initializations and report the obtained maximum for MacroI, MicroI and MicroC in Table 1. The different initializations caused very little fluctuation: not more than 1% in the 9 (7) best runs for the item-based (MicroC) measures. We take this result as an indication that the obtained maximum is a good approximation of the global maximum. We tried to improve the algorithm by selecting an intelligent initialization heuristic. We used the M-1 mapping obtained by mapping each induced cluster to the gold class with which it has the high- est weight edge in the bipartite graph. Recall from Section 4.1 that this is a reasonable approximation of the greedy M-1 mapping. Again, we ran it for the three type level measures for 10 times with a random update order on the induced clusters. This had only a minor effect on the final scores. Previous work The item based measures. The table indicates that there is no substantial difference between the two item based type level scores with 1-1 mapping. The definitions of MacroI and MicroI imply that if |A i | + |h(B i )| (which equals |A i | + |B i | under a 1-1 mapping) is constant for all word types, then a clustering will score equally on both 1-1 type measures. Indeed, in our experimental corpus 83.4% of the word types have one POS tag, 12.5% have 2, 3.1% have 3 and only 1% of the words have more. Therefore, |A i | is roughly constant. The ARR10 and Clark03 models assign a word type to a single cluster. For the other models, the number of clusters per word type is generally similar to that of the gold standard. Consequently, |B i | is roughly constant as well, which explains the similar behavior of the two measures. Note that for other clustering tasks |A i | may not necessarily be constant, so the MacroI and MicroI scores are not likely to be as similar under the 1-1 mapping. We discussed type level evaluation for polysemous clustering, presented new mapping-based evaluation measures, and applied them to the evaluation of POS induction algorithms, demonstrating that type level measures provide value beyond the common token level ones. We hope that type level evaluation in general and the proposed measures in particular will be used in the future for evaluating clustering performance in NLP tasks.
| 490 | 1,601 | 490 |
Integrating Language Models into Direct Speech Translation: An Inference-Time Solution to Control Gender Inflection
|
When translating words referring to the speaker, speech translation (ST) systems should not resort to default masculine generics nor rely on potentially misleading vocal traits. Rather, they should assign gender according to the speakers' preference. The existing solutions to do so, though effective, are hardly feasible in practice as they involve dedicated model re-training on gender-labeled ST data. To overcome these limitations, we propose the first inferencetime solution to control speaker-related gender inflections in ST. Our approach partially replaces the (biased) internal language model (LM) implicitly learned by the ST decoder with gender-specific external LMs. Experiments on en→es/fr/it show that our solution outperforms the base models and the best training-time mitigation strategy by up to 31.0 and 1.6 points in gender accuracy, respectively, for feminine forms. The gains are even larger (up to 32.0 and 3.4) in the challenging condition where speakers' vocal traits conflict with their gender. 1
|
The problem of gender bias in automatic translation particularly emerges when translating from genderless or notional gender languages (e.g., English) - which feature limited gender-specific marking - into grammatical gender languages (e.g., Spanish) - which exhibit a rich lexical and morpho-syntactic system of gender. So far, this topic has been investigated only by To overcome these limitations, we propose the first inference-time solution in direct ST to control gender translation for speaker-dependent words when the speaker's gender is known. 2 Our approach guides gender translation by partially substituting the biased internal language model implicitly learned by the ST decoder of a base model with a gender-specific external language model learned on monolingual textual data. Through experiments on three language pairs (en→es/fr/it), we demonstrate that, in terms of gender accuracy, our solution outperforms the base system by up to 31.0 points (for feminine forms) and is on par with the best training-time approach (with gains of up to 1.6 for feminine forms). Its effectiveness is also confirmed when speakers' vocal traits conflict with their gender, with gains of up to 32.0 and 3.4 over the base system and the best training-time solution.
|
The autoregressive decoder of an encoder-decoder architecture is trained to predict the next target token given the previous ones and the encoder output. Thereby, it implicitly learns to model the target language from the training data, thus developing an internal language model (ILM) The integration of end-to-end models with ELMs is a widespread solution to leverage text data in speech recognition As regards the ILM removal, which previous studies have already shown to amplify the performance gains yielded by ELM integration, we estimate the ILM by feeding the ST decoder with the average c of the encoder outputs h_{n,t} over all the T_n timesteps of the N training samples, where c is: c = (Σ_{n=1}^{N} Σ_{t=1}^{T_n} h_{n,t}) / (Σ_{n=1}^{N} T_n). Therefore, given an audio input x, the output ŷ of our solution is the translation that maximizes the log-linear combination of p_MB, p_ELM and p_ILM: ŷ = argmax_y [ log p_MB(y|x) + β_ELM log p_ELM(y) − β_ILM log p_ILM(y) ], where β_ILM and β_ELM are positive scalar weights calibrating ELM integration and ILM removal. The three components (p_MB, p_ELM, and p_ILM) convey different information: i) p_MB embeds both the acoustic and the linguistic information learned from the ST data; ii) p_ILM represents the estimated linguistic knowledge learned by M_B; iii) p_ELM embeds linguistic information (in our case gender-specific forms) learned from external textual resources. Therefore, β_ILM and β_ELM must be set to values that effectively integrate the internal and external linguistic knowledge, so that the gender bias affecting the ST decoder is mitigated by the ELM. At the same time, the linguistic contribution supplied by the ELM must not override the acoustic modelling capabilities of p_MB, so as to avoid translation quality drops. Accordingly, we estimate β_ILM and β_ELM by optimizing the harmonic mean of the two metrics (gender accuracy and BLEU, see §3) used to measure gender bias and overall translation quality, so as to equally weigh our two objectives. In Appendix A, we discuss the computation of the β_ILM and β_ELM values, also showing that their precise estimation is not critical since the final results are rather robust to small weight variations. Our en→es/fr/it ST systems are trained on the TED-based MuST-C corpus The statistics of all these datasets are presented in Table For each language pair, we evaluate our approach by training: i) an ST baseline model (M_B) that is not aware of the speaker's gender; ii) the specialized models (M_SP) presented in Table is the best on average for F, the most misgendered category. Translation Quality. Looking at BLEU scores, we notice that, with the only exception of en-it, the simple integration of the ELM (M_B+ELM) degrades the quality with respect to both M_B and M_SP, In conclusion, our inference-time solution effectively improves gender translation in direct ST, especially for feminine forms (see Appendix C for output examples). Moreover, it achieves results comparable to the best training-time approach, while overcoming its limitations. Such improvements come neither to the detriment of the overall translation quality (as shown by BLEU scores) nor of the accuracy in assigning gender to words that pertain to human referents other than the speaker (as shown in Appendix D). We also evaluate the inclusivity of our solution for speakers whose vocal traits are stereotypically associated with a gender opposite to their own.
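A minimal sketch of the per-token fused score described above, assuming log-probability vectors from the base ST model, the gender-specific ELM and the estimated ILM are already available; the beta values below are placeholders rather than the tuned ones.

```python
import torch

def fused_scores(log_p_st, log_p_elm, log_p_ilm, beta_elm, beta_ilm):
    """Per-token fused log-scores: base ST model plus the gender-specific
    external LM, minus the estimated internal LM. All inputs are tensors of
    shape (vocab_size,) holding log-probabilities."""
    return log_p_st + beta_elm * log_p_elm - beta_ilm * log_p_ilm

# Toy example with a 5-token vocabulary; in practice these distributions come
# from the ST decoder, the ELM, and the decoder fed with the averaged encoder
# output c (the ILM estimate).
vocab = 5
log_p_st = torch.log_softmax(torch.randn(vocab), dim=-1)
log_p_elm = torch.log_softmax(torch.randn(vocab), dim=-1)
log_p_ilm = torch.log_softmax(torch.randn(vocab), dim=-1)
scores = fused_scores(log_p_st, log_p_elm, log_p_ilm, beta_elm=0.3, beta_ilm=0.2)
next_token = int(scores.argmax())
```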
As MuST-SHE solely contains utterances from speakers whose gender aligns with their vocal properties, we simulate this condition using the provided "wrong references", in which the speaker-dependent words are swapped to the opposite gender. We treat them as correct references, so as to have female voices with masculine targets and vice versa, and we require the systems to produce the output with the gender of the target. Table Translation Quality. In terms of BLEU, our approach (M_B-ILM+ELM) is on par with the training-time strategy (M_SP), but they both suffer a ∼2.5 BLEU drop with respect to the base system (M_B). The reason for this drop may lie in the fact that gender-specific models learned patterns that differentiate male and female language. We proposed the first inference-time solution to control gender translation of speaker-dependent words in direct ST. Our approach partially replaces the biased ILM of the ST decoder with a gender-specific ELM. As such, it can be applied to existing models without the need for labeled ST data or computationally expensive re-trainings, overcoming the limitations of existing training-time methods. Experiments on three language pairs proved the effectiveness of our technique in controlling gender inflections of words referring to the first-person subject, regardless of whether the speakers' vocal traits are aligned with their gender or not. In addition to significantly increasing the gender accuracy of base ST models, it achieves substantial parity with the best training-time method while consistently increasing the correct generation of feminine forms. This work is part of the project "Bias Mitigation and Gender Neutralization Techniques for Automatic Translation", which is financially supported by an Amazon Research Award AWS AI grant. Moreover, we acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. In our experiments, we exclusively evaluated our approach on English to Romance language translations. Conducting experiments on different language pairs would be valuable. However, it is important to note that such endeavors would demand substantial efforts in annotating data, as benchmarks akin to MuST-SHE are currently unavailable for other target languages. Our inference-time solution, as described in the paper, significantly reduces the computational costs of current approaches by eliminating the need for ST retraining. However, there is an increase in inference costs, due to the additional forward passes on the ELM and ILM (which is the same as the ST decoder, but fed with a different encoder output). In particular, since our implementation has not been optimized and performs the operations sequentially, our solution reduces the inference speed (computed as the number of generated tokens per second) by ∼40% (from 165 to 100). Lastly, our ELM implementation uses the same BPE
We define gender bias in MT/ST as the tendency of systems to systematically favor masculine forms to the detriment of the feminine ones when related to human entities. In light of the above, we believe that our solution positively impacts single individuals and society at large, by improving not only the experience of using such technologies but also feminine visibility. Furthermore, by relying on explicit gender information, our mitigation solution goes beyond a mere and potentially misleading exploitation of the speech signal. Indeed, using speakers' vocal properties would foster stereotypical expectations about how masculine or feminine voices should sound, which is not inclusive for certain users, such as transgender individuals or people with laryngeal diseases. As regards possible concerns about the gender information considered in our experiments, we relied on the annotations of the two datasets used, MuST-C/MuST-Speakers and MuST-SHE. Both these resources have been manually annotated with speakers' gender information based on the personal pronouns found in their public TED profile. Last but not least, in this work we only consider binary linguistic forms as they are the only ones represented in the currently available ST data. In fact, to the best of our knowledge, ST corpora also representing non-binary speakers are not yet available. However, we encourage a vision of gender going beyond binarism and we believe that extending the application of our method to non-binary forms (e.g. by integrating a third, non-binary ELM) can be an interesting extension of this work. A Contributions of β_ILM-β_ELM As stated in §2, our method relies on two hyperparameters (β_ELM and β_ILM). In this section, we report their optimal values (§A.1), and discuss the impact of varying these values on the results (§A.2). A.1 Optimal β_ILM-β_ELM Combinations In the absence of a validation set with the same characteristics as MuST-SHE, we used this same benchmark for a 10-fold cross-validation. At each iteration, we translate the held-out data with the pair (β_ILM, β_ELM) ∈ {0.00, 0.05, ..., 0.95, 1.00}² that maximizes the harmonic mean of gender accuracy and BLEU (see §2) on the validation folds. At the end of this process, the whole MuST-SHE was fairly translated and ready for evaluation, and β_ILM and β_ELM were robustly estimated. However, in a real use case, we need a unique combination of β_ELM and β_ILM for each gender. Therefore, in Table
Most importantly, we can notice that the results are not significantly affected by small variations in the weights, with wide smooth areas with similar scores and no isolated peaks. This demonstrates the robustness of our solution with respect to a suboptimal estimation of β ILM and β ELM . ST Models Our direct ST models are made of a 12-layer Conformer Looking at the term coverage, we do not see clear trends across language pairs. For F, M B-ILM+ELM suffers from a significant drop in en-it with respect to M B while it achieves the best scores in en-es and en-fr. For M, there is a significant drop in en-fr, which is not confirmed in the other two language pairs. In addition, the differences with M SP are always ascribable to random fluctuations. All in all, we can conclude that our debiasing solution specifically designed for speaker-dependent words does not significantly alter the gender assignment for referents different from the speaker.
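The selection procedure described in Appendix A can be sketched as a simple grid search; `evaluate` is a hypothetical callable that translates the validation folds with a given (β_ILM, β_ELM) pair and returns gender accuracy and BLEU on a common scale.

```python
from itertools import product

def harmonic_mean(a, b):
    return 0.0 if a + b == 0 else 2 * a * b / (a + b)

def select_betas(evaluate, step=0.05):
    """Grid search over (beta_ilm, beta_elm) in {0, step, ..., 1}^2.
    evaluate(beta_ilm, beta_elm) is assumed to return (gender_accuracy, bleu);
    the two objectives are combined with an unweighted harmonic mean."""
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    best, best_score = None, float("-inf")
    for b_ilm, b_elm in product(grid, grid):
        acc, bleu = evaluate(b_ilm, b_elm)
        score = harmonic_mean(acc, bleu)
        if score > best_score:
            best, best_score = (b_ilm, b_elm), score
    return best, best_score
```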
| 1,021 | 1,254 | 1,021 |
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
|
Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
|
One of the primary goals of training NLP models is generalization. Since testing "in the wild" is expensive and does not allow for fast iterations, the standard paradigm for evaluation is using train-validation-test splits to estimate the accuracy of the model, including the use of leaderboards to track progress on a task. A number of additional evaluation approaches have been proposed, such as evaluating robustness to noise. In this work, we propose CheckList, a new evaluation methodology and accompanying tool. We demonstrate the usefulness and generality of CheckList via instantiation on three NLP tasks: sentiment analysis (Sentiment), duplicate question detection (QQP;
|
Conceptually, users "CheckList" a model by filling out cells in a matrix (Figure While testing individual components is a common practice in software engineering, modern NLP models are rarely built one component at a time. Instead, CheckList encourages users to consider how different natural language capabilities are manifested on the task at hand, and to create tests to evaluate the model on each of these capabilities. For example, the Vocabulary+POS capability pertains to whether a model has the necessary vocabulary, and whether it can appropriately handle the impact of words with different parts of speech on the task. For Sentiment, we may want to check if the model is able to identify words that carry positive, negative, or neutral sentiment, by verifying how it behaves on examples like "This was a good flight." For QQP, we might want the model to understand when modifiers differentiate questions, e.g. accredited in ("Is John a teacher?", "Is John an accredited teacher?"). For MC, the model should be able to relate comparatives and superlatives, e.g. (Context: "Mary is smarter than John.", Q: "Who is the smartest kid?", A: "Mary"). We suggest that users consider at least the following capabilities: Vocabulary+POS (important words or word types for the task), Taxonomy (synonyms, antonyms, etc), Robustness (to typos, irrelevant changes, etc), NER (appropriately understanding named entities), Fairness, Temporal (understanding order of events), Negation, Coreference, Semantic Role Labeling (understanding roles such as agent, object, etc), and Logic (ability to handle symmetry, consistency, and conjunctions). We will provide examples of how these capabilities can be tested in Section 3 (Tables We prompt users to evaluate each capability with three different test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests (the columns in the matrix). A Minimum Functionality test (MFT), inspired by unit tests in software engineering, is a collection of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability. The Vocabulary+POS examples in the previous section are all MFTs. We also introduce two additional test types inspired by software metamorphic tests Users can create test cases from scratch, or by perturbing an existing dataset. Starting from scratch makes it easier to create a small number of highquality test cases for specific phenomena that may be underrepresented or confounded in the original dataset. Writing from scratch, however, requires significant creativity and effort, often leading to tests that have low coverage or are expensive and time-consuming to produce. Perturbation functions are harder to craft, but generate many test cases at once. To support both these cases, we provide a variety of abstractions that scale up test creation from scratch and make perturbations easier to craft. Templates Test cases and perturbations can often be generalized into a template, to test the model on a more diverse set of inputs. In placeholder (e.g. positive verbs for {POS_VERB}). 
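A minimal plain-Python sketch (not the released CheckList library) of how an MFT built from a template and an INV typo perturbation could be run against a black-box `predict` function; the template, slot values and failure-rate bookkeeping are illustrative.

```python
import random
from itertools import product

def expand_template(template, **slots):
    """Fill every {SLOT} in the template with each combination of values."""
    keys = list(slots)
    return [template.format(**dict(zip(keys, combo)))
            for combo in product(*(slots[k] for k in keys))]

def run_mft(predict, examples, expected_label):
    """Minimum Functionality Test: failure rate on simple labeled examples."""
    failures = [x for x in examples if predict(x) != expected_label]
    return len(failures) / len(examples), failures

def add_typo(text, seed=0):
    """INV perturbation: swap two adjacent characters (assumes len(text) > 1)."""
    rng = random.Random(seed)
    chars = list(text)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def run_inv(predict, examples, perturb):
    """Invariance test: the prediction should not change under the perturbation."""
    failures = [x for x in examples if predict(x) != predict(perturb(x))]
    return len(failures) / len(examples), failures

# Example usage with a hypothetical sentiment classifier `predict`:
examples = expand_template("This was a {ADJ} flight.",
                           ADJ=["good", "great", "wonderful"])
# rate, failed = run_mft(predict, examples, expected_label="positive")
```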
We provide users with an abstraction where they mask part of a template and get masked language model (RoBERTa Open source We release an implementation of CheckList at 3 Testing SOTA models with CheckList We CheckList the following commercial Sentiment analysis models via their paid APIs Commercial models do not fail simple Fairness sanity checks such as "I am a black woman." (template: "I am a {PROTECTED} {NOUN}."), always predicting them as neutral. Similar to software engineering, absence of test failure does not imply that these models are fair -just that they are not unfair enough to fail these simple tests. On Machine Comprehension Vocab+POS tests in Table The model does not seem capable of handling short instances with Temporal concepts such as before, after, last, and first, or with simple examples of Negation, either in the question or in the context. It also does not seem to resolve basic Coreferences, and grasp simple subject/object or active/passive distinctions (SRL), all of which are critical to true comprehension. Finally, the model seems to have certain biases, e.g. for the simple negation template "{P1} is not a {PROF}, {P2} is." as context, and "Who is a {PROF}?" as question, if we set {PROF} = doctor, {P1} to male names and {P2} to female names (e.g. "John is not a doctor, Mary is."; "Who is a doctor?"), the model fails (picks the man as the doctor) 89.1% of the time. If the situation is reversed, the failure rate is only 3.2% (woman predicted as doctor). If {PROF} = secretary, it wrongly picks the man only 4.0% of the time, and the woman 60.5% of the time. We applied the same process to very different tasks, and found that tests reveal interesting failures on a variety of task-relevant linguistic capabilities. While some tests are task specific (e.g. positive adjectives), the capabilities and test types are general; many can be applied across tasks, as is (e.g. testing Robustness with typos) or with minor variation (changing named entities yields different expectations depending on the task). This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation. These tasks may be considered "solved" based on benchmark accuracy results, but the tests highlight various areas of improvement -in particular, failure to demonstrate basic skills that are de facto needs for the task at hand (e.g. basic negation, agent/object distinction, etc). Even though some of these failures have been observed by others, such as typos The failures discovered in the previous section demonstrate the usefulness and flexibility of Check-List. In this section, we further verify that Check-List leads to insights both for users who already test their models carefully and for users with little or no experience in a task. We approached the team responsible for the general purpose sentiment analysis model sold as a service by Microsoft ( on Table We invited the team for a CheckList session lasting approximately 5 hours. We presented Check-List (without presenting the tests we had already created), and asked them to use the methodology to test their own model. We helped them implement their tests, to reduce the additional cognitive burden of having to learn the software components of CheckList. The team brainstormed roughly 30 tests covering all capabilities, half of which were MFTs and the rest divided roughly equally between INVs and DIRs. Due to time constraints, we implemented about 20 of those tests. 
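The masked-language-model fill-in idea can be sketched with the Hugging Face fill-mask pipeline, independently of the CheckList tooling itself; the model name and the manual filtering step are assumptions for illustration.

```python
from transformers import pipeline

# RoBERTa suggests fill-ins for a masked slot in a template; the test author
# then filters the suggestions (e.g., keeps only positive adjectives).
fill = pipeline("fill-mask", model="roberta-base")
suggestions = fill("This was a <mask> flight.", top_k=10)
candidates = [s["token_str"].strip() for s in suggestions]
print(candidates)
```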
The tests covered many of the same functionalities we had tested ourselves (Section 3), often with different templates, but also ones we had not thought of. For example, they tested if the model handled sentiment coming from camel-cased Twitter hashtags correctly (e.g. "#IHateYou", "#ILoveYou"), implicit negation (e.g. "I wish it was good"), and others. Further, they proposed new capabilities for testing, e.g. handling different lengths (sentences vs paragraphs) and sentiment that depends on implicit expectations (e.g. "There was no {AC}" when {AC} is expected). Qualitatively, the team stated that CheckList was very helpful: (1) they tested capabilities they had not considered, (2) they tested capabilities that they had considered but that are not in the benchmarks, and (3) even capabilities for which they had benchmarks (e.g. negation) were tested much more thoroughly and systematically with CheckList. They discovered many previously unknown bugs, which they plan to fix in the next model iteration. Finally, they indicated that they would definitely incorporate CheckList into their development cycle, and requested access to our implementation. This session, coupled with the variety of bugs we found for three separate commercial models in Table We conduct a user study to further evaluate different subsets of CheckList in a more controlled environment, and to verify if even users with no previous experience in a task can gain insights and find bugs in a model. We recruit 18 participants (8 from industry, 10 from academia) who have at least intermediate NLP experience We present the results in Table At the end of the experiment, we ask users to evaluate the severity of the failures they observe on each particular test, on a 5-point scale. The study results are encouraging: with a subset of CheckList, users without prior experience are able to find significant bugs in a SOTA model in only 2 hours. Further, when asked to rate different aspects of CheckList (on a scale of 1-5), users indicated the testing session helped them learn more about the model (4.7 ± 0.5), capabilities helped them test the model more thoroughly (4.5 ± 0.4), and so did templates (4.3 ± 1.1). One approach to evaluate specific linguistic capabilities is to create challenge datasets. With the increase in popularity of end-to-end deep models, the community has turned to "probes", where a probing model for linguistic phenomena of interest (e.g. NER) is trained on intermediate representations of the encoder There are existing perturbation techniques meant to evaluate specific behavioral capabilities of NLP models such as logical consistency While useful, accuracy on benchmarks is not sufficient for evaluating NLP models. Adopting principles from behavioral testing in software engineering, we propose CheckList, a model-agnostic and task-agnostic testing methodology that tests individual capabilities of the model using three different test types.
Our user studies indicate that CheckList is easy to learn and use, and helpful both for expert users who have tested their models at length and for practitioners with little experience in a task. The tests presented in this paper are part of CheckList's open-source release, and can easily be incorporated into existing benchmarks. More importantly, the abstractions and tools in CheckList can be used to collectively create more exhaustive test suites for a variety of tasks. Since many tests can be applied across tasks as is (e.g. typos) or with minor variations (e.g. changing names), we expect that collaborative test creation will result in evaluation of NLP models that is much more robust and detailed, beyond just accuracy on held-out data. CheckList is open source, and available at
| 1,044 | 677 | 1,044 |
FAD-X: Fusing Adapters for Cross-lingual Transfer to Low-Resource Languages
|
Adapter-based tuning, by adding light-weight adapters to multilingual pretrained language models (mPLMs), selectively updates language-specific parameters to adapt to a new language, instead of finetuning all shared weights. This paper explores an effective way to leverage a public pool of pretrained language adapters to overcome resource imbalances for low-resource languages (LRLs). Specifically, our research question is whether pretrained adapters can be composed to complement or replace LRL adapters. While composing adapters in a multi-task learning setting has been studied, the same question for LRLs has remained largely unanswered. To answer this question, we study how to fuse adapters across languages and tasks, then validate how our proposed fusion adapter, namely FAD-X, can enhance cross-lingual transfer from pretrained adapters on well-known named entity recognition and classification benchmarks. 1
|
While fine-tuning the multilingual pretrained language models (mPLMs), such as mBERT To overcome this challenge, MAD-X introduces language- and task-specific parameters, which can also be released as pretrained adapters. However, we argue that a significant resource imbalance still remains, especially for LRLs. To illustrate, Figure In this paper, we propose Fusing multiple ADapters for cross-lingual transfer (FAD-X), to overcome such imbalances by transferring from both LA and TA resources available for higher-resource languages. Inspired by multilingual PLMs outperforming monolingual PLMs for LRLs through cross-lingual transfer Toward this goal, given the pool of pretrained adapters L and target language t, we propose to utilize each pretrained language adapter LA_{l_i} ∈ L to train a task adapter per language, denoted as TA_{l_i}. We show that fusing such task adapters contributes to overcoming limited training resources in training the TA in the target language (the yellow line in Figure Contributions Our contributions are as follows: • We devise FAD-X, a method to fuse adapters trained from different languages. • We propose two designs to fuse language and task adapters, and evaluate their effectiveness on two different tasks. For LRLs, we improve F1 by +5.3% on WikiAnn and accuracy by +16.5% on the Amazon Review dataset, on average. • We also validate FAD-X in a more resource-constrained setting, where an LA does not exist for the target language. 2 Proposed Method
|
We first briefly review MAD-X To overcome the lack of resources for LA/TA observed for LRLs, we propose FAD-X. Our key idea is fusing task adapters trained with pretrained adapters in other languages. More formally, given a pool of n pretrained adapters, L = {LA l 1 , • • • , LA ln }, our goal is fusing T A l i trained from each language adapter LA l i , which can be implemented as one of the following two designs, as also illustrated in Figure • Paired then Fused (PtF): Each task adapter TA is paired by language adapter LA used for training, or, where (1) In the above equation, ⊗ denotes the dot product, and Q, K, and V represent the learnable query, key, and value matrices. With the proposed architecture, we can fully utilize other available pretrained adapters. Datasets We used two datasets to confirm the effect of our proposed method, FAD-X. WikiAnn Languages For experiments conducted with WikiAnn dataset, we select LRLs used in Methods For given language t, we compare three methods. • F use(L): Fusion of adapters pretrained on languages L, following our proposed method FAD-X. • S(t): A baseline which stacks T A t with LA t , following a state-of-the-art method, MAD-X. • S(t) w/ param+: A baseline which uses adapters with same additional parameters as F use(L). Experimental Settings To train T A l for WikiAnn in each language l, we use batch size of 16, learning rate of 2e-5, and train for 100 epochs then select best checkpoint based on the validation F1 score. We conduct each experiment 5 times and report the average test F1 score. We use multilingual BERT We consider two possible scenarios: • LA t ∈ L. We conjecture that, with knowledge transfer from adapters trained in other languages, fused adapters outperform using LA t only. • LA t / ∈ L (no adapter). LA t is proxied by that of some l i in L, which we select the HRL in same language family, or English if isolated. LA t ∈ L: Combining LA t with others in L was complementary for all target languages (Table Parameter Efficiency: We investigate whether our improvement comes from an increase of parameters-We add the same number of parameters as Q, K, V in the fusion module to S(t), described in the row named 'S(t) w/ param+' in Table 1. Though such an increase does improve results for some languages, it often negatively impacts the performance as well. This indicates that our fusion model proposes an effective use of increased parameters. Selection of HRLs for fusion: This section explores an alternative of choosing one HRL in the same family (as discussed in Section 3.1), by selecting the most resourced language (ml) regardless of the family. Row named 'Fuse(L-LA t ) w/ ml' in Table FtP vs PtF: In Section 2, we proposed two designs to fuse with HRL adapters, FtP and PtF. We investigate which approach is better with validation scores in WikiAnn, revealed in Table We investigated whether these exceptions correlate with phonological similarity, which is studied to highly correlate with cross-lingual transfer performance of WikiAnn Our conjecture is that FAD-X helps MAD-X outperform mPLM baselines, when the resource for LA or TA lags behind. To verify, we evaluate FAD-X when such condition is violated. Table We further verify previous observations with Amazon Reviews dataset. We perform same analyses, as long as supported by this dataset. 
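A minimal sketch of attention-based fusion over the per-language task-adapter outputs, in the spirit of Eq. (1); the tensor layout, the use of the transformer layer output as the query, and the dimensions are assumptions and may differ from the released FAD-X implementation.

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Fuse the outputs of n task adapters (one per pretrained language
    adapter) with learnable query/key/value projections; a sketch only."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, layer_input, adapter_outputs):
        # layer_input: (batch, seq, hidden) -- transformer layer output
        # adapter_outputs: (batch, seq, n_adapters, hidden)
        q = self.query(layer_input).unsqueeze(2)          # (B, S, 1, H)
        k = self.key(adapter_outputs)                     # (B, S, N, H)
        v = self.value(adapter_outputs)                   # (B, S, N, H)
        attn = torch.softmax((q * k).sum(-1), dim=-1)     # (B, S, N)
        return (attn.unsqueeze(-1) * v).sum(dim=2)        # (B, S, H)

# Example usage with three language-specific task adapters.
fusion = AdapterFusion(hidden_dim=768)
x = torch.randn(2, 5, 768)
adapters = torch.randn(2, 5, 3, 768)
fused = fusion(x, adapters)
```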
LA t ∈ L: Similar to WikiAnn results, LAs in L help LA t , for all target languages (Table Parameter Efficiency: Again, we examine whether the parameter increment is the main cause for the enhanced performance. By comparing last two rows of Table FtP vs PtF: We investigate whether FtP outperform PtF consistently over various train data sizes, with mBERT. We additionally build train sets by randomly sampling 0.1% and 10% of the original train datasets. Table Adapters Adapters proposed for domain adaptations in computer vision tasks Cross-lingual transfer A de-facto cross-lingual transfer is finetuning PLMs: mBERT We proposed FAD-X, fusing multiple pretrained adapters, for a cross-lingual transfer to LRLs, overcoming the imbalances in resources for LA/TA. We validate the effectiveness of our approach, for LRLs with no pretrained adapter or that trained with limited resources.
| 928 | 1,453 | 928 |
Learning to Discover, Ground and Use Words with Segmental Neural Language Models
|
We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences. In contrast to previous segmentation models that treat word segmentation as an isolated task, our model unifies word discovery, learning how words fit together to form sentences, and, by conditioning the model on visual context, how words' meanings ground in representations of nonlinguistic modalities. Experiments show that the unconditional model learns predictive distributions better than character LSTM models, discovers words competitively with nonparametric Bayesian word segmentation models, and that modeling language conditional on visual context improves performance on both.
|
How infants discover words that make up their first language is a long-standing question in developmental psychology In this paper, we introduce a single model that discovers words, learns how they fit together (not just locally, but across a complete sentence), and grounds them in learned representations of naturalistic non-linguistic visual contexts. We argue that such a unified model is preferable to a pipeline model of language acquisition (e.g., a model where words are learned by one character-aware model, and then a full-sentence grammar is acquired by a second language model using the words predicted by the first). Our preference for the unified model may be expressed in terms of basic notions of simplicity (we require one model rather than two), and in terms of the Continuity Hypothesis of In §2 we introduce a neural model of sentences that explicitly discovers and models word-like units from completely unsegmented sequences of characters. Since it is a model of complete sentences (rather than just a word discovery model), and it can incorporate multimodal conditioning context (rather than just modeling language unconditionally), it avoids the two continuity problems identified above. Our model operates by generating text as a sequence of segments, where each segment is generated either character-by-character from a sequence model or as a single draw from a lexical memory of multi-character units. The segmentation decisions and decisions about how to generate words are not observed in the training data and marginalized during learning using a dynamic programming algorithm ( §3). Our model depends crucially on two components. The first is, as mentioned, a lexical memory. This lexicon stores pairs of a vector (key) and a string (value) the strings in the lexicon are contiguous sequences of characters encountered in the training data; and the vectors are randomly initialized and learned during training. The second component is a regularizer ( §4) that prevents the model from overfitting to the training data by overusing the lexicon to account for the training data. 1 Our evaluation ( §5- §7) looks at both language modeling performance and the quality of the induced segmentations, in both unconditional (sequence-only) contexts and when conditioning on a related image. First, we look at the segmentations induced by our model. We find that these correspond closely to human intuitions about word segments, competitive with the best existing models for unsupervised word discovery. Importantly, these segments are obtained in models whose hyperparameters are tuned to optimize validation (held-out) likelihood, whereas tuning the hyperparameters of our benchmark models using held-out likelihood produces poor segmentations. Second, we confirm findings 1 Since the lexical memory stores strings that appear in the training data, each sentence could, in principle, be generated as a single lexical unit, thus the model could fit the training data perfectly while generalizing poorly. The regularizer penalizes based on the expectation of the powered length of each segment, preventing this degenerate solution from being optimal. Ablation studies demonstrate that both the lexicon and the regularizer are crucial for good performance, particularly in word segmentationremoving either or both significantly harms performance. 
In a final experiment, we learn to model language that describes images, and we find that conditioning on visual context improves segmentation performance in our model (compared to the performance when the model does not have access to the image). On the other hand, in a baseline model that predicts boundaries based on entropy spikes in a character-LSTM, making the image available to the model has no impact on the quality of the induced segments, demonstrating again the value of explicitly including a word lexicon in the language model.
|
We now describe the segmental neural language model (SNLM). Refer to Figure The SNLM defines the distribution over x as the marginal distribution over all segmentations that give rise to x, i.e., p(x) = s:π(s)=x p(s). (1) To define the probability of p(s), we use the chain rule, rewriting this in terms of a product of the series of conditional probabilities, p(s t | s <t ). The process stops when a special end-sequence segment /S is generated. To ensure that the summation in Eq. 1 is tractable, we assume the following: which amounts to a conditional semi-Markov assumption-i.e., non-Markovian generation hap- pens inside each segment, but the segment generation probability does not depend on memory of the previous segmentation decisions, only upon the sequence of characters π(s <t ) corresponding to the prefix character sequence x <t . This assumption has been employed in a number of related models to permit the use of LSTMs to represent rich history while retaining the convenience of dynamic programming inference algorithms We model p(s t | x <t ) as a mixture of two models, one that generates the segment using a sequence model and the other that generates multi-character sequences as a single event. Both are conditional on a common representation of the history, as is the mixture proportion. Representing history To represent x <t , we use an LSTM encoder to read the sequence of characters, where each character type σ ∈ Σ has a learned vector embedding v σ . Thus the history representation at time t is This corresponds to the standard history representation for a character-level language model, although in general, we assume that our modelled data is not delimited by whitespace. The first component model, p char (s t | h t ), generates s t by sampling a sequence of characters from a LSTM language model over Σ and a two extra special symbols, an end-of-word symbol /W / ∈ Σ and the end-of-sequence symbol /S discussed above. The initial state of the LSTM is a learned transformation of h t , the initial cell is 0, and different parameters than the history encoding LSTM are used. During generation, each letter that is sampled (i.e., each s t,i ) is fed back into the LSTM in the usual way and the probability of the character sequence decomposes according to the chain rule. The end-of-sequence symbol can never be generated in the initial position. The second component model, p lex (s t | h t ), samples full segments from lexical memory. Lexical memory is a key-value memory containing M entries, where each key, k i , a vector, is associated with a value v i ∈ Σ + . The generation probability of s t is defined as where [v i = s t ] is 1 if the ith value in memory is s t and 0 otherwise, and K is a matrix obtained by stacking the k i 's. This generation process assigns zero probability to most strings, but the alternate character model can generate all of Σ + . In this work, we fix the v i 's to be subsequences of at least length 2, and up to a maximum length L that are observed at least F times in the training data. These values are tuned as hyperparameters (See Appendix C for details of the experiments). The mixture proportion, g t , determines how likely the character generator is to be used at time t (the lexicon is used with probability 1 -g t ). It is defined by as g t = σ(MLP(h t )). 
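As a minimal sketch of the segment distribution just described, the module below mixes a character-level generator with the key-value lexical memory via the gate g_t. The toy lexicon, hidden size, and the way the character log-probability is supplied are illustrative assumptions; in the full model everything is tied to the history encoder and the dynamic program of the next section.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List

class SegmentMixture(nn.Module):
    """Sketch of the SNLM segment distribution: a gate g_t mixes a character-level
    generator with a key-value lexical memory,
    p(s_t | h_t) = g_t * p_char(s_t | h_t) + (1 - g_t) * p_lex(s_t | h_t)."""
    def __init__(self, hidden: int, lexicon: List[str]):
        super().__init__()
        self.lexicon = lexicon
        self.keys = nn.Parameter(torch.randn(len(lexicon), hidden))   # k_i, learned
        self.gate = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))               # MLP for g_t

    def lex_log_prob(self, h_t: torch.Tensor, segment: str) -> torch.Tensor:
        # p_lex(s_t | h_t) = sum_i softmax(K h_t)_i * [v_i == s_t]
        scores = F.log_softmax(self.keys @ h_t, dim=0)
        match = [i for i, v in enumerate(self.lexicon) if v == segment]
        if not match:                      # strings outside memory get zero probability
            return torch.tensor(float("-inf"))
        return torch.logsumexp(scores[match], dim=0)

    def log_prob(self, h_t: torch.Tensor, segment: str,
                 char_log_prob: torch.Tensor) -> torch.Tensor:
        # char_log_prob: log p_char(s_t | h_t) from the character LSTM (computed elsewhere)
        g = torch.sigmoid(self.gate(h_t)).squeeze()              # mixture proportion g_t
        return torch.logaddexp(torch.log(g) + char_log_prob,
                               torch.log1p(-g) + self.lex_log_prob(h_t, segment))

# Toy usage with a two-entry lexical memory.
mix = SegmentMixture(hidden=32, lexicon=["dog", "want"])
h = torch.randn(32)
print(mix.log_prob(h, "dog", char_log_prob=torch.tensor(-7.2)))
```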
Total segment probability The total generation probability of s t is thus We are interested in two inference questions: first, given a sequence x, evaluate its (log) marginal likelihood; second, given x, find the most likely decomposition into segments s * . Marginal likelihood To efficiently compute the marginal likelihood, we use a variant of the forward algorithm for semi-Markov models (3) By letting x t+1 = /S , then p(x) = α t+1 . The most probable segmentation of a sequence x can be computed by replacing the summation with a max operator in Eq. 3 and maintaining backpointers. When the lexical memory contains all the substrings in the training data, the model easily overfits by copying the longest continuation from the memory. To prevent overfitting, we introduce a regularizer that penalizes based on the expectation of the exponentiated (by a hyperparameter β) length of each segment: This can be understood as a regularizer based on the double exponential prior identified to be effective in previous work The model parameters are trained by minimizing the penalized log likelihood of a training corpus D of unsegmented sentences, We evaluate our model on both English and Chinese segmentation. For both languages, we used standard datasets for word segmentation and language modeling. We also use MS-COCO to evaluate how the model can leverage conditioning context information. For all datasets, we used train, validation and test splits. 2 Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix A for dataset statistics. The Brent corpus is a standard corpus used in statistical modeling of child language acquisition As the Brent corpus does not have a standard train and test split, and we want to tune the parameters by measuring the fit to held-out data, we used the first 80% of the utterances for training and the next 10% for validation and the rest for test. English Penn Treebank (PTB) We use the commonly used version of the PTB prepared by Since Chinese orthography does not mark spaces between words, there have been a number of efforts to annotate word boundaries. We evaluate against two corpora that have been manually segmented according different segmentation standards. Beijing University Corpus (PKU) The Beijing University Corpus was one of the corpora used for the International Chinese Word Segmentation Bakeoff Chinese Penn Treebank (CTB) We use the Penn Chinese Treebank Version 5.1 To assess whether jointly learning about meanings of words from non-linguistic context affects segmentation performance, we use image and caption pairs from the COCO caption dataset We compare our model to benchmark Bayesian models, which are currently the best known unsupervised word discovery models, as well as to a simple deterministic segmentation criterion based on surprisal peaks (Elman, 1990) on language modeling and segmentation performance. Although the Bayeisan models are shown to able to discover plausible word-like units, we found that a set of hyperparameters that provides best performance with such model on language modeling does not produce good structures as reported in previous works. This is problematic since there is no objective criteria to find hyperparameters in fully unsupervised manner when the model is applied to completely unknown languages or domains. 
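The forward recursion of Eq. (3) above can be written as a small dynamic program in log space. In the sketch below, `seg_log_prob` stands in for the SNLM's mixture probability of a segment given its left context; the maximum segment length and the toy scorer are purely illustrative.

```python
import math
from typing import Callable

def log_marginal_likelihood(
    x: str,
    seg_log_prob: Callable[[int, int], float],
    max_len: int,
) -> float:
    """Semi-Markov forward algorithm sketch: alpha[t] sums over all ways of reaching
    position t, where the last segment is x[j:t] and seg_log_prob(j, t) stands in for
    log p(s = x[j:t] | x[:j]) under the SNLM (a mixture of p_char and p_lex)."""
    n = len(x)
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0                                   # empty prefix
    for t in range(1, n + 1):
        terms = []
        for j in range(max(0, t - max_len), t):      # segments of length <= max_len
            if alpha[j] > float("-inf"):
                terms.append(alpha[j] + seg_log_prob(j, t))
        if terms:
            m = max(terms)                           # log-sum-exp for stability
            alpha[t] = m + math.log(sum(math.exp(v - m) for v in terms))
    return alpha[n]

# Toy scorer: each character costs log(1/27), plus a small bonus for 3-letter segments,
# purely to make the sketch runnable; real scores come from the neural model.
toy = lambda j, t: (t - j) * math.log(1 / 27) + (0.5 if t - j == 3 else 0.0)
print(log_marginal_likelihood("thedoggoes", toy, max_len=5))
```

Replacing the log-sum-exp with a max (and keeping backpointers) gives the Viterbi variant used to recover the most probable segmentation.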
Thus, our experiments are designed to assess how well the models infers word segmentations of unsegmented inputs when they are trained and tuned to maximize the likelihood of the held-out text. DP/HDP Benchmarks Among the most effective existing word segmentation models are those based on hierarchical Dirichlet process (HDP) models The base distribution, p 0 , is defined over strings in Σ * ∪{ /S } by deciding with a specified probability to end the utterance, a geometric length model, and a uniform probability over Σ at a each position. Intuitively, it captures the preference for having short words in the lexicon. In addition to the HDP model, we also evaluate a simpler single Dirichlet process (DP) version of the model, in which the s t 's are generated directly as draws from Categorical(θ • ). We use an empirical Bayesian approach to select hyperparameters based on the likelihood assigned by the inferred posterior to a held-out validation set. Refer to Appendix D for details on inference. Deterministic Baselines Incremental word segmentation is inherently ambiguous (e.g., the letters the might be a single word, or they might be the beginning of the longer word theater). Nevertheless, several deterministic functions of prefixes have been proposed in the literature as strategies for discovering rudimentary word-like units hypothesized for being useful for bootstrapping the lexical acquisition process or for improving a model's predictive accuracy. These range from surprisal criteria (Elman, 1990) to sophisticated language models that switch between models that capture intra-and inter-word dynamics based on deterministic functions of prefixes of characters In our experiments, we also include such deterministic segmentation results using (1) the surprisal criterion of LSTMs had 512 hidden units with parameters learned using the Adam update rule For the image caption dataset, we extend the model with a standard attention mechanism in the backbone LSTM (LSTM enc ) to incorporate image context. For every character-input, the model calculates attentions over image features and use them to predict the next characters. As for image representations, we use features from the last convolution layer of a pre-trained VGG19 model In this section, we first do a careful comparison of segmentation performance on the phonemic Brent corpus (BR-phono) across several different segmentation baselines, and we find that our model obtains competitive segmentation performance. Additionally, ablation experiments demonstrate that both lexical memory and the proposed expected length regularization are necessary for inferring good segmentations. We then show that also on other corpora, we likewise obtain segmentations better than baseline models. Finally, we also show that our model has superior performance, in terms of heldout perplexity, compared to a character-level LSTM language model. Thus, overall, our results show that we can obtain good segmentations on a variety of tasks, while still having very good language modeling performance. 
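For reference, the deterministic surprisal-spike baseline described earlier in this section can be sketched as follows. The threshold and the toy surprisal values are illustrative assumptions; in the experiments the surprisals would come from a character LSTM.

```python
from typing import List

def surprisal_boundaries(surprisals: List[float], threshold: float = 0.0) -> List[int]:
    """Deterministic baseline sketch: propose a word boundary before position i whenever
    the character LM's surprisal rises by more than `threshold` relative to position i-1
    (a 'surprisal spike', in the spirit of Elman, 1990). The surprisals would be
    -log p(x_i | x_<i) under a character LSTM; here they are simply given as a list."""
    boundaries = []
    for i in range(1, len(surprisals)):
        if surprisals[i] - surprisals[i - 1] > threshold:
            boundaries.append(i)
    return boundaries

def segment(text: str, boundaries: List[int]) -> List[str]:
    cuts = [0] + boundaries + [len(text)]
    return [text[a:b] for a, b in zip(cuts, cuts[1:])]

# Toy example: surprisal tends to drop inside a word and jump at the start of a new one.
chars = "thedog"
surps = [3.1, 1.2, 0.4, 2.9, 0.8, 0.3]
print(segment(chars, surprisal_boundaries(surps)))   # ['the', 'dog']
```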
Word Segmentation (BR-phono) Table Furthermore, the priors used in the DP/HDP models were tuned to maximize the likelihood assigned to the validation set by the inferred posterior predictive distribution, in contrast to previous papers which either set them subjectively or inferred them BR Word Segmentation Qualitative Analysis We show some representative examples of segmentations inferred by various models on the BR-text and PKU corpora in Table Turning to the Chinese examples, we see that both baseline models fail to discover basic words such as 山间 (mountain) and 人们 (human). Finally, we observe that none of the models successfully segment dates or numbers containing mul-tiple digits (all oversegment). Since number types tend to be rare, they are usually not in the lexicon, meaning our model (and the H/DP baselines) must generate them as character sequences. Language Modeling Performance The above results show that the SNLM infers good word segmentations. We now turn to the question of how well it predicts held-out data. Table One might object that because of the lexicon, the SNLM has many more parameters than the character-level LSTM baseline model. However, unlike parameters in LSTM recurrence which are used every timestep, our memory parameters are accessed very sparsely. Furthermore, we observed that an LSTM with twice the hidden units did not improve the baseline with 512 hidden units on both phonemic and orthographic versions of Brent corpus but the lexicon could. This result suggests more hidden units are useful if the model does not have enough capacity to fit larger datasets, but that the memory structure adds other dynamics which are not captured by large recurrent networks. Multimodal Word Segmentation Finally, we discuss results on word discovery with nonlinguistic context (image). Although there is much evidence that neural networks can reliably learn to exploit additional relevant context to improve language modeling performance (e.g. machine translation and image captioning), it is still unclear whether the conditioning context help to discover structure in the data. We turn to this question here. Table To understand what kind of improvements in segmentation performance the image context leads to, we annotated the tokens in the references with part-of-speech (POS) tags and compared relative improvements on recall between SNLM (-image) and SNLM (+image) among the five POS tags which appear more than 10,000 times. We observed improvements on ADJ (+4.5%), NOUN (+4.1%), VERB (+3.1%). The improvements on the categories ADP (+0.5%) and DET (+0.3%) are were more limited. The categories where we see the largest improvement in recall correspond to those that are likely a priori to correlate most reliably with observable features. Thus, this result is consistent with a hypothesis that the lexican is successfully acquiring knowledge about how words idiosyncratically link to visual features. glish phoneme or Chinese segmentation tasks. As we discussed in the introduction, previous work has focused on segmentation in isolation from language modeling performance. Models that obtain better segmentations include the adaptor grammars (F1: 87.0) of Learning to discover and represent temporally extended structures in a sequence is a fundamental problem in many fields. For example in language processing, unsupervised learning of multiple levels of linguistic structures such as morphemes (Snyder and Barzilay, 2008), words Word discovery is a fundamental problem in language acquisition. 
While work studying the problem in isolation has provided valuable insights (showing both what data is sufficient for word discovery with which models), this paper shows that neural models offer the flexibility and performance to productively study the various facets of the problem in a more unified model. While this work unifies several components that had previously been 4 We use 8000, 2000 and 10000 images for train, development and test set in order of integer ids specifying image in cocoapi For each RNN based model we used 512 dimensions for the character embeddings and the LSTMs have 512 hidden units. All the parameters, including character projection parameters, are randomly sampled from uniform distribution from -0.08 to 0.08. The initial hidden and memory state of the LSTMs are initialized with zero. A dropout rate of 0.5 was used for all but the recurrent connections. To restrict the size of memory, we stored substrings which appeared F -times in the training corpora and tuned F with grid search. The maximum length of subsequences L was tuned on the held-out likelihood using a grid search. Tab. 7 summarizes the parameters for each dataset. Note that we did not tune the hyperparameters on segmentation quality to ensure that the models are trained in a purely unsupervised manner assuming no reference segmentations are available. By integrating out the draws from the DP's, it is possible to do inference using Gibbs sampling directly in the space of segmentation decisions. We use 1,000 iterations with annealing to find an approximation of the MAP segmentation and then use the corresponding posterior predictive distribution to estimate the held-out likelihood assigned by the model, marginalizing the segmentations using appropriate dynamic programs. The evaluated segmentation was the most probable segmentation according to the posterior predictive distribution. In the original Bayesian segmentation work, the hyperparameters (i.e., α 0 , α 1 , and the components of p 0 ) were selected subjectively. To make comparison with our neural models fairer, we instead used an empirical approach and set them using the
| 787 | 3,910 | 787 |
A Joint Neural Model for Information Extraction with Global Features
|
Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a VICTIM of a DIE event is likely to be a VICTIM of an ATTACK event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, ONEIE, that aims to extract the globally optimal IE result as a graph from an input sentence. ONEIE performs end-to-end IE in four stages: (1) encoding a given sentence as contextualized word representations; (2) identifying entity mentions and event triggers as nodes; (3) computing label scores for all nodes and their pairwise links with local classifiers; (4) searching for the globally optimal graph with a beam decoder that incorporates global features.
|
Information Extraction (IE) aims to extract structured information from unstructured texts. It is a complex task comprising a wide range of subtasks, such as named, nominal, and pronominal mention extraction, entity linking, entity coreference resolution, relation extraction, event extraction, and event coreference resolution. Early efforts typically perform IE in a pipelined fashion, where errors made by earlier components propagate to later ones.
|
Example: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan, who won a parliamentary seat in by-elections Sunday. person To address this issue, we propose a joint neu- ral framework, ONEIE, to perform end-to-end IE with global constraints. As Figure To the best of our knowledge, ONEIE is the first end-to-end neural IE framework that explicitly models cross-subtask and cross-instance interdependencies and predicts the result as a unified graph instead of isolated knowledge elements. Because ONEIE does not rely on language-specific features, it can be rapidly applied to new languages. Furthermore, global features in our framework are highly explainable and can be explicitly analyzed. Given a sentence, our ONEIE framework aims to extract an information network representation Entity Extraction aims to identify entity mentions in text and classify them into pre-defined entity types. A mention can be a name, nominal, or pronoun. For example, "Kashmir region" should be recognized as a location (LOC) named entity mention in Figure Relation Extraction is the task of assigning a relation type to an ordered pair of entity mentions. For example, there is a PART-WHOLE relation between "Kashmir region" and "India". Event Extraction entails identifying event triggers (the words or phrases that most clearly express event occurrences) and their arguments (the words or phrases for participants in those events) in unstructured texts and classifying these phrases, respectively, for their types and roles. An argument can be an entity, time expression, or value (e.g., MONEY, JOB-TITLE, CRIME). For example, in Figure 2, the word "injured" triggers an INJURE event and "300" is the VICTIM argument. We formulate the task of extracting information networks as follows. Given an input sentence, our goal is to predict a graph G = (V, E), where V and E are the node and edge sets respectively. Each node v i = a i , b i , l i ∈ V represents an entity mention or event trigger, where a and b are the start and end word indices, and l is the node type label. Each edge e ij = i, j, l ij ∈ E is represented similarly, whereas i and j denote the indices of involved nodes. For example, in Figure As Figure Given an input sentence of L words, we obtain the contextualized representation x i for each word using a pre-trained BERT encoder. If a word is split into multiple word pieces (e.g., Mondrian → Mon, ##dr, ##ian), we use the average of all piece vectors as its word representation. While previous methods typically use the output of the last layer of BERT, our preliminary study shows that enriching word representations using the output of the third last layer of BERT can substantially improve the performance on most subtasks. At this stage, we identify entity mentions and event triggers in the sentence, which will act as nodes in the information network. We use a feedforward network FFN to compute a score vector ŷi = FFN(x i ) for each word, where each value in ŷi represents the score for a tag in a target tag set where X = {x 1 , ..., x L } is the contextualized representations of the input sequence, ŷi,ẑ i is the ẑi -th component of the score vector ŷi , and A ẑi-1 ,ẑ i is the (ẑ i-1 , ẑi ) entry in matrix A that indicates the transition score from tag ẑi-1 to ẑi . The weights in A are learned during training. We append two special tags <start> and <end> to the tag path as ẑ0 and ẑL+1 to denote the start and end of the sequence. 
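As a small illustration of the identification stage, the CRF path score s(X, z) combines per-word emission scores with learned tag-transition scores and the special <start>/<end> tags. The tag set and tensor shapes below are illustrative assumptions.

```python
import torch

def crf_path_score(emissions: torch.Tensor, tags: torch.Tensor,
                   transitions: torch.Tensor, start: int, end: int) -> torch.Tensor:
    """Sketch of s(X, z) = sum_i emission[i, z_i] + sum_i A[z_{i-1}, z_i], with the
    special <start>/<end> tags appended, as in the identification stage above.
    `emissions` is the FFN output y_hat for each word ([L, num_tags]); `transitions`
    is the learned matrix A."""
    L = emissions.size(0)
    score = transitions[start, tags[0]] + emissions[0, tags[0]]
    for i in range(1, L):
        score = score + transitions[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    return score + transitions[tags[-1], end]

# Toy usage with a 3-word sentence and a BIO-style tag set {O, B-PER, I-PER, <start>, <end>}.
num_tags = 5
emissions = torch.randn(3, num_tags)
transitions = torch.randn(num_tags, num_tags)
tags = torch.tensor([1, 2, 0])                     # B-PER I-PER O
print(crf_path_score(emissions, tags, transitions, start=3, end=4))
```

Training then maximizes log p(z|X), i.e. this path score minus the log-partition over all tag paths, which is computed with the standard CRF forward algorithm.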
At the training stage, we maximize the log-likelihood of the gold-standard tag path as ,ẑ) , where Z is the set of all possible tag paths for a given sentence. Thus, we define the identification loss as L I = -log p(z|X). In our implementation, we use separate taggers to extract entity mentions and event triggers. Note that we do not use types predicted by the taggers. Instead, we make a joint decision for all knowledge elements at the decoding stage to prevent error propagation and utilize their interactions to improve the prediction of node type. We represent each identified node as v i by averaging its word representations. After that, we use separate task-specific feed-forward networks to calculate label scores for each node as ŷt i = FFN t (v i ), where t indicates a specific task. To obtain the label score vector for the edge between the i-th and j-th nodes, we concatenate their span representations and calculate the vector as ŷt k = FFN t (v i , v j ). For each task, the training objective is to minimize the following cross-entropy loss where y t i is the true label vector and N t is the number of instances for task t. If we ignore the inter-dependencies between nodes and edges, we can simply predict the label with the highest score for each knowledge element and thus generate the locally best graph Ĝ. The score of Ĝ can be calculated as where T is the set of tasks. We refer to s ( Ĝ) as the local score of Ĝ. Categary Description Role 1. The number of entities that act as <rolei> and <rolej> arguments at the same time. 2. The number of <event typei> events with <number> <rolej> arguments. 3. The number of occurrences of <event typei>, <rolej>, and <entity typek> combination. 4. The number of events that have multiple <rolei> arguments. 5. The number of entities that act as a <rolei> argument of an <event typej> event and a <rolek> argument of an <event typel> event at the same time. Relation 6. The number of occurrences of <entity typei>, <entity typej>, and <relation typek> combination. 7. The number of occurrences of <entity typei> and <relation typej> combination. 8. The number of occurrences of a <relation typei> relation between a <rolej> argument and a <rolek> argument of the same event. 9. The number of entities that have a <relation typei> relation with multiple entities. 10. The number of entities involving in <relation typei> and <relation typej> relations simultaneously. Trigger 11. Whether a graph contains more than one <event typei> event. Table A limitation of local classifiers is that they are incapable of capturing inter-dependencies between knowledge elements in an information network. We consider two types of inter-dependencies in our framework. The first type of inter-dependency is Crosssubtask interactions between entities, relations, and events. Consider the following sentence. "A civilian aid worker from San Francisco was killed in an attack in Afghanistan." A local classifier may predict "San Francisco" as a VICTIM argument because an entity mention preceding "was killed" is usually the victim despite the fact that a GPE is unlikely to be a VICTIM. To impose such constraints, we design a global feature as shown in Figure Another type of inter-dependency is Crossinstance interactions between multiple event and/or relation instances in the sentence. Take the following sentence as an example. "South Carolina boy, 9, dies during hunting trip after his father accidentally shot him on Thanksgiving Day." 
It can be challenging for a local classifier to predict "boy" as the VICTIM of the ATTACK event triggered by "shot" due to the long distance between these two words. However, as shown in Figure Motivated by these observations, we design a set of global feature templates (event schemas) as listed in Table where M is the number of global features and f i (•) is a function that evaluates a certain feature and returns a scalar. For example, Next, ONEIE learns a weight vector u ∈ R M and calculates the global feature score of G as the dot product of f G and u. We define the global score of G as the sum of its local score and global feature score, namely We make the assumption that the gold-standard graph for a sentence should achieve the highest global score. Therefore, we minimize the following loss function where Ĝ is the graph predicted by local classifiers and G is the gold-standard graph. Finally, we optimize the following joint objective function during training As we have discussed, because local classifiers ignore interactions among elements in an information network, they may predict contradictory results or fail to predict difficult edges that require information from other elements. In order to address these issues, ONEIE makes a joint decision for all nodes and their pairwise edges to obtain the globally optimal graph. The basic idea is to calculate the global score for each candidate graph and select the one with the highest score. However, exhaustive search is infeasible in many cases as the size of search space grows exponentially with the number of nodes. Therefore, we design a beam search-based decoder as Figure Given a set of identified nodes V and the label scores for all nodes and their pairwise links, we perform decoding with an initial beam set B = {K 0 }, where K 0 is an order-zero graph. At each step i, we expand each candidate in B in node step and edge step as follows. Node step: We select v i ∈ V and define its candidate set as i denotes the label with the k-th highest local score for v i , and β v is a hyper-parameter that controls the number of candidate labels to consider. We update the beam set by Edge step: We iteratively select a previous node v j ∈ V, j < i and add possible edges between v j and v i . Note that if v i is a trigger, we skip v j if it is also a trigger. At each iteration, we construct a candidate edge set as ij is the label with k-th highest score for e ij and β e is a threshold for the number of candidate labels. Next, we update the beam set by At the end of each edge step, if |B| is larger than the beam width θ, we rank all candidates by global score in descending order and keep the top θ ones. After the last step, we return the graph with the highest global score as the information network for the input sentence. We perform our experiments on the Automatic Content Extraction (ACE) 2005 dataset In order to reinstate some important elements absent from ACE05-R and ACE05-E, we create a new dataset, ACE05-E + , by adding back the order of relation arguments, pronouns, and multi-token event triggers, which have been largely ignored in previous work. We also skip lines before the <text> tag (e.g., headline, datetime) as they are not annotated. In addition to ACE, we derive another dataset, ERE-EN, from the Entities, Relations and Events (ERE) annotation task created under the Deep Exploration and Filtering of Test (DEFT) program because it covers more recent articles. 
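The decoding procedure above can be sketched as the following simplified beam search, where the global score adds a learned weight vector over extracted global features to the local label scores. The label dictionaries, the single toy feature, and the stubbed feature extractor are illustrative assumptions; the full decoder also skips trigger-trigger edges and uses the task-specific label sets.

```python
import torch
import torch.nn as nn
from typing import Dict, List, Tuple

class GlobalScorer(nn.Module):
    """Learned weights u for the global feature vector f(G); the Table 1 templates are
    task-specific, so feature extraction is stubbed out in the toy example below."""
    def __init__(self, num_features: int):
        super().__init__()
        self.u = nn.Parameter(torch.zeros(num_features))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.u @ features          # u . f(G)

def beam_decode(node_scores: List[Dict[str, float]],
                edge_scores: Dict[Tuple[int, int], Dict[str, float]],
                feature_fn, scorer: GlobalScorer,
                beta_v: int = 2, beta_e: int = 2, theta: int = 5):
    """Simplified sketch of the node-step / edge-step beam decoder: each candidate is
    (node labels, edge labels, local score); after every edge step the beam is pruned
    to `theta` candidates by local + global score."""
    def total(c):
        return c[2] + scorer(feature_fn(c[0], c[1])).item()
    beam = [([], {}, 0.0)]
    for i, s_i in enumerate(node_scores):
        top_v = sorted(s_i, key=s_i.get, reverse=True)[:beta_v]        # node step
        beam = [(n + [l], e, s + s_i[l]) for n, e, s in beam for l in top_v]
        for j in range(i):                                             # edge steps
            s_e = edge_scores.get((j, i), {"NONE": 0.0})
            top_e = sorted(s_e, key=s_e.get, reverse=True)[:beta_e]
            beam = [(n, {**e, (j, i): l}, s + s_e[l])
                    for n, e, s in beam for l in top_e]
            beam = sorted(beam, key=total, reverse=True)[:theta]
    return max(beam, key=total)

# Toy example: two nodes, one candidate edge, and a single hypothetical global feature
# counting VICTIM edges attached to a PER entity.
feat = lambda n, e: torch.tensor(
    [1.0 if n and n[0] == "PER" and e.get((0, 1)) == "VICTIM" else 0.0])
scorer = GlobalScorer(num_features=1)
nodes = [{"PER": 2.0, "GPE": 1.5}, {"DIE": 3.0, "ATTACK": 2.4}]
edges = {(0, 1): {"VICTIM": 1.0, "NONE": 0.8}}
print(beam_decode(nodes, edges, feat, scorer))
```

During training, the weights u would be learned by minimizing s(Ĝ) − s(G), pushing the gold graph's global score above that of the locally best prediction.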
Specifically, we extract 458 documents and 16,516 sentences from three ERE datasets, LDC2015E29, LDC2015E68, and LDC2015E78. For ERE-EN, we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles. To evaluate the portability of our model, we also develop a Chinese dataset from ACE2005 and a Spanish dataset from ERE (LDC2015E107). We refer to these datasets as ACE05-CN and ERE-ES respectively. We optimize our model with BertAdam for 80 epochs with a learning rate of 5e-5 and weight decay of 1e-5 for BERT, and a learning rate of 1e-3 and weight decay of 1e-3 for other parameters. We use use the bert-base-multilingual-cased model 5 for ACE05-CN and ERE-ES, and use the bert-large-cased model for other datasets. Following • Entity: An entity mention is correct if its offsets and type match a reference entity. • Relation: A relation is correct if its relation type 5 • Trigger: A trigger is correctly identified (Trig-I) if its offsets match a reference trigger. It is correctly classified (Trig-C) if its event type also matches the reference trigger. • Argument: An argument is correctly identified (Arg-I) if its offsets and event type match a reference argument mention. It is correctly classified (Arg-C) if its role label also matches the reference argument mention. In Table (2) BASELINE that follows the architecture of ONEIE but only uses the output of the last layer of BERT and local classifiers. We can see that our model consistently outperforms DYGIE++ and BASELINE on ACE05-R and ACE05-E. In Table In candidate graph contains multiple ORG-AFF edges incident to the same node, the model demote this graph by adding a negative value into its global score. We also observe that the weights of about 9% global features are almost not updated, which indicates that they are barely found in both goldstandard and predicted graphs. In Table As Table We have analyzed 75 of the remaining errors and in Figure Need background knowledge. Most of current IE methods ignore external knowledge such as entity attributes and scenario models. For exam- Global feature categories: 2 and 5 Analysis: 1. An ELECT usually has only one PERSON argument; 2. An entity is unlikely to act as a PERSON argument for END-POSITION and ELECT events at the same time. Global feature category: 3 Analysis: As "Campbell" is likely to be an ENTITY argument of a FINE event, the model corrects its entity type from FAC to PER. can correct this error based on the first sentence in its Wikipedia page "Kommersant is a nationally distributed daily newspaper published in Russia mostly devoted to politics and business". Rare words. The second challenge is the famous long-tail problem: many triggers, entity mentions (e.g., "caretaker", "Gazeta.ru") and contextual phrases in the test data rarely appear in the training data. While most event triggers are verbs or nouns, some adverbs and multi-word expressions can also serve as triggers. Multiple types per trigger. Some trigger words may indicate both the procedure and the result status of an action. For example, "named" may indicate both NOMINATE and START-POSITION events; "killed" and "eliminate" may indicate both ATTACK and DIE events. In these cases the human ground truth usually only annotates the procedure types, whereas our system produces the resultant event types. Need syntactic structure. Our model may benefit from deeper syntactic analysis. 
For example, in the following sentence "As well as previously holding senior positions at Barclays Bank, BZW and Kleinwort Benson, McCarthy was formerly a top civil servant at the Department of Trade and Industry", our model misses all of the employers "Barclays Bank", "BZW" and "Kleinwort Benson" for "McCarthy" probably because they appear in a previous sub-sentence. Uncertain events and metaphors. Our model mistakenly labels some future planned events as specific events because its lacking of tense prediction and metaphor recognition. For example, START-ORG triggered by "formation" does not happen in the following sentence "The statement did not give any reason for the move, but said Lahoud would begin consultations Wednesday aimed at the formation of a new government". Our model also mistakenly identifies "camp" as a facility, and a DIE event triggered by "dying" in the following sentence "Russia hints 'peace camp' alliance with Germany and France is dying by Dmitry Zaks.". The IE community is lacking of newer data sets with end-to-end annotations. Unfortunately, the annotation quality of the ACE data set is not perfect due to some long-term debates on the annotation guideline; e.g., Should "government" be tagged as a GPE or an ORG? Should "dead" be both an entity and event trigger? Should we consider designator word as part of the entity mention or not? Previous work Some recent efforts develop joint neural models to perform extraction of two IE subtasks, such as entity and relation extraction We propose a joint end-to-end IE framework that incorporates global features to capture the inter-dependency between knowledge elements. Experiments show that our framework achieves better or comparable performance compared to the state of the art and prove the effectiveness of global features. Our framework is also proved to be languageindependent and can be applied to other languages, and it can benefit from multi-lingual training. In the future, we plan to incorporate more comprehensive event schemas that are automatically induced from multilingual multimedia data and external knowledge to further improve the quality of IE. We also plan to extend our framework to more IE subtasks such as document-level entity coreference resolution and event coreference resolution.
| 632 | 391 | 632 |
BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization
|
The success of neural summarization models stems from the meticulous encodings of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages template discovered from training data to softly select key information from each source article to guide its summarization process. Extensive experiments on a standard summarization dataset were conducted and the results show that the template-equipped BiSET model manages to improve the summarization performance significantly with a new state of the art.
|
Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea. Due to the difficulties in rewriting long documents, a large body of research on this topic has focused on paragraph-level article summarization. Among them, sequence-to-sequence models have become the mainstream, and some have achieved state-of-the-art performance. Template-based summarization has also been explored. Despite their potential in relieving the verbosity and insufficiency problems of natural language data, templates have not been exploited to full advantage. The contributions of this work include: • We propose a novel bi-directional selective mechanism with two gates to mutually select important information from both article and template to assist with summary generation. • We develop a Fast Rerank method to automatically select high-quality templates from the training corpus. • Empirical evaluations on the benchmark dataset show our model has achieved a new state of the art. • The source code of this work has been released for future research.
|
Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization. This module starts with a standard information retrieval library are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates. As mentioned before, the role of Fast Rerank is to re-rank the initial search results and return a best template for summarization. To examine the effect of this module, we studied its ranking quality under different ranges as in Section 4.1. The original rankings by Retrieve are presented for comparison with the NDCG metric. We regard the ROUGE-2 score of each candidate template with the reference summary as the ground truth. As shown in Figure In this section, we explore three traditional approaches to taking advantage of the templates for summarization. They share the same encoder and decoder layers, but own different interaction layers for combination of a source article and template. The encoder layer uses a standard bi-directional RNN (BiRNN) to separately encode the source article and the template into hidden states h s i and h t j . Concatenation. This approach directly concatenates the hidden state, h t i N i=1 , of a template after the article representation, This approach is similar to R 3 Sum where ';' is the concatenation operation. We then normalize each row and col- The overall performance of all the studied models is shown in Table The Retrieve module involves an unsupervised process with traditional indexing and retrieval techniques. For Fast Rerank, since there is no ground truth available, we use ROUGE-1 (21) where s is a score predicted by Equation For the BiSET module, the loss function is chosen as the negative log-likelihood between the generated summary, w, and the true summary, w * : (22) where L is the length of the true summary, θ contains all the trainable variables, and x and y denote the source article and the template, respectively. In this section, we introduce our evaluations on a standard dataset. The dataset used for evaluation is Annotated English Gigaword During training, both the Fast Rerank and BiSET modules have a batch size of 64 with the Adam optimizer Following previous work In this section, we report our experimental results with thorough analysis and discussions. The Retrieve module is intended to narrow down the search range for a best template. We evaluated this module by considering three types of templates: (a) Random means a randomly selected summary from the training corpus; (b) Retrievetop is the highest-ranked summary by Retrieve; (c) N-Optimal means among the N top search results, the template is specified as the summary with largest ROUGE score with gold summary. As the results show in Table In Section 2.3, we also explored three alternative approaches to integrating an article with its template. The results are shown in Table Our model is designed for both accuracy and efficiency. 
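As a hedged sketch of the bi-directional selective layer, the module below filters the article representation with a gate conditioned on a summary of the template, and filters the template with a gate conditioned on a summary of the article. The exact gate parameterization in BiSET may differ from this minimal version, and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class BiSelectiveLayer(nn.Module):
    """Sketch of a bi-directional selective layer: an article gate filters the article
    states using the template representation, and a template gate does the reverse,
    so both sides mutually select salient information."""
    def __init__(self, hidden: int):
        super().__init__()
        self.w_art = nn.Linear(2 * hidden, hidden)   # gate over article states
        self.w_tpl = nn.Linear(2 * hidden, hidden)   # gate over template states

    def forward(self, article: torch.Tensor, template: torch.Tensor):
        # article: [B, N, H] BiRNN states h^s; template: [B, M, H] BiRNN states h^t
        art_summary = article[:, -1]                 # [B, H], last state as a summary
        tpl_summary = template[:, -1]
        g_art = torch.sigmoid(self.w_art(
            torch.cat([article, tpl_summary.unsqueeze(1).expand_as(article)], dim=-1)))
        g_tpl = torch.sigmoid(self.w_tpl(
            torch.cat([template, art_summary.unsqueeze(1).expand_as(template)], dim=-1)))
        return article * g_art, template * g_tpl     # mutually filtered representations

# Toy usage.
layer = BiSelectiveLayer(hidden=256)
art, tpl = torch.randn(4, 50, 256), torch.randn(4, 12, 256)
sel_art, sel_tpl = layer(art, tpl)
print(sel_art.shape, sel_tpl.shape)
```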
Due to the parallelizable nature of CNN, the Fast Rerank module only takes about 30 minutes for training and 3 seconds for inference on The purpose of this study is to examine the roles of the bi-directional selective layer and its two gates. Firstly, we removed the selective layer and replaced it with the direct concatenation of an article with its template representation. As the results show in Table We then carried out a human evaluation to evaluate the generated summaries from another perspective. Our evaluators include 8 graduate students and 4 senior undergraduates, while the dataset is 100 randomly-selected articles from the test set. Each sample in this dataset also includes: 1 reference summary, 5 summaries generated by Open-NMT Abstractive sentence summarization, a task analogous to headline generation or sentence compression, aims to generate a brief summary given a short source article. Early studies in this problem mainly focus on statistical or linguistic-rule-based methods, including those based on extractive and compression The advent of large-scale summarization corpora accelerates the development of various neural network methods. In this paper, we presented a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. To counteract the verbosity and insufficiency of training data, we proposed to retrieve high-quality existing summaries as templates to assist with source article representations through an ingenious bidirectional selective layer. The enhanced article representations are expected to contribute towards better summarization eventually. We also developed the corresponding retrieval and re-ranking modules for obtaining quality templates. Extensive evaluations were conducted on a standard benchmark dataset and experimental results show that our model can quickly pick out high-quality templates from the training corpus, laying key foundation for effective article representations and summary generations. The results also show that our model outperforms all the baseline models and sets a new state of the art. An ablation study validates the role of the bi-directional selective layer, and a human evaluation further proves that our model can generate informative, concise, and readable summaries. The paper was partially supported by the Program for Guangdong Introducing Innovative and Enterpreneurial Teams (No.2017ZT07X355) and the Key R&D Program of Guangdong Province (2019B010120001).
| 762 | 1,071 | 762 |
Language Models as Inductive Reasoners
|
Inductive reasoning is a core component of human intelligence. In past research on inductive reasoning within computer science, formal language has been used to represent knowledge (facts and rules, more specifically). However, formal language causes systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and incapacity to handle ambiguous input. To this end, we propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language. New automatic metrics are also proposed and analysed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning in which natural language replaces formal language as the knowledge representation and pretrained language models serve as "reasoners". Moreover, we provide the first and comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts. We also propose a new framework for this task, drawing insights from the philosophy literature, which the experiment section shows surpasses baselines in both automatic and human evaluations. We discuss our future perspectives on inductive reasoning in detail in Section 7. Dataset and code are available at
|
Inductive reasoning is to reach to a hypothesis (usually a rule that explains an aspect of the law of nature) based on pieces of evidence (usually observed facts of the world), where the observations can not provide conclusive support to the hypothesis that the hypothesis supports more than mere reformulation of the content of the evidence Past research works on inductive reasoning within computer science are investigated by Inductive Logic Programming (ILP) To overcome the challenges above, we present a novel paradigm for inductive reasoning based entirely on natural language, i.e., inducing natural language rules from natural language facts. In particular, we create a first-of-its-kind natural language inductive reasoning dataset named DEER containing 1.2k rule-fact pairs (more details illustrated in §3.1). With this dataset, we investigate Short fact 1
|
The Venus flytrap is a carnivorous plant native to subtropical wetlands on the East Coast of the United States in North Carolina and South Carolina. It catches its prey-chiefly insects and arachnids-with a trapping structure formed by the terminal portion of each of the plant's leaves, which is triggered by tiny hairs on their inner surfaces. Pitcher plants are several different carnivorous plants which have modified leaves known as pitfall traps-a prey -trapping mechanism featuring a deep cavity filled with digestive liquid. The traps of what are considered to be "true" pitcher plants are formed by specialized leaves. The plants attract and drown their prey with nectar. Drosera, which is commonly known as the sundews, is one of the largest genera of carnivorous plants, with at least 194 species. The trapping and digestion mechanism of Drosera usually employs two types of glands: stalked glands that secrete sweet mucilage to attract and ensnare insects and enzymes to digest them, and sessile glands that absorb the resulting nutrient soup. , then it probably has a trapping structure. Table a modern approach to inductive reasoning where both facts and rules are in natural language, and pretrained language models (PLMs) are used as the inductive reasoner. Note that the inductive reasoning considered in this paper has several distinctions considered by other reasoning tasks over text With natural language as representation and PLMs as the reasoner, such an inductive reasoning system can avoid the systematic disadvantages of formal language and symbolic reasoners. Specifically, with natural language as representation, it can naturally handle raw input as natural language text. In addition, different from symbolic methods, PLMs contain knowledge via pretraining Based on the proposed dataset, we study the PLM's ability to induce (generate) natural language rules from natural language facts under different settings, such as different FOL rule types and topics with varying input facts and PLM model sizes. We also propose a new framework for this task, named chain-of-language-models (CoLM) which is shown in Figure To sum up, our contributions are three-fold: • We propose a new paradigm (task) of inducing natural language rules from natural language facts, which naturally overcomes three systematic disadvantages of past works on inductive reasoning. In particular, we create a first-ofits-kind natural language inductive reasoning dataset DEER containing 1.2k rule-fact pairs, where fact and rule are both written in natural language. New automatic metrics are also proposed for task evaluation, which shows strong consistency with human evaluation. • We provide the first and comprehensive analysis of how well PLMs can induce natural language rules from natural language facts. • Drawing insights from philosophy literature In §7 we discuss our future perspectives on inductive reasoning in detail. Definition of Inductive Reasoning It is still under debate on the definition of inductive reasoning in philosophy research Relation with Other Reasoning Tasks The goal is quite different from (1) deductive reasoning as given facts and rules and reach to new facts In this section, we discuss the data collection process for our proposed dataset, and our proposed metrics for automatic and human evaluation. In general, we propose two datasets. 
The first one, named DEER (inDuctive rEasoning with natural languagE Representation), contains 1.2k rulefact pairs, where rules are written by human annotators in English, and facts are existing English sentences on the web. The other one, named DEER-LET (classification of inDucEd rulEs with natuRal LanguagE representaTion), including Collected by a human expert (the first author), DEER contains 1.2k natural language rule-fact pairs where rules cover 6 topics and 4 common rule types of FOL. The 6 topics are zoology, botany, geology, astronomy, history, and physics. Shown in Table Natural language rule is firstly written by a human expert, then for each rule 6 supporting facts (3 long facts and 3 short facts) are collected from existing human-written text from commercial search engines and Wikipedia. Long facts are paragraphs collected from different web pages to for more difference, and short facts are core sentences selected from corresponding long facts. Each fact itself should contain enough information that is possible to induce the full corresponding rule (an example is shown in Table To validate the correctness of the DEER dataset, we randomly split DEER data to 4 subsets, and 4 graduate students manually check each of the subsets on whether each fact contains enough information that is possible to induce the given rule. The overall correctness of DEER is 95.5%. The reason that DEER is not larger is that it requires experts who are familiar enough with inductive reasoning and possesses a relatively high level of science knowledge to annotate. DEERLET is a dataset collected by a human expert (the first author) in inductive reasoning for classification tasks to evaluate the specific capabil- Here, facts are directly from DEER, and the corresponding rules are collected from PLMs. Label0 to label3 are classification labels evaluating specific aspects of the generated rules. The reason in DEERLET we collect rules from the generation of PLMs is that we want to avoid human annotation biases Inspired by Obeid and Hoque (2020), label 0/1/2 are annotated on a 3-point scale (true / partially true / false), and label 3 are annotated on a 2-point scale (true / false). More details on annotation of DEERLET are illustrated in §A.5. DEERLET provides human annotations for evaluation of the generated rules from four different aspects. Here we use precision / recall / f1, and the four aspects in DEERLET for human evaluation. For the DEER dataset, as it requires generating rules based on input facts, the first metric we adopt is METEOR It makes the METEOR metric here a similar metric to "precision", as it only calculates the score for rules that are classified as "true". As a result, the model might have a low recall in that it might only keep the rule with the highest confidence score, and classify many reasonable good rules as "false". To measure the "recall" of inductive reasoning models, we propose "weighted recall (WRecall)" as the second automatic evaluation metric for this task. The difficulty lies in that we don't have the ground truth labels for generated rules without human evaluation. To calculate WRecall, we make an assumption, which is that the higher METEOR a rule has, generally the higher probability it is a reasonable rule for given facts. This assumption is reasonable given the relatively high correlation coefficient between METEOR and human evaluation shown in §A.7. 
Specifically, as shown in table Now that we have a METEOR metric that provides a similar measurement of "precision", and WRecall for "recall", we propose GREEN (GeometRic mEan of METEOR aNd WRecall) to consider METEOR and WRecall together. It is defined as a geometric mean instead of a harmonic mean because METEOR is not in the range [0, 1]. More specifically, In general, compared with METEOR, GREEN gives a more comprehensive evaluation of the induced rules. Therefore GREEN can be a more favorable metric when the recall is an important factor (e.g., computational power is limited). However, when the precision of the induced rules is more favored, METEOR should be a more proper metric than GREEN. §A.6 discusses more on the importance of each metric for this task. More discussions on the usage of automatic evaluation metrics and how should we interpret the results of automatic metrics can be found in §A.8. In this section, we formally present the task definition and our proposed framework for natural language inductive reasoning. Figure DEER dataset is used as the dataset for the natural language inductive reasoning task. The data format for DEER is (rule, f act), where both rule and f act are natural language sentences. The goal of the task is to generate reasonable natural language rules given f act in an inductive reasoning way (the rules should be more general and therefore cover more information than f act). Hypothetical Induction is an important induction type in inductive reasoning Hypothetical induction fits our task well, as in DEER we also want to induce a hypothesis as a more general rule that can entail the facts. We borrow insights from the requirements for the induced rules in hypothetical induction to develop our framework. Specifically, there are mainly three requirements More concretely, we define the requirements for designing our framework as 1) there should be as fewer contradictions between facts and the rule as possible, and 2) the rule should reflect the reality, 3) the content in facts should be relevant specific statements that are covered by the rule, 4) the rule should not be trivial. Based on this, we develop our framework as shown in Figure In practice, we implement all five modules with PLMs. We call our implementation as CoLM (Chain-of-Language-Models). The goal of M1 is to generate rules based on the input facts and a given rule template. Thus, M1's input contains facts, a rule template, and prompts that demonstrate the rule induction task.M2 and M4's inputs include prompts that explain the rule-fact compatibility, a rule, and fact(s); M3 and M5's inputs include again prompts that explain the task and a rule, as their targets are independent of fact. More interestingly, although our framework is solely based on the insights from philosophy literature, we also find a mathematical interpretation of this approach. Here, we denote P (A) as the probability indicating whether A is valid for simplicity. Thus, M2 and M4 jointly measure the validness of a fact given the corresponding rule P (f act|rule) ≈ P M 24 (f act|rule) = P M 2 (f act|rule)P M 4 (f act|rule), M3 and M5 directly measure the validness of the rule itself P (rule) ≈ P M 35 (rule) = P M 3 (rule)P M 5 (rule). Here P M 24 and P M 35 are parameterized as the product of two corresponding probabilities. 
By using Bayes' rule, we can easily show that the validness of a rule based on the input fact is (here we omit constant P (f acts)) Note that this score is merely a discrimination score and thus different from the generation probability from M1. In other words, the rules proposed by M1 are then selected by M2/3/4/5 in a Bayesian inference fashion. In this section, we discuss the evaluation metrics and baselines, and then present the main results of our framework (all are averaged by 5 runs). We carry out evaluations for the framework (the rule generation task with DEER) and individual modules for classification using DEERLET. For evaluation of the rule generation of the overall framework, we use METEOR, WRecall, and GREEN as automatic evaluation metrics; And use precision, recall, f1, and the four metrics in DEERLET as human evaluation metrics. WRecall, GREEN, and the four metrics in DEERLET are our newly proposed metrics for inductive reasoning introduced in §3.3. For evaluation of the classification tasks on DEERLET, we use accuracy, f1, and averaged precision as metrics. We use a non-neural method and a neural method as baselines for the framework. We call the nonneural baseline "R+F", as it randomly fills the given rule template with sentences or phases from the given fact. The neural baseline we use is the rule proposer itself in Figure We use majority class and TF-IDF (Jones, 2004) as baselines for individual modules. The majority class baseline always predicts "yes", which is equivalent to not using M2/3/4/5 to filter rules from M1. TF-IDF is another reasonable baseline as the induced rules contain similar contents compared to input facts. In practice, each input fact-rule pair is assigned a TF-IDF value, and a threshold for correctness (to compare with the TF-IDF value) is tuned on the DEERLET validation set. Most modules are implemented with GPT-J (Wang and Komatsuzaki, 2021), a pre-trained language model with 6 billion parameters. Results on other LLMs such as LLaMA In this section, we investigate the question of "how well can pretrained language models perform inductive reasoning?". Specifically, we provide analyses in terms of rule types, topics, variations of input fact, and scales of language models. Except for Table Table Table In table 7, long facts mean the paragraph-level facts in DEER, and short facts mean the core sentencelevel facts selected from corresponding paragraphlevel facts. The different number of facts indicates the different number of facts given as input that exhibit similar rule patterns (e.g. Lemon tree / orange tree / apple tree can conduct photosynthesis). We consider the number of facts as an important factor because psychological research shows that more facts with similar patterns can help with inductive reasoning Figure 6B and GPT-NeoX 20B We sampled 100 rules from CoLM (rules that generated by M1 and pass all M2/3/4/5), and have conducted an error analysis of the samples. Figure The first version of this paper was finished in 2022. At that time, inductive reasoning-in the sense of deriving explicit natural language hypotheses (rules) from observations (input facts), where the hypotheses and observations adhere to specific relations defined by induction-was a new and unexplored research area. Previously, the most closely related works came from the ILP (Inductive Logic Programming) community, which focused on symbolic approaches to the task of inductive reasoning (inducing explicit formal language hypotheses). 
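As a concrete illustration of the selection step just described, the sketch below scores a candidate rule by the product of the four module probabilities (M2 and M4 approximating P(facts|rule), M3 and M5 approximating P(rule)) and keeps it only if every module clears its tuned threshold. The function names, the dictionary-based interface, and the toy scores are assumptions made for the sketch, not the authors' implementation.

```python
def rule_score(p_m2, p_m3, p_m4, p_m5):
    """Discrimination score for a candidate rule, following the factorisation
    P(rule | facts) proportional to P(facts | rule) * P(rule), with
    P(facts | rule) ~ P_M2 * P_M4 and P(rule) ~ P_M3 * P_M5."""
    return (p_m2 * p_m4) * (p_m3 * p_m5)

def select_rules(candidates, thresholds):
    """Keep the rules proposed by M1 whose module scores all clear their
    validation-tuned thresholds; rank survivors by the combined score.
    `candidates` maps each rule string to its (p_m2, p_m3, p_m4, p_m5) scores."""
    kept = []
    for rule, (p2, p3, p4, p5) in candidates.items():
        if (p2 >= thresholds["M2"] and p3 >= thresholds["M3"]
                and p4 >= thresholds["M4"] and p5 >= thresholds["M5"]):
            kept.append((rule, rule_score(p2, p3, p4, p5)))
    return sorted(kept, key=lambda x: x[1], reverse=True)

# toy usage with made-up module scores
candidates = {
    "If a tree has green leaves, then it can conduct photosynthesis.": (0.9, 0.8, 0.7, 0.9),
    "If a tree has green leaves, then its leaves are green.": (0.9, 0.9, 0.9, 0.2),  # trivial rule
}
print(select_rules(candidates, {"M2": 0.5, "M3": 0.5, "M4": 0.5, "M5": 0.5}))
```

Ranking survivors by the combined score mirrors the Bayesian reading: the generation probability from M1 proposes, and the verifier modules dispose.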
This paper aims to act as a bridge between the ILP and NLP communities by (1) demonstrating how natural language and related techniques (foundation models) can address challenges within the ILP community, and (2) introducing the definition and task of inductive reasoning to NLP. Moreover, this paper can serve as a preliminary study, suggesting that language models have the potential to function as inductive reasoners. The transcription of requirements for inductive arguments from philosophical literature, as illustrated in Section 4.2, could remain useful even in the era of powerful LLMs. The possible future challenges of research on inductive reasoning include (1) establishing and solving more challenging tasks for inductive reasoning, and (2) overcoming the fundamental challenges inherent in induction. A naturally more challenging task is scientific hypotheses discovery, which is to generate novel and valid scientific hypotheses. Here, "novel" means "not known or recognized by any literature". In fact, inductive reasoning is one of the primary types of reasoning in the development of science. Essentially, scientists use inductive reasoning whenever they move from limited data to a more general conclusion This challenge stems from certain fundamental requirements for the induced rules. As illustrated in Section 4.2, some of these requirements include • Checking whether the induced rule accurately reflects reality. • Determining whether the hypotheses are more general than the observations. Here, the "reflects reality" in the first requirement refers to whether the rule mirrors the objective world (or the environment of the task). In certain task settings, such as scientific hypothesis discovery, verifying whether an induced hypothesis mirrors the objective world can be very challenging, given that LLMs do not directly interact with the world. To ascertain the validity of the hypotheses, LLMs might need to utilize tools to conduct actual experiments to test the induced hypotheses. In other tasks, such as pattern induction, meeting this requirement could be much simpler, as whether it catches the designed patterns can be examined by executing the program and checking whether it produces the expected output. The second requirement can be interpreted as "whether the hypothesis is novel compared to the all existing literature" in the task of scientific hypothesis discovery To overcome the systematic problems of using formal language for inductive reasoning, we propose a new paradigm (task) of inducing natural language rules from natural language facts, and correspondingly propose a dataset DEER and new evaluation metrics for this task. We provide the first and comprehensive analysis of PLM's ability to induce natural language rules from natural language facts. We also propose a new framework, drawing insights from philosophical literature, which, as shown in the experimental section, surpasses baselines in both automatic and human evaluations. In this work, the size of dataset (DEER) contains 1.2k fact-rule pairs, which is relatively small. The reason is that the "rules" in this task are required to be very general. It is not easy to collect a large set of such rules in high-quality. Additionally, a rule can be collected only if (1) there are several facts findable in online texts, and (2) these facts satisfy certain relation with the rule required by induction (the rule generalizes over the facts). In addition, the DEER dataset mainly covers commonsense knowledge. 
A successive work to this paper "Correct but not very related" means although the rule is correct, but it is not very related to the facts given. For example, the facts are only about the depth and shape of Marianas Trench, while the rule is "if there exists a place with a greater depth, then it is possible to find something strange and interesting" (the "find something strange and interesting" aspect is not mentioned in facts). "Correct but not completely" means the rule is somewhat to mostly correct, such as "if a fruit has a strong smell, then it probably tastes good" (while facts are about durian, champedek, and morinda citrifolia); "if an economy is based on textiles, then it might experience an industrial revolution" (this rule is only true during a specific period of time in history); "if a wire moves, then it might induce voltage in the conductor" (this rule is only true if given magnetic fields). "Meaningless" means the rule is from a strange angle and it's hard to justify whether it is correct or not, such as "if an event has a positive impact on an individual and on family, then the impact on the family is greater", and "if a man has experienced hardships and life has been tough, then he might be able to understand and change his ways in the future". Reasoning Tasks In this paper, we strictly follows the definition and categorization of logical reasoning (including deductive, inductive, and abductive reasoning) in a survey of logical reasoning There have been some NLP works on case-based reasoning Inductive reasoning is also different from commonsense reasoning In DEERLET, given fact(s) and a rule, the annotation targets are whether the rule satisfies four requirements. Specifically, the requirements are "if the rule is deductively consistent with the fact", "if the rule reflects reality", "if the rule is more general than the fact", and "if the rule is not trivial". The first three requirements are annotated on a 3-point scale (true / partially true / false), and the last is annotated on a 2-point scale (true / false). Here we explain the standards of annotation on the four requirements. For "if the rule is deductively consistent with the fact", a 2-point will be assigned if the rule is totally relevant and consistent with the facts; a 1-point will be assigned if the rule introduces new information that does not show in facts but is consistent with the given fact as well as some limited amount of commonsense knowledge related to the facts; a 0-point will be assigned if the rule is (1) in conflict with given facts or (2) totally irrelevant to given facts or (3) introduces new information that is obviously wrong. For "if the rule reflects reality", a 2-point will be assigned if the rule totally reflects reality; a 1-point will be assigned if the rule reflects reality at most of the time; a 0-point will be assigned if (1) the rule is totally incorrect or (2) the rule is only occasionally correct. 
For "if the rule is more general than the fact", a 2point will be assigned if (1) the rule is more general than the facts or (2) it is obvious that the rule is trying to be more general than the facts; a 1-point will be assigned if (1) it is even hard for humans to induce a more general rule from the given facts or (2) the rule copies part of the given facts that are already containing very general information; a 0-point will be assigned if (1) from the facts it's easy for humans to induce a more general rule but the rule is not more general or (2) the rule is totally irrelevant to the facts. For "if the rule is not trivial", a 0-point will be assigned if (1) the rule is an incomplete sentence or (2) the latter sub-sentence of the rule only repeats the information in the former sub-sentence of the rule; otherwise, a 1-point will be assigned. Since inductive reasoning over natural language is a new task, and new metrics are designed (e.g., WRecall, GREEN), it is important to understand which aspects each metric focus on and which metric should we pay more attention to. As mentioned in §3.3, METEOR can be seen as evaluating the "precision" of the final rules, while GREEN evaluates "precision" and "recall" at the same time. However, it should be aware that the "recall" here is not as important as the "recall" in other tasks. More specifically, here "recall" measures how many good rules generated by M1 are filtered by M2/3/4/5. However, we can use M1 to generate a large number of rules, and as long as CoLM has good precision, it is easy to obtain a large number of high-quality rules, especially considering that the computational cost of only inference of M1 is relatively very low. Based on this observation, we argue that "precision" should be a much more important aspect of evaluation compared to "recall" (measured by WRecall) or even "f1" (measured by GREEN) for this task. More specifically, "recall" can be used to mainly measure at what efficiency can the system obtain rules with high precision. This viewpoint of evaluation metrics, of course, can raise the question of whether some typical kinds of rules are mostly filtered when pursuing rules with high precision, and in the end inductive reasoning system with high precision might only be able to obtain some other typical kinds of rules. We leave this question as an open question for this task to solve in the future. We choose METEOR since METEOR has a higher correlation coefficient with human evaluation than BLEU. More specifically, on DEERLET, we calculate the METEOR and BLEU for each generated rule with its golden rule in DEER and collect the human evaluation for the generated rule from label0/1/2/3 annotations in DEERLET (we normalize each label to [0,1] and use the product of label0/1/2/3 as the overall human evaluation score for the generated rule). Then, we can calculate the correlation coefficient between METEOR / BLEU and the overall human evaluation score. On DEERLET, the correlation coefficient between METEOR and human evaluation is 0.29, it is statistically significant as its p-value is 4.48 * 10 -6 , smaller than the significance level (0.05). Similarly, the correlation coefficient between BLEU and human evaluation is 0.24, with p-value of 1.17 * 10 -72 , which is also significant. 
We called 0.29 relatively high since in other open-ended NLP tasks such as dialogue systems, the Pearson correlation is typically only around 0.14 0.19 (shown in Table Developing better metrics for measuring the similarity between sentences is a challenging topic in NLP. Of course, METEOR is not a "perfect" automatic evaluation metric for inductive reasoning. We leave the question of "what is a better metric for inductive reasoning over natural language" as an open question for future works in the field. One good thing is that WRecall and GREEN can be applied with many metrics measuring sentence similarity such as METEOR and BLEU, so the evaluation of "recall" should be able to also benefit from the advance of metrics that evaluate "precision". Evaluation Metrics for Inductive Reasoning Tasks and How Should We Interpret the Results of Automatic Metrics Designing automatic evaluation methods for inductive reasoning is fundamentally difficult, mainly because of two reasons. Firstly, generalizing over existing facts is not restricted in a single way. Given existing facts, multiple rules that are very diverse from each other could all be true. Secondly, when it comes to more difficult inductive reasoning data, it is nearly inevitable to use long sentences for facts and rule, which make it even harder for common evaluation metrics such as BLEU or METEOR. However, we argue that although we don't have perfect automatic evaluation metrics for inductive reasoning now, it is not a reason to stop exploring research on inductive reasoning. In fact, with the fast development of LLMs, more difficult tasks are needed to further explore the scientific boundary in NLP, and many recently proposed tasks are so difficult to be evaluated with automatic evaluation metrics that they fully rely on human evaluation The reason we try to propose suitable automatic evaluation metrics is that we hope to simplify the evaluation process for the inductive reasoning task (at least for preliminary evaluations). We have illustrated why these metrics should be reasonable in §A.6 and §A.7. Similar to inductive reasoning, abductive reasoning also have multiple diverse correct generations, however abductive reasoning generation task also utilizes METEOR or BLEU Table (2) M2/3/4/5 instantiating with LLaMA have not been finetuned, but just in-context learning setting. Given that finetuned GPT-J largely improves GPT-J under in-context learning setting in Table While our work takes the first step to inductive reasoning in NLP and provide the first analysis, introducing more challenging inductive reasoning benchmarks would be beneficial to the the further development of the inductive reasoning field in NLP. Given an argument consisting of a premise and a conclusion, if the conclusion involves new information that is not covered by the premise and can not be conclusively entailed by the premise, the argument is an inductive argument When the conclusion has a larger scope of information coverage than the premise, and can entail the premise, it can be said that the conclusion is "more general" to the premise. 
In this case, we termed the premise as a "fact", and the conclusion as a "rule"; When the conclusion contains new pieces of information and cannot entail the premise, as defined by For instance, if facts that are about cats and dogs are good accompaniment of humans, then some examples of a "more general" rule can be (1) mammals are good accompaniment of humans, or (2) domesticated animals are good accompaniment of humans, or (3) animals with four legs are good accompaniment of human. In these examples, the rules cover a larger scope than the facts (e.g., mammals compared to cats; domesticated animals compared to cats), and therefore the rules are "more general" than the facts. "More general" means not only about finding higher taxonomic rank, but can be in unlimited forms. For instance, if the fact is about the Sun rises and falls every day, then some examples of a "more general" rule can be (1) the Earth is the king of the universe or (2) the Earth is rotating itself. Both rule examples are "more general" than the given fact, since the rule can entail not only the given fact, but also other not mentioned facts such as the observable movements of the other stars in the Milky Way. A.11 Set up Thresholds for M2/3/4/5 Setting up thresholds is an important step for our framework, since different thresholds can lead to different inductive reasoning results. We discuss the details of setting up thresholds in the section. We design the standard for setting up thresholds based on heuristics that the thresholds should be set up that each module (in M2/3/4/5) should filter some rules but a single module should not filter too many rules (in this case, since we have many modules, there might not remain a reasonable proportion of rules left). More specifically, given a rule (and facts), M2/3/4/5 can produce a score on evaluating the validity of the rule from a specific aspect. The score is the ratio of the probability of the "yes" token and "no" token obtained from the last layer of PLM. The score is in the range of We find that getting a specific threshold for each module is more beneficial than using the default 0.5 threshold. We obtain the thresholds on the DEERLET validation set. More concretely, on the validation set, if there exists a global optimal threshold that (1) achieves the best f1 or accuracy and (2) the threshold should not be very close to 0 or 1 and (3) recall is not very close to 0 (when close to 1, it should not be in the case that the threshold accepts nearly all generated rules but should be that the threshold already rejects some rules), then the global optimal threshold is adopted; if there is no such global optimal threshold, then find a local optimal threshold that (1) achieves the best f1 or accuracy compared to its neighboring thresholds and (2) the threshold should not be very close to 0 or 1, and (3) the recall range is in [0.7, 0.9], then the local optimal threshold is adopted. A.12 More Details to Prevent Collection of Generated Trivial Rules We use a simple heuristic method to prevent collection of generated trivial rules. Specifically, only rules generated from Module 1 that is with more than 45 tokens (not 45 words) do we pass to it Module 2/3/4/5, otherwise we directly filter it. The reason that we set it up is that we find generated rules with less than 45 tokens are mostly (if not all) incomplete sentences. 
If we were to collect and label these incomplete sentences to finetune Module 2/3/4/5, the modules would mostly learn to classify whether a rule is a complete sentence, rather than the designed patterns (since label0/1/2/3 in DEERLET are all false for incomplete sentences). For this reason, all annotated data in DEERLET uses only rules that contain at least 45 tokens. Inductive Logic Programming (ILP) is a subfield of machine learning that uses FOL to represent hypotheses and data. It relies on formal language for knowledge representation and reasoning purposes. Recently,
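To make the threshold-tuning procedure in §A.11 concrete, the sketch below reads a module's score off a causal language model as the relative probability of a " yes" versus a " no" continuation, normalised to [0, 1]. The GPT-J checkpoint is the one named in the paper, but the prompt format, the single-token treatment of " yes"/" no", and the exact normalisation are assumptions of the sketch rather than the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"  # checkpoint named in the paper; any causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

YES_ID = tokenizer(" yes", add_special_tokens=False).input_ids[0]
NO_ID = tokenizer(" no", add_special_tokens=False).input_ids[0]

@torch.no_grad()
def module_score(prompt: str) -> float:
    """Score in [0, 1]: probability mass of ' yes' relative to ' no' for the
    next token after the module's prompt (task instructions + rule + facts)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    next_token_logits = model(**inputs).logits[0, -1]
    p_yes, p_no = torch.softmax(next_token_logits[[YES_ID, NO_ID]], dim=-1)
    return p_yes.item()

def passes_all_modules(prompts: dict, thresholds: dict) -> bool:
    """A generated rule survives only if every module's score clears the
    threshold tuned on the DEERLET validation set."""
    return all(module_score(prompts[m]) >= thresholds[m] for m in thresholds)
```

In practice the thresholds passed to `passes_all_modules` would come from the validation sweep described in §A.11.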
| 1,490 | 867 | 1,490 |
Life is a Circus and We are the Clowns: Automatically Finding Analogies between Situations and Processes
|
Analogy-making gives rise to reasoning, abstraction, flexible categorization and counterfactual inference -abilities lacking in even the best AI systems today. Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains. Despite their importance, analogies received little attention in the NLP community, with most research focusing on simple word analogies. Work that tackled more complex analogies relied heavily on manually constructed, hard-to-scale input representations. In this work, we explore a more realistic, challenging setup: our input is a pair of natural language procedural texts, describing a situation or a process (e.g., how the heart works/how a pump works). Our goal is to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity (e.g., blood is mapped to water).
|
The ability to find parallels across diverse domains and transfer ideas across them is one of the pinnacles of human cognition. The analogous reasoning process allows us to abstract information, form flexible concepts and solve problems based on our previous experience Surprisingly, despite analogy's important role in the way humans understand language, the problem of recognizing analogies has received relatively little attention in NLP. Most works have focused on SAT-type of analogies ("a to b is like c to d"), with recent works In this work, we focus on a different type of analogies: analogies between situations or processes. Here the input is two domains (e.g., heart and pump) and the goal is to map objects from the base domain to objects from the target domain. Importantly, the mapping should rely on a common relational structure rather than object attributes, making it challenging for NLP methods. The most influential work in this line of research is Structure Mapping Theory (SMT) Many have argued that too much human creativity is required to construct these inputs, and the analogy is already expressed in them In our work, we explore a more realistic, challenging setup. Our input is a pair of two procedural texts, describing a situation or a process, expressed in natural language. We develop an algorithm to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity. For example, the two texts in Figure Figure • We present a novel setting in computational analogy -mapping between procedural texts expressed in natural language. We develop a scalable, interpretable method to find mappings based on relational similarity. • Our method identifies the correct mappings 87% of the time for procedural texts from ProPara dataset
|
Our framework is based on Gentner's structure mapping theory (SMT) Intuitively, we want similarity to be high if the two sets share many distinct relations. For example, {provide, destroy}, should be more similar to {supply, ruin} than to {destroy, ruin} as the last set does not include any relation similar to provide. Given a pair of entities b i , b j ∈ B and a pair of entities t k , t l ∈ T , we define a similarity function measuring how similar these pairs are, in terms of the relations between them. Since sim is asymmetric, we consider both possible orderings: Objective. Our goal is to find a mapping function M : B → T ∪ ⊥ that maps entities from base to target. Mapping into ⊥ means the entity was not mapped. The mapping should be consistentno two base entities can be mapped to the same entity. We look for a mapping that maximizes the relational similarity between mapped pairs: If b i or b j maps to ⊥, sim * is defined to be 0. Our goal in this section is to find the best mapping between B and T . Our algorithm consists of four phases: we begin with a basic text processing (Section 3.1). Then, we extract potential entities and relations (Section 3.2). Since entities can be referred to in multiple ways, we next cluster the entities (Section 3.3). Finally, we find a mapping between clusters from B and T (Section 3.4). We note that our goal in this paper is to present a new task and find a reasonable model for it; many other architectures and design choices are possible and could be explored in future work. We begin by chunking the sentences in the input. As our next step is structure extraction, we first want to resolve pronouns. We apply a lightweight co-reference model Analogy is based on relational similarity; thus, we now extract relations from the text. This naturally falls under Semantic Role Labeling (SRL) We chose to use QA-SRL since it allows the questions themselves to define the set of roles, with no predefined frame or thematic role ontologies. Recent studies show that QA-SRL achieves 90% coverage of PropBank arguments, while capturing much implicit information that is often missed by traditional SRL schemes We focus on questions likely to capture useful relations for our task. We filter out "When" and "Why" questions, "Be" verbs, and questions and answers with a low probability (see Appendix A). In classical computational analogy work, entities are explicitly given, each with a unique name ("cell"). However, in our input, entities are often referred to in different ways ("the animal cell", "the cell", "cell"), which might confuse the mapping algorithm. Therefore, in this step we merge those different phrasings, resulting in a new, more refined set of entities. Since we do not know in advance the number of clusters, we use Agglomerative Clustering (Zepeda-Mendoza and Resendis-Antonio, 2013). We manually fine-tuned the linkage threshold that determines the number of clusters (see Appendix B.2 for details). We denote the resulting clusters of entities as B = {b 1 , ..., b n } and T = {t 1 , ..., t m }. Figure Our problem definition (Equation "Animal cells must also produce proteins and other organic molecules necessary for growth and repair. Ribosomes are used for this process" / "The factory synthesizes products from raw materials using machines" Ideally, we would like to infer that ribosomes produce proteins and machines synthesize products. QA-SRL only gives us partial information, but it is still useful. 
For example, both proteins and products are associated with similar questions (what is produced?, what is synthesized?), hinting that they might play similar roles. Thus, we propose a heuristic approach to approximate Equation Intuitively, the similarity score between two entities b i , t k is high if the similarity between their associated questions is high (for example, cell and factory have multiple distinct similar questions). We define this as the sum of cosine distances over their associated questions' SBERT We observe that questions are mostly of similar length (in ProPara, ∼1/3 of the questions have 3 words, ∼1/3 have 4 words, ∼1/6 have 2 words and ∼1/6 have 5 words). Note that the entities are not part of the questions. Beam Search. After computing all similarities, we use beam search to find the mapping M * (see Appendix B.2 for parameters). Figure Our research questions are as follows: • RQ1: Can we leverage our algorithm for retrieving analogies from a large dataset of procedural texts? • RQ2: Does our algorithm produce the correct mapping solution? • RQ3: Is our algorithm robust to paraphrasing the input texts? We chose to test our ideas on the ProPara dataset One of the reasons for developing a metric for analogous similarity between paragraphs is to be able to retrieve analogies from a large corpus (RQ1). To find analogies, we wish to rank all ∼76K possible pairs (over the 390 ProPara paragraphs), so that analogies rise to the top. We expect very few pairs of paragraphs to be truly analogous, while the number of pairs that might happen to have one or two strong entity matches could be significantly higher. Thus, we prefer a mapping involving more entities, even though their scores are not very strong. We chose a simple ranking formulation that balances between the number of mappings and their strength -multiplying the number of mappings by the median similarity, |M| • median(M). To the best of our knowledge, there is no baseline that solves our task. We first compare our method, FMQ, to SBERT (Reimers and Gurevych, 2019), a well-known method to derive semantically meaningful sentence embeddings for tasks like semantic textual similarity. The ranking is based on cosine similarity between paragraph embeddings. The second baseline we use is a simpler variant of our method we call Find Mappings by Verbs (FMV). FMV is identical to FMQ, but when finding a mapping (Section 3.4) we compute the similarity between the verbs of the questions instead of between the questions themselves. As verbs represent relations, which are a core part of analogies, this baseline is meant to test the additional benefit of using the questions extracted by QA-SRL. We rank the 76K possible pairs via all three methods. We annotate the top 100 pairs, as well as 40 pairs from all quartiles (bottom, middle, 25% and 75%), resulting in a total of 260 annotated pairs from each list (702 unique). The main intersections are between FMQ and FMV (25% top, 95% bottom), and between FMQ and SBERT (11% top). Labels. If the texts are not analogous to each other, we use the Not analogy label. Analogies are divided into Self analogy (entities and their roles are identical), Close analogy (a close topic, entities from a similar domain), Far analogy (unrelated topics with different entities), and Sub-Analogy (only a part of one process is analogous to a part of the other; should contain at least two similar relations.) See Table Annotation. 
We had an expert (member of our team) annotate the 702 unique pairs from the three lists in a double-blind fashion. As this is not an easy annotation, we performed two checks to assess the clarity and consistency of our annotation scheme. First, another expert from our team (highly familiar with analogies) annotated a sample of the data (containing all labels), achieving 90% agreement, with Cohen's Kappa of 0.74 for the 2-labels and 0.88 for the 5-labels. Next, we recruited 15 volunteer annotators (graduate students in CS, most with a basic knowledge of analogies). We Figure for each label, along with the correct label and an explanation. Annotators discussed the examples with the experimenter. We sampled from our expert's annotation 5 pairs for each label, resulting in 25 pairs of paragraphs. Each annotator received 5 pairs, s.t. each pair is assigned to 3 annotators. When treating our expert's annotations as ground-truth, annotators' accuracy was 0.96 for the binary (analogy/non-analogy) task, and 0.73 for the 5-class task (27% scored perfectly, 33% had one mistake). Figure Results. All methods had zero analogies in the 25%, middle, 75% and bottom samples; the only analogies found were at the top-100. At the top, SBERT found 100% analogies, FMQ reached 79% Method P AP NDCG FMV (@25) 0.68 0.36 0.4 (@50) 0.72 0.37 0.41 (@75) 0.71 0.36 0.43 (@100) 0.72 0.36 0.43 FMQ (@25) 0.96 0.5 0.57 (@50) 0.84 0.43 0.52 (@75) 0.77 0.39 0.47 (@100) 0.79 0.4 0.49 Table and FMV -72%. At first glance, it seems like SBERT is winning. However, a closer look at analogy types (Table To estimate the prevalence of analogous pairs in the data, we randomly sampled 100 more pairs and had the expert annotate them, finding 3% analogies (one sub-analogy, one self analogy and one far analogy), confirming the hardness of the task. Table In the previous section, we saw our method is able to identify analogies. However, we did not check that it indeed finds strong (true) mappings. In this section, we take a closer look at the mappings themselves and tackle RQ2 -does our algorithm produce the correct mapping? We chose 15 analogous pairs of paragraphs from ProPara, identified in the previous experiment (Section 4.1), equally divided between close, self and far (we did not take sub-analogies as they are harder for the annotators, sometimes with more than one correct answer). We assigned one paragraph to each of our 15 volunteer annotators and asked them to find the correct mapping between the entities. We note that choosing pairs of paragraphs from the previous experiment might introduce some bias. To evaluate mappings, we need pairs of analogous texts. Randomly sampling and annotating ProPara pairs would be prohibitively expensive due to sparsity of analogies in the data; we do believe that analyzing a sample of the analogies found through our previous experiment, despite the potential bias, is still interesting and worth exploring. In addition, we decided to try our algorithm on a different kind of data -analogous stories from cognitive-psychology literature (which does not suffer from the potential bias mentioned above). We used the Rattermann and Keane problems We instructed the annotators to find mappings between entities, and emphasized that the mappings should be consistent and based on the roles entities play in the texts. We showed them two examples of correct mappings with explanations. One user provided an invalid mapping for ProPara, and we discarded his annotation. 
We consider the annotators' labels as ground truth, and the algorithm's mappings as predictions. We compare the performance of FMQ and FMV (Section 4.1). Again, to the best of our knowledge, there is no baseline for computing mappings. P R F1 ProPara FMV (@1) 0.48 0.33 0.39 FMQ (@1) 0.82 0.64 0.72 FMV (@3) 0.58 0.40 0.47 FMQ (@3) 0.87 0.67 0.76 Stories FMV (@1) 0.64 0.46 0.54 FMQ (@1) 0.88 0.68 0.77 FMV (@3) 0.73 0.52 0.61 FMQ (@3) 0.94 0.76 0.84 Results. Table We note that the results on the stories are better for both methods, which is probably due to the fact that the stories are written as analogies in the first place, with explicit parallels between them. Recall analysis. We analyzed FMQ's recall errors on stories and found that recurring sources of error include filtering "Where" questions, "How" questions and "Be" verbs. Refer to Appendix A for a short discussion of our filtering design choices. These error patterns apply to ProPara as well. Our method heavily relies on the way the input texts are phrased. In this experiment, we examine robustness to paraphrasing (RQ3). Automatic paraphrases. We focus on the ProPara dataset. We chose ten paragraphs which are not analogous to each other and generated four paraphrases using wordtune, a large-scale language model We labeled the 100 pairs that came from the same original paragraph as an analogy (in fact, they are self-analogies), and the rest as non-analogy. Then, we rank all pairs via SBERT, FMV and FMQ. Table Responses to the same prompt. In the ProPara dataset, the same prompt is sometimes given to multiple authors. We now explore whether our algorithm can recognize those (self) analogies. Again, we take ten non-analogous paragraphs given to at least five authors, and randomly choose five authors for each, resulting in 1225 pairs of paragraphs, with 100 labeled as analogies. Unlike the previous experiment, the labels here are much noisier, as authors given the same prompt can focus on different aspects or granularity, resulting in non-analogous paragraphs (e.g., when describing the human life cycle, some authors mention zygotes and embryos, and others mention teenagers and adults. See Appendix E.2). Table Error analysis. Looking at our model's false negatives, we see mostly pairs of paragraphs describing the same topic from different points of view, which is really a mistake in the ground truth (see example in Appendix E.4). We note that some annotators reported it was impossible to notice two paragraphs were self-analogies without seeing the (identical) prompts. SBERT is not much affected by this, as it looks at the entire text, while our methods are "blind" to the entities. Another interesting source of false negatives is mistakes introduced by wordtune (e.g., expanding "the water builds up" to "Nitrates build up in the body of the water"). For false positives, we identify several sources of error, such as non-analogous texts with similar verbs, QA-SRL handling of phrasal verbs ("take care", "take off"), repeating verbs, and extraction issues (for example, the sentence "Water, ice, and wind hit rocks" lead Method P@25 P@50 P@75 P@100 FMV 1.00 to singleton entities and "water, ice, and wind", resulting in double-counting). Computational analogy. Classical analogymaking approaches are typically categorized as symbolic, connectionist and hybrid. See In the NLP community, most work focused on simple word analogies. 
This area has gained popularity after showing that word embeddings can model some relational similarities in terms of word vector offsets ("king -man + woman = queen" In a different setting, LRME (Turney, 2008) took as input two sets of entities (base, target) and tried to extract a common relational structure between them, mining relations from a large web corpus. Unlike our work, their setting focused on commonsense relations (e.g., electrons revolve around the nucleus), and could not handle either procedural texts (where entities go through multiple stages) or relations that are situation-specific. Also, LRME requires entities and relations to be expressed exactly the same across domains, making it brittle. Other works combined NLP and crowds to find analogies between products Aligning texts. Multi-text applications often need to model redundancies across texts. There has been much work in this area, exploiting graph-based representations Understanding procedural text has a long history in NLP, with a lot of work focusing on event extraction, tracking what happens to entities throughout the text Analogies can facilitate learning and problemsolving, helping people apply their prior experience to new situations. Much research has suggested that analogy-making is necessary for AI systems to robustly generalize and adapt to new contexts. In this work we explored a complex, challenging analogy-finding setting: our input is a pair of natural language procedural texts, describing a situation or a process. We presented a novel, scalable and interpretable method to extract entities and relations from the text and find a mapping between entities across the domains. We show that our method successfully identifies the correct mappings between the domains for both procedural texts from ProPara, and stories from cognitive-psychology literature. We demonstrate our method can be used to mine analogies from ProPara, including far, non-trivial analogies. Lastly, we show that our method is robust to paraphrases the input texts. In the future, we plan to improve our relation extraction and augment the text with commonsense knowledge to account for relations that do not appear explicitly in the text. We also plan to extend our algorithm to take the order of actions into account, and to apply our method to new domains, such as legal texts and recipes. Two particularly interesting applications are (1) education, where analogies can help a teacher explain a complex concept, and (2) computer-assisted creativity, where engineers and designers could find inspiration in distant domains. • Relation extraction: as discussed in Section 3.4, QA-SRL misses some relations (e.g., those that are expressed across multiple lines). This reduces the effectiveness of our method. For this reason, we also expect our method to work best on more technical descriptions (where there are actions and entities that can be tracked), and less on paragraphs with a narrative style. • Insensitivity to order of actions: our method does not take the order in which actions took place into consideration. For example, a sequence of actions and its reverse sequence would look analogical to the model. • Handling of phrasal verbs: QA-SRL does not handle phrasal verbs well, reducing phrases such as 'take care" and "take off" to the verb "take" ("what takes something?"). • Language: Our datasets contain solely English texts. The results may differ in other languages. 
A QA-SRL Ignore QA criteria Here we explain our criteria for ignoring QA which we parsed from the QA-SRL model. • When the probability of the question (according to QA-SRL) is less or equal to our question probability threshold (see B.2 for the chosen threshold). • When the probability of the answer (according to QA-SRL) is less or equal to the answer probability threshold (see B.2 for the chosen threshold). It means that the probability this is the correct span is too low. • When the question is not one of "what", "who" and "which". The rationale is that we focus on questions most likely to capture useful relations for our task. • When the question's verb is "be". The rationale is that "be" is not indicative enough. For example, if we have "X was something" on one text and "Y was something" on the other text, it does not indicate that X and Y play similar roles. • When the answer contains a verb or does not contain a noun. In this case, it does not an entity according to our definition. • When the answer is a pronoun. Here we discuss how we choose the value for different parameters which are used in the experiments. We note our goal in this paper was to come up with a proof-of-concept system, and further parameter tuning might improve the results. To determine the best cutoff for cosine similarity threshold between questions (FMQ) or between verbs (FMV), we did the following: We sampled 15 pairs of verbs (from ProPara paragraphs) in every range of threshold (intervals of 0.05 from 0 to 1 for questions and for verbs), then manually labeled the pairs of verbs as similar or not, and chose the threshold of 0.5. The threshold was chosen to balance between precision (percentage of correct samples from all samples passing the threshold) and an estimation of recall (percentage of correct samples from all correct samples that do manage to pass the threshold), computed using samples from all intervals. For the similarity of the questions we did the same process, but instead of verbs, we sampled questions. We found that 0.7 is the best cutoff. See Figure We choose the values for the following parameters by manual fine-tuning. For the QA-SRL settings, we set the answer probability threshold to 0.05 (range checked: 0.0-0.15) and question probability threshold to 0.1 (range checked: 0.0-0.15, intervals of 0.05), optimizing for F1-score. We set the Agglomerative clustering distance parameter to be 1.0 (range checked: 0-2 with a step size of 0.5), optimizing for cluster purity compared to a ground-truth clustering on several examples. For the SBERT embedder model, we used the pre-trained model msmacro-distilbert-base-v4 C.1 FMQ / FMV top@k comparison Here we compare our method (FMQ) and FMV on the top of their lists according to information retrieval metrics. See Figure
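To tie the parameter values above together, here is a rough sketch of the FMQ scoring and mapping steps. The SBERT checkpoint (msmarco-distilbert, as named in this appendix) and the 0.7 question-similarity cutoff come from the text; the greedy one-to-one assignment stands in for the beam search, and the data structures (clusters as name-to-question-list dictionaries) are invented for the sketch.

```python
from statistics import median
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("msmarco-distilbert-base-v4")
Q_SIM_THRESHOLD = 0.7  # question-similarity cutoff chosen above

def cluster_similarity(questions_b, questions_t):
    """Similarity between a base entity cluster and a target entity cluster:
    sum of SBERT cosine similarities over their QA-SRL questions, counting
    only question pairs above the tuned threshold."""
    if not questions_b or not questions_t:
        return 0.0
    cos = util.cos_sim(sbert.encode(questions_b, convert_to_tensor=True),
                       sbert.encode(questions_t, convert_to_tensor=True))
    return float(cos[cos > Q_SIM_THRESHOLD].sum())

def greedy_mapping(base, target):
    """Greedy stand-in for the beam search: repeatedly commit the highest
    scoring unmapped (base, target) cluster pair while consistency holds."""
    scores = {(b, t): cluster_similarity(qb, qt)
              for b, qb in base.items() for t, qt in target.items()}
    mapping, used = {}, set()
    for (b, t), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s > 0 and b not in mapping and t not in used:
            mapping[b] = t
            used.add(t)
    mapped_scores = [scores[(b, t)] for b, t in mapping.items()]
    return mapping, mapped_scores

def pair_ranking_score(mapped_scores):
    """Retrieval score for a paragraph pair: |M| * median mapped-pair similarity."""
    return len(mapped_scores) * median(mapped_scores) if mapped_scores else 0.0
```

A paragraph pair's rank in the retrieval experiment then comes from `pair_ranking_score` applied to the similarities of its mapped cluster pairs.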
| 916 | 1,838 | 916 |
Interventional Training for Out-Of-Distribution Natural Language Understanding
|
Out-of-distribution (OOD) settings are used to measure a model's performance when the distribution of the test data differs from that of the training data. NLU models are known to suffer in OOD settings (Utama et al., 2020b). We study this issue from the perspective of causality, which sees confounding bias as the reason models learn spurious correlations. While a common solution is to perform intervention, existing methods handle only a single, known confounder (Pearl and Mackenzie, 2018), whereas in many NLU tasks the confounders can be both unknown and multifactorial. In this paper, we propose a novel interventional training method called Bottom-up Automatic Intervention (BAI) that performs multi-granular intervention with identified multifactorial confounders. Our experiments on three NLU tasks, namely natural language inference, fact verification and paraphrase identification, show the effectiveness of BAI for tackling different OOD settings.
|
From the era of word embeddings Recently, causal inference has been adopted in NLP to identify robust correlations by analyzing reliable causal effects between variables (Zhang A common solution of deconfounding is intervention However, the confounder C is not always observed. Furthermore, confounders can be multifactorial in NLU, e.g., it may contain both inherent dataset bias and artifacts from crowdsourced workers. Both scenarios make intervention non-trivial. In this paper, we propose BAI, a bottom-up automatic intervention method, which can (1) identify the unobserved confounder(s) automatically, and (2) perform multi-granular intervention to handle multifactorial confounders. Inspired by We apply BAI on three OOD benchmarks for NLU tasks. The results show that our method outperforms state-of-the-art methods, e.g., achieving 7 percentage points of absolute gains from the previous best method under OOD setting of Quora Question Pairs (QQP) Contributions: (1) we analyze the issue of NLU vulnerability from the perspective of causality analysis; (2) we propose a bottom-up automatic intervention method to perform intervention for unobserved and multifactorial confounders; and (3) extensive experiments on three OOD benchmarks demonstrate that our method outperforms state-ofthe-art methods. 2 Related Work OOD Generalization. OOD settings have been studied in recent years in NLU. To tackle dataset bias, most existing work relies on instance reweighting with a bias model for debiasing. Specifically, these methods However, instance reweighting based methods rely on either prior knowledge of bias or heuristic design of the bias model. Furthermore, it is pointed out that such bias models may not be able to predict the main model's reaction of biased samples and reweighting may waste data
|
Causal Intervention is the core idea of this paper. We formulate NLU tasks with a causal graph Naïve model training, i.e., empirical risk minimization (ERM) where the bias is introduced via P (C|X). For example, consider the NLI task. Let X be a pair of two sentences (premise and hypothesis) and Y the entailment label. Let C represents the degree of lexical overlap between the two sentences in X, (3) where X e denotes the data in the environment of e and XE denotes cross-entropy loss. w is a fixed dummy classifier. The second term measures the optimality of w for each environment to encourage the model to make environment-invariant predictions. This version of IRM is unstable due to the second-order derivatives. Another version of IRM (Teney et al., 2021) adopted in our paper initializes individual classifier W e for each environment e while all environments share one feature extractor. Here we denote the model for the environment e as f e = W e • Φ where Φ is a feature extracter, e.g., BERT. The corresponding loss is written as: (4) The second term is the variance of classifier weights, which encourages optimal classifiers for different environments to be close to each other To implement intervention on NLU tasks with the unobserved and multi-factorial confounder, we propose a Bottom-up Automatic Intervention (BAI) method using IRM. Figure As shown in Figure where E is the partition of environments determined by M. Note that max operation makes the backpropagation of gradients from M infeasible. To address this issue, we deploy the Gumbel Softmax trick We term the environment matrix with n environments as M n . Specifically, we deploy automatic stratifying to extract two environments matrices, i.e., fine-grained M n 1 and coarse-grained We first generate fine-grained partition E n 1 and coarse-grained partition E n 2 from M n 1 and M n 2 (see Figure The feature extractor and the classifiers of E n 1 in bottom fine-grained intervention are optimized by: ) Then we conduct the intervention of coarsegrained partition E n 2 . To prevent the catastrophic forgetting, i.e., the intervention with new partition may make the model forget the invariant property on previous partition, we incorporate the idea from continual learning where the first term is based on the new partition E n 2 while the second term computes the variance of classifier weights across all n 1 + n 2 classifiers. Inference is based on the design of IRM v2 where W denotes the mean weight of all classifiers. The overall pipeline of BAI is summarized in Algorithm 1. with Eq. 5 and Eq. 6 5: Initialize f ref 6: for X in D do 7: Get environment e ∈ E n 1 of X from M n 1 8: Update Φ and W e with Eq. 7 9: end for 10: for X in D do 11: Get environment e ∈ E n 2 of X from M n 2 12: Update W e with Eq. 8 13: end for 4 Experiment We apply our method on three NLU tasks to evaluate the effectiveness of our method. Specifically, we train on the original training set and evaluate on both the IID and the OOD evaluation sets. The accuracy is reported for all the benchmark datasets. Natural Language Inference aims to classify the relationship between two sentences, i.e., a premise and a hypothesis, into three classes: "entailment", "contradiction" and "neutral". It has been observed that NLI models may rely on the lexical overlap bias Paraphrase Identification identifies whether a sentence is paraphrase of another sentence. A sentence pair is labeled as "duplicate" if the two sentences share the same semantic meaning, otherwise "non-duplicate". 
Similar to NLI, lexical overlap bias exists in paraphrase identification. We use QQP BERT-base In this section, we compare our method with the following baselines: Naïve Fine-tuning We also observe a trade-off between IID and OOD on MNLI and QQP across most of the methods, i.e., performance gains on OOD are achieved with the sacrifice of IID performance. It is because naïve fine-tuning fits IID training data well. Interestingly, the IID test data of FEVER benefits from debiasing methods, which suggests that the data distribution of the IID test data may be different from that of the training data. In this section, we conduct extensive ablation studies to evaluate the components in our BAI and answer the following research questions. RQ1: How does each component of BAI contribute to the performance gains? Answer: We design four ablative settings: (a) Replacing the learned environment matrix with a randomly initialized one; (b) Removing the regularizer term in Eq. 7 and 8; (c) Replacing bottom-up intervention with single intervention, i.e., removing Eq. 8. (d) Using the same number of classifiers on naïve fine-tuning model as our BAI. As reported in method and the improvement of our method is not from the added parameters (3) Prior knowledge of bias, i.e., lexical overlap bias in Figure As summarized in Table Dev HANS We further investigate the crowdsourced worker preference in E 2 , i.e., the difficulty of the samples in these two environments is distinguishable. Samples in the second environment are more challenging compared to the first one. As depicted in Figure In this paper, we explore how to improve the robustness of NLU models under OOD setting, and propose a bottom-up automatic intervention method for debiasing. The experiment results demonstrate the superiority of our model over state-of-the-art methods. In future work, we will consider two improvements on BAI. First, we target at an end-toend framework for intervention and dynamic learn the partition of environment for NLU tasks. Second, we want to ease the trade-off effect between IID and OOD sets. The limitations of this paper are twofold. First, the proposed method is only evaluated on natural language understanding tasks. Thus the effectiveness on natural language generation tasks and sequence labeling tasks is not guaranteed. Similarly, the optimal hyper-parameters for other tasks may also differ from the selections stated in this paper. Second, the performance trade-off (see Table
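A minimal PyTorch sketch of the training machinery described in Section 3: a Gumbel-Softmax environment assignment over a learnable environment matrix, per-environment classifiers on a shared feature extractor, and a loss that adds a variance penalty over classifier weights. The encoder interface, the per-sample environment-logit parameterisation, and the batching-by-environment are assumptions made for the sketch, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

class BAIModel(torch.nn.Module):
    """Shared feature extractor Phi (e.g. a BERT pooled output) with one linear
    classifier per environment, plus a learnable environment matrix whose rows
    assign training samples to environments via Gumbel-Softmax."""
    def __init__(self, encoder, hidden_size, num_classes, num_envs, num_samples):
        super().__init__()
        self.encoder = encoder
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(hidden_size, num_classes) for _ in range(num_envs)])
        self.env_logits = torch.nn.Parameter(torch.zeros(num_samples, num_envs))

    def assign_env(self, sample_ids, tau=1.0):
        # straight-through Gumbel-Softmax in place of the non-differentiable argmax
        return F.gumbel_softmax(self.env_logits[sample_ids], tau=tau, hard=True)

    def forward(self, x, env_id):
        return self.heads[env_id](self.encoder(x))

def interventional_loss(model, batches_per_env, lam=1.0):
    """Sum of per-environment cross-entropy terms plus a penalty on the
    variance of classifier weights across environments, which pushes the
    environment-specific optimal classifiers towards one another."""
    ce = sum(F.cross_entropy(model(x_e, e), y_e)
             for e, (x_e, y_e) in enumerate(batches_per_env))
    weights = torch.stack([h.weight.flatten() for h in model.heads])
    return ce + lam * weights.var(dim=0).mean()

def mean_classifier(model):
    """Inference uses the mean of all classifier weights, as described above."""
    w = torch.stack([h.weight for h in model.heads]).mean(dim=0)
    b = torch.stack([h.bias for h in model.heads]).mean(dim=0)
    return w, b
```

The bottom-up schedule would run this loss first with the fine-grained partition (updating the encoder and its heads), then with the coarse-grained partition while the variance term spans all classifiers, as in Algorithm 1.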
| 973 | 1,811 | 973 |
Food Knowledge Representation Learning with Adversarial Substitution
|
Knowledge graph embedding (KGE) has been well-studied in general domains, but has not been examined for food computing. To fill this gap, we perform knowledge representation learning over a food knowledge graph (KG). We employ a pre-trained language model to encode entities and relations, thus emphasizing contextual information in food KGs. The model is trained on two tasks -predicting a masked entity from a given triple from the KG and predicting the plausibility of a triple. Analysis of food substitutions helps in dietary choices for enabling healthier eating behaviors. Previous work in food substitutions mainly focuses on semantic similarity while ignoring the context. It is also hard to evaluate the substitutions due to the lack of an adequate validation set, and further, the evaluation is subjective based on perceived purpose. To tackle this problem, we propose a collection of adversarial sample generation strategies for different food substitutions over our learnt KGE. We propose multiple strategies to generate high quality context-aware recipe and ingredient substitutions and also provide generalized ingredient substitutions to meet different user needs. The effectiveness and efficiency of the proposed knowledge graph learning method and the following attack strategies are verified by extensive evaluations on a large-scale food KG.
|
Structured knowledge furnishes an in-depth understanding of the world. Knowledge graph embedding (KGE) maps entities and relations into vectors while retaining their semantics domain-specific KGs As for encoding models in KGE, most deep learning-based methods like convolutional neural networks (CNN) Large-scale food data offers rich knowledge that can help many issues related to healthy eating behaviors. Among various food related research, the food substitution problem is gaining increasing attention owing to its applicability in tasks like food question answering Previous work discovers suitable substitution options based on semantic similarity via explicit substitution rules and additional context Massive food KGs have become good sources for suggesting substitutions, since they provide unified and standardized concepts and their relationships in structured form, which is very valuable for food related studies. However, KGs often suffer from sparseness if one only uses structure information in observed triple facts To tackle the above issues, we conduct textual adversarial attack on our learnt KGE model. We utilize a masked language model to generate high quality adversarial samples which finds substitutions that maximize the risk of making wrong assertions on KG triple plausibility prediction. We employ the generated adversarial samples as food substitutions. Furthermore, to meet the different food substitution purposes, we design a collection of attack strategies to generate three types of food substitutions: context-aware recipe substitutions, context-aware ingredient substitutions and generalized ingredient substitutions. In order to generate context-aware recipe substitutions, we first find the vulnerable tokens in recipes, defined as those that trigger an error in a target prediction model. Next, we apply a masked language model in a semanticpreserving way to generate substitutes, with flexibility to replace, add, or delete vulnerable tokens. The generation of context-aware ingredient substitutions is similar to recipe substitutions but only valid ingredients are selected as substitutions. The two types of substitutions are naturally aware of context since they are generated from a pre-trained language model, taking advantage of its superiority in contextualized information and rich linguistic knowledge. For the generalized ingredient substitutions, the adversarial attack is conducted among triples formed from all the ingredient's neighbors in the KG. A successful attack is achieved only when the adversarial sample fools most of its neighbors, preventing it to be contextualized to any specific neighbor. The contribution of our work is twofold: First, we address the sparseness problem in food KG and enrich its representation through the retraining of a pre-trained language model on two tasks -masked entity and triple plausibility prediction. Second, we conduct the food substitution work over KGs to leverage the structured and large-scale knowledge. We propose a novel collection of attack strategies to create different types of food substitutions. We are the first to deeply generate food substitutions in an adversarial attack manner, thus avoiding the problem of substitutions ground truth shortage. Both automatic and human evaluations show the high quality of our food substitutions.
|
The models that encode the interactions of entities and relations in knowledge graphs can be categorized into: linear/bilinear models, factorization models, and neural networks. Among the neural networks-based models, Convolutional Neural Networks (CNNs) are utilized for learning deep expressive features Previous work on food substitutions is mainly based on semantic similarity with explicit substitution rules such as food taxonomy and food subclass information An increasing amount of effort is being devoted to generating better textual adversarial examples with various attack methods. There are a lot of attack models to explore synonym substitution rules to enhance semantic meaning preservation In this section, we first encode a food KG into a pretrained language model (BERT) to learn entity and relation representations. Then, we conduct attacks on BERT to generate different types of adversarial samples as food substitutions. Given a KG G composed of head-relation-tail triples {(h, r, t)}. Each triple indicates a relation r ∈ R between two entities h, t ∈ E, where E and R are the entity and relation sets. The entities in food KG are recipes and ingredients. Here we formulate the triple (h, r, t) as a path h → r → t, e.g., banana bread → consist_of → all purpose flour. The input to the model can be one triple or multiple triples of the form h → r → t. The first token of every input path is always a special classification token Note that different elements separated by [SEP] have different segment embeddings: the tokens head and tail entities share the same segment embedding e A , while the tokens in relation have another segment embedding e B . For token x h i in head entity, we construct its input representation as , where x h i and p h i are the token and position embeddings. After constructing all input representations, we feed them into a stack of L Transformer encoders The final hidden states T h i ∈ R H are taken as the desired representations for entities and relations within X, where H is the hidden state size. These representations are naturally contextualized, and automatically adaptive to the input. Afterwards, the encoding model is retrained with two tasks: predicting a masked ingredient entity and predicting the plausibility of a triple. During training, for each input path X = {x h 1 , . . . , x h a , x r 1 , . . . , x r b , x t 1 , . . . , x t c }, we create the training instance by replacing the head entity or tail entity with a special token [MASK] if it is an ingredient. Then, the masked sequence is fed into the Transformer encoding blocks. The final hidden state corresponding to [MASK] is used to predict the target entity: where } over all ingredients. Here we only do masked ingredient entity prediction because the vocabulary size of recipes is too large for training. We compute a cross-entropy loss over the one-hot label y t and the prediction u t : Predicting the plausibility of a triple Given triples that reveal rich graph structures, similar to knowledge graph embeddings where W ∈ R 1×H is a trainable parameter and s τ ∈ [0, 1] is the triple plausibility score. Given the positive triple set D + and a negative triple set D -, we compute the cross-entropy loss with s τ and triple labels: where y τ ∈ {0, 1} is the triple label. The negative triple set D -is simply generated by replacing head entity h or tail entity t in a positive triple (h, r, t) ∈ D + with a random entity, that is, via negative sampling. 
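To make the two training objectives concrete, below is a minimal sketch of the triple encoder: a path [CLS] head [SEP] relation [SEP] tail [SEP] with separate segment embeddings for entities and the relation, a masked-ingredient head over an ingredient vocabulary, and a plausibility head on the [CLS] state. A small randomly initialized Transformer stands in for pretrained BERT, and all vocabulary sizes and token ids are toy values rather than anything from the paper.

```python
# Minimal sketch of the triple encoder with its two training objectives (toy sizes and ids).
import torch
import torch.nn as nn

class TripleEncoder(nn.Module):
    def __init__(self, vocab_size, n_ingredients, hidden=128, layers=2, heads=4, max_len=32):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)
        self.seg = nn.Embedding(2, hidden)              # e_A for entities, e_B for the relation
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, 4 * hidden, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.mask_head = nn.Linear(hidden, n_ingredients)   # masked-ingredient prediction
        self.plaus_head = nn.Linear(hidden, 1)              # triple plausibility from [CLS]

    def forward(self, tokens, segments):
        positions = torch.arange(tokens.size(1), device=tokens.device).unsqueeze(0)
        h = self.tok(tokens) + self.pos(positions) + self.seg(segments)
        return self.encoder(h)                               # (batch, seq, hidden)

# Toy ids: 0=[PAD] 1=[CLS] 2=[SEP] 3=[MASK]; the rest index a shared token vocabulary.
model = TripleEncoder(vocab_size=100, n_ingredients=50)
tokens   = torch.tensor([[1, 10, 11, 2, 20, 2, 3, 2]])   # [CLS] banana bread [SEP] consist_of [SEP] [MASK] [SEP]
segments = torch.tensor([[0, 0, 0, 0, 1, 1, 0, 0]])      # entity tokens -> e_A, relation tokens -> e_B
gold_ingredient = torch.tensor([7])                       # id of the masked tail ingredient
gold_plausible  = torch.tensor([[1.0]])                   # 1 = positive triple, 0 = negative sample

states = model(tokens, segments)
loss_mask  = nn.functional.cross_entropy(model.mask_head(states[:, 6]), gold_ingredient)
loss_plaus = nn.functional.binary_cross_entropy_with_logits(model.plaus_head(states[:, 0]), gold_plausible)
(loss_mask + loss_plaus).backward()
print(float(loss_mask), float(loss_plaus))
```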
After training the knowledge graph embedding model, we conduct attacks to generate feasible adversarial samples as recipe, ingredient and generalized ingredient substitutions, respectively, with three different attack strategies. We utilize an attack model to find vulnerable tokens in KG triples τ = (h, r, t) and replace them with generated substitutions that maximize the risk of making wrong assertions on a target model. Here we assume it is a KG triple plausibility classifier f r (h, t) since we have used it in our preceding KGE model. An adversarial entity t ′ is supposed to modify the text in t to trigger an error in the target model f r (h, t). For simplicity, we assume the tail entity t (it can also be the head entity h and recipe entities are always in the head of triples) is formatted as t = {x 1 , . . . , x i , . . . , x c }. At the same time, perturbations on t should be minimal, such that t ′ is close to t. There are lots of efforts being devoted to generating adversarial examples with various textual attack models on BERT ii) t ′ should be semantically similar to t, sim(t ′ , t) > d, where sim(t ′ , t) denotes the cosine similarity between representations of t ′ and t. iii) When placing t ′ in the retrained BERT model for KG triple plausibility classification, f r (h, t ′ ) yields low probability for the gold label y τ which indicates that t ′ can trigger an error in the target model. Under the attack theory, it might seem contradictory to treat t ′ as a food substitution, given that the triple (h, r, t ′ ) is less plausible in the KG. However, our assumption is the food KG is sparse (which it is in practice). The plausibility of the triple formed from food substitution cannot be a standard to judge the quality of the substitution, since it can be a potential triple missed in the KG. Thus, a better gauge of the plausibility is based on the semantic similarity of the substitution or human evaluation, as done in our experiments. Since recipes are usually short phrases, instead of mask-then-infill permutation, we consider more flexible actions to generate adversarial samples by replacing, adding, and deleting tokens. For every t obtained from the above three actions, we estimate the action score by computing the decrease in probability of predicting the correct label y τ . The action score I i is defined as: where o yτ (•) denotes the logit output by the target model for correct label y τ . To conduct the attack on BERT, we sequentially apply this attack strategy over t until an adversarial example t ′ is found or a limit of permutation action M is reached. We filter the set of top K tokens (K is a pre-defined constant) predicted by the masked language model for the masked token according to condition ii). To represent t and t ′ , previous work in textual adversarial attack often uses the universal sentence encoder Different from recipes, most ingredients only consist of 1-3 words. The plausibility of generated in-gredient substitutions is vital in our task. Therefore, we conduct entity-level perturbation on KG triples. We reuse the masked BERT model in Section 3.1 to detect vulnerable entities and suggest candidate ingredients. The attack process is similar to the attack on recipes. For instance, "mozzarella cheese" can be substituted with "cream cheese" in triple (Philly cheese steak pizza, consist_of, mozzarella cheese), where "cream cheese" is picked from the ingredient vocabulary. 
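A rough sketch of the attack loop described above: replace/add/delete perturbations of a tail entity are scored by the drop they cause in the target model's score for the gold label and filtered by a similarity threshold. The functions `target_logit`, `mlm_candidates`, and `similarity` are hypothetical stubs standing in for the retrained BERT plausibility classifier, the masked language model, and the entity encoder; the control flow, not the stubs, is the point, and the 0.5 success threshold is a toy choice.

```python
# Sketch of the context-aware attack: score candidate perturbations of the tail entity by the
# decrease they cause in the gold-label score, keep semantically similar ones, stop on success.
import random

def target_logit(head, relation, tail):
    random.seed(hash((head, relation, tail)) % 10_000)
    return random.random()                      # stub for the gold-label score o_y of the classifier

def mlm_candidates(tokens, position, k=5):
    return [f"cand{j}" for j in range(k)]       # stub for the top-K masked-LM fillers

def similarity(a, b):
    return 0.9                                  # stub for cosine similarity of representations

def attack(head, relation, tail, sim_threshold=0.8, max_actions=10):
    tokens = tail.split()
    base = target_logit(head, relation, tail)
    for _ in range(max_actions):
        best = None                             # (action score I_i, candidate token list)
        for i in range(len(tokens)):
            edits = []
            for cand in mlm_candidates(tokens, i):
                edits.append(tokens[:i] + [cand] + tokens[i + 1:])   # replace token i
                edits.append(tokens[:i] + [cand] + tokens[i:])       # add a token at position i
            edits.append(tokens[:i] + tokens[i + 1:])                # delete token i
            for new_tokens in edits:
                cand_tail = " ".join(new_tokens)
                if not cand_tail or similarity(cand_tail, tail) <= sim_threshold:
                    continue
                score = base - target_logit(head, relation, cand_tail)   # action score I_i
                if best is None or score > best[0]:
                    best = (score, new_tokens)
        if best is None:
            break
        tokens = best[1]
        if target_logit(head, relation, " ".join(tokens)) < 0.5:     # attack succeeded (toy threshold)
            return " ".join(tokens)
    return None

print(attack("banana bread", "consist_of", "all purpose flour"))
```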
The ingredient generated in such a way can provide reasonable substitution for a particular recipe when recipe and ingredient make up the head and tail entities in a KG triple (h, r, t). Moreover, we introduce a new attack strategy to produce more generalized ingredient substitutions since there are also many scenarios asking for ingredient substitution for general purpose without any context. Given an ingredient entity t, we retrieve its neighbors N t in KG and form N triples {(h, r, t)|h ∈ N t }, note that a neighbor entity can also be a tail entity t in this triple set, we denote it as h for simplicity. Then, we obtain a candidate ingredient set Z via our pretrained masked BERT model. For every ingredient candidate z in Z, we iteratively apply attack over f r (h, t) and record the attack success rate α until it reaches a threshold determined by βN (β is a pre-defined constant). Since the adversarial attack is conducted among all t's neighbor, a successful attack is achieved only when the adversarial sample t ′ fools most of its neighbors N t . Therefore, the t ′ is regulated by N t , preventing it to be contextualized to any specific neighbor. An an example of generalized substitution, given an ingredient entity "couscous", we first retrieve all its neighbors in the food KG, forming a triple set {(h, r, t)|h ∈ N t }. The masked language model suggests {"quinoa", "sorghum", "millet", • • • } as the candidate substitution set. When conducting the adversarial attack, "quinoa" successfully attacks the target model f r (h, t ′ ) over βN times, thus we take "quinoa" as the generalized substitution of "couscous". Comparing to other candidates, triple (pesto chicken wrap with sun dried tomatoes, consist_of, quinoa) triggers an error in triple plausibility prediction, whereas triples (pesto chicken wrap with sun dried tomatoes, consist_of, sorghum) and (pesto chicken wrap with sun dried tomatoes, consist_of, millet) are predicted as true. Engaging more entity neighbors from the KG to conduct attacks makes the final substitution more generic. We use the FoodKG We compare our BERT-based KGE model with some typical KGE methods with regards to encoding models, including: • Linear models: TransE • CNN/GNN models: ConvE • Transformer-based models: KG-BERT We follow previous work on textual adversarial attack We compare our method with recent state-of-theart adversarial attack methods against pre-trained language models as follows: • BERT-Attack • BAE (Garg and Ramakrishnan, 2020): Similar to BERT-Attack, while BAE allows adding a token via perturbation. • CLARE We perform adversarial attacks on our KGE model and summarize the results in Table We observe that BERT-attack and BAE models have close performance. BERT-attack only replaces tokens. BAE allows adding a token while it inserts only near the replaced token, thus limiting its attacking capability. CLARE uses three different perturbations (Replace, Insert and Merge), each allowing efficient attacking against any position of the input, and can produce outputs of varied lengths. Our model's attack strategy is similar to CLARE for recipe substitution, with a different action scoring function. It is reasonable that CLARE performs close to our model. For ingredient substitution, the three baselines It is important to note that our main focus is not purely on successful attacks, but rather on the quality of generated samples. 
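The generalized strategy reduces to a simple counting loop: a candidate from the masked language model replaces the ingredient only if it flips the plausibility prediction for at least a beta fraction of the ingredient's neighbors in the KG. Here `neighbors`, `candidates`, and `is_plausible` are toy stubs for the KG lookup, the masked-LM suggestions, and the triple classifier; the couscous/quinoa values mirror the example in the text but are hard-coded.

```python
# Sketch of generalized ingredient substitution: accept a candidate only if it fools the
# plausibility classifier for at least beta * N of the ingredient's KG neighbors.
def neighbors(ingredient):
    return ["pesto chicken wrap", "tabbouleh salad", "vegetable stew"]   # toy neighbor recipes

def candidates(ingredient):
    return ["quinoa", "sorghum", "millet"]                               # toy masked-LM suggestions

def is_plausible(head, relation, tail):
    return tail != "quinoa"              # toy classifier: only "quinoa" triggers prediction errors

def generalized_substitution(ingredient, relation="consist_of", beta=0.6):
    nbrs = neighbors(ingredient)
    threshold = beta * len(nbrs)
    for cand in candidates(ingredient):
        # a "successful attack" = the classifier wrongly rejects the substituted triple
        successes = sum(1 for h in nbrs if not is_plausible(h, relation, cand))
        if successes >= threshold:
            return cand
    return None

print(generalized_substitution("couscous"))   # -> "quinoa" under the toy stubs
```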
Therefore, to further examine the quality of the food substitutions and compare with previous adversarial attack work CLARE (2020) has created a ground truth dataset for generalized ingredient substitutions. Thus, we evaluate the semantic similarity between the ground truth ingredient substitution and the substitutes provided in Shirai et al. ( In order to have a deep understanding of the adversarial samples, we conduct qualitative analysis over the three types of food substitutions. We observe the following: • Recipe substitution: i) We have three perturbation actions during recipe substitution generation process. We calculate the action scores of these three and do perturbation according to the action with the highest score. In our final results, the replace action occurs most, accounting for 74.5% of the entire recipe substitutions. The noun token in recipes has a higher chance to be detected as a vulnerable token. The delete action often results in merging two noun tokens into one and the add action tend to insert tokens into noun phrase bi-grams. the three actions. For example, the token "blueberry" in recipe "the sweetest blueberry muffins" listed in Table • Ingredient substitution: i) Rare ingredients with low frequency in the ingredient vocabulary (occurring less than 50 times in all triples) tend to be detected as vulnerable and are replaced by more common ones. As demonstrated in Table 5, "poppy seed dressing" is substituted by "sesame seed dressing" in "chicken salad rollups appetizer". This can be useful in practice, since people often ask for a substitute when an ingredient is not at hand. ii) Most ingredients are suggested different substitutions in different recipes. As shown in Table • Generalized ingredient substitution: We report some generalized ingredient substitutions that have successfully attacked the KGE model over 100 times. The results are listed in Appendix, Table In this work, we proposed a novel framework to learn food KG embeddings via a pre-trained language model and generate high quality food substitutions by conducting attacks in the language model. Specifically, we addressed the sparseness problem in food KG and enriched its contextualized representation via the retraining of BERT model on two tasks. We then employed a masked language model to iteratively generate feasible food substitutions via adversarial attacks on KGE. We further invented a collection of attack strategies to generate three types of food substitutions to meet different user needs: namely, contextualized recipe and ingredient substitutions for substitution queries with a given context, and generalized ingredient substitutions for general substitution purpose. For future work, we aim to take the health or nutrition information into consideration during adversarial sample generation, thus guiding healthier dietary choices for people.
| 1,360 | 3,350 | 1,360 |
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation
|
Knowledge distillation (KD) is the preliminary step for training non-autoregressive translation (NAT) models, which eases the training of NAT models at the cost of losing important information for translating low-frequency words. In this work, we provide an appealing alternative for NAT: monolingual KD, which trains the NAT student on external monolingual data with an AT teacher trained on the original bilingual data. Monolingual KD is able to transfer both the knowledge of the original bilingual data (implicitly encoded in the trained AT teacher model) and that of the new monolingual data to the NAT student model. Extensive experiments on eight WMT benchmarks over two advanced NAT models show that monolingual KD consistently outperforms the standard KD by improving low-frequency word translation, without introducing any computational cost. Monolingual KD enjoys desirable expandability: given more computational budget, it can be further enhanced by combining it with the standard KD, with a reverse monolingual KD, or by enlarging the scale of the monolingual data. Extensive analyses demonstrate that these techniques can be used together profitably to further recall the useful information lost in the standard KD. Encouragingly, combining with standard KD, our approach achieves 30.4 and 34.1 BLEU points on the WMT14 English-German and German-English datasets, respectively. Our code and trained models are freely available at
|
Non-autoregressive translation (NAT, Although the standard KD on original bilingual data eases the training of NAT models, distillation may lose some important information in the raw training data, leading to more errors on predicting low-frequency words Specifically, we leverage the monolingual data to perform KD (monolingual KD, §2.2), and train the NAT student model on the distilled monolingual data (Figure Furthermore, we analyze the bilingual links in the bilingual and monolingual distilled data from two alignment directions (i.e. source-to-target and target-to-source). We found that the monolingual KD makes low-frequency source words aligned with targets more deterministically compared to bilingual KD, but both of them fail to align lowfrequency words from target to source due to information loss. Starting from this finding, we propose reverse monolingual KD to recall more alignments for low-frequency target words. We then concatenate two kinds of monolingual distilled data (bidirectional monolingual KD, §2.3) to maintain advantages of deterministic knowledge and lowfrequency information. We validated our approach on several translation benchmarks across scales (WMT14 En↔De, WMT16 Ro↔En, WMT17 Zh↔En, and WMT19 En↔De) over two advanced NAT models: Mask Predict • Monolingual KD achieves better performance than the standard KD in all cases, and the proposed bidirectional monolingual KD can further improve performance by a large margin. • Monolingual KD enjoys appealing expandability: enlarging the scale of monolingual data consistently improves performance until reaching the bottleneck of model capacity. • Monolingual KD is complementary to the standard KD, and combining them obtains further improvement by alleviating two key issues of NAT, i.e., the multimodality problem and the low-frequency word translation problem. The paper is an early step in exploring monolingual KD for NAT, which can narrow the performance gap between NAT models and the SOTA AT models. We hope the promising effect of monolingual KD on NAT can draw more interest and can make NAT a common translation framework. 2 Redistributing Low-Frequency Words
|
Non-Autoregressive Translation Recent years have seen a surge of interest in NAT Standard Knowledge Distillation Knowledge distillation is the preliminary step for training NAT models by reducing the modes in the original bilingual data, which makes NAT easily acquire more deterministic knowledge and achieve significant improvement Different Distributions of Source Words To empirically reveal the difference on word distribution between bilingual and monolingual data, we visualize the overall word distributions, as plotted in Figure Our Approach Researches and competitions have shown that fully exploiting the monolingual data is at the core of achieving better generalization and accuracy for MT systems Intuitively, the monolingual KD can embed both the knowledge of the original bilingual data (implicitly encoded in the trained teacher model) and that of the newly introduced monolingual data. The comprehensive experiments in the following section provide empirical support for our hypothesis. In addition, the complementarity between the bilingual and monolingual data makes explicitly combining Standard KD and Monlingual KD can further improve model performance. KD" denotes the standard KD on source-language data, and " ← -KD" denotes reverse KD on target-language data. The subscripts B and M represent Bilingual and Monolingual distilled data. Recalling Low-Frequency Target Words KD simplifies the training data by replacing lowfrequency target words with high-frequency ones Table Our Approach (Bid. Monolingual KD) Based on the above observations, we propose to train NAT models on bidirectional monolingual data by concatenating two kinds of distilled data. Like back-translation ← → KD M ) is used to train the final NAT model. We expect that the better alignments of LFW links can lead to overall improvement of translation performance. Bilingual Data We conducted experiments on two widely-used NAT benchmarks: WMT14 English-German and WMT16 English-Romanian tasks, which consist of 4.5M and 0.6M sentence pairs respectively. To prove the universality of our approach on large-scale data, we also validated on WMT17 English-Chinese and WMT19 English-German tasks, which consist of 20.6M and 36.8M sentence pairs respectively. We shared the source and target vocabularies, except for En↔Zh data. We split the training data into subword units using byte pair encoding (BPE) Monolingual Data We closely followed previous works to randomly sample monolingual data from publicly available News Crawl corpus Model Training We validated our approach on two state-of-the-art NAT models: • MaskPredict [MaskT, • Levenshtein Transformer [LevT, In this section, we evaluated the impact of different components of the monolingual KD on WMT14 En-De validation sets. is complementary to standard KD (i.e. "+ -→ KD B " column). As seen, standard KD consistently improves translation performance across monolingual KD variants. Another interesting finding is that although reverse monolingual KD ( ← -KD M ) significantly underperforms its forward counterpart ( -→ KD M ) when used alone, they achieve comparable performance when using together with standard KD. We discuss in details how the two KD models complement each other in Section 3.4. Impact of Monolingual Data Sampling Some researchers may doubt that our approach heavily depends on the sampled monolingual data. To dispel the doubt, we investigated whether our model is robust to the selected monolingual data by varying the sampling strategies. 
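The data side of bidirectional monolingual KD reduces to two translation passes and a concatenation, sketched below with stub teachers; `forward_teacher` and `backward_teacher` are hypothetical stand-ins for the autoregressive models trained on the original bilingual data.

```python
# Sketch of constructing the bidirectional monolingual KD training set: a forward AT teacher
# distills source-language monolingual text, a backward teacher distills target-language
# monolingual text, and the two synthetic corpora are concatenated to train the NAT student.
def forward_teacher(src_sentence):          # source -> target (e.g. En -> De), stub
    return f"DE({src_sentence})"

def backward_teacher(tgt_sentence):         # target -> source (e.g. De -> En), stub
    return f"EN({tgt_sentence})"

def monolingual_kd(src_mono, tgt_mono):
    forward_kd = [(s, forward_teacher(s)) for s in src_mono]      # ->KD_M
    reverse_kd = [(backward_teacher(t), t) for t in tgt_mono]     # <-KD_M
    return forward_kd + reverse_kd                                # <->KD_M: simple concatenation

src_mono = ["the cat sat on the mat", "rare tokens matter"]
tgt_mono = ["die Katze sitzt auf der Matte"]
for pair in monolingual_kd(src_mono, tgt_mono):
    print(pair)
```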
Specifically, we conducted experiments on the full set of monolingual data from News Crawl 2007∼2020, which consist of 243M English and 351M German sentences. We compared with two representative approaches that sampled data with different priors: (1) LOW-FREQ samples difficult examples containing lowfrequency words As listed in Table To verify the effectiveness of our method across different data sizes, we further experimented on two widely-used large-scale MT benchmarks, i.e. WMT17 En↔Zh and WMT19 En↔De. As listed in rectional monolingual KD outperforms standard KD by averagely +1.9 and +2.3 BLEU points on En↔Zh and En↔De datasets, respectively, demonstrating the robustness and effectiveness of our monolingual KD approach. By combining with standard KD, our methods can achieve further +1.8 and +0.9 BLEU improvements. In this section, we provide some insights into how monolingual KD works. We report the results on WMT14 En-De data using Mask-Predict. Alignment We first present data-level qualitative analyses to study how monolingual KD complements bilingual KD. where T x denotes the length of the source sentence, x and y represent a word in the source and target vocabularies, respectively. We run fast-align on each parallel corpus to obtain word alignment. For fair comparison, we sampled the subsets (i.e. 4.5M) of " ← → KD M " and " ← → KD M + -→ KD B " to perform complexity computation. As seen in bilingual data (1.95 vs. 3.67), and monolingual KD reduces even more data complexity. Additionally, the data complexity can be further reduced by combining with standard KD. Monolingual KD Mainly Improves Low-Frequency Word Translation We first followed In this section, we provide some potential directions to further improve NAT performance by making the most of monolingual data. Exploiting Monolingual Data at Scale One strength of monolingual KD is the potential to exploit more monolingual data to further improve translation performance. To validate our claim, we scaled the size of monolingual data by {2×, 5×, 10×}, which are randomly sampled from the full set of monolingual data. As shown in enlarging the monolingual data consistently improves the BLEU scores, while this trend does not hold when further scaling the monolingual data (i.e. 10×). One possible reason is that the limited capacity of NAT-base models cannot fully exploit the large data, which suggests future exploration of larger NAT architectures. Augmenting AT Teacher with Monolingual KD An alternative to exploit monolingual data is to strength the AT teacher with monolingual KD, as listed in Table To bridge the performance gap, a number of recent efforts have explored, including model architectures Closely related to our work, Zhou and Keung (2020) improved NAT models by augmenting source-side monolingual data. Their work can be regarded as a special case of our approach (i.e. "Mono. KD + Standard KD" in Section 3.3), and our work has several more contributions. Firstly, we demonstrated the effectiveness of using only monolingual KD for NAT models, which can achieve better performance than the standard KD without introducing any computational cost. Secondly, we proposed a novel bidirectional monolingual KD to exploit both the source-side and target-side monolingual data. Finally, we provide insights into how monolingual KD complements the standard KD. In this work, we propose a simple, effective and scalable approach -monolingual KD to redistribute the low-frequency words in the bilingual data using external monolingual data. 
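The data-complexity comparison above rests on an alignment-based conditional entropy of target words given the source words they are aligned to. The snippet below is a minimal sketch of one standard formulation of such a measure; the toy alignment links stand in for fast-align output, and the exact normalization may differ from the authors'.

```python
# Alignment-based data complexity: conditional entropy of target words given aligned source
# words. Lower entropy indicates more deterministic (simpler) training data.
import math
from collections import Counter, defaultdict

def conditional_entropy(aligned_pairs):
    """aligned_pairs: iterable of (source_word, target_word) alignment links."""
    joint = Counter(aligned_pairs)
    src_totals = defaultdict(int)
    for (src, _), c in joint.items():
        src_totals[src] += c
    total = sum(joint.values())
    h = 0.0
    for (src, _), c in joint.items():
        p_joint = c / total
        p_cond = c / src_totals[src]            # P(target | source)
        h -= p_joint * math.log(p_cond, 2)
    return h

raw_links       = [("Haus", "house"), ("Haus", "home"), ("Haus", "building"), ("Katze", "cat")]
distilled_links = [("Haus", "house"), ("Haus", "house"), ("Haus", "house"), ("Katze", "cat")]
print(conditional_entropy(raw_links), conditional_entropy(distilled_links))   # distilled is lower
```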
Monolingual KD consistently outperforms the standard KD, with higher translation accuracy on low-frequency words, which we attribute to its strength of exploiting both the knowledge of the original bilingual data (implicitly encoded in the parameters of the AT teacher) and that of the new monolingual data. Monolingual KD enjoys appealing expandability and can be further enhanced by (1) combining it with a reverse monolingual KD to recall more alignments for low-frequency target words; (2) combining it with the standard KD to explicitly merge both types of complementary knowledge; (3) enlarging the scale of monolingual data, which is cheap to acquire. Our study empirically indicates the potential to make NAT a practical translation system. Future directions include designing advanced monolingual KD techniques and validating on larger-capacity NAT models (e.g., the BIG setting) to strengthen the power of monolingual KD, as well as on fully NAT models.
| 1,430 | 2,160 | 1,430 |
A Walk-based Model on Entity Graphs for Relation Extraction
|
We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walk representations. We show that the model achieves performance comparable to state-of-the-art systems on the ACE 2005 dataset without using any external tools.
|
Relation extraction (RE) is a task of identifying typed relations between known entity mentions in a sentence. Most existing RE models treat each relation in a sentence individually Multiple relations in a sentence between entity mentions can be represented as a graph. Neural graph-based models have shown significant improvement in modelling graphs over traditional feature-based approaches in several tasks. They are most commonly applied on knowledge graphs (KG) for knowledge graph completion In this study, we propose a neural relation extraction model based on an entity graph, where entity mentions constitute the nodes and directed edges correspond to ordered pairs of entity mentions. The overview of the model is shown in Figure The contributions of our model can be summarized as follows: • We propose a graph walk based neural model that considers multiple entity pairs in relation extraction from a sentence. • We propose an iterative algorithm to form a single representation for up-to l-length walks between the entities of a pair. • We show that our model performs comparably to the state-of-the-art without the use of external syntactic tools.
|
The goal of the RE task is given a sentence, entity mentions and their semantic types, to extract and classify all related entity pairs (target pairs) in the sentence. The proposed model consists of five stacked layers: embedding layer, BLSTM Layer, edge representation layer, walk aggregation layer and finally a classification layer. As shown in Figure The embedding layer involves the creation of n w , n t , n p -dimensional vectors which are assigned to words, semantic entity types and relative positions to the target pairs. We map all words and semantic types into real-valued vectors w and t respectively. Relative positions to target entities are created based on the position of words in the sen- tence. In the example of Figure The word representations of each sentence are fed into a Bidirectional Long-short Term Memory (BLSTM) layer, which encodes the context representation for every word. The BLSTM outputs new word-level representations h (Hochreiter and Schmidhuber, 1997) that consider the sequence of words. We avoid encoding target pair-dependent information in this BLSTM layer. This has two advantages: (i) the computational cost is reduced as this computation is repeated based on the number of sentences instead of the number of pairs, (ii) we can share the sequence layer among the pairs of a sentence. The second advantage is particularly important as it enables the model to indirectly learn hidden dependencies between the related pairs in the same sentence. For each word t in the sentence, we concatenate the two representations from left-to-right and right-to-left pass of the LSTM into a n edimensional vector, The output word representations of the BLSTM are further divided into two parts: (i) target pair representations and (ii) target pair-specific context representations. The context of a target pair can be expressed as all words in the sentence that are not part of the entity mentions. We represent a related pair as described below. A target pair contains two entities e i and e j . If an entity consists of N words, we create its BLSTM representation as the average of the BLSTM representations of the corresponding words, e = 1 |I| i∈I e i , where I is a set with the word indices inside entity e. We first create a representation for each pair entity and then we construct the representation for the context of the pair. The representation of an entity e i is the concatenation of its BLSTM representation e i , the representation of its entity type t i and the representation of its relative position to entity e j , p ij . Similarly, for entity e j we use its relative position to entity e i , p ji . Finally, the representations of the pair entities are as follows: The next step involves the construction of the representation of the context for this pair. For each context word w z of the target pair e i , e j , we concatenate its BLSTM representation e z , its semantic type representation t z and two relative position representations: to target entity e i , p zi and to target entity e j , p zj . The final representation for a context word w z of a target pair is, v ijz = [e z ; t z ; p zi ; p zj ]. For a sentence, the context representations for all entity pairs can be expressed as a three-dimensional matrix C, where rows and columns correspond to entities and the depth corresponds to the context words. The context words representations of each target pair are then compiled into a single representation with an attention mechanism. 
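A minimal sketch of how these pair and context representations can be assembled from the shared BLSTM outputs follows; the dimensions, type ids, and position-offset scheme are toy choices, not the tuned values of the paper.

```python
# Build pair representations [e; t; p] and pair-specific context vectors v_ijz = [e_z; t_z; p_zi; p_zj]
# from per-word BLSTM states, entity-type embeddings, and relative-position embeddings.
import torch
import torch.nn as nn

n_words, n_e, n_t, n_p = 8, 16, 4, 4
blstm_out = torch.randn(n_words, n_e)                 # h_t from the shared BLSTM, one row per word
type_emb = nn.Embedding(7, n_t)                       # semantic entity-type embeddings
pos_emb  = nn.Embedding(2 * n_words + 1, n_p)         # relative positions, offset to stay non-negative

def emb(table, i):
    return table(torch.tensor([i]))[0]

def entity_vec(word_indices):                         # average the BLSTM states of the mention words
    return blstm_out[word_indices].mean(dim=0)

# entity e_i spans words 1-2 (type id 2), entity e_j is word 5 (type id 4)
e_i, e_j = entity_vec([1, 2]), entity_vec([5])
p_ij, p_ji = emb(pos_emb, 5 - 1 + n_words), emb(pos_emb, 1 - 5 + n_words)
pair_i = torch.cat([e_i, emb(type_emb, 2), p_ij])     # [e_i; t_i; p_ij]
pair_j = torch.cat([e_j, emb(type_emb, 4), p_ji])     # [e_j; t_j; p_ji]

# context vector for a non-entity word z
z = 3
v_ijz = torch.cat([blstm_out[z], emb(type_emb, 0),
                   emb(pos_emb, z - 1 + n_words), emb(pos_emb, z - 5 + n_words)])
print(pair_i.shape, pair_j.shape, v_ijz.shape)        # (24,), (24,), (28,)
```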
Following the method proposed in where q ∈ R n d , n d = n e + n t + 2n p denotes a trainable attention vector, α is the attended weights vector and c ij ∈ R n d is the context representation of the pair as resulted by the weighted average. This attention mechanism is independent of the relation type. We leave relation-dependent attention as future work. Finally, we concatenate the representations of the target entities and their context (∈ R nm ). We use a fully connected linear layer, W s ∈ R nm×ns with n s < n m to reduce the dimensionality of the resulting vector. This corresponds to the representation of an edge or a one-length walk between nodes i and j: v Our main aim is to support the relation between an entity pair by using chains of intermediate relations between the pair entities. Thus, the goal of this layer is to generate a single representation for a finite number of different lengths walks between two target entities. To achieve this, we represent a sentence as a directed graph, where the entities constitute the graph nodes and edges correspond to the representation of the relation between the two nodes. The representation of one-length walk between a target pair v (1) ij , serves as a building block in order to create and aggregate representations for one-to-l-length walks between the pair. The walkbased algorithm can be seen as a two-step process: walk construction and walk aggregation. During the first step, two consecutive edges in the graph are combined using a modified bilinear transformation, where v (λ) ij ∈ R n b corresponds to walks representation of lengths one-to-λ between entities e i and e j , represents element-wise multiplication, σ is the sigmoid non-linear function and W b ∈ R n b ×n b is a trainable weight matrix. This equation results in walks of lengths two-to-2λ. In the walk aggregation step, we linearly combine the initial walks (length one-to-λ) and the extended walks (length two-to-2λ), where β is a weight that indicates the importance of the shorter walks. Overall, we create a representation for walks of length one-to-two using Equation (3) and λ = 1. We then create a representation for walks of length one-to-four by re-applying the equation with λ = 2. We repeat this process until the desired maximum walk length is reached, which is equivalent to 2λ = l. For the final layer of the network, we pass the resulted pair representation into a fully connected layer with a softmax function, where W r ∈ R n b ×nr is the weight matrix, n r is the total number of relation types and b r is the bias vector. We use in total 2r+1 classes in order to consider both directions for every pair, i.e., left-to-right and right-to-left. The first argument appears first in a sentence in a left-to-right relation while the second argument appears first in a right-to-left relation. The additional class corresponds to non-related pairs, namely "no relation" class. We choose the most confident prediction for each direction and choose the positive and most confident prediction when the predictions contradict each other. We evaluate the performance of our model on ACE 2005 1 for the task of relation extraction. ACE 2005 includes 7 entity types and 6 relation types between named entities. We follow the preprocessing described in We implemented our model using the Chainer library The forget bias of the LSTM layer was initialized with a value equal to one following the work of We extract all possible pairs in a sentence based on the number of entities it contains. 
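Below is a minimal sketch of the attentive context pooling and of one walk-update step. The placement of the sigmoid and the exact form of the modified bilinear combination are my reading of the description above, so treat this as an illustrative sketch rather than the authors' exact equations.

```python
# Attention over pair-specific context vectors, and one iteration of the walk construction /
# aggregation that turns walks of lengths 1..lambda into walks of lengths 1..2*lambda.
import torch

def attend_context(C_ij, q):
    """C_ij: (num_context_words, n_d) context vectors for one pair; q: (n_d,) attention vector."""
    alpha = torch.softmax(C_ij @ q, dim=0)        # attended weights over context words
    return alpha @ C_ij                           # c_ij: weighted average context vector

def walk_update(V, W_b, beta=0.5):
    """V: (n_entities, n_entities, n_b) walk representations. One update doubles the max walk length."""
    n, _, _ = V.shape
    extended = torch.zeros_like(V)
    for i in range(n):
        for j in range(n):
            # combine consecutive edges i->k and k->j for every intermediate node k
            combo = torch.sigmoid(V[i] @ W_b) * V[:, j]     # (n, n_b), element-wise product
            extended[i, j] = combo.sum(dim=0)
    return beta * V + (1.0 - beta) * extended               # aggregation of short and extended walks

n_entities, n_b, n_d = 4, 8, 12
V1  = torch.randn(n_entities, n_entities, n_b)    # one-length walks (edge representations)
W_b = torch.randn(n_b, n_b)
V2 = walk_update(V1)            # walks of length 1..2
V4 = walk_update(V2)            # walks of length 1..4 (repeat until 2*lambda = l)

c = attend_context(torch.randn(5, n_d), torch.randn(n_d))
print(V4.shape, c.shape)
```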
If a pair is not found in the corpus, it is assigned the "no relation" class. We report the micro precision, recall and F1 score following Table As it can be observed from the table, the Baseline model achieves the lowest F1 score between the proposed models. By incorporating attention we can further improve the performance by 1.3 percent point (pp). The addition of 2-length walks further improves performance (0.9 pp). The best results among the proposed models are achieved for maximum 4-length walks. By using up-to 8-length walks the performance drops almost by 2 pp. We also compared our performance with Nguyen and Grishman (2015) (CNN) using their data split. 4 For the comparison, we applied our 4 The authors kindly provided us with the data split. best performing model (l = 4). Finally, we show the performance of the proposed model as a function of the number of entities in a sentence. Results in Table Traditionally, relation extraction approaches have incorporated a large variety of hand-crafted features to represent related entity pairs State-of-the-art systems have proved to achieve good performance on relation extraction using RNNs We proposed a novel neural network model for simultaneous sentence-level extraction of related pairs. Our model exploits target and context pair-specific representations and creates pair representations that encode up-to l-length walks between the entities of the pair. We compared our model with the state-of-the-art models and observed comparable performance on the ACE2005 dataset without any external syntactic tools. The characteristics of the proposed approach are summarized in three factors: the encoding of dependencies between relations, the ability to represent multiple walks in the form of vectors and the independence from external tools. Future work will aim at the construction of an end-to-end relation extraction system as well as application to different types of datasets. We tuned our proposed model using the RoBO toolkit (
| 732 | 1,161 | 732 |
FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue
|
Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI. This work explores conversational task transfer by introducing FETA: a benchmark for FEw-sample TAsk transfer in open-domain dialogue. FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer, i.e., task transfer without domain adaptation. We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs and create a baseline for future work. We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer. In addition to task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as for learning settings such as continual and multitask learning.
|
Improving sample efficiency through transfer learning has been a long-standing challenge in the machine learning and natural language processing communities Two essential transfer learning settings, namely domain adaptation and task transfer, have been studied on language tasks Prior studies have focused on cross-dataset task transfer, gathering tasks annotated on disjoint datasets In this work, we create FETA, a benchmark for few-sample task transfer for language understanding in open-domain dialogue with 17 total tasks. FETA datasets cover a variety of properties (dyadic vs. multi-party, anonymized vs. recurring speaker, varying dialogue lengths) and task types (utterance-level classification, dialogue-level classification, span extraction, multiple-choice), and maintain a wide variety of data quantities. We study task transfer on FETA by comparing three task transfer algorithms and three commonly used language models in single-source and multisource settings. Figure In this study, we find that: (i) Trends are largely model-dependent, a finding that previous works have not discussed. (ii) Out of all task types, span extraction tasks gain the most as a target, especially with few samples. (iii) Adding source tasks does not uniformly improve over a single source task, motivating a better understanding of the complex relationship between source and target tasks. FETA provides a resource for various future studies, e.g., on the generalizability of model architectures, and pre-training datasets that enable efficient transfer. In addition to task transfer, FETA can also facilitate the study of continual and multitask learning. In summary, our main contributions are: • We create the first large-scale benchmark for task transfer in dialogue, with 132 sourcetarget task pairs. • Extensive experimentation on FETA in both the single-source and multi-source settings, and an in-depth analysis comparing models, learning algorithms, sample sizes, and task types, finding new and non-intuitive results. • A readily extensible transfer learning framework
|
Transfer Learning in NLP Prior works on transfer learning in NLP have studied a wide variety of topics, including domain adaptation More recently, DialoGLUE In this section, we briefly define intra-dataset task transfer, the problem setting of FETA. Then, we introduce FETA, our benchmark for few-sample task transfer in open-domain dialogue. Finally, we define the metrics we use to evaluate models and learning algorithms on FETA. Let a dataset be composed of the instance set, X, FETA, each instance x ∈ X is a dialogue. Definition 1 (Domain and Task). A domain D = {X , P (X)} consists of a feature space X and a marginal probability distribution P (X). The marginal probabilities are over the instance set Definition 2 (Learning Algorithm). A learning algorithm, A, is a protocol that determines the method by which the instance set X and taskspecific label sets Y 1 , Y 2 , . . . , Y n will be used to train a predictive function, f . Definition 3 (Task Transfer). Given a source task T S = {Y S , f S (X S )} and target task T T = {Y T , f T (X T )}, task transfer is the use of a learning algorithm, A, to improve the learning of f T by using the knowledge in T S . In cross-dataset task transfer, when X S ≠ X T , we also have P (X S ) ≠ P (X T ) and D S ≠ D T ; domain shift. In intra-dataset task transfer, when X S = X T , there is no domain shift. This enables the study of the learning algorithm's performance on task transfer, isolated from domain adaptation. We refer the reader to Pan and Yang (2010) and Few-Sample Due to the challenge and cost of collecting and annotating data, many real-world applications of NLP techniques are limited by data quantities. For this reason, we focus on the fewsample setting, defined in FETA as 10% of the original instance set. Out of 10%, 5%, and 1%, 10% was empirically determined to be the smallest percentage that retains labels from all label sets in both the train and development partitions. Given the recent attention focused on NLP applications in low-resource settings In this section, we describe the two dialogue sources we use, DailyDialog We select these datasets because they complement each other in desirable ways. DailyDialog contains 2-speaker dialogues where speakers are anonymized and averages 88 words per dialogue. In contrast, Friends consists of multiparty dialogues (3.6 speakers mean, 15 max) with recurring characters and averages 283 words per dialogue. These differences lead to each set of dialogue instances having different task annotations, giving FETA a wider variety of tasks. For example, Dai-lyDialog tasks include understanding the causes of emotions and commonsense reasoning, while tasks annotated on Friends revolve more around recog- nizing entities and understanding personalities. To create FETA versions of each dataset, we first partition the dialogues into 70/15/15% splits for training, validation, and test sets. After splitting, we randomly down-sample the train and development dialogues to 10% of the original quantities. Thus, FETA splits use 7/1.5/15% of the original dialogues. Not every dialogue is annotated for all tasks, allowing some tasks to have more samples than others. Crucially, the data splits are the same for all tasks, preventing data leakage. Table Many works add annotations on top of these dialogues and FETA utilizes 10 of them. Figure The Friends dialogues come from transcripts of 10 seasons of the TV show by the same name In total, FETA has 7 task annotations on top of the Friends scripts. 
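The few-sample split construction described above can be sketched in a few lines: a 70/15/15 partition of the dialogues followed by down-sampling only the train and development portions to 10%, so that all tasks share the same instance splits and no data leaks across them. The helper below is an illustrative sketch, not the released preprocessing code.

```python
# Toy reconstruction of the FETA-style split: 70/15/15 partition, then 10% down-sampling of
# train and dev only, yielding roughly 7% / 1.5% / 15% of the original dialogues.
import random

def few_sample_splits(dialogues, seed=0):
    rng = random.Random(seed)
    dialogues = dialogues[:]
    rng.shuffle(dialogues)
    n = len(dialogues)
    train = dialogues[: int(0.7 * n)]
    dev   = dialogues[int(0.7 * n): int(0.85 * n)]
    test  = dialogues[int(0.85 * n):]
    few_train = rng.sample(train, max(1, int(0.1 * len(train))))
    few_dev   = rng.sample(dev,   max(1, int(0.1 * len(dev))))
    return few_train, few_dev, test

splits = few_sample_splits([f"dialogue_{i}" for i in range(1000)])
print([len(s) for s in splits])           # [70, 15, 150]
```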
As illustrated in Figure To define the metrics, we consider 4 variables: source task s, target task t, model f , and learning algorithm A, and we abuse notation slightly to allow for f A (s, t) to represent a model trained on the source and target tasks using the given learning algorithm. In FETA, we evaluate the performance of a model and learning algorithm with multiple metrics: average and top-1 raw scores, as well as average and top-1 score ∆s. Average and Top-1 Scores First, we consider the two raw scores: average score and top-1 score. These metrics aim to answer the following questions: How well do a model and algorithm perform across all task pairs, and, how well do a model and algorithm perform supposing that we knew the best source task a priori. We calculate an average score across all sourcetarget task pairs to understand how each model and algorithm performs in the aggregate. Formally, let the score for a single task be computed as: where M t is the set of metrics associated with task t, found in Table where T is the set of tasks. Additionally, we calculate top-1 score to understand how models and algorithms perform if the best source task is known ahead of time. This score is calculated as the maximum score over source tasks averaged over target tasks. The top-1 score does not consider scores less than the baseline, which is a model trained directly on the target task. Denote the baseline algorithm by A B and the baseline score as score(s, t, f, A B ). Formally, the top-1 score is calculated as: Average and Top-1 ∆s In addition to raw scores, we also calculate score differences to measure how much a source task benefits a target task. The average ∆ describes how much benefit the model saw in the aggregate over all source tasks, while the top-1 ∆ considers only the best source. Score ∆s are calculated with respect to the baseline score as: and the average ∆ is calculated as: Additionally, we calculate the top-1 ∆ as the maximum positive score difference over source tasks averaged over target tasks: |T | In this work, we consider three commonly used task transfer methods: Pre-train/Fine-tune, Multitask, Multitask/Fine-tune. We apply these methods with cross-entropy loss to further optimize pretrained language models on FETA. Pre-train/Fine-tune Commonly used in NLP today, the pre-train/fine-tune algorithm consists of two stages of training Multitask In this algorithm, there is only a single stage of multitask training Multitask/Fine-tune This algorithm combines the previous algorithms in two stages. In the first stage, the source and target task are optimized jointly, as in Eq 3. Then, the second stage trains using only the target task, as in Eq 2. Even though model selection in multitasking is generally done w.r.t. multiple source and target tasks To study task transfer on FETA, we run extensive experimentation. We utilize three task transfer algorithms: pre-train/fine-tune, multitask, and multitask/fine-tune, as described in Section 4. To draw broad conclusions about the performance of each learning algorithm, we utilize pretrained language models with three different architectures: encoder-only (BERT) A complete experiment for a single target task, T , is as follows: First, we directly fine-tune on T to get the baseline score. Then, for each source task, S, we take the model pre-trained on S and fine-tune on T . Next, we jointly train on S and T together. Finally, we fine-tune the jointly trained model on T . 
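A minimal sketch of the four benchmark metrics, computed from a table of per-(source, target) scores and per-target baselines for one model and algorithm; the task names in the toy example are placeholders, and the top-1 score falls back to the baseline whenever no source beats it, matching the definition above.

```python
# Average score, top-1 score, average delta, and top-1 delta over all source-target pairs.
def feta_metrics(scores, baselines):
    """scores: {(source, target): score}; baselines: {target: baseline score}."""
    targets = sorted(baselines)
    avg_score = sum(scores.values()) / len(scores)
    avg_delta = sum(s - baselines[t] for (_, t), s in scores.items()) / len(scores)

    top1_score, top1_delta = 0.0, 0.0
    for t in targets:
        best = max(s for (_, tgt), s in scores.items() if tgt == t)
        top1_score += max(best, baselines[t])                 # ignore sources below the baseline
        top1_delta += max(best - baselines[t], 0.0)
    return avg_score, top1_score / len(targets), avg_delta, top1_delta / len(targets)

scores = {("source_task_a", "target_task_1"): 52.0,
          ("source_task_b", "target_task_1"): 49.5,
          ("source_task_a", "target_task_2"): 60.1,
          ("source_task_b", "target_task_2"): 61.3}
baselines = {"target_task_1": 50.0, "target_task_2": 60.5}
print(feta_metrics(scores, baselines))
```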
FETA datasets have 10 and 7 tasks, giving 90 + 42 = 132 unique source-target task pairs. Our experiments include three learning algorithms, three models, and we run each experiment with 5 random seeds. In total, we run 132 × 3 × 3 × 5 = 5940 transfer experiments, and 17×3×5 = 255 baseline experiments leading to 6195 trained models. In addition to the single-source setting described above, we also consider a subset of tasks to study in the multi-source setting, where multiple tasks are simultaneously used as source tasks to transfer to a single target task (6.2). For our experiments, we select two target tasks from each dataset that benefit the most from task transfer, and we use the three source tasks that transferred best onto those targets. 6 Results and Analysis Table Aggregate Performance We find that, on average, Friends tasks get scores between 7-8 points less than DailyDialog, likely due to the greater number of speakers and utterance length of Friends. We find that GPT-2 lags behind the raw scores of BERT and T5 by ∼10 points. This is expected as autoregressive decoder models are not designed with classification in mind. We find that the largest average ∆ is 1.4, leaving room for improvement in task transfer on FETA. Furthermore, we are interested in knowing: how much we would gain by using the best source task vs. a random source task. We calculate the differences between average ∆ and top-1 ∆ and find the mean difference to be ∼1.6 and the largest difference to be ∼3.5, motivating a further understanding of which source tasks transfer best to target tasks. Performance Across Learning Algorithms We average scores across both datasets and find that pre-train/fine-tune gets an average score of 42.85, multitask 42.84, and multitask/fine-tune 44.07. Table 2 shows that multitask/fine-tune achieves the best average score for all models and datasets, and indeed its average score is a 2.8% improvement over the other algorithms. However, aggregate scores obscure some interesting nuances. Looking at an individual column can demonstrate best source tasks for that target. Looking at rows can determine which source task works well across multiple targets. Furthermore, Figure For nearly all dimensions of analysis (e.g., sample sizes, learning algorithm), we find different trends between models. We strongly suggest that future research be performed on multiple models before attempting to draw broad conclusions on transfer learning. Multitask/Fine-tune As Regularization We find that T5's top-1 score and ∆ on DailyDialog are highest for pre-train/fine-tune, but the average score and ∆ are highest for multitask/finetune. To understand why this occurred, we find the bottom-1 scores for T5 on DailyDialog: 46.78, 46.69, and 48.26 for pre-train/fine-tune, multitask, and multitask/fine-tune algorithms, confirming that multitask/fine-tune does achieve the best worstcase performance. Moreover, we find that for all datasets and models, multitask/fine-tune does achieve the best worst-case performance. In fact, for GPT-2 on Friends, utilizing the bottom-1 source tasks still lead to a 0.74% improvement over the baseline. Do All Task Types Benefit Equally? We find that span extraction tasks gain the most as target tasks, shown in Figure Additionally, we find that utterance-level classification tasks decrease in score ∆ at increasing source-to-target sample ratios. 
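A quick enumeration of the experiment grid confirms the run counts quoted above (ordered source-target pairs times algorithms, models, and seeds, plus one baseline run per task, model, and seed):

```python
# Verify the experiment counts: 132 pairs, 5940 transfer runs, 255 baselines, 6195 total models.
daily_dialog_tasks, friends_tasks = 10, 7
pairs = daily_dialog_tasks * (daily_dialog_tasks - 1) + friends_tasks * (friends_tasks - 1)  # 90 + 42
algorithms, models, seeds = 3, 3, 5

transfer_runs = pairs * algorithms * models * seeds
baseline_runs = (daily_dialog_tasks + friends_tasks) * models * seeds
print(pairs, transfer_runs, baseline_runs, transfer_runs + baseline_runs)   # 132 5940 255 6195
```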
This is possibly due to models overfitting to specific tasks and a catastrophic forgetting of general skills learned during their large-scale pre-training. Do All Task Types Give Equal Benefit? We find that multiple-choice tasks give the greatest benefit as source tasks, especially when the ratio of source- to-target samples is low, as shown in Figure How Do Sample Sizes Affect Transfer? Figure For multi-source transfer we select the two target tasks from each dataset with the best score differences from the single-source setting, shown in Figures We expect that by utilizing the top-3 source tasks from the single-source setting, the multi-source setting will improve performance for all models and algorithms, but find results to the contrary. We find that 6/9 multi-source algorithms outperform their average top-3 single-source counterparts in DRSE, 6/9 for DNLI, 3/9 for CI, and only 2/9 for QA, showing that naively combining source tasks is not always beneficial. The impressive result for DRSE follows our original intuition, given that there is an almost unanimous benefit from all source tasks, shown in Figure Which Models Benefit From Multi-Source? Table 6 shows that GPT-2 improves in 8/12 experi- Table ments over its average top-3 single-source counterparts, but BERT only 5/12 and T5 in only 4/12 experiments. It is counter-intuitive that T5 should perform the worst as we expect that it has a higher capacity for learning due to twice the model size. On the other hand, the additional parameters may be causing T5 to overfit on training data in the few-sample setting. We introduce FETA, a comprehensive benchmark for evaluating language models and task transfer learning algorithms in open-domain dialogue with few samples. Through extensive experimentation, we find new and non-intuitive insights on the mechanisms of transfer learning. In particular, we find that most trends are model-specific, and we strongly encourage researchers to consider multiple model architectures before attempting to draw broad conclusions on transfer learning. It is our hope that FETA enables further research not only in task transfer, but also in other learning settings, and in the generalizability and efficiency of model architectures and pre-training datasets. A concern regarding any work that includes largescale experiments with large language models is the energy consumption and environmental impact, the current work included. While there is a cost to running these experiments, the goal of this work is to improve sample efficiency in the future and we hope that the benefits in future energy saved will outweigh the up-front costs of discovering efficient methods. Another concern of a large-scale benchmark is that of accessibility. A benchmark requiring too many resources will limit those who can reasonably compete. For this reason and others, in addition to our large-scale benchmark we also include a smaller multi-source setting which requires only 4 experiments to be run for a single model and algorithm, rather than 132 in the single-source setting. We believe this smaller setting will maintain the ability to extract high-quality insights on task transfer, yet allow for increased community access and reduce the carbon footprint of this benchmark. While we do control for domain adaptation in our experiments on task transfer, there are some aspects that we cannot control. For example, each model has done language model pre-training with a different corpus. 
BERT was trained on English Wikipedia and BookCorpus. Additionally, we cannot exhaustively test every language model, but still try to provide enough variety in order to draw broad conclusions on task transfer. For example, we do not run any experiments on language models pre-trained in the dialogue domain or language models larger than base-sized. We expect that both of these changes would improve raw performance on FETA. More importantly though, it is unclear whether either of these changes would lead to improved task-transfer performance (average and top-1 ∆s), and we leave this exploration for future work. Furthermore, we cannot exhaustively test all learning algorithms. Finally, we stress the importance of intra-dataset task transfer in this work. However, this limits the number of pre-annotated tasks that are available, and there are certainly some tasks which we were not able to accommodate in FETA. This work was supported by the National Science Foundation award #2048122. The views expressed are those of the author and do not reflect the official policy or position of the US government. Finally, we thank the Robert N. Noyce Trust for their generous gift to the University of California via the Noyce Initiative. RECCON, CIDER, and DailyDialog++ (Sai et al.) provide additional annotations over the DailyDialog conversations. For EmoryNLP, Chen and Choi (2016) and Zahiri and Choi (2018) provide annotations on emotion recognition, with the 7 fine-grained emotions from the Feeling Wheel. For our experiments, we use the pretrained model implementations from the HuggingFace Transformers library; the same hyperparameter settings worked well across all models. In all experiments we utilize validation-based best model selection, and train models for 30 epochs on DailyDialog tasks and 20 epochs on Friends tasks.
| 1,262 | 2,072 | 1,262 |
A Multitask Learning Approach for Diacritic Restoration
|
In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings. Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word. This results in more ambiguous text, making computational processing of such text more difficult. Diacritic restoration is the task of restoring missing diacritics in written text. Most state-of-the-art diacritic restoration models are built on character-level information, which helps generalize the model to unseen data but presumably loses useful information at the word level. Thus, to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems, namely word segmentation, part-of-speech tagging, and syntactic diacritization. We use Arabic as a case study since it has sufficient data resources for the tasks that we consider in our joint modeling. Our joint models significantly outperform the baselines and are comparable to the state-of-the-art models that are more complex, relying on morphological analyzers and/or a lot more data (e.g., dialectal data). (This work was conducted while the author was with AWS, Amazon AI.)
|
In contrast to English, some vowels in languages such as Arabic and Hebrew are not part of the alphabet and diacritics are used for vowel specification. Diacritic restoration (or diacritization) is the process of restoring these missing diacritics for every character in the written texts. It can specify pronunciation and can be viewed as a relaxed variant of word sense disambiguation. For example, the Arabic word Elm The state-of-the-art diacritic restoration models reached a decent performance over the years using recurrent or convolutional neural networks in terms of accuracy In this paper, we improve the performance of diacritic restoration by building a multitask learning model (i.e. joint modeling). Multitask learning refers to models that learn more than one task at the same time, and has recently been shown to provide good solutions for a number of NLP tasks The use of a multitask learning approach provides an end-to-end solution, in contrast to generating the linguistic features for diacritic restoration as a preprocessing step. In addition, it alleviates the reliance on other computational and/or data resources to generate these features. Furthermore, the proposed model is flexible such that a task can be added or removed depending on the data availability. This makes the model adaptable to other languages and dialects. We consider the following auxiliary tasks to boost the performance of diacritic restoration: word segmentation, part-of-speech (POS) tagging, and syntactic diacritization. We use Arabic as a case study for our approach since it has sufficient data resources for tasks that we consider in our joint modeling. 1. We investigate the benefits of automatically learning related tasks to boost the performance of diacritic restoration; 2. In doing so, we devise a state-of-the-art model for Arabic diacritic restoration as well as a framework for improving diacritic restoration in other languages that include diacritics.
|
We formulate the problem of (full) diacritic restoration (DIAC) as follows: given a sequence of characters, we identify the diacritic corresponding to each character in that sequence from the following set of diacritics {a, u, i, o, K, F, N, ∼, ∼a, ∼u, ∼i, ∼F, ∼K, and ∼N}. We additionally consider three auxiliary tasks: syntactic diacritization, partof-speech tagging, and word segmentation. Two of which operate at the word level (syntactic diacritization and POS tagging) and the remaining tasks (diacritic restoration and word segmentation) operate at the character level. This helps diacritic restoration utilize information from both charac-ter and word level information, bridging the gap between the two levels. Syntactic Diacritization (SYN): This refers to the task of retrieving diacritics related to the syntactic positions for each word in the sentence, which is a sub-task of full diacritic restoration. Arabic is a templatic language where words comprise roots and patterns in which patterns are typically reflective of diacritic distributions. Verb patterns are more or less predictable however nouns tend to be more complex. Arabic diacritics can be divided into lexical and inflectional (or syntactic) diacritics. Lexical diacritics change the meanings of words as well as their pronunciations and their distribution is bound by patterns/templates. In contrast, inflectional diacritics are related to the syntactic positions of words in the sentence and are added to the last letter of the main morphemes of words (word finally), changing their pronunciations. Because Arabic has a unique set of diacritics, this study formulates syntactic diacritization in the following way: each word in the input is tagged with a single diacritic representing its syntactic position in the sentence. Word segmentation (SEG): This refers to the process of separating affixes from the main unit of the word. Word segmentation is commonly used as a preprocessing step for different NLP applications and its usefulness is apparent in morphologically rich languages. For example, the undiacritized word whm might be diacritized as waham∼a "and concerned", waham "illusion", where the first diacritized word consists of two segments "wa ham∼a" while the second is composed of one word. Word segmentation can be formulated in the following way: each character in the input is tagged following IOB tagging scheme (B: beginning of a segment; I: inside a segment; O: out of the segment) Part-Of-Speech Tagging (POS): This refers to the task of determining the syntactic role of a word (i.e. part of speech) within a sentence. POS tags are highly correlated with diacritics (both syntactic and lexical): knowing one helps determine or reduce the possible choices of the other. For instance, the word ktb in the sentence ktb [someone] means "books" if we know it to be a noun whereas the word would be either katab "someone wrote" or kat∼ab "made someone write" if it is known to be a verb. POS tagging can be formulated in the following way: each word in the input is assigned a POS tag from the Universal Dependencies tagset We built a diacritic restoration joint model and studied the extent to which sharing information is plausible to improve diacritic restoration performance. 
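A minimal sketch of the character-level IOB labelling used for the SEG task described above. The "+" segment-boundary marker and the function name are our own conventions for illustration, not the authors' preprocessing code.

```python
# Sketch: deriving B/I segmentation labels per character from a segmented
# Arabic word, as in the IOB scheme described above (B: beginning of a
# segment, I: inside a segment). Boundaries are marked with "+" here purely
# for illustration.

def iob_segment_labels(segmented_word: str) -> list[tuple[str, str]]:
    labels = []
    for segment in segmented_word.split("+"):
        for i, ch in enumerate(segment):
            labels.append((ch, "B" if i == 0 else "I"))
    return labels

if __name__ == "__main__":
    # "waham~a" ("and concerned") segments into wa + ham~a
    print(iob_segment_labels("wa+ham~a"))
```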
Our joint model is motivated by the recent success of the hierarchical modeling proposed in Since our joint model may involve both character and word level based tasks, we began our investigation by asking the following question: how to integrate information between these two levels? Starting from the randomly initialized character embeddings as well as a pretrained set of embeddings for words, we follow two approaches (Figure (2) Word-To-Character Representation: To pass information learned by word level tasks into character level tasks, we concatenate each word with each of its composed characters during each pass, similar to what is described in For all architectures, the main component is BiL-STM 1. We extract the two additional input representation described in Section 3.1; 2. We apply BiLSTM for each of the different tasks separately to obtain their corresponding outputs; 3. We pass all outputs from all tasks as well as WordToChar embedding vectors as input to the diacritic restoration model and obtain our diacritic outputs. Figure Dataset: We use the Arabic Treebank (ATB) dataset: parts 1, 2, and 3 and follow the same data division as Evaluation metrics: We use accuracy for all tasks except diacritic restoration. For diacritic restoration, the two most typically used metrics are Word Error Rate (WER) and Diacritic Error Rate (DER), the percentages of incorrectly diacritized words and characters, respectively. In order to approximate errors in the syntactic diacritics, we use Last Diacritic Error Rate (LER), the percentage of words that have incorrect diacritics in the last positions of words. To evaluate the models' ability to generalize beyond observed data, we compute WER on OOV (out-of-vocabulary) words. Table We use WordToChar representation rather than characters for all remaining models that jointly learn more than one task. For all experiments, we observe improvements compared to both baselines across all evaluation metrics. Furthermore, all models except DIAC+SEG outperform WordToChar diacritic restoration model in terms of WER, showing the benefits of considering output distributions for the other tasks. Despite leveraging tasks focused on syntax (SYN/POS) or morpheme boundaries (SEG), the improvements extend to lexical diacritics as well. Thus, the proposed joint diacritic restoration model is also helpful in settings beyond word final syntactic related diacritics. The best performance is achieved when we consider all auxiliary tasks within the diacritic restoration model. We discuss the impact of adding each investigated task towards the performance of the diacritic restoration model. When morpheme boundaries as well as diacritics are learned jointly, the WER performance is slightly reduced on all and OOV words. This reduction is attributed mostly to lexical diacritics. As Arabic exhibits a non-concatenative fusional morphology, reducing its complexity to a segmentation task might inherently obscure morphological processes for each form. Observing only slight improvement is surprising; we believe that this is due to our experimental setup and does not negate the importance of having morphemes that assign the appropriate diacritics. We speculate that the reason for this is that we do not capture the interaction between morphemes as an entity, losing some level of morphological information. 
For instance, the words waham∼a versus wahum for the undiacritized word whm (bold letters refer to consonants, distinguishing them from diacritics) would benefit from morpheme boundary identification to tease apart wa from hum in the second variant (wahum), emphasizing that these are two words. But on the other hand, it adds an additional layer of ambiguity for other cases like the morpheme ktb in the diacritic variants kataba, kutubu, sayakotubo (note that the underlined segment has the same consonants as the other variants), in which identifying morphemes increases the number of possible diacritic variants without learning the interactions between adjacent morphemes. Furthermore, we found inconsistencies in the dataset for morphemes, which might cause the drop in performance when we only consider SEG. When we consider all tasks together, these inconsistencies are reduced because of the combined information from different linguistic signals towards improving the performance of the diacritic restoration model. By enforcing inflectional diacritics through an additional focused layer within the diacritic restoration model, we observe improvements on WER compared to the baselines. We notice improvements on syntax-related diacritics (LER score), which is expected given the nature of syntactic diacritization, in which the model learns the underlying syntactic structure to assign the appropriate syntactic diacritics for each word. Improvements also extend to lexical diacritics, and this is because word relationships are captured while learning syntactic diacritics, in which BiLSTM modeling for words is integrated. When we jointly train POS tagging with full diacritic restoration, we notice improvements compared to both baselines. Compared to syntactic diacritization, we obtain similar findings across all evaluation metrics except for WER on OOV words, on which POS tagging drops. Including POS tagging within diacritic restoration also captures important information about the words; the idea of POS tagging is to learn the underlying syntax of the sentence. In comparison to syntactic diacritization, it involves different types of information, like passivization, which could be essential in learning correct diacritics. Ablation Analysis: Incorporating all the auxiliary tasks under study within the diacritic restoration model (ALL) provides the best performance across all measures except WER on OOV words, on which the best performance was given by DIAC+SYN. We discuss the impact of removing one task at a time from ALL and examine whether its exclusion significantly impacts the performance. Excluding SEG from the process drops the performance of diacritic restoration. This shows that even though SEG did not help greatly when it was combined solely with diacritic restoration, the combination of SEG and the other word-based tasks filled in the gaps that were missing from just identifying morpheme boundaries. Excluding either POS tagging or syntactic diacritization also hurts the performance, which shows that these tasks complement each other and, taken together, they improve the performance of the diacritic restoration model. Impact of output labels: see Table. Last hidden layer of SEG: Identifying morpheme boundaries did not increase accuracy as we expected. Therefore, we examined whether information learned from the BiLSTM layer would help us learn morpheme interactions by passing the output of the last BiLSTM layer to the diacritic restoration model along with segmentation labels.
We did not observe any improvements towards predicting accurate diacritics when we pass information regarding the last BiLSTM layer. For ALL, the WER score increased by 0.22%. Thus, it is sufficient to only utilize the segment labels for diacritic restoration. Passive and active verbs: Passivation in Arabic is denoted through diacritics and missing such diacritic can cause ambiguity in some cases Qualitative analysis: We compared random errors that are correct in DIAC (character-based diacritic restoration) with ALL in which we consider all investigated tasks. Although ALL provides accurate results for more words, it introduces errors in other words that have been correctly diacritized by DIAC. The patterns of such words are not clear. We did not find a particular category that occurs in one model but not the other. Rather, the types and quantity of errors differ in each of these categories. State-of-the-art Comparison: Table 2 also shows the performance of the state-of-the-art models. ALL model surpass the performance of We believe that neither the underlying architecture nor the consideration of all possible features were the crucial factor that led to the significant reduction in WER performance. Rather, morphological analyzers is crucial in such significant improvement. As a matter of fact, in We compared the base model of the auxiliary tasks to the state-of-the-art (SOTA). For SEG, BiLSTM model has comparable performance to that in The problem of diacritization has been addressed using classical machine learning approaches (e.g. Maximum Entropy and Support Vector Machine) Arabic syntactic diacritization has been consistently reported to be difficult, degrading the performance of full diacritic restoration Regarding incorporating linguistic features into the model, previous studies have either used morphological features as a preprocessing step or as a ranking step for building diacritic restoration models. As a preprocessing step, the words are converted to their constituents (e.g. morphemes, lemmas, or n-grams) and then diacritic restoration models are built on top of that As a ranking procedure, all possible analyses of words are generated and then the most probable analysis is chosen We present a diacritic restoration joint model that considers the output distributions for different related tasks to improve the performance of diacritic restoration. Our results shows statistically significant improvements across all evaluation metrics. This shows the importance of considering additional linguistic information at morphological and/or sentence levels. Including semantic information through pretrained word embeddings within the diacritic restoration model also helped boosting the diacritic restoration performance. Although we apply our joint model on Arabic, this model provides a framework for other languages that include diacritics whenever resources become available. Although we observed improvements in terms of generalizing beyond observed data when using the proposed linguistic features, the OOV performance is still an issue for diacritic restoration.
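A minimal sketch of the WordToChar representation described earlier in this section: each character vector is concatenated with the vector of the word that contains it, so that character-level tasks can see word-level context. Shapes and names are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed shapes, not the authors' code) of WordToChar: concatenate
# every character embedding with the embedding of its containing word before
# feeding the character-level BiLSTM.
import numpy as np

def word_to_char(word_embs: np.ndarray, words: list[str],
                 char_embs: dict[str, np.ndarray]) -> np.ndarray:
    """word_embs: (num_words, d_w); returns (num_chars, d_w + d_c)."""
    rows = []
    for w_vec, word in zip(word_embs, words):
        for ch in word:
            rows.append(np.concatenate([w_vec, char_embs[ch]]))
    return np.stack(rows)
```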
| 1,232 | 1,967 | 1,232 |
Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool
|
Few shot learning with large language models has the potential to give individuals without formal machine learning training the access to a wide range of text to text models. We consider how this applies to creative writers and present STORY CENTAUR, a user interface for prototyping few shot models and a set of recombinable web components that deploy them. STORY CENTAUR's goal is to expose creative writers to few shot learning with a simple but powerful interface that lets them compose their own co-creation tools that further their own unique artistic directions. We build out several examples of such tools, and in the process probe the boundaries and issues surrounding generation with large language models.
|
One of the most promising possibilities for large language models (LLMs) is few-shot learning We present STORY CENTAUR, a Human-Computer Interface that closes the gap between non-technical users and the power and possibilities of few shot learning, with the intended audience of writers of creative text. It is our intention that by giving writers a tool for building generative text models unique to their process and vision that these artists will experience genuine feelings of co-creation. STORY CENTAUR consists of a prototyping UI as well as a set of Angular web components that interact via a central pub-sub synchronization mechanism As the ethical implications of LLMs
|
The observation that simple "fill-in-the-blank" neural network models trained on large quantities of text can be used for problems beyond their primary learning objective dates back to word2vec Representation learning techniques made steady advances, expanding to sentence level contextually sensitive word embedding with Human + AI co-creation has existed in both practice and theory for several years. To highlight some examples in practice that relate to creative writing, as opposed to music or visual art of which there are many, we refer the reader to browse the Electronic Literature Collection For a lighter introduction, The core contribution of this work is a UI for the creation of few shot text generation models (Figure We note that while generally LMs refer to any probability distribution over a sequence of tokens, in this work we use the term to refer to the subset model class that factorizes the joint probability into conditional P (w t |w 1...t-1 ) terms. Put simply, we are referring to the "predict the next word given all words so far" variety of LM, which includes all of the GPT models. A Formula is composed of Data and Serialization. Each item in the Data consists of lists of string inputs and outputs that exemplify the desired I/O. The Serialization defines a reversible transformation between the Data and the raw text handled by the LM. STORY CENTAUR uses a Serialization template of fixed text Sentinels that interleave the inputs and outputs; a Sentinel is defined to precede each input and output, as well as one that separates inputs and outputs and one that comes after the final output (See Figure A Formula is used by first invoking the Serialization on the Data, creating the Preamble. Then, the new inputs are converted using the Serialization and concatenated to the Preamble, creating the Prompt. The LM is asked to continue the text in the Prompt, and the Serialization is used to extract the output(s) from the result. The LM cannot explicitly enforce the Serialization format and as such will often produce non-conformant results, in which case it must be rejected. In practice, if the LM is sufficiently capable and the task well suited then a simple rejection sampler suffices to produce several acceptable options, as decoding is parallelizable. STORY CENTAUR's user interface for Formula design is shown in Figure First, the user must enter at least two examples of I/O pairs into the Data panel and take a pass at defining a Serialization, relying on the live updated Preamble panel to preview their progress. With a few examples in place, the Auto-Generate button can then be used to suggest new candidate IO pairs by passing the Preamble to the LM and allowing the user to prune these suggestions. This process can be repeated, quickly converging to several (10 or more) solid examples and clear evidence that the Serialization is being captured by the LM. As a final evaluation technique, we provide a Test mode that takes inference inputs and applies the current Formula, also reporting the rate at which the LM output respects the Serialization. We showcase the potential of Formulas that one might create using STORY CENTAUR in several Experiments. These experiments all rely on one or more Formulas that were built using the development tool and workflow described above, and are each motivated by a different artistic scenario. 
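A minimal sketch of the Formula mechanics just described: Data plus Sentinel-based Serialization produce a Preamble and Prompt, and a simple rejection sampler discards completions that do not respect the Serialization. The sentinel strings, class layout, and the `lm_continue` callback are assumptions for illustration, not STORY CENTAUR's actual interface.

```python
# Illustrative Formula: Data + Serialization -> Preamble/Prompt, plus
# rejection sampling over LM continuations (names and sentinels are ours).
from dataclasses import dataclass

@dataclass
class Formula:
    data: list[tuple[str, str]]            # (input, output) examples
    in_sentinel: str = "Input: "
    sep_sentinel: str = "\nOutput: "
    end_sentinel: str = "\n###\n"

    def preamble(self) -> str:
        return "".join(self.in_sentinel + i + self.sep_sentinel + o + self.end_sentinel
                       for i, o in self.data)

    def prompt(self, new_input: str) -> str:
        return self.preamble() + self.in_sentinel + new_input + self.sep_sentinel

    def extract(self, completion: str):
        # Reject completions that do not conform to the Serialization.
        marker = self.end_sentinel.strip()
        if marker not in completion:
            return None
        return completion.split(marker)[0].strip()

def sample(formula: Formula, new_input: str, lm_continue, n_tries: int = 8) -> list[str]:
    """Rejection sampling: keep only outputs that de-serialize cleanly."""
    outputs = []
    for _ in range(n_tries):
        out = formula.extract(lm_continue(formula.prompt(new_input)))
        if out is not None:
            outputs.append(out)
    return outputs
```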
When possible, we present the I/O specifications for each Experiment and invite the reader to view full Experiment screenshots as well as the underlying Formulas' Data and Serialization in the Appendices. Perhaps the most obvious application of generative language models to creative writing is overcoming writer's block. Specifically, we consider the scenario in which the writer has some existing seed text and wants to be presented with possible continuations. Generative LMs are ripe for this task as they can reliably continue short text; for the definition of LM used in this work (see Section 3) this is indeed exactly the task they were trained on. In this pure use of the LM the author is only able to provide the seed text, and so in this experiment we use a few shot Formula to provide an additional input of a word or phrase that is required to appear in the generated text. The Magic Word formula takes two inputs: the seed text that must be continued by the model and the "Magic Word" that must be used in the continuation. In this Experiment, the generated outputs are not only discarded if they do not conform to the Serialization but also if the Magic Word does not appear as a substring. The UI allows editing of both the magic word and seed text, and on generation the user is given a maximum of three sentences that they can click to append to the editable main text box. From an academic perspective, it is worth noting that this I/O paradigm has been explored in several examples of previous work, often with the same motivation as a writer's aid We phrase this problem as a Formula with one input and one output in which the input is in neutral style and the output is a paraphrase with the desired style applied. This works nicely with few shot learning, as it is relatively easy to invent (or generate) a simple unstyled statement and then to imagine how a character might say it. We showcase several such Formulas in this experiment, se-lectable in a menu. For unstyled source text, there are three editable areas for text to rephrase that can be restyled individually or all at once. We provide one additional Formula that might be considered zero shot style transfer, although it is still performed using a few shot Formula. When the style "CUSTOM" is selected, an input box appears where the user can enter any raw text they wish. This text is then used in a Formula with two inputs, the text to be restyled and the name or description of the character whose style to use. The surprising result is that this is often possible with no examples of the requested style itself, only the proper Serialization and a few example of the full I/O shown in Figure We encourage the further examination of large LMs for style transfer, as we were anecdotally impressed with the output of this experiment in particular. As some recently successful work in style transfer One ubiquitous quality of such hierarchical systems is that the high level representation is a structural and/or semantic abstraction chosen to be amenable to plot coherence modeling. This experiment poses the question: what if the high level representation was itself natural language? To explain our setup we make the distinction between simple text and colorful text, where the former is a grammatically bare bones statement of fact and the latter is more linguistically interesting, as a sentence might actually appear. 
We use two Formulas to accomplish this goal, shown in Figure The user is presented with an interface that lets them write and edit custom spine plot points as well as use the first Formula to generate up to five candidates for plot points to continue the story. Each spine plot point is connected to its colorized paraphrase, which appear as a whole on the right side. In order to maintain a model of the mapping between the spine and colorized text, the colorized text is not editable. Interesting characters are at the heart of much creative writing, and various template filling exercises exist to create them. Often this comes down to filling out a template containing fields that flesh out the character, as shown in Figure We take this experiment in a direction that goes beyond our own Formula development tools to define a flexible Few Shot model for data completion. Our generalized problem statement is as follows: given a set of fields of which an arbitrary subset are blank, for one such blank field generate a plausible value conditioned on the non-blank fields. We build a dynamic Formula creation system that fulfills this generalized contract, and apply it to the filling of character creation exercise forms. Our few shot solution naturally relies on a small number of fully filled out and plausibly consistent fields (e.g. complete character descriptions). At inference time, we extract the subset of non-blank fields in the inference item from each of these few shot examples and stitch together a Formula on the spot with precisely these inputs and the single output of the desired inference output field. This dynamic creation of Formulas requires a flexible Serialization that can accommodate any field name and value in any order, which for this experiment we simple simply use "name : value". In improvisational acting (improv) one of the primary pleasures is to see actors bring a set of constraints provided by the audience to life in a coherent story. We see the potential for the sometimes wildly creative suggestions of large language models to supply these constraints, either as a tool for practitioners to hone their craft or as a way to spice up (or speed up) a live performance itself. Improv constraints must be both open ended and subject to specific categories; for example the popular "Party Quirks" game requires a personal quirk for each actor attending a dinner party. We build Formulas and UIs for several improv games, and note their distinction from the other Formulas in this work in that they require no user input at all. In constructing such zero input few shot learning models it became apparent that beyond controlling the grammatical form and semantic intent of the outputs we could also control their tone, as it would mimic the tone of the Formula's Data. Crucially, this allows easy adaptation of these tools to different audiences (children versus adults, for example) and an implicit nudge towards whimsical outputs. While the experiments presented above demonstrate how few shot learning can be used to create interesting tools for writers, the real power of STORY CENTAUR is its unlocking of rapid experimentation. Not only were we able to probe the boundary of what "works" efficiently, but also to engage individuals regardless of formal machine learning training to help us to do so. Needless to say, in the course of this work many attempted Formulas did not produce compelling results. 
Perhaps our most interesting failure was to build a Formula that would produce the second half of a rhyming couplet given the first half, a task that would require understanding of both phonetics and meter as well as linguistic coherence. This was disappointing given the compelling examples of GPT-3 poetry available online. One possible explanation is that while general poetry and specifically rhyming couplets are in our minds connected closely with a subset relationship rooted in human culture, the hard constraint of rhyme and meter in fact divides them into very different problems for an LM. It is certainly the case that recent successful work in rhyming metered poetry generation has needed to resort to fixed rhyme words and syllable counting. In terms of larger themes, we found that Formulas in which any of the inputs or outputs were much longer than a few sentences were hard to construct. We speculate that it is more difficult for the models to latch on to the Serialization in this case, as the observed symptom was often that no generated text passed the de-serialization filter. On the positive side, we observed that few shot tasks that rely on paraphrasing (such as those used in Say It Again) were surprisingly easy to construct successfully. It is a common and intuitively plausible observation that the design of the Serialization is crucial to the performance of few shot learning with large LMs. Our Formulas can only be evaluated qualitatively. Finally, we note that our true goal is to empower artists with no technical training to imagine a Formula, construct it in our development mode, and then produce experiments as we have. In our current process, such an artist could indeed construct their Formula, but would at some point require a programmer to build it into an experiment, requiring e.g. a WYSIWYG editor. While this was beyond the scope of our work, we did construct our system using Angular, a modern web development framework whose core premise is modularity, dependency injection, and reuse of components. Not only do our experiments make use of a small set of these reusable components for functionality like editable text fields and clickable suggestion lists, but also all text and Formulas are synchronized by a global pub-sub service with simple string keys. We present STORY CENTAUR, a tool for the creation and tuning of text-based few shot learning Formulas powered by large language models, and several experiments using Formulas built with our tool that are focused around the topic of creative writing. The emergence of large language models has shaped the course of NLP research in the late 2010s, but the question remains as to what, if any, is a viable use case for these models in their raw, un-finetuned form. Additionally, while some claim that scaling these models is a viable path to Artificial General Intelligence, others disagree.
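A minimal sketch of the dynamic Formula construction used in the character-template experiment described above: only the non-blank fields of the partially filled template are serialized with the "name : value" scheme, and the LM completes the value of the requested blank field. Function and variable names are ours, not the tool's.

```python
# Sketch of dynamic few-shot field completion: build a prompt from the
# non-blank fields of the target, using fully filled examples as the few-shot
# Data. The LM is asked to continue after "target_field :".

def build_completion_prompt(examples: list[dict], partial: dict, target_field: str) -> str:
    known = [k for k, v in partial.items() if v]        # non-blank fields only
    lines = []
    for ex in examples:
        for k in known:
            lines.append(f"{k} : {ex[k]}")
        lines.append(f"{target_field} : {ex[target_field]}")
        lines.append("")                                 # blank line between examples
    for k in known:
        lines.append(f"{k} : {partial[k]}")
    lines.append(f"{target_field} :")                    # value to be generated
    return "\n".join(lines)
```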
| 716 | 677 | 716 |
Towards Fine-grained Text Sentiment Transfer
|
In this paper, we focus on the task of finegrained text sentiment transfer (FGST). This task aims to revise an input sequence to satisfy a given sentiment intensity, while preserving the original semantic content. Different from conventional sentiment transfer task that only reverses the sentiment polarity (positive/negative) of text, the FTST task requires more nuanced and fine-grained control of sentiment. To remedy this, we propose a novel Seq2SentiSeq model.
|
Text sentiment transfer aims to rephrase the input to satisfy a given sentiment label (value) while preserving its original semantic content. It facilitates various NLP applications, such as automatically converting the attitude of reviews and fighting against offensive language in social media. Previous studies mainly reverse the positive and negative sentiment polarity of text; they are confined to scenarios where there are two discrete sentiment labels. To achieve more nuanced and precise sentiment control of text generation, we turn to fine-grained text sentiment transfer (FTST), which revises a sequence to satisfy a given sentiment intensity. There are two main challenges in the FTST task. First, it is tough to achieve fine-grained control of the sentiment intensity when generating a sentence. Previous work on coarse-grained text sentiment transfer usually uses a separate decoder for each sentiment label, which does not extend naturally to continuous intensity values. Second, there is no parallel data available for this task. To tackle the two challenges mentioned above, we propose two corresponding solutions. First, in order to control the sentiment intensity of the generated sentence, we propose a novel sentiment intensity controlled sequence-to-sequence (Seq2Seq) model, Seq2SentiSeq. It incorporates the sentiment intensity value into the conventional Seq2Seq model via a Gaussian kernel layer. By this means, the model can encourage the generation of words whose sentiment intensity is closer to the given intensity value during decoding. Second, due to the lack of parallel data, we cannot directly train the proposed model via MLE (maximum likelihood estimation). Therefore, we propose a cycle reinforcement learning algorithm to guide the model training without any parallel data. The designed reward can balance both sentiment transformation and content preservation, while not requiring any ground truth output. Evaluation of the FTST task is also challenging and complex. In order to build a reliable automatic evaluation, we collect human references for the FTST task on the Yelp review dataset. The main contributions of this work are summarized as follows:
• We propose a sentiment intensity controlled generative model, Seq2SentiSeq, in which a sentiment intensity value is introduced via a Gaussian kernel layer to achieve fine-grained sentiment control of the generated sentence.
• In order to adapt to non-parallel data, we design a cycle reinforcement learning algorithm, CycleRL, to guide the model training in an unsupervised way.
• Experiments show that the proposed approach can largely outperform state-of-the-art systems in both automatic evaluation and human evaluation.
|
Given an input sequence x and a target sentiment intensity value v_y, the FTST task aims to generate a sequence y which not only expresses the target sentiment intensity v_y, but also preserves the original semantic content of the input x. Without loss of generality, we limit the sentiment intensity value v_y to the range from 0 (most negative) to 1 (most positive). We use a bidirectional RNN as the encoder to capture source content information. Each word in the source sequence x = (x_1, ..., x_m) is first represented by its semantic representation mapped by the semantic embedding E_c. The RNN reads the semantic representations from both directions and computes the forward hidden states {→h_i}_{i=1}^m and backward hidden states {←h_i}_{i=1}^m for each word. We obtain the final hidden representation of the i-th word by concatenating the hidden states from both directions, h_i = [→h_i; ←h_i]. Given the hidden representations {h_i}_{i=1}^m of the input sequence x and the target sentiment intensity value v_y, the decoder aims to generate a sequence y which not only describes the same content as the input sequence x, but also expresses a sentiment intensity close to v_y. In order to achieve the aim of controlling sentiment during decoding, we first embed each word with an additional sentiment representation, besides the original semantic representation. The semantic representation characterizes the semantic content of the word, while the sentiment representation characterizes its sentiment intensity. Formally, the hidden state s_t of the decoder at time-step t is computed from the previous state, the concatenated sentiment and semantic representations of the previous word, and the context vector, s_t = RNN(s_{t-1}, [E_s(y_{t-1}); E_c(y_{t-1}); c_t]), where E_s(y_{t-1}) refers to the sentiment representation of the word y_{t-1} mapped by the sentiment embedding matrix E_s, E_c(y_{t-1}) is the semantic representation, and the context vector c_t is computed by a standard attention mechanism. Considering the two goals of the FTST task, sentiment transformation and content preservation, we model the final generation probability as a mixture of a semantic probability and a sentiment probability, where the former evaluates content preservation and the latter measures sentiment transformation. Similar to the traditional Seq2Seq model, the semantic probability is obtained by projecting the decoder state through a trainable weight matrix W_c followed by a softmax. The sentiment probability measures how close the sentiment intensity of the generated sequence is to the target v_y. Normally, each word has a specific sentiment intensity. For example, the word "okay" has a positive intensity around 0.6, "good" is around 0.7, and "great" is around 0.8. However, when the previously generated words are taken into account, the sentiment intensity of the current generated word may be totally different. For example, the phrase "not good" has a negative intensity around 0.3, while "extremely good" is around 0.9. That is to say, the sentiment intensity of each word at time-step t should be decided by both the sentiment representation E_s and the current decoder state s_t. Therefore, we define a sentiment intensity prediction function g(E_s, s_t) that combines the two through a trainable parameter W_s, with a sigmoid used to scale the predicted intensity value to [0, 1]. Intuitively, in order to achieve fine-grained control of sentiment, words whose sentiment intensities are closer to the target sentiment intensity value v_y should be assigned a higher probability. Accordingly, the sentiment probability of a word is computed with a Gaussian kernel centred on the target intensity, proportional to exp(-(g(E_s, s_t) - v_y)^2 / (2σ^2)), where σ is the standard deviation.
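A small numerical sketch of the Gaussian-kernel sentiment probability just described, with the mixture weight γ that is introduced in the next paragraph. The array `g_values` stands in for g(E_s, s_t) over a toy vocabulary; the exact parameterization of g and of the normalization is simplified and not the paper's implementation.

```python
# Sketch: words whose predicted intensity is closer to the target v_y receive
# higher sentiment probability; the final distribution mixes semantic and
# sentiment probabilities with weight gamma (defined in the next paragraph).
import numpy as np

def sentiment_probs(g_values: np.ndarray, v_y: float, sigma: float) -> np.ndarray:
    kernel = np.exp(-((g_values - v_y) ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def mixture(p_semantic: np.ndarray, p_sentiment: np.ndarray, gamma: float) -> np.ndarray:
    return gamma * p_semantic + (1.0 - gamma) * p_sentiment

# Toy vocabulary ["terrible", "okay", "good", "great"] with assumed intensities:
g = np.array([0.1, 0.6, 0.7, 0.8])
print(sentiment_probs(g, v_y=0.9, sigma=0.1))   # "great" gets the highest mass
```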
To balance both sentiment transformation and content preservation, the final probability distribution p_t over the entire vocabulary is defined as a mixture of the two probability distributions, p_t = γ p_c + (1 - γ) p_s, where γ is the hyper-parameter that controls the trade-off between the two generation probabilities. A serious challenge of the FTST task is the lack of parallel data. Since the ground truth output y is unobserved, we cannot directly use maximum likelihood estimation (MLE) for training. To remedy this, we design a cycle reinforcement learning (CycleRL) algorithm. An overview of the training process is summarized in Algorithm 1. Two rewards are designed to encourage changing sentiment while preserving content, without the need for parallel data. The definitions of the two rewards and the corresponding gradients for the Seq2SentiSeq model S are introduced as follows. We design the respective rewards for the two goals (sentiment transformation and content preservation) of the FTST task. Then, an overall reward r is calculated to balance these two goals and guide the model training. Reward for sentiment transformation. A pre-trained sentiment scorer is used to evaluate how well the sampled sentence ŷ matches the target sentiment intensity value v_y. Specifically, the reward for sentiment transformation is formulated in terms of how closely the predicted intensity ϕ(ŷ) matches v_y (Eq. 7), where ϕ refers to the pre-trained sentiment scorer, which is implemented as an LSTM-based linear regression model. Reward for content preservation. Intuitively, if the model performs well in content preservation, it is easy to back-reconstruct the source input x. Therefore, we design the reward for content preservation to be the probability of the model reconstructing x based on the generated text ŷ and the source sentiment intensity value v_x, i.e. r_c = p(x | ŷ, v_x; θ) (Eq. 8), where θ denotes the parameters of the Seq2SentiSeq model.
Algorithm 1: The cycle reinforcement learning algorithm for training Seq2SentiSeq, given training data D in which each sequence x_i is labeled with a fine-grained sentiment label v_i.
1: Initialize the pseudo-parallel data V_0 = {(x_i, ŷ_i)}
2: Pre-train the Seq2SentiSeq model S_θ using V_0
3: for each iteration t = 1, 2, ..., T do
4:   Sample a sentence x from D
5:   for k = 1, 2, ..., K do
6:     Sample an intensity value v_y
7:     Generate ŷ^(k) according to p(ŷ | x, v_y; θ)
8:     Compute the sentiment reward r_s^(k) based on Eq. 7
9:     Compute the content reward r_c^(k) based on Eq. 8
10:    Compute the total reward r^(k) based on Eq. 9
11:  end for
12:  Update θ using the rewards {r^(k)}_{k=1}^K based on Eq. 11
13:  Update θ using the cycle reconstruction loss in Eq. 12
14: end for
Overall reward. To encourage the model to improve both sentiment transformation and content preservation, the final reward r guiding the model training is designed to be the harmonic mean of the above two rewards (Eq. 9), where β is a harmonic weight that controls the trade-off between the two rewards. The goal of RL training is to minimize the negative expected reward, where ŷ^(k) is the k-th sampled sequence according to the probability distribution p in Eq. 6, r^(k) is the reward of ŷ^(k), and θ is the parameter of the proposed model. The policy gradient (Eq. 11) is estimated over K samples, where K is the sample size and b is the greedy search decoding baseline that aims to reduce the variance of the gradient estimate. Nevertheless, RL training strives to optimize a specific metric, which may not guarantee the fluency of the generated text. Therefore, the model is additionally trained with a cycle reconstruction loss (Eq. 12), where S refers to the Seq2SentiSeq model. Finally, we alternately update the model parameters θ based on Eq. 11 and Eq. 12. In this section, we introduce the dataset, experiment settings, baselines, and evaluation metrics.
We conduct experiments on the Yelp dataset We tune hyper-parameters on the validation set. The size of vocabulary is set to 10K. Both the semantic and sentiment embeddings are 300dimensional and are learned from scratch. We implement both encoder and decoder as a 1-layer LSTM with a hidden size of 256, and the former is bidirectional. The batch size is 64. We pre-train our model for 10 epochs with the MLE loss using pseudo-parallel sentences conducted by Jaccard Similarity, which is same as We compare our proposed method with the following two series of state-of-the-art systems. Fine-grained systems aim to modify an input sentence to satisfy a given sentiment intensity. Coarse-grained systems aim to reverse the sentiment polarity (positive/negative) of the input, which can be regarded as a special case where the sentiment intensity is set below average (negative) or above average (positive). We compare our proposed method with the following state-of-the-art systems: CrossAlign We adopt both automatic and human evaluation. Content: To evaluate the content preservation performance, we hired crowd-workers on Crowd-Flower Sentiment: In order to measure how close the sentiment intensity of outputs to the target intensity values, we define three metrics. Given an input sentence x and a list of target intensity values Moreover, for fine-grained text sentiment transfer task, we expect that given a higher sentiment intensity value, the model will generate a more positive sentence. That is to say, the relative intensity ranking of all generated sentences of the same input is also important. Inspired by the Mean Reciprocal Rank metric which is widely used in the Information Retrieval area, we design a Mean Relative Reciprocal Rank (MRRR) metric to measure the relative ranking In addition, we also compare our model with the coarse-grained sentiment transfer systems. In order to make the results comparable, we define the generated test samples of all baselines for reproducibility. sentiment intensity larger/smaller than 0.5 as positive/negative results. Then we use a pre-trained binary TextCNN classifier We also perform human evaluation to assess the quality of generated sentences more accurately. Each item contains the source input, the sampled target sentiment intensity value, and the output of different systems. Then 500 items are distributed to 3 evaluators, who are required to score the generated sentences from 1 to 5 based on the input and target sentiment intensity value in terms of three criteria: content, sentiment, fluency. Content evaluates the content preservation degree. Sentiment refers to how much the output matches the target sentiment intensity. Fluency is designed to measure whether the generated texts are fluent. For each metric, the average Pearson correlation coefficient of the scores given by three evaluators is greater than 0.71, which ensures the interevaluator agreement. The automatic evaluation and human evaluation results are shown in Table In this section, we further discuss the impacts of the components of the proposed model. We retrain our model by ablating multiple components of our model: without pre-training, without cycle reconstruction (Eq. 12), without reinforcement learning the beer isn't bad, but the food was less than desirable. V=0.1 the beer is terrible, and the food was the worst. V=0.3 the beer wasn't bad, and the food wasn't great too. V=0.5 the food is ok, but not worth the drive to the strip. V=0.7 the beer is good, and the food is great. 
V=0.9 the wine is great, and the food is extremely fantastic. Revised-VAE + L extra V=0.1 n't no about about no when about that was when about V=0.3 the beer sucks , but the food is not typical time. V=0.5 the beer is cheap, but the food was salty and decor. V=0.7 i just because decent management salty were impersonal. V=0.9 n't that about was that when was about as when was Table We also conduct analysis to understand the sentiment representations of words introduced in our model. We use the 1000 most frequent words from the training dataset. Then, we use a human annotated sentiment lexicon Recently, there is a growing literature on the task of unsupervised sentiment transfer. This task aims to reverse the sentiment polarity of a sentence but keep its content unchanged without parallel data They then propose a model based on Variational Autoencoder (VAE) to first disentangle the content factor and source sentiment factor, and then combine the content with target sentiment factor. However, the quality of the pseudo-parallel data is not quite satisfactory, which seriously affects the performance of the VAE model. Different from them, we dynamically update the pseudo-parallel data via on-the-fly back-translation (1) Since sentiment is dependent on local context while specificity is independent of local context, there is a series of design in our model to take the local context (or previous generated words) s t into consideration (e.g., Eq. 1, Eq. 3). (2) Due to the lack of parallel data, we propose a cycle reinforcement learning algorithm to train the proposed model (Section 2.3). In this paper, we focus on solving the finegrained text sentiment transfer task, which is a natural extension of the binary sentiment transfer task but with more challenges. We propose a Seq2SentiSeq model to achieve the aim of controlling the fine-grained sentiment intensity of the generated sentence. In order to train the proposed model without any parallel data, we design a cycle reinforcement learning algorithm. We apply the proposed approach to the Yelp review dataset, obtaining state-of-the-art results in both automatic evaluation and human evaluation.
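A compact sketch of the CycleRL reward computation described above: a sentiment reward from the pre-trained scorer, a content reward from the back-reconstruction probability, combined by a weighted harmonic mean. The 1 - |·| form of the sentiment reward and the F-beta-style weighting are our assumptions, not necessarily the paper's exact equations.

```python
# Sketch (assumed reward forms) of the two CycleRL rewards and their
# harmonic-mean combination used to guide policy-gradient training.
import math

def sentiment_reward(predicted_intensity: float, v_y: float) -> float:
    # higher when the scorer's intensity for y_hat is close to the target
    return 1.0 - abs(predicted_intensity - v_y)

def content_reward(log_p_reconstruct: float) -> float:
    # probability of reconstructing x from (y_hat, v_x)
    return math.exp(log_p_reconstruct)

def overall_reward(r_s: float, r_c: float, beta: float) -> float:
    # weighted harmonic mean balancing sentiment transformation and content
    return (1 + beta ** 2) * r_s * r_c / (beta ** 2 * r_s + r_c + 1e-12)
```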
| 466 | 2,563 | 466 |
Word Error Rate Estimation for Speech Recognition: e-WER
|
Measuring the performance of automatic speech recognition (ASR) systems requires manually transcribed data in order to compute the word error rate (WER), which is often time-consuming and expensive. In this paper, we propose a novel approach to estimate WER, or e-WER, which does not require a gold-standard transcription of the test set. Our e-WER framework uses a comprehensive set of features: ASR recognised text, character recognition results to complement recognition output, and internal decoder features. We report results for two feature sets, black-box and glass-box, on 24 unseen Arabic broadcast programs. Our system achieves 16.9% WER root mean squared error (RMSE) across 1,400 sentences. The estimated overall WER (e-WER) was 25.3% for the three-hour test set, while the actual WER was 28.5%.
|
Automatic Speech Recognition (ASR) has made rapid progress in recent years, primarily due to advances in deep learning and powerful computing platforms. As a result, the quality of ASR has improved dramatically, leading to various applications, such as speech-to-speech translation, personal assistants, and broadcast media monitoring. Despite this progress, ASR performance is still closely tied to how well the acoustic model (AM) and language model (LM) training data matches the test conditions. Thus, it is important to be able to estimate the accuracy of an ASR system in a particular target environment. Word Error Rate (WER) is the standard approach to evaluate the performance of a large vocabulary continuous speech recognition (LVCSR) system. The word sequence hypothesised by the ASR system is aligned with a reference transcription, and the number of errors is computed as the sum of substitutions (S), insertions (I), and deletions (D). If there are N total words in the reference transcription, then the word error rate is computed as WER = (S + D + I) / N, usually reported as a percentage. To obtain a reliable estimate of the WER, at least two hours of test data are required for a typical LVCSR system. In order to perform the alignment, the test data needs to be manually transcribed at the word level, a time-consuming and expensive process. It is, thus, of interest to develop techniques which can estimate the quality of an automatically generated transcription without requiring a gold-standard reference. Such quality estimation techniques have been extensively investigated for machine translation. Seigel and Woodland (2014) studied the detection of deletions in ASR output using a conditional random field (CRF) sequence model to detect one or more deleted word regions in ASR output. In this paper, we build on these contributions to develop a system to directly estimate the WER of an ASR output hypothesis. Our contributions are: (i) a novel approach to estimate WER per sentence and to aggregate the estimates to provide WER estimation per recording or for a whole test set; (ii) an evaluation of our approach which compares the use of "black-box" features (without ASR decoder information) and "glass-box" features which use internal information from the decoder; and (iii) a release of the code and the data used for this paper for further research.
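The WER definition above can be computed directly from a Levenshtein alignment of hypothesis and reference. A minimal sketch, not the paper's scoring tool:

```python
# Minimal WER: edit distance between reference and hypothesis word sequences,
# divided by the number of reference words N, i.e. (S + D + I) / N.

def wer(reference: list[str], hypothesis: list[str]) -> float:
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                      # deletions
    for j in range(m + 1):
        d[0][j] = j                      # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[n][m] / max(n, 1)

print(wer("the cat sat".split(), "the cat sat down".split()))  # 1 insertion / 3 words
```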
|
Estimating the probability of error of each word in a recognised word sequence has been successfully used to detect insertions, substitutions, and inter-word deletions. In our framework, we use two speech recognition systems: a word-based LVCSR system and a grapheme-sequence based system. Following prior work, our model is required to predict two values for each utterance: ERR and N. Given that each is integer-valued, we decided to frame their estimation as a classification task rather than a regression problem, as shown in equations 3 and 4, which simply take the argmax over the class posteriors for ERR and N respectively. Each class represents a specific word count. We limit the total number of classes to a maximum of C in ERR, with a range from 0 to C. However, the total number of classes for N is C - K to avoid estimating an utterance length of zero, with a range from K to C. If an utterance has more than C words or fewer than K words, it will thus be penalised by the loss function. To estimate e-WER, we combine features from the word-based LVCSR system with features from the grapheme-based system. By running both word-based and character-based ASR systems, we are able to align their outputs against each other. We split the studied features into four groups:
• L: lexical features - the word sequence extracted from the LVCSR.
• G: grapheme features - character sequence extracted from the grapheme recognition.
• N: numerical features - basic features about the speech signal, as well as grapheme alignment error details.
• D: decoder features - total frame count, average log-likelihood, total acoustic model likelihood and total language model likelihood.
Similar to previous research in ASR quality estimation, we refer to {L,G,N} as the black-box features, and {L,G,N,D} as the glass-box features, which are used to estimate the total number of words N and the total number of errors ERR in a given sentence. We deployed a feed-forward neural network as a backend classifier for e-WER. The deployed network in this work has two fully-connected hidden layers (ReLU activation function), with 128 neurons in the first layer and 64 neurons in the second layer, followed by a softmax layer. A minibatch size of 32 was used, and the number of epochs was up to 50 with an early stopping criterion. The e-WER training and development data sets are the same as the Arabic MGB-2 development and evaluation sets. We trained two DNN systems to estimate N and ERR separately. We explored training both a black-box based DNN system (without the decoder features) and a glass-box system using the decoder features. Overall, four systems were trained: two glass-box systems and two black-box systems. We used the same hyper-parameters across the four systems. This paper presents our efforts in predicting speech recognition word error rate without requiring a gold-standard reference transcription. We presented a DNN based classifier to predict the total number of errors per utterance and the total word count separately. Our approach benefits from combining word-based and grapheme-based ASR results for the same sentence, along with extracted decoder features. We evaluated our approach per sentence and per program. Our experiments have shown that this approach is highly promising for estimating WER per sentence, and we have aggregated the estimated results to predict WER for complete recordings, programs or test sets without the need for a reference transcription.
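A minimal sketch of the aggregation step described above: once the classifiers have produced an estimated error count ERR_i and word count N_i per utterance, the natural estimate of the overall WER for a recording or test set is the ratio of their sums. Variable names are ours.

```python
# Sketch: aggregate per-utterance (ERR_i, N_i) predictions into an e-WER
# estimate for a whole programme or test set.

def e_wer(predicted_err: list[int], predicted_n: list[int]) -> float:
    total_err = sum(predicted_err)
    total_n = max(sum(predicted_n), 1)
    return 100.0 * total_err / total_n

# e.g. three utterances with predicted (ERR, N) pairs
print(e_wer([2, 0, 5], [10, 7, 12]))   # ~24.1% estimated WER
```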
For our future work, we shall continue our investigation into approaches that can estimate the word error rate using convolutional neural networks. In particular, we would like to explore combining the DNN numerical features with the CNN word embedding features.
| 808 | 2,335 | 808 |
Inferring semantic roles using sub-categorization frames and maximum entropy model
|
In this paper, we propose an approach for inferring semantic roles using sub-categorization frames and a maximum entropy model. Our approach aims to use the sub-categorization information of the verb to label the mandatory arguments of the verb in the various possible ways. The ambiguity in the assignment of roles to the mandatory arguments is resolved using the maximum entropy model. The unlabelled mandatory arguments and the optional arguments are labelled directly using the maximum entropy model such that their labels are not among the frame elements of the sub-categorization frame used. The maximum entropy model is preferred because of its novel approach to smoothing. Using this approach, we obtained an F-measure of 68.14% on the development set of the data provided for the CONLL-2005 shared task. We show that this approach performs well in comparison to an approach which uses only the maximum entropy model.
|
Semantic role labelling is the task of assigning appropriate semantic roles to the arguments of a verb. The semantic role information is important for various applications in NLP such as Machine Translation, Question Answering, Informa-tion Extraction etc. In general, semantic role information is useful for sentence understanding. We submitted our system for closed challenge at CONLL-2005 shared task. This task encourages participants to use novel machine learning techniques suited to the task of semantic role labelling. Previous approaches on semantic role labelling can be classified into three categories (1) Explicit Probabilistic methods Our approach has two stages; first, identification whether the argument is mandatory or optional and second, the classification or labelling of the arguments. In the first stage, the arguments of a verb are put into three classes, (1) mandatory, (2) optional or (3) null. Null stands for the fact that the constituent of the verb in the sentence is not an semantic argument of the verb. It is used to rule out the false argument of the verb which were obtained using the parser. The maximum entropy based classifier is used to classify the arguments into one of the above three labels. After obtaining information about the nature of the non-null arguments, we proceed in the second stage to classify the mandatory and optional arguments into their semantic roles. The propbank sub-categorization frames are used to assign roles to the mandatory arguments. For example, in the sentence "John saw a tree", the sub-categorization frame "A0 v A1" would assign the roles A0 to John and A1 to tree respectively. After using all the sub-categorization frames of the verb irre-spective of the verb sense, there could be ambiguity in the assignment of semantic roles to mandatory arguments. The unlabelled mandatory arguments and the optional arguments are assigned the most probable semantic role which is not one of the frame elements of the sub-categorization frame using the maximum entropy model. Now, among all the sequences of roles assigned to the non-null arguments, the sequence which has the maximum joint probability is chosen. We obtained an accuracy of 68.14% using our approach. We also show that our approach performs better in comparision to an approach with uses a simple maximum entropy model. In section 4, we will talk about our approach in greater detail. This paper is organised as follows, (2) Features, (3) Maximum entropy model, (4) Description of our system, (5) Results, (6) Comparison with our other experiments, (7) Conclusion and (8) Future work.
|
The following are the features used to train the maximum entropy classifier for both argument identification and argument classification. We used only simple features for these experiments; we are planning to use richer features in the near future.
1. Verb/Predicate.
6. The path of the constituent to the verb phrase.
7. Preposition of the constituent, NULL if it doesn't exist.
The maximum entropy approach became the preferred approach of probabilistic model builders for its flexibility and its novel approach to smoothing. Many classification tasks are most naturally handled by representing the instance to be classified as a vector of features. We represent features as binary functions of two arguments, f(a, H), where 'a' is the observation or the class and 'H' is the history. For example, a feature f_i(a, H) is true if 'a' is Ram and 'H' is 'AGENT of a verb'. In a log linear model, the probability function P(a|H) with a set of features f_1, f_2, ..., f_j that connect 'a' to the history 'H' takes the following form:
P(a|H) = (1 / Z(H)) exp( Σ_i λ_i f_i(a, H) )
Here the λ_i's are weights between negative and positive infinity that indicate the relative importance of a feature: the more relevant the feature to the value of the probability, the higher the absolute value of the associated lambda. Z(H), called the partition function, is the normalizing constant (for a fixed H). Our approach labels the semantic roles in two stages, (1) argument identification and (2) argument classification. As input to our system, we use full syntactic information. The first task in this stage is to find the candidate arguments and their boundaries using a parser. We use the Collins parser to infer a list of candidate arguments for every predicate. The following are some of the sub-stages in this task.
• Convert the CFG tree given by the Collins parser to a dependency tree.
• Eliminate auxiliary verbs etc.
• Mark the head of a relative clause as an argument of the verb.
• If a verb is modified by another verb, the syntactic arguments of the superior verb are considered as shared arguments between both the verbs.
• If a prepositional phrase attached to a verb contains more than one noun phrase, attach the second noun phrase to the verb.
The second task is to filter out the constituents which are not really the arguments of the predicate. Given our approach towards argument classification, we also need information about whether an argument is mandatory or optional. Hence, in this stage the constituents are marked using three labels, (1) MANDATORY argument, (2) OPTIONAL argument and (3) NULL, using a maximum entropy classifier. For example, in the sentence "John was playing football in the evening", "John" is marked MANDATORY, "football" is marked MANDATORY and "in the evening" is marked OPTIONAL. For training, the Collins parser is run on the training data and the syntactic arguments are identified. Among these arguments, the ones which do not exist in the propbank annotation of the training data are marked as null. Among the remaining arguments, the arguments are marked as mandatory or optional according to the propbank frame information. Mandatory roles are those appearing in the propbank frames of the verb and its sense; the rest are marked as optional. A propbank frame contains information as illustrated by the following example: if Verb = play and sense = 01, then the roles A0 and A1 are MANDATORY. Argument classification is done in two steps.
In the first step, the propbank sub-categorization frames are used to assign semantic roles to the mandatory arguments in the order specified by the sub-categorization frames. Sometimes, the number of roles which can be assigned by the sub-categorization frame may be less than the number of mandatory arguments of the verb in the sentence. For example, in the sentence "MAN1 MAN2 V MAN3 OPT1", roles could be assigned in the following two possible ways by the sub-categorization frame "A0 v A1" of verb V. In the second step, the task is to label the unlabelled mandatory arguments and the arguments which are marked as optional. This is done by marking these arguments with the most probable semantic role which is not one of the frame elements of the sub-categorization frame "A0 v A1". In the above example, the unlabelled mandatory arguments and the optional arguments cannot be labelled as either A0 or A1. Hence, after this step, the following might be the role labelling for the sentence "MAN1 MAN2 V MAN3 OPT1". The best possible sequence of semantic roles, R̂, is decided by taking the product of the probabilities of the individual assignments. This also resolves the ambiguity in the assignment of mandatory roles. The individual probabilities are computed using the maximum entropy model. For a sequence R of roles assigned to the non-null arguments, the product of the probabilities is defined as P(R) = Π_i P(r_i). The best sequence of semantic roles is then defined as R̂ = argmax_R P(R). For training the maximum entropy model, the outcomes are all the possible semantic roles. The list of sub-categorization frames for a verb is obtained from the training data using information about mandatory roles from the propbank. The propbank sub-categorization frames are also appended to this list. We present our results in the next section. The results of our approach are presented in the results table. In this paper, we propose an approach for inferring semantic roles using sub-categorization frames and a maximum entropy model. Using this approach, we obtained an F-measure of 68.14% on the development set of the data provided for the CONLL-2005 shared task. We have observed that the main limitation of our system was in argument identification. Currently, the recall of the arguments inferred from the output of the parser is 75.52%, which makes it the upper bound of the recall of our system. In the near future, we would focus on increasing this upper bound of recall. In this direction, we would also use partial syntactic information. The accuracy of the first stage of our approach would also increase if we include the mandatory/optional information for training the parser.
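A small Python sketch of the two-step argument classification described above: frame roles are assigned, in order, to a subset of the mandatory arguments, every remaining argument receives its best non-frame role, and the sequence with the highest product of probabilities is returned. The role inventory and the probability values are toy assumptions standing in for the maximum entropy estimates.

```python
from itertools import combinations

def best_role_sequence(args, frame_roles, prob):
    """Pick the role sequence with maximal product of per-argument
    probabilities.  `args` is a list of (index, kind) with kind in
    {"MAN", "OPT"}; `prob[i][role]` approximates P(role | arg_i)."""
    mandatory = [i for i, kind in args if kind == "MAN"]
    all_roles = set(next(iter(prob.values())))
    non_frame = all_roles - set(frame_roles)

    best, best_p = None, -1.0
    # choose which mandatory arguments receive the frame roles (order kept)
    for chosen in combinations(mandatory, min(len(frame_roles), len(mandatory))):
        labels, p = {}, 1.0
        for i, role in zip(chosen, frame_roles):
            labels[i] = role
            p *= prob[i][role]
        for i, _ in args:
            if i not in labels:
                role = max(non_frame, key=lambda r: prob[i][r])
                labels[i] = role
                p *= prob[i][role]
        if p > best_p:
            best, best_p = labels, p
    return best, best_p

# "MAN1 MAN2 V MAN3 OPT1" with the sub-categorization frame "A0 v A1"
args = [(0, "MAN"), (1, "MAN"), (2, "MAN"), (3, "OPT")]
prob = {0: {"A0": .6, "A1": .1, "AM-TMP": .3},
        1: {"A0": .2, "A1": .5, "AM-TMP": .3},
        2: {"A0": .1, "A1": .6, "AM-TMP": .3},
        3: {"A0": .1, "A1": .1, "AM-TMP": .8}}
print(best_role_sequence(args, ["A0", "A1"], prob))
```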
| 918 | 2,618 | 918 |
TBL-Improved Non-Deterministic Segmentation and POS Tagging for a Chinese Parser
|
Although a lot of progress has been made recently in word segmentation and POS tagging for Chinese, the output of current state-of-the-art systems is too inaccurate to allow for syntactic analysis based on it. We present an experiment in improving the output of an off-the-shelf module that performs segmentation and tagging, the tokenizer-tagger from Beijing University (PKU). Our approach is based on transformation-based learning (TBL). Unlike in other TBL-based approaches to the problem, however, both obligatory and optional transformation rules are learned, so that the final system can output multiple segmentation and POS tagging analyses for a given input. By allowing for a small amount of ambiguity in the output of the tokenizer-tagger, we achieve a very considerable improvement in accuracy. Compared to the PKU tokenizer-tagger, we improve segmentation F-score from 94.18% to 96.74%, tagged word F-score from 84.63% to 92.44%, segmented sentence accuracy from 47.15% to 65.06% and tagged sentence accuracy from 14.07% to 31.47%.
|
Word segmentation and tagging are the necessary initial steps for almost any language processing system, and Chinese parsers are no exception. However, automatic Chinese word segmentation and tagging has been recognized as a very difficult task Second, in addition to the two problems described above, segmentation and tagging also suffer from the fact that the notion of a word is very unclear in Chinese Consequently, automatic segmentation and tagging in Chinese faces a serious challenge from prevalent ambiguities. For example (2) a. 白/a 花/n bái huā white flower b. 白/d 花/v bái huā in vain spend 'spend (money, time, energy etc.) in vain' Even Chinese speakers cannot resolve such ambiguities without using further information from a bigger context, which suggests that resolving segmentation and tagging ambiguities probably should not be a task or goal at the word level. Instead, we should preserve such ambiguities in this level and leave them to be resolved in a later stage, when more information is available. To summarize, the word as a notion and hence word boundaries are very unclear; segmentation and tagging are prevalently ambiguous in Chinese. These facts suggest that Chinese segmentation and part-of-speech identification are probably inherently non-deterministic at the word level. However most of the current segmentation and/or tagging systems output a single result. While a deterministic approach to Chinese segmentation and POS tagging might be appropriate and necessary for certain tasks or applications, it has been shown to suffer from a problem of low accuracy. As pointed out by Yu The system for which we improved the output of the Beijing tokenizer-tagger is a hand-crafted Chinese grammar. For such a system, as probably for any parsing system that presupposes segmented (and tagged) input, the accuracy of the segmentation and POS tagging analyses is critical. However, as described in detail in the following section, even current state-of-art systems cannot provide satisfactory results for our application. Based on the experiments presented in section 3, we believe that a proper amount of non-deterministic results can significantly improve the Chinese segmentation and tagging accuracy, which in turn improves the performance of the grammar.
|
The improved tokenizer-tagger we developed is part of a larger system, namely a deep Chinese grammar (3) The output of the Chinese LFG consists of a Constituent Structure (c-structure) and a Functional Structure (f-structure) for each sentence. While c-structure represents phrasal structure and linear word order, f-structure represents various functional relations between parts of sentences. For example, (4) and ( (4) c-structure of (3) (5) f-structure of (3) To parse a sentence, the Chinese LFG minimally requires three components: a tokenizertagger, a lexicon, and syntactic rules. The tokenizer-tagger that is currently used in the grammar is developed by Beijing University (PKU) 3 and is incorporated as a library transducer Because the grammar's syntactic rules are applied based upon the results produced by the tokenizer-tagger, the performance of the latter is 2 ASP stands for aspect marker. 3 This simple test shows that in order for the deep Chinese grammar to be practically useful, the performance of the tokenizer-tagger must be improved. One way to improve the segmentation and tagging accuracy is to allow non-deterministic segmentation and tagging for Chinese for the reasons stated in Section 1. Therefore, our goal is to find a way to transform PKU's tokenizertagger into a system that produces a proper amount of non-deterministic segmentation and tagging results, one that can significantly improve the system's accuracy without a substantial sacrifice in terms of efficiency. Our approach is described in the following section. For grammars of other languages implemented on the XLE grammar development platform, the input is usually preprocessed by a cascade of generally non-deterministic finite state transducers that perform tokenization, morphological analysis etc. Since word segmentation and POS tagging are such hard problems in Chinese, this traditional setup is not an option for the Chinese grammar. However, finite state rules seem a quite natural approach to improving in XLE the output of a sep-arate segmentation and POS tagging module like PKU's tokenizer-tagger. Although the grammar developer had identified PKU's tokenizer-tagger as the most suitable for the preprocessing of Chinese raw text that is to be parsed with the Chinese LFG, she noticed in the process of development that (i) certain segmentation and/or tagging decisions taken by the tokenizer-tagger systematically go counter her morphosyntactic judgment and that (ii) the tokenizer-tagger (as any software of its kind) makes mistakes. She therefore decided to develop a set of finite-state rules that transform the output of the module; a set of mostly obligatory rewrite rules adapts the POS-tagged word sequence to the grammar's standard, and another set of mostly optional rules tries to offer alternative segment and tag sequences for sequences that are frequently processed erroneously by PKU's tokenizer-tagger. Given the absence of data segmented and tagged according to the standard the LFG grammar developer desired, the technique of hand-crafting FST rules to postprocess the output of PKU's tokenizer-tagger worked surprisingly well. Recall that based on the deterministic segmentation and tagging results produced by PKU's tokenizertagger, our system can only parse 80 out of the 101 sentences, and among the 21 completely failed sentences, 20 sentences failed due to segmentation and tagging mistakes. In contrast, after the application of the hand-crafted FST rules for postprocessing, 100 out of the 101 sentences can be parsed. 
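The effect of combining obligatory and optional rewrite rules can be illustrated with a small Python sketch. The rules below are toy stand-ins (plain string replacements rather than actual fst replace rules), reusing the 白花 ambiguity from example (2); the tags are illustrative rather than the grammar's actual tagset.

```python
def apply_rules(analysis, obligatory, optional):
    """Apply obligatory rewrite rules deterministically, then let each
    optional rule either fire or not, yielding a set of alternative
    segment/tag sequences."""
    for old, new in obligatory:
        analysis = analysis.replace(old, new)
    results = {analysis}
    for old, new in optional:
        results |= {a.replace(old, new) for a in results if old in a}
    return sorted(results)

# toy tagged sequence and rules
tagged = "白/a 花/n 了/u"
obligatory = [("了/u", "了/ASP")]            # normalise to the grammar's standard
optional = [("白/a 花/n", "白/d 花/v")]       # offer the alternative reading of (2)
for alt in apply_rules(tagged, obligatory, optional):
    print(alt)
```

The obligatory rule rewrites deterministically, while the optional rule branches the analysis, so the output keeps both readings for the parser to disambiguate later.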
However, this approach involved a lot of manual development work (about 3-4 person months) and has reached a stage where it is difficult to systematically work on further improvements. Since there are large amounts of training data that are close to the segmentation and tagging standard the grammar developer wants to use, the idea of inducing FST rules rather than hand-crafting them comes quite naturally. The easiest way to do this is to apply transformation-based learning (TBL) to the combined problem of Chinese segmentation and POS tagging, since the cascade of transformational rules learned in a TBL training run can straightforwardly be translated into a cascade of FST rules. TBL is a machine learning approach that has been employed to solve a number of problems in natural language processing; most famously, it has been used for part-of-speech tagging The first attempts to employ TBL to solve the problem of Chinese word segmentation go back to Several implementations of the TBL approach are freely available on the web, the most wellknown being the so-called Brill tagger, fnTBL, which allows for multi-dimensional TBL, and µ-TBL We started out with a corpus of thirty goldsegmented and -tagged daily editions of the Xinhua Daily, which were provided by the Institute of Computational Linguistics at Beijing University. Three daily editions, which comprise 5,054 sentences with 129,377 words and 213,936 characters, were set aside for testing purposes; the remaining 27 editions were used for training. With the idea of learning both obligatory and optional transformational rules in mind, we then split the training data into two roughly equally sized subsets. All the data were broken into sentences using a very simple method: The end of a paragraph was always considered a sentence boundary. Within paragraphs, sentence-final punctuation marks such as periods (which are unambiguous in Chinese), question marks and exclamation marks, potentially followed by a closing parenthesis, bracket or quote mark, were considered sentence boundaries. We then had to come up with a way of casting the problem of combined segmentation and POS tagging as a TBL problem. Following a strategy widely used in Chinese word segmentation, we did this by regarding the problem as a character tagging problem. However, since we intended to learn rules that deal with segmentation and POS tagging simultaneously, we could not adopt the BIO-coding approach. The character tagging scheme that we finally chose is illustrated in (6), where a. and b. show the character tags that we used for the analyses in (1a) and (1b) respectively. The scheme consists in tagging the last character of a word with the part-ofspeech of the entire word; all non-final characters are tagged with '-'. The main advantages of this character tagging scheme are that it expresses both word boundaries and parts-of-speech and that, at the same time, it is always consistent; inconsistencies between BIO tags indicating word boundaries and part-of-speech tags, which Both of the training data subsets were tagged according to our character tagging scheme and converted to the data format expected by µ-TBL. The first training data subset was used for learning obligatory resegmentation and retagging rules. The corresponding rule templates, which define the space of possible rules to be explored, are given in Figure Once the obligatory rules had been learned on the first training data subset, they were applied to the second training data subset. 
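As a concrete illustration of the character tagging scheme in (6), here is a minimal Python sketch: the last character of each word carries the word's POS tag, all non-final characters get '-'. The example words and tags are illustrative only.

```python
def to_char_tags(tagged_words):
    """Convert (word, POS) pairs into the character tagging scheme used
    for TBL training."""
    char_tags = []
    for word, pos in tagged_words:
        for i, ch in enumerate(word):
            char_tags.append((ch, pos if i == len(word) - 1 else "-"))
    return char_tags

def from_char_tags(char_tags):
    """Invert the scheme back into (word, POS) pairs."""
    words, buf = [], ""
    for ch, tag in char_tags:
        buf += ch
        if tag != "-":          # word boundary: final character carries the POS
            words.append((buf, tag))
            buf = ""
    return words

sent = [("北京", "ns"), ("大学", "n")]     # toy example; tags are illustrative
print(to_char_tags(sent))                  # [('北', '-'), ('京', 'ns'), ('大', '-'), ('学', 'n')]
print(from_char_tags(to_char_tags(sent)))
```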
Then, optional rules were learned on this second training data subset. The rule templates used for optional rules are very similar to the ones used for obligatory rules; a few templates of optional rules are given in Figure Finally, the rule sets learned were converted into the fst (Beesley and Karttunen, 2003) notation for transformational rules, so that they could be tested and used in the FST cascade used for preprocessing the input of the Chinese LFG. For evaluation, the converted rules were applied to our test data set of 5,054 sentences. A few example rules learned by µ-TBL with the set-up described above are given in Figure The results achieved by PKU's tokenizer-tagger on its own and in combination with the transformational rules learned in our experiments are given in Table These results show that simply switching from the one-tag mode of PKU's tokenizer-tagger to its all-tags mode is not a solution. First of all, since the tokenizer-tagger always produces only one segmentation regardless of the mode it is used in, segmentation accuracy would stay completely unaffected by this change, which is particularly serious because there is no way for the grammar to recover from segmentation errors and the tokenizertagger produces an entirely correct segmentation only for 47.15% of the sentences. Second, the improved tagging accuracy would come at a very heavy price in terms of ambiguity; the median number of combined segmentation and POS tagging analyses per sentence would be 1,440. In contrast, machine-learned transformation rules are an effective means to improve the output of PKU's tokenizer-tagger. Applying only the obligatory rules that were learned already improves segmented sentence accuracy from 47.15% to 63.14% and tagged sentence accuracy from 14.07% to 27.21%, and this at no cost in terms of ambiguity. Adding the optional rules that were learned and hence making the rule set used for post-processing the output of PKU's tokenizertagger non-deterministic makes it possible to improve segmented sentence accuracy and tagged sentence accuracy further to 65.06% and 31.47% respectively, i.e. tagged sentence accuracy is more than doubled with respect to the baseline. While this last improvement does come at a price in terms of ambiguity, the ambiguity resulting from the application of the non-deterministic rule set is very low in comparison to the ambiguity of the output of PKU's tokenizer-tagger in all-tags mode; the median number of analyses per sentences only increases to 2. Finally, it should be noted that the transformational rules provide entirely correct segmentation and POS tagging analyses not only for more sentences, but also for longer sentences. They increase the average length of a correctly segmented sentence from 18.22 words to 21.94 words and the average length of a correctly segmented and POS-tagged sentence from 9.58 words to 16.33 words. Comparing our results to other results in the literature is not an easy task because segmentation and POS tagging standards vary, and our test data have not been used for a final evaluation before. Nevertheless, there are of course systems that perform word segmentation and POS tagging for Chinese and have been evaluated on data similar to our test data. Published results also vary as to the evaluation measures used, in particular when it comes to combined word segmentation and POS tagging. For word segmentation considered separately, the consensus is to use the (segmentation) F-score (SF). 
The quality of systems that perform both segmentation and POS tagging is often expressed in terms of (character) tag accuracy (TA), but this obviously depends on the character tagging scheme adopted. An alternative measure is POS tagging F-score (TF), which is the geometric mean of precision and recall of correctly segmented and POS-tagged words. Evaluation measures for the sentence level have not been given in any publication that we are aware of, probably because segmenters and POS taggers are rarely considered as pre-processing modules for parsers, but also because the figures for measures like sentence accuracy are strikingly low. For systems that perform only word segmentation, we find the following results in the literature: For systems that perform both word segmentation and POS tagging, the following results were published: Last but not least, there are parsers that operate on characters rather than words and who perform segmentation and POS tagging as part of the parsing process. Among these, we would like to mention The idea of carrying some ambiguity from one processing step into the next in order not to prune good solutions is not new. E.g., As to future work, we hope to resolve the problem of not having a gold standard that is segmented and tagged exactly according to the guidelines established by the Chinese LFG developer by semi-automatically applying the hand-crafted transformational rules that were developed to the PKU gold standard. We will then induce obligatory and optional FST rules from this "grammarcompliant" gold standard and hope that these will be able to replace the hand-crafted transformation rules currently used in the grammar. Finally, we plan to carry out more training runs; in particular, we intend to experiment with lower accuracy (and score) thresholds for optional rules. The idea is to find the optimal balance between ambiguity, which can probably be higher than with our current set of induced rules without affecting efficiency too adversely, and accuracy, which still needs further improvement, as can easily be seen from the sentence accuracy figures.
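The evaluation measures discussed above can be computed by comparing character-offset spans, as in the following short Python sketch of segmentation F-score (SF) and tagged word F-score (TF); the toy gold and predicted analyses are made up for illustration.

```python
def spans(words):
    """Map a word sequence to character-offset spans (start, end)."""
    out, pos = [], 0
    for w in words:
        out.append((pos, pos + len(w)))
        pos += len(w)
    return out

def f_score(gold, pred):
    g, p = set(gold), set(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def seg_and_tag_f(gold_tagged, pred_tagged):
    """SF ignores tags; TF counts a word as correct only if both its
    boundaries and its POS tag match the gold analysis."""
    g_spans = spans([w for w, _ in gold_tagged])
    p_spans = spans([w for w, _ in pred_tagged])
    sf = f_score(g_spans, p_spans)
    tf = f_score([s + (t,) for s, (_, t) in zip(g_spans, gold_tagged)],
                 [s + (t,) for s, (_, t) in zip(p_spans, pred_tagged)])
    return sf, tf

gold = [("北京", "ns"), ("大学", "n")]
pred = [("北京", "ns"), ("大", "n"), ("学", "n")]
print(seg_and_tag_f(gold, pred))    # (0.4, 0.4) on this toy pair
```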
| 1,042 | 2,284 | 1,042 |
Knowing the No-match: Entity Alignment with Dangling Cases
|
This paper studies a new problem setting of entity alignment for knowledge graphs (KGs). Since KGs possess different sets of entities, there could be entities that cannot find alignment across them, leading to the problem of dangling entities. As the first attempt to this problem, we construct a new dataset and design a multi-task learning framework for both entity alignment and dangling entity detection. The framework can opt to abstain from predicting alignment for the detected dangling entities. We propose three techniques for dangling entity detection that are based on the distribution of nearest-neighbor distances, i.e., nearest neighbor classification, marginal ranking and background ranking. After detecting and removing dangling entities, an incorporated entity alignment model in our framework can provide more robust alignment for remaining entities. Comprehensive experiments and analyses demonstrate the effectiveness of our framework. We further discover that the dangling entity detection module can, in turn, improve alignment learning and the final performance. The contributed resource is publicly available to foster further research.
|
Knowledge graphs (KGs) have evolved to be the building blocks of many intelligent systems Nonetheless, to practically support the alignment of KGs as a real-world task, existing studies suffer one common problem of identifying entities without alignment across KGs (called dangling entities). Specifically, current methods are all built upon the assumption that any source entity has a counterpart in the target KG Towards more practical solutions of entity alignment for KGs, we provide a redefinition of the task with the incorporation of dangling cases ( §2.1), as the first contribution of this work. Given a source entity, our setting does not assume that it must have a counterpart in the target KG as what previous studies do. Instead, conducting entity alignment also involves identifying whether the counterpart of an entity actually exists in another KG. Hence, a system to tackle this realistic problem setting of entity alignment is also challenged by the requirement for justifying the validity of its prediction. To facilitate the research towards the new problem, the second contribution of this work is to construct a new dataset DBP2.0 for entity alignment with dangling cases ( §2.2). As being discussed, existing benchmarks for entity alignment, including DBP15K Although embedding-based entity alignment has been investigated for several years, handling with dangling entities has not been studied yet. As the third contribution, we present a multi-task learning framework for the proposed task ( §3). It consists of two jointly optimized modules for entity alignment and dangling entity detection, respectively. While the entity alignment module can basically incorporate any existing techniques from prior studies We conduct comprehensive experiments on the new DBP2.0 dataset, which demonstrate the proposed techniques to solve the dangling entity detection problem to different extents. Moreover, we observe that training the dangling detection model (marginal ranking) provides an effective indirect supervision that improves the detection of alignment for matchable entities. We hope our task, dataset and framework can foster further investigation of entity alignment techniques in the suggested real scenario, leading to more effective and practical solutions to this challenging but important problem.
|
We hereby describe the problem setting of our task and introduce the new dataset. A KG is a set of relational triples T ⊆ E × R × E, where E and R denote vocabularies of entities and relations, respectively. Without loss of generality, we consider entity alignment between two KGs, i.e., a source KG Given a small set of seed entity alignment A 12 = {(e 1 , e 2 ) ∈ E 1 × E 2 e 1 ≡ e 2 } along with a small set of source entities D ⊂ E 1 known to have no counterparts as training data, the task seeks to find the remaining entity alignment. Different from the conventional entity alignment setting As discussed, previous testbeds for entity alignment do not contain dangling entities Construction. The key challenge of building our dataset lies in that we need to ensure the selected dangling entities are indeed without counterparts. Specifcally, we cannot simply regard entities without ILLs as dangling ones, since the ILLs are also incomplete Statistics and evaluation. Tab. 1 lists the statistics our dataset. The three entity alignment settings have different data scales and each is much larger than the same setting in DBP15K, thus can benefit better scalability analysis of models. For dangling entity detection, we split 30% of dangling entities for training, 20% for validation and others for test-ing. The splits of reference alignment follow the same partition ratio, which is also consistent with that of DBP15K to simulate the weak alignment nature of KGs We propose a multi-task learning framework for entity alignment with dangling cases, as illustrated in Fig. Embedding-based entity alignment is first attempted in MTransE We propose three techniques to implement the dangling detection module based on the distribution of the nearest neighbor distance in embedding space. This technique is to train a binary classifier to distinguish between dangling entities (labeled 1, i.e., y = 1) and matchable ones (y = 0). Specifically, we experiment with a feed-forward network (FFN) classifier. Given a source entity x, its input feature representation is the difference vector between its embedding x and its transformed NN embedding x nn in the target KG embedding space where y x denotes the truth label for entity x. In a real-world entity alignment scenario, the dangling entities and matchable ones usually differ greatly in quantity, leading to unbalanced label distribution. In that case, we apply label weights Considering that dangling entities are the noises for finding entity alignment based on embedding distance, we are motivated to let dangling entities have solitary representations in the embedding space, i.e., they should keep a distance away from their surrounding embeddings. Hence, we seek to put a distance margin between dangling entities and their sampled NNs. For every input dangling entity x ∈ D, we minimize the following loss: where λ is a distance margin. This loss and the entity alignment loss (e.g., that of MTransE) conduct joint learning-to-rank, i.e., the distance between unaligned entities should be larger than that of aligned entities while dangling entities should have a lower ranking in the candidate list of any source entity. In the two aforementioned techniques, searching for the NN of an entity is time-consuming. Furthermore, selecting an appropriate value for the distance margin of the second technique is not trivial. Based on empirical studies, we find that the margin has a significant influence on the final performance. 
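A minimal PyTorch sketch of the two detection objectives described above: the nearest neighbour classifier over difference vectors trained with label-weighted cross-entropy, and the marginal ranking loss with a pre-defined margin. The network size, the margin value, and the use of Euclidean distance are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def nnc_loss(src_emb, nn_emb, labels, classifier, pos_weight):
    """NN classification: an FFN over the difference vector between a source
    embedding and its transformed nearest neighbour, with label weighting
    to handle the imbalance between dangling (1) and matchable (0) entities."""
    logits = classifier(src_emb - nn_emb).squeeze(-1)
    return F.binary_cross_entropy_with_logits(
        logits, labels.float(), pos_weight=pos_weight)

def marginal_ranking_loss(dangling_emb, nn_emb, margin=0.9):
    """Marginal ranking: push each dangling entity at least `margin` away
    from its nearest neighbour in the target embedding space."""
    dist = torch.norm(dangling_emb - nn_emb, dim=-1)
    return torch.relu(margin - dist).mean()

# toy usage with made-up dimensions
d = 32
classifier = torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.ReLU(),
                                 torch.nn.Linear(d, 1))
src, neighbor = torch.randn(8, d), torch.randn(8, d)
labels = torch.randint(0, 2, (8,))
print(nnc_loss(src, neighbor, labels, classifier, pos_weight=torch.tensor(3.0)))
print(marginal_ranking_loss(torch.randn(8, d), torch.randn(8, d)))
```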
Hence, we would like to find a more efficient and self-driven technique. Inspired by the open-set classification approach (Dhamija et al., 2018) that lets a classifier equally penalize the output logits for samples of classes that are unknown to training (i.e., background classes), we follow a similar principle and let the model equally enlarge the distance of a dangling entity from any sampled target-space entities. This method treats all dangling entities as the "background" of the embedding space, since they should be distant from matchable ones. We also decrease the scale of the dangling entity embeddings to further provide a separation between the embeddings of matchable and dangling entities. For a dangling entity x ∈ D, let X v x be the set of randomly-sampled target entities with size v. The loss is defined as follows, where |·| denotes the absolute value and α is a weight hyper-parameter for balance. λ x is the average distance between the transformed embedding of x and the sampled target entities. This objective can push the relatively close entities away from the source entity without requiring a pre-defined distance margin. The overall learning objective of the proposed framework is a combination of the entity alignment loss (e.g., MTransE's loss) and one of the dangling entity detection losses mentioned above. The two losses are optimized in alternate batches. More training details are presented in §4.1. Like the training phase, the inference phase is also separated into dangling entity detection and entity alignment. The way of inference for dangling entities differs depending on the employed technique. The NN classification uses the jointly trained FFN classifier to estimate whether the input entity is a dangling one. The marginal ranking takes the preset margin value used in training as a confidence threshold, and decides whether an entity is a dangling one based on whether its transformed NN distance is higher than the threshold. The inference of background ranking is similar to that of marginal ranking, with the only difference, by design, being that the confidence threshold is set as the average NN distance of entities in the target embedding space. After detecting dangling entities, the framework finds alignment for the remaining entities based on transformed NN search among the matchable entities in the embedding space of the target KG. Accelerated NN search. The first and second techniques need to search for NNs. We can use an efficient similarity search library such as Faiss. In this section, we report our experimental results. We start with describing the experimental setups ( §4.1). Next, we separately present the experimentation under two different evaluation settings ( §4.2- §4.3), followed by an analysis of the similarity score distribution of the obtained representations for matchable and dangling entities ( §4.4). To facilitate the use of the contributed dataset and software, we have incorporated these resources into the OpenEA benchmark. We consider two evaluation settings. One setting is for the proposed problem setting with dangling entities, which we refer to as the consolidated evaluation setting. We first detect and remove the dangling source entities and then search for alignment for the remaining entities. For this evaluation setting, we also separately assess the performance of the dangling detection module. The other, simplified setting follows that of previous studies. Evaluation Protocol. For the relaxed evaluation setting, given each source entity, the candidate counterpart list is selected via NN search in the embedding space.
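A small sketch of the accelerated NN search with Faiss mentioned above. The embedding dimension, the choice of a flat (exact) index, and the threshold value are illustrative; in practice approximate indexes can be used for larger KGs.

```python
import faiss
import numpy as np

# Toy embeddings: 1000 target-KG entities and 5 transformed source entities.
d = 64
target = np.random.rand(1000, d).astype("float32")
queries = np.random.rand(5, d).astype("float32")

index = faiss.IndexFlatL2(d)           # exact L2 search over the target space
index.add(target)
distances, ids = index.search(queries, 10)   # 10 nearest neighbours per query

# A source entity is flagged as dangling if its NN distance exceeds the
# chosen confidence threshold (the training margin, or the average distance
# for background ranking); the value below is purely illustrative.
threshold = 8.0
dangling = distances[:, 0] > threshold
print(ids[:, 0], dangling)
```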
The widely-used metrics on the ranking lists are Hits@k (k = 1, 10; H@k for short) and mean reciprocal rank (MRR). Higher H@k and MRR indicate better performance. For the consolidated setting, we report precision, recall and F1 for dangling entity detection. As for assessing the eventual performance of realistic entity alignment, since the dangling entity detection may not be perfect, it is inevitable for some dangling entities to be incorrectly sent to the entity alignment module for alignment, while some matchable ones may be wrongly excluded. In this case, H@k and MRR are not applicable for the consolidated entity alignment evaluation. Following a relevant evaluation setting for entity resolution in databases, we instead report precision, recall and F1 for the consolidated entity alignment evaluation. We first present the evaluation under the relaxed entity alignment setting based on Tab. 2. This setting only involves matchable source entities to test entity alignment, which is an ideal (but less realistic) scenario similar to prior studies. We also examine if jointly learning to detect dangling entities can indirectly improve alignment. As observed, MTransE, even without dangling detection, can achieve promising performance on DBP2.0. The results are even better than those reported on DBP15K in previous work. We now report the experiment on the more realistic consolidated evaluation setting. Tab. 3 gives the precision, recall and F1 results of dangling entity detection, and the final entity alignment performance is presented in Tab. 4. In addition, we also report the accuracy of dangling entity detection. We analyze the results from the following aspects. Dangling entity detection. Regardless of which alignment module is incorporated, NNC performs the worst (e.g., the low recall and accuracy around 0.5) among the dangling detection techniques, whereas BR generally performs the best. NNC determines whether an entity is dangling based on the difference vector of the entity embedding and its NN, instead of directly capturing the embedding distance, which is observed to be more important based on the results of the other two techniques. By directly pushing dangling entities away from their NNs in the embedding space, both MR and BR offer much better performance. Besides, BR outperforms MR in most cases. By carefully checking their prediction results and the actual distances of NNs, we find that the induced distance margin in BR better discriminates dangling entities from matchable ones than the pre-defined margin. Efficiency. We also compare the average epoch time of training the three dangling detection modules for MTransE. To illustrate how well the BR technique distinguishes between matchable and dangling entities, we plot the similarity score distributions of the obtained representations for matchable and dangling entities. We discuss two topics of relevant work. Learning with abstention is a fundamental machine learning problem, where the learner can opt to abstain from making a prediction when it does not have enough decisive confidence. To the best of our knowledge, our task, dataset, and the proposed dangling detection techniques are the first contribution to support learning with abstention for entity alignment and structured representation learning. In this paper, we propose and study a new entity alignment task with dangling cases. We construct a dataset to support the study of the proposed problem setting, and design a multi-task learning framework for both entity alignment and dangling entity detection.
Three types of dangling detection techniques are studied, which are based on nearest neighbor classification, marginal ranking, and background ranking. Comprehensive experiments demonstrate the effectiveness of the method, and provide insights to foster further investigation on this new problem. We further find that dangling entity detection can, in turn, effectively provide auxiliary supervision signals to improve the performance of entity alignment. For future work, we plan to extend the benchmarking on DBP2.0 with results from more base models of entity alignment as well as more abstention inference techniques. Another direction is extending our framework to support more prediction tasks with abstention, such as entity type inference. A Degree Distribution. (Figure omitted: degree distributions of the datasets.) For entity alignment, we experiment with MTransE. We select each hyper-parameter setting within a wide range of values as follows: • Learning rate: {0.0001, 0.0002, 0.0005, 0.001} • Embedding dimension: {64, 128, 256, 512} • Batch size: {4096, 8192, 10240, 20480, 102400} • # FNN layers: {1, 2, 3, 4} • # Random targets: {1, 10, 20, 30, 40, 50} • λ: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} D Recall@10 of Entity Alignment. (Figure omitted.)
| 1,161 | 2,330 | 1,161 |
Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning
|
In cross-lingual named entity recognition (NER), self-training is commonly used to bridge the linguistic gap by training on pseudolabeled target-language data. However, due to sub-optimal performance on target languages, the pseudo labels are often noisy and limit the overall performance. In this work, we aim to improve self-training for cross-lingual NER by combining representation learning and pseudo label refinement in one coherent framework. Our proposed method, namely ContProto mainly comprises two components: (1) contrastive self-training and (2) prototype-based pseudo-labeling. Our contrastive self-training facilitates span classification by separating clusters of different classes, and enhances crosslingual transferability by producing closelyaligned representations between the source and target language. Meanwhile, prototype-based pseudo-labeling effectively improves the accuracy of pseudo labels during training. We evaluate ContProto on multiple transfer pairs, and experimental results show our method brings in substantial improvements over current stateof-the-art methods. 1 * Ran Zhou is under the Joint Ph.D. Program between Alibaba and Nanyang Technological University.
|
Cross-lingual named entity recognition (NER) To optimize self-training for cross-lingual NER, several methods have been proposed to improve the quality of pseudo labels. One line of work focuses on selecting curated pseudo-labeled data for selftraining via reinforcement learning In this work, we take a different approach and propose ContProto as a novel self-training framework for cross-lingual NER. Unlike existing data selection methods, ContProto sufficiently leverages knowledge from all available unlabeled targetlanguage data. Compared with multi-teacher or multi-round self-training, our method improves pseudo label quality without training separate mod-els. Moreover, we explicitly align the representations of source and target languages to enhance the model's cross-lingual transferability. Specifically, ContProto comprises two key components, namely contrastive self-training and prototypebased pseudo-labeling. Firstly, we introduce a contrastive objective for cross-lingual NER selftraining. Whereas typical supervised contrastive learning It is noteworthy that our contrastive self-training and prototype-based pseudo-labeling are mutually beneficial. On one hand, entity clusters generated by contrastive learning make it easier to determine the closest prototype and update pseudo labels correctly. In turn, the model trained on the refined pseudo labels becomes more accurate when classifying unlabeled spans, and yields more reliable positive pairs for contrastive learning. Our contributions are summarized as follows: (1) The proposed ContProto shows competitive cross-lingual NER performance, establishing new state-of-the-art results on most of the evaluated cross-lingual transfer pairs (five out of six). (2) Our contrastive self-training produces well-separated clusters of representations for each class to facilitate classification, and also aligns the source and target language to achieve improved cross-lingual transferability. (3) Our prototype-based pseudolabeling effectively denoises pseudo-labeled data and greatly boosts the self-training performance.
|
Cross-lingual named entity recognition aims to train a NER model with labeled data in a source language, and evaluate it on test data in target languages. Following previous works Following Typically, self-training (or teacher-student learning) for cross-lingual NER first trains a teacher model M(θ t ) on the available source-language labeled dataset D src l using a cross-entropy loss: where N is the batch size, y c jk = 1 for the true label of span s jk and 0 otherwise. Given an unlabeled target-language sentence X ∈ D tgt ul , the teacher model then assigns soft pseudo label ŷjk = P θt (s jk ) ∈ R |C| to each span s jk ∈ X. The student model M(θ s ) will be trained on the pseudo-labeled target-language data as well, using a soft cross-entropy loss: The total objective for the student model in vanilla self-training is: 3 Methodology In this section, we present our self-training framework ContProto for cross-lingual NER. As shown in the right part of Figure (2) prototypebased pseudo-labeling (Section 3.2) which gradually improves pseudo label quality with prototype learning. In the following section, we first describe supervised contrastive learning for span-based NER, which focuses on source-language representations. Then, we introduce our pseudo-positive pairs, by which we aim to improve target-language representations as well. Supervised contrastive learning We extend SupCon , where y i is the true label of s i and m = X |S(X)| is the total number of spans in the original batch of sentences. Then, the supervised contrastive loss is defined as follows: where A(i) ≡ {1, 2, ..., 2m} \ {i}, and P (i) ≡ {p ∈ A(i) : y i = y p } are indices of the positive sample set consisting of spans sharing the same label as s i . Essentially, supervised contrastive learning helps to pull source-language entities of the same class together while pushing clusters of different classes apart, which induces a clustering effect and thereby benefits classification. Pseudo-positive pairs As the aforementioned positive pair only involve source-language spans, it does not explicitly optimize target-language representations or promote cross-lingual alignment. Therefore, we propose to construct pseudo-positive pairs which take target-language spans into account as well. Concretely, we expand the multi-viewed span set {s i , y i , ζ i } 2m i=1 by adding in unlabeled targetlanguage spans, where m denotes the total number of spans from the source-and target-language sentences. For a source-language span, y i is still its gold label y gold i . However, as gold annotations are not available for target-language spans, we instead treat the model's prediction at the current training step as an approximation for its label y i : Likewise, we construct positive pairs from entities with the same (gold or approximated) label. As an example, positive pairs for the PER (person) class might be composed of: (1) two source-language PER names; (2) one source-language PER name and one target-language span predicted as PER; (3) two target-language spans predicted as PER. Therefore, apart from separating clusters of different classes, our contrastive self-training also explicitly enforces the alignment between languages, which facilitates cross-lingual transfer. We also include a consistency regularization term Recall that each sentence is passed twice through the NER model, and each span s i yields two probability distributions P θ (s i ), P ′ θ (s i ) that are not exactly identical due to random dropout. 
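A minimal PyTorch sketch of the (pseudo-)supervised contrastive objective described above, computed over a multi-viewed batch of span representations in which spans sharing a gold or predicted label form positive pairs. The temperature value, the L2 normalisation of representations, and the toy labels are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def supcon_loss(reps, labels, temperature=0.1):
    """Supervised contrastive loss: pull spans with the same label together
    and push spans with different labels apart within the batch."""
    z = F.normalize(reps, dim=-1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))     # exclude self from A(i)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()                 # anchors with >=1 positive

# toy batch: 6 span representations with (gold or pseudo) labels
reps = torch.randn(6, 128)
labels = torch.tensor([1, 1, 2, 2, 0, 0])               # e.g. PER, PER, LOC, LOC, O, O
print(supcon_loss(reps, labels))
```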
Therefore, we enforce the model to output consistent predictions by minimizing the following KL divergence: Finally, the total objective for ContProto is: Benefiting from our contrastive self-training in Section 3.1, entity representations (both source-and target-language) of the same class are tightly clustered together. Intuitively, the closest cluster to an unlabeled span is likely to represent the span's true class. Therefore, we can conveniently utilize these induced clusters as guidance to infer the unlabeled span's NER label. To this end, we introduce prototype-based pseudo-labeling, which leverages prototype learning Class-specific prototypes To start off, we first define a series of prototypes ϕ c , each corresponding to a class c ∈ C. A prototype ϕ c is a representation vector that can be deemed as the cluster centroid of class c. Naively, ϕ c can be calculated by averaging representations of class c in the entire dataset at the end of an epoch. However, this means the prototypes will remain static during the next full epoch. This is not ideal as distributions of span representations and clusters are vigorously changing, especially in the earlier epochs. Hence, we adopt a moving-average style of calculating prototypes. Specifically, we iterate through a batch of mixed source-and target-language spans {s i , y i , ζ i } m i=1 , and update prototype ϕ c as the moving-average embedding for spans with (either gold or approximated) label c: Same as Equation Pseudo label refinement Having obtained the prototypes, we then use them as references to refine the pseudo labels of target-language spans. Typically, prototype learning classifies an unlabeled sample by finding the closest prototype, and assigning the corresponding label. However, this may cause two problems: (1) Assigning a hard one-hot label forfeits the advantages of using soft labels in self-training. (2) As the closest prototype might differ between consecutive epochs, there is too much perturbation in pseudo labels that makes training unstable. Thus, we again take a moving-average approach to incrementally update pseudo labels at each training step. Given a target-language span {s, ζ} at epoch t, its soft pseudo label from previ-ous epoch ŷt-1 is updated as follows: where ŷc t represents the pseudo probability on class c and β is a hyperparameter controlling the update rate. We use the dot product to calculate similarity ϕ γ • ζ, and define the distance between span representation and prototype as (1ϕ γ • ζ). In other words, we find the prototype closest to the span's representation and take the corresponding class as an indication of the span's true label. Then, we slightly shift the current pseudo label towards it, by placing extra probability mass on this class while deducting from other classes. Cumulatively, we are able to rectify pseudo labels whose most-probable class is incorrect, while reinforcing the confidence of correct pseudo labels. Margin-based criterion NER is a highly classimbalanced task, where the majority of spans are non-entities ("O"). As a result, non-entity span representations are widespread and as later shown in Section 5.2, the "O" cluster will be significantly larger than other entity types. Therefore, a nonentity span at the edge of the "O" cluster might actually be closer to an entity cluster. Consequently, the above prototype-based pseudo-labeling will wrongly shift its pseudo label towards the entity class and eventually result in a false positive instance. 
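A NumPy sketch of the moving-average prototype update and the incremental pseudo-label refinement described above; the margin-based criterion discussed next is omitted for brevity. The momentum, β, the dot-product similarity, and the toy classes are illustrative assumptions.

```python
import numpy as np

def update_prototypes(protos, reps, labels, momentum=0.99):
    """Moving-average prototype update: each class prototype drifts towards
    the mean representation of spans labelled (gold or pseudo) as that class."""
    for c in np.unique(labels):
        mean_c = reps[labels == c].mean(axis=0)
        protos[c] = momentum * protos[c] + (1 - momentum) * mean_c
    return protos

def refine_pseudo_label(y_prev, rep, protos, beta=0.9):
    """Shift the soft pseudo label towards the class of the closest prototype
    (dot-product similarity), keeping a moving average for stability."""
    sims = {c: float(np.dot(p, rep)) for c, p in protos.items()}
    gamma = max(sims, key=sims.get)                 # class of the closest prototype
    target = {c: 1.0 if c == gamma else 0.0 for c in y_prev}
    return {c: beta * y_prev[c] + (1 - beta) * target[c] for c in y_prev}

# toy usage
d = 16
protos = {"O": np.zeros(d), "PER": np.zeros(d), "LOC": np.zeros(d)}
reps = np.random.randn(8, d)
labels = np.array(["O", "PER", "LOC", "O", "PER", "LOC", "O", "O"])
protos = update_prototypes(protos, reps, labels)
soft_label = {"O": 0.5, "PER": 0.4, "LOC": 0.1}
print(refine_pseudo_label(soft_label, reps[0], protos))
```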
To address this issue, we further add a marginbased criterion to enhance prototype learning. Intuitively, a true entity span should lie in the immediate vicinity of a certain prototype. Thus, we do not update pseudo labels towards entity classes if the span is not close enough to any of the entity prototypes ϕ γ , i.e., the similarity between the prototype and any span representation (ϕ γ • ζ i ) does not exceed a margin r. Meanwhile, as non-entity spans are widely distributed, we do not apply extra criteria and update a span as "O" as long as its closest prototype is ϕ O . Formally: We notice that different entity classes of different target languages might have varying cluster tightness, and thus it is not judicious to manually set a fixed margin r universally. Instead, we automatically set class-specific margin r c from last epoch's statistics, by calculating the averaged similarity between target-language spans predicted as class c and prototype ϕ c : (11) Note that, at the start of training, our model does not produce well-separated clusters and the prototypes are randomly initialized. Therefore, we warm up the model by not updating pseudo labels in the first epoch. We highlight that our contrastive learning and prototype-based pseudo-labeling are mutually beneficial. By virtue of the clustering effect from contrastive learning, the resulting representations and prototypes act as guidance for refining pseudo labels. In turn, the model trained with refined pseudolabels predicts unlabeled spans more accurately, and ensures the validity of pseudo-positive spans for contrastive learning. To summarize, the two components work collaboratively to achieve the overall superior performance of ContProto. In this section, we verify the effectiveness of Con-tProto by conducting experiments on two public NER datasets with six cross-lingual transfer pairs and performing comparisons with various baseline models. Following previous works We mainly benchmark against the following selftraining baselines for cross-lingual NER: TSL We also compare ContProto with several baseline methods that do not leverage unlabeled targetlanguage data, including Wiki We use XLM-R Large We present the experimental results on CoNLL dataset in Table Although MTMT attempts to reduce the distance between entities of the same class in the same lan-guage, it does not account for the relation between a source-and a target-language entity. Besides, AdvPicker implicitly aligns the source and target language during language-independent data selection but does not inherit those representations when training the final model. In comparison, our contrastive objective explicitly reduces the distance between a pair of source-and target-language entities of the same class, which aligns the source-and target-language representations to achieve better cross-lingual performance. 
For a fair comparison, we further implement span-based NER based on the official codebase of AdvPicker As shown in Table To demonstrate the contribution of each design component of ContProto, we conduct the following ablation studies: (1) w/o proto which removes prototype-based pseudo-labeling and only keeps our contrastive self-training; (2) w/o proto & cl which removes both prototype-based pseudolabeling and the contrastive objective; (3) w/o reg which removes the consistency regularization; (4) fixed margin which manually tunes a universally fixed margin r = 1.0 instead of automatic classspecific margins; (5) proto w/o cl which removes the contrastive objective, and directly uses the unprojected representation z i for constructing prototypes and updating pseudo labels. Based on experimental results in Table (2) w/o proto & cl further lowers target-language performance, which demonstrates the effectiveness of contrastive self-training in separating different classes and aligning the source-and targetlanguage representations. (3) w/o reg demonstrates that removing the consistency regularization leads to slight performance drops on all target languages. (4) Using a manually tuned universal margin, fixed margin underperforms ContProto by a considerable amount. This signifies the flexibility brought by the automatic margin when cluster tightness differs between classes. (5) proto w/o cl leads to drastic performance drops. Without the contrastive objective, clusters of different classes overlap with each other. As a result, the closest prototype might not accurately reflect a span's true label, and this leads to deteriorated pseudo label quality. Thus, the clustering effect from contrastive learning is essential for accurate prototype-based pseudo-labeling. We also conduct a t-SNE visualization (Van der Maaten and Hinton, 2008) of span representations z i . As shown in Figure Recall that we remove gold labels from the original target-language training sets, and treat them as unlabeled data for self-training. For analysis purposes, we retrieve those gold labels, to investigate the efficacy of ContProto in improving the quality of pseudo labels. Specifically, we take the gold labels as references to calculate the oracle F1 of pseudo labels at the end of each epoch. As shown in Figure Cross-lingual NER Existing methods for NER Contrastive learning Self-supervised contrastive learning has been widely used to generate representations for various tasks (2022) leverages contrastive learning for name entity recognition, but they work on monolingual few-shot settings while we focus on cross-lingual NER self-training. Prototype learning Prototype learning In this work, we propose ContProto as a novel selftraining framework for cross-lingual NER, which synergistically incorporates representation learning and pseudo label refinement. Specifically, our contrastive self-training first generates representations where different classes are separated, while explicitly enforcing the alignment between source and target languages. Leveraging the class-specific representation clusters induced by contrastive learning, our prototype-based pseudo-labeling scheme further denoises pseudo labels using prototypes to infer true labels of target language spans. As a result, the model trained with more reliable pseudo labels is more accurate on the target languages. 
In our method, the contrastive and prototype learning components are mutually beneficial, where the for-mer induces clusters which makes it easier to identify the closest prototype, and the latter helps to construct more accurate sample pairs for contrastive learning. Evaluated on multiple cross-lingual transfer pairs, our method brings in substantial improvements over various baseline methods. In this work, we propose a self-training method which requires unlabeled data in target languages. Recall that we remove gold labels from readily available target-language training data from the same public NER dataset, and use them as unlabeled data in our experiments. However, this might not perfectly simulate a real-life application scenario. Firstly, most free text in target languages might not contain any predefined named entities. This requires careful data cleaning and preprocessing to produce unlabeled data ready for use. Secondly, there might be a domain shift between labeled source-language data and unlabeled targetlanguage data, which poses a question on the effectiveness of our method. Furthermore, the NER datasets used in this work contain only a few entity types and different entity classes are relatively balanced. However, on datasets with a larger number of classes, each class will be underrepresented in a batch and a larger batch size might be required for contrastive selftraining to work satisfactorily. Also, if the entity type distribution is long-tailed, prototypes for those rare entity types might be inaccurate, and this affects the efficacy of prototype-based pseudolabeling. Lastly, as we observe slight drops of pseudo label quality at the end of training for some languages, the pseudo label update strategy can be refined for further improvement.
| 1,199 | 2,092 | 1,199 |
QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations
|
Formulating selective information needs results in queries that implicitly specify set operations, such as intersection, union, and difference. For instance, one might search for "shorebirds that are not sandpipers" or "science-fiction films shot in England". To study the ability of retrieval systems to meet such information needs, we construct QUEST, a dataset of 3357 natural language queries with implicit set operations, that map to a set of entities corresponding to Wikipedia documents. The dataset challenges models to match multiple constraints mentioned in queries with corresponding evidence in documents and correctly perform various set operations. The dataset is constructed semi-automatically using Wikipedia category names. Queries are automatically composed from individual categories, then paraphrased and further validated for naturalness and fluency by crowdworkers. Crowdworkers also assess the relevance of entities based on their documents and highlight attribution of query constraints to spans of document text. We analyze several modern retrieval systems, finding that they often struggle on such queries. Queries involving negation and conjunction are particularly challenging and systems are further challenged with combinations of these operations. 1
|
People often express their information needs with multiple preferences or constraints. Queries corresponding to such needs typically implicitly express set operations such as intersection, difference, and union. For example, a movie-goer might be looking for a science-fiction film from the 90s which does not feature aliens, and a reader might be interested in a historical fiction novel set in France. Similarly, a botanist attempting to identify a species based on their recollection might search for shrubs that are evergreen and found in Panama. Further, if the set of entities that satisfy the constraints is relatively small, a reader may like to see and explore an exhaustive list of these entities. In addition, to verify and trust a system's recommendations, users benefit from being shown evidence from trusted sources. Addressing such queries has been primarily studied in the context of question answering with structured knowledge bases (KBs), where query constraints are grounded to predefined predicates and symbolically executed. However, KBs can be incomplete and expensive to curate and maintain. Meanwhile, advances in information retrieval may enable developing systems that can address such queries without relying on structured KBs, by matching query constraints directly to supporting evidence in text documents. However, queries that combine multiple constraints with implicit set operations are not well represented in existing retrieval benchmarks such as MSMarco. To analyze retrieval system performance on such queries, we present QUEST, a dataset with natural language queries from four domains, that are mapped to relatively comprehensive sets of entities corresponding to Wikipedia pages. We use categories and their mapping to entities in Wikipedia as a building block for our dataset construction approach, but do not allow access to this semi-structured data source at inference time, to simulate text-based retrieval. Wikipedia categories represent a broad set of natural language descriptions of entity properties and often correspond to selective information need queries that could be plausibly issued by a search engine user. The relationship between property names and document text is often subtle and requires sophisticated reasoning to determine, representing the natural language inference challenge inherent in the task. Our dataset construction process is outlined in the accompanying figure. Performing well on this dataset requires systems that can match query constraints with corresponding evidence in documents and handle set operations implicitly specified by the query. We analyze several modern retrieval systems on the task of retrieving answer sets given a query, and find that current dual encoder retrievers often struggle on such queries.
The correspondence between property names and document text is also often subtle and requires sophisticated reasoning to determine relevance, representing the natural language inference challenge inherent in the task, while the knowledge of category membership allows us to construct relatively comprehensive sets of candidate entities for atomic categories and their combinations. Our dataset construction process is outlined in proposed for question answering over knowledge 136 bases
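To make the construction idea above concrete, here is a minimal Python sketch (not the released QUEST pipeline) of composing templatic set-operation queries from category-to-entity mappings and deriving their answer sets; the category names, entity names, and helper functions are illustrative placeholders, and the size constraints applied during actual sampling are described later in the paper.

```python
# Minimal sketch (not the released QUEST pipeline) of the construction idea:
# compose templatic set-operation queries from category -> entity mappings and
# derive their answer sets. Category and entity names are illustrative placeholders.

category_to_entities = {
    "1990s science fiction films": {"Film A", "Film B", "Film C"},
    "Films shot in England": {"Film B", "Film C", "Film D"},
    "Films featuring aliens": {"Film C"},
}

def compose_intersection(cat_a: str, cat_b: str) -> tuple:
    """Template 'A and B': entities must satisfy both constraints."""
    answers = category_to_entities[cat_a] & category_to_entities[cat_b]
    return f"{cat_a} that are also {cat_b}", answers

def compose_difference(cat_a: str, cat_b: str) -> tuple:
    """Template 'A minus B': entities satisfying A but not B."""
    answers = category_to_entities[cat_a] - category_to_entities[cat_b]
    return f"{cat_a} that are not {cat_b}", answers

query, answers = compose_difference("1990s science fiction films", "Films featuring aliens")
print(query, sorted(answers))  # templatic query, later paraphrased by crowdworkers
```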
|
Previous work in question answering and information retrieval has focused on QA over knowledge bases as well as open-domain QA and retrieval over a set of entities or documents. We highlight how these relate to our work below. Knowledge Base QA Several datasets have been proposed for question answering over knowledge bases Questions are optionally supplemented with logical forms. Open-Domain QA and Retrieval Many opendomain QA benchmarks, which consider QA over unstructured text corpora, have been proposed in prior work. Some of these, such as TREC Multi-Answer Retrieval Related work In concurrent work, RomQA QUEST consists of 3357 queries paired with up to 20 corresponding entities. Each entity has an associated document derived from its Wikipedia page. The dataset is divided into 1307 queries for training, 323 for validation, and 1727 for testing. The task for a system is to return the correct set of entities for a given query. Additionally, as the collection contains 325,505 entities, the task requires retrieval systems that can scale efficiently. We do not allow systems to access additional information outside of the text descriptions of entities at inference time. Category labels are omitted from all entity documents. The base atomic queries (i.e., queries without any introduced set operations) in our dataset are derived from Wikipedia category names However, repurposing these categories for constructing queries poses challenges: 1) lack of evi- dence in documents: documents may not contain sufficient evidence for judging their relevance to a category, potentially providing noisy signal for relevance attributable to the document text, 2) low recall: entities may be missing from categories to which they belong. For about half of the dataset, we crowdsource relevance labels and attribution based on document text, and investigate recall through manual error analysis ( §5). We select four domains to represent some diversity in queries: films, books, animals and plants. Focusing on four rather than all possible domains enables higher quality control. The former two model a general search scenario, while the latter two model a scientific search scenario. To construct queries with set operations, we define templates that represent plausible combinations of atomic queries. Denoting atomic queries as A, B and C, our templates and corresponding examples from different domains are listed in Table Below we describe the logic behind sampling atomic queries (i.e., A, B, C) for composing com-plex queries, with different set operations. In all cases, we ensure that answer sets contain between 2-20 entities so that crowdsourcing relevance judgements is feasible. We sample 200 queries per template and domain, for a total of 4200 initial queries. The dataset is split into train + validation (80-20 split) and testing equally. In each of these sets, we sampled an equal number of queries per template. Intersection. The intersection operation for a template A∩B is particularly interesting and potentially challenging when both A and B have large answer sets but their intersection is small. We require the minimum answer set sizes of each A and B to be fairly large (>50 entities), while their intersection to be small (2-20 entities). Difference. Similar to intersection, we require the answer sets for both A and B to be substantial (>50 entities), but also place maximum size constraints on both A (<200 entities) and B (<10000 entities) as very large categories tend to suffer from recall issues in Wikipedia. 
We also limit the intersection of A and B (see reasoning in Appendix B). Union. For the union operation, we require both A and B to be well-represented through the entities in the answer set for their union A ∪ B. Hence, we require both A and B to have at least 3 entities. Further, we require their intersection to be non-zero but less than 1/3rd of their union. This is so that A and B are somewhat related queries. For all other templates that contain compositions of the above set operations, we apply the same constraints recursively. For example, for A∩B \C, we sample atomic queries A and B for the intersection operation, then sample C based on the relationship between A ∩ B and C. Automatically generating queries based on templates results in queries that are not always fluent and coherent. Further, entities mapped to a query may not actually be relevant and don't always have attributable evidence for judging their relevance. We conduct crowdsourcing to tackle these issues. The annotation tasks aim at ensuring that 1) queries are fluent, unambiguous and contain diverse natural language logical connectives, (2) entities are verified as being relevant or non-relevant and (3) relevance judgements are attributed to document text for each relevant entity. Crowdsourcing is performed in three stages, described below. More annotation details and the annotation interfaces can be found in Appendix C. Crowdworkers were asked to paraphrase a templatically generated query so that the paraphrased query is fluent, expresses all constraints in the original query, and clearly describes what a user could be looking for. This annotation was done by one worker per query. This stage is aimed at validating the queries we obtain from the paraphrasing stage. Crowdworkers were given queries from the first stage and asked to label whether the query is 1) fluent, 2) equivalent to the original templatic query in meaning, and 3) rate its naturalness (how likely it is to be issued by a real user). This annotation was done by 3 workers per query. We excluded those queries which were rated as not fluent, unnatural or having a different meaning than the original query, based on a ma-jority vote. Based on the validation, we removed around around 11% of the queries from stage 1. Next, crowdworkers were asked to provide relevance judgements for the automatically determined answer sets of queries. Specifically, they were given a query and associated entities/documents, and asked to label their relevance on a scale of 0-3 (definitely not relevant, likely not relevant, likely relevant, definitely relevant). They were asked to ensure that relevance should mostly be inferred from the document, but they could use some background knowledge and do minimal research. We also asked them to provide attributions for document relevance. Specifically, we ask them to first label whether the document provides sufficient evidence for the relevance of the entity (complete/partial/no). Then, for different phrases in the query (determined by the annotator), we ask them to mark sentence(s) in the document that indicate its relevance. The attribution annotation is broadly inspired by Basic dataset statistics are reported in Table Beyond the annotated data, we generated additional synthetic examples for training. We found including such examples improved model performance, and we include these examples for the experiments in §4. 
To generate these examples, we sample 5000 atomic queries from all domains, ensuring that they do not already appear as sub-queries in any of the queries in QUEST and use their corresponding entities in Wikipedia as their relevant entity set. We evaluate modern retrieval systems to establish baseline performances. We also perform extensive error analysis to understand patterns of model errors and the quality of the labels in QUEST. We consider a corpus, E, that contains entities across all domains in the dataset. Each entity is accompanied with a document based on its Wikipedia page. An example in our dataset consists of a query, x, and an annotated set of relevant entities, y ⊂ E. As described in §3, for all examples |y| < 20. Our task is to develop a system that, given E and a query x, predicts a set of relevant entities, ŷ ⊂ E. Our primary evaluation metric is average F 1 , which averages per-example F 1 scores. We compute F 1 for each example by comparing the predicted set of entities, ŷ, with the annotated set, y. We evaluated several combinations of retrievers and classifiers, as shown in Figure For the best overall system, we sampled errors and manually annotated 1145 query-document pairs from the validation set. For the retriever, we sampled relevant documents not included in the top-100 candidate set and non-relevant documents ranked higher than relevant ones. For the classifier, we sampled false positive and false negative errors made in the top-100 candidate set. This annotation process included judgements of document relevance (to assess agreement with the annotations in the dataset) and whether the document (and the truncated version considered by the dual encoder or classifier) contained sufficient evidence to reasonably determine relevance. We also annotated relevance for each constraint within a query. We discuss these results in §5. We report the performance of our baseline systems on the test set in Table Dual encoders outperform BM25. As shown in Table To analyze why queries with conjunction and negation are challenging, we labeled the relevance of individual query constraints ( §4.4), where a system incorrectly judges relevance of a non-relevant document. The results are summarized in Table For false negative errors, we judged 91.1% of the entities to be relevant for the films and books domains, and 81.4% for plants and animals. Notably, we collected relevance labels for the films and books domains and removed some entities based on these labels, as described in §3, which likely explains the higher agreement for false negatives from these domains. This indicates significant headroom for improving recall as defined by QUEST, especially for the domains where we collected relevance labels. For false positive errors, we judged 28.8% of the entities to be relevant, showing a larger disagreement with the relevance labels in the dataset. This is primarily due to entities not included in the entity sets derived from the Wikipedia category taxonomy (97.7%), rather than entities removed due to relevance labeling. This is a difficult issue to fully resolve, as it is not feasible to exhaustively label relevance for all entities to correct for recall issues in the Wikipedia category taxonomy. Future work can use pooling to continually grow the set of relevant documents (Sparck Truncating document text usually provides sufficient context. 
In our experiments, we truncate document text to 512 tokens for the dual encoder, and 384 tokens for the classifier to allow for the document and query to be concatenated. Based on our error analysis ( §4.4), out of the documents with sufficient evidence to judge relevance, evidence occurred in this truncated context 93.2% of the time for the dual encoder, and 96.1% of the time for the classifier. This may explain the relative success of this simple baseline for handling long documents. We also evaluated alternative strategies but these performed worse in preliminary experiments We present QUEST, a new benchmark of queries which contain implicit set operations with corresponding sets of relevant entity documents. Our experiments indicate that such queries present a challenge for modern retrieval systems. Future work could consider approaches that have better inductive biases for handling set operations in natural language expressions (for example, Naturalness. Since our dataset relies on the Wikipedia category names and semi-automatically generated compositions, it does not represent an unbiased sample from a natural distribution of real search queries that contain implicit set operations. Further, we limit attention to non-ambiguous queries and do not address the additional challenges that arise due to ambiguity in real search scenarios. However, the queries in our dataset were judged to plausibly correspond to real user search needs and system improvements measured on QUEST should correlate with improvements on at least a fraction of natural search engine queries with set operations. Recall. We also note that because Wikipedia categories have imperfect recall of all relevant entities (that contain sufficient evidence in their documents), systems may be incorrectly penalised for predicted relevant entities assessed as false positive. We quantify this in section 5. We have also limited the trusted source for an entity to its Wikipedia document but entities with insufficient textual evidence in their documents may still be relevant. Ideally, multiple trusted sources could be taken into account and evidence could be aggregated to make relevance decisions. RomQA Answer Set Sizes. To ensure that relevance labels are correct and verifiable, we seek the help of crowdworkers. However, this meant that we needed to restrict the answer set sizes to 20 for the queries in our dataset, to make annotation feasible. On one hand, this is realistic for a search scenario because users may only be interested in a limited set of results. On the other hand, our dataset does not model a scenario where the answer set sizes are much larger. All models were fine-tuned starting from T5 1.1 checkpoints 6 . We fine-tune T5 models on 32 Cloud TPU v3 cores 7 . Fine-tuning takes less than 8 hours for all models. Dual Encoder. We used the t5x_retrieval library 8 for implementing dual encoder models. We tuned some parameters based on results on the validation set. Relevant hyperparameters for training the dual encoder are: • Learning Rate: 1e-3 • Warmup Steps: 1500 • Finetuning Steps: 15000 • Batch Size: 512 • Max Query Length: 64 • Max Candidate Length: 512 Classifier. For negative examples, we sampled 250 random non-relevant documents and sampled 250 non-relevant documents from the top-1000 documents retrieved by BM25. We also replicated each positive example 50 times. We found an approximately even number of positive and negative examples lead to better performance than training with a large class imbalance. 
We found that a combination of random negatives and negatives from BM25 performed better than using either type of negative example alone. Additionally, selecting negative examples from BM25 performed better than selecting negative examples from the T5-Large dual encoder. For the T5 input we concatenated the query and truncated document text. The T5 output is the string "relevant" or "not relevant". To classify document relevance at inference time, we applied a threshold to the probability assigned to the "relevant" label, which we tuned on the validation set. When classifying BM25 candidates we used a threshold of 0.9 and when classifying the dual encoder candidates we used a threshold of 0.95. Other relevant hyperparameters for training the classifier are:
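For reference, a hedged sketch follows of two pieces described above: thresholding the classifier's probability of the "relevant" label to form a predicted entity set, and the average per-example set-F1 used as the primary evaluation metric. The `score_relevance` function is a stand-in for the T5 classifier, not an actual API.

```python
# Hedged sketch (not the released code): (1) thresholding the classifier's
# P("relevant") to form a predicted entity set, and (2) the average per-example
# set-F1 metric. `score_relevance` is a stand-in for the T5 classifier.

from typing import Callable, Iterable

def predict_entity_set(query: str,
                       candidates: Iterable,
                       score_relevance: Callable[[str, str], float],
                       threshold: float = 0.95) -> set:
    """Keep candidates whose P('relevant' | query, document) clears the threshold."""
    return {c for c in candidates if score_relevance(query, c) >= threshold}

def example_f1(pred: set, gold: set) -> float:
    if not pred or not gold:
        return 0.0
    precision = len(pred & gold) / len(pred)
    recall = len(pred & gold) / len(gold)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def average_f1(predictions: list, references: list) -> float:
    return sum(example_f1(p, g) for p, g in zip(predictions, references)) / len(references)
```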
| 1,280 | 3,970 | 1,280 |
SUMMEDITS: Measuring LLM Ability at Factual Reasoning Through The Lens of Summarization
|
With the recent appearance of LLMs in practical settings, having methods that can effectively detect factual inconsistencies is crucial to reduce the propagation of misinformation and improve trust in model outputs. When testing on existing factual consistency benchmarks, we find that a few large language models (LLMs) perform competitively on classification benchmarks for factual inconsistency detection compared to traditional non-LLM methods. However, a closer analysis reveals issues with existing evaluation benchmarks, affecting evaluation precision. To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SUMMEDITS. This new benchmark is 20 times more costeffective per sample than previous benchmarks and highly reproducible, as we estimate interannotator agreement at about 0.9. Most LLMs struggle on SUMMEDITS, with performance close to random chance. The best-performing model, GPT-4, is still 8% below estimated human performance, highlighting the gaps in LLMs' ability to reason about facts and detect inconsistencies when they occur.
|
With recent progress in generation capabilities of LLMs, automatic summarization is making its appearance in practical information consumption situations such as summarizing work meetings Prior work Recent investigations of using LLMs for evaluation have shown promising results across different NLP tasks To address this issue, we introduce a protocol designed to create challenging benchmarks while ensuring the reproducibility of the labels. The protocol involves manually verifying the consistency of a small set of seed summaries and subsequently generating numerous edited versions of these summaries. We discover that assessing the consistency of edited summaries is relatively straightforward and easy to scale for human annotators, thus guaranteeing low cost and high agreement among annotators, yet keeping the task challenging for models. We create the SUMMEDITS benchmark by implementing the protocol in ten diverse textual domains, including the legal, dialogue, academic, financial, and sales domains. Figure We believe SUMMEDITS can serve as a tool to evaluate LLMs' abilities to detect factual inconsistencies when they (inevitably) occur and encourage LLM developers to report their performance on the benchmark. For practitioners requiring specific domain expertise, the protocol can be adapted to generate low-cost, in-domain benchmarks that can check model capabilities prior to production use. We release the code and benchmark publicly
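As an illustration of the kind of zero-shot usage evaluated later in the paper, a minimal sketch of binary consistency detection with an LLM follows; the prompt wording and parsing below are stand-ins rather than the exact prompt used in the experiments.

```python
# Illustrative stand-in (not the paper's actual prompt) for zero-shot binary
# factual inconsistency detection: given (document, summary), the model answers
# "consistent" or "inconsistent" and the answer is parsed into a binary label.

def build_prompt(document: str, summary: str) -> str:
    return (
        "Decide whether the summary is factually consistent with the document.\n\n"
        f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
        "Answer with exactly one word: consistent or inconsistent."
    )

def parse_label(model_output: str) -> bool:
    """True if the model judged the summary consistent."""
    return "inconsistent" not in model_output.strip().lower()
```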
|
Annotating Factuality of Summaries. With advances in language models and the increase in fluency and abstractiveness of summarizers, prior work showed that one of the key challenges in summarization was enforcing factual consistency Detecting Factual Errors. Some work has taken an automated approach to the detection of inconsistencies, with approaches falling into two main categories: question and entailment-based. In question-based approaches, questions are generated with the expectation that paired documents and summaries should provide consistent answers. QAFactEval We first analyze model performance on two popular benchmarks for factual consistency detection in summarization: AggreFact We include in our experiments three specialized non-LLM approaches: DAE, SummaC, and QAFactEval and ten LLM models from recent LLM families. We include Cohere's Command-XL, Anthropic's Claude V1.3 To minimize the computational cost of experiments, we select a single Zero-Shot prompt that is used for all LLM models. We make this choice instead of optimizing the prompt for each model for two reasons: (1) there's no guarantee that prompt quality will transfer across benchmarks, and using a single common prompt removes variance from prompt optimization that does not measure underlying model ability, and (2) more complex prompts would require adaptation to each domain (e.g. domain-specific few-shot examples), and restrict the evaluation of models with shorter maximum sequence lengths due to longer prompts. AggreFact-SOTA Table Of the 101 samples, 80 were labeled by the annotator as correct or partially correct explanations that identify and explain a factual inconsistency in the summary. In other words, this manual analysis of a subset of AggreFact reveals that a minimum of 6% of the samples in AggreFact are mislabeled. The low reliability of labels in crowdsourced benchmarks like AggreFact is a known issue This analysis reveals the potential for LLMs as part of dataset creation. In some cases, an LLM explanation that is verifiable - such as an explanation for an identified factual inconsistency - can accelerate and improve the quality of annotation. LLM explanations might not be valuable in all cases; for example, when a model asserts a summary is consistent, manual verification is still required to assure quality. In Section 5, we introduce a protocol for benchmark creation that can involve an LLM. Based on the low reliability of labels in AggreFact, we note that a key requirement for future benchmarks is to improve label reliability, which can be demonstrated with high annotator agreement when multiple annotators are involved. The DialSummEval (Gao and Wan, 2022) benchmark is a summarization evaluation benchmark created following the format of SummEval Echoing results on AggreFact, increasing model size leads to minor performance gains, with most LLMs underperforming specialized methods. In absolute terms, all methods struggle to achieve strong performance, with accuracies all below 70%. In Figure, models assign large proportions of samples from each bucket into consistent and inconsistent classes. We argue that annotating the consistency of summaries using a Likert scale limits the quality and interpretability of the benchmark, as it is not evident how to interpret the differences between scores, limiting reproducibility, which is reflected in the moderate Krippendorff's alpha. Instead, we favor framing factual consistency benchmarks as a detection task.
In the detection task, identifying any factual inconsistency between the document and summary leads to an overall assessment of the summary being inconsistent. If no inconsistency is detected, the summary is consistent. The detection framing also allows for models to provide natural language explanations when identifying a summary as inconsistent, which can be manually verified to confirm model reasoning ability. In the next section, we propose a novel protocol to create factual consistency benchmarks, incorporating lessons learned from existing benchmarks. We set several design principles that help create higher-quality factual consistency benchmark: P1. Binary Classification Task: In the bench- mark, a summary should either be labeled as inconsistent if any factual inconsistency is identified with the document or consistent otherwise, to improve label interpretability. P2. Focus on Factual Consistency: Summaries in the benchmark should be flawless on aspects unrelated to consistency, such as fluency, coherence, and formatting, to avoid confounding effects on the quality of the benchmark. P3. Reproducibility: Benchmark labels should not depend on annotator identity, and high annotator agreement should confirm the validity of the benchmark, as well as estimate human performance on the benchmark. P4. Benchmark Diversity: Inconsistency errors in the benchmark should represent a wide range of errors in realistic textual domains, to increase understanding of model strengths and weaknesses, and better establish gaps in performance between models and human annotators at factual reasoning, if there are any. We now describe the creation procedure for SUMMEDITS -illustrated in Figure Seed Summary Verification. Benchmark creators select a small collection of documents in a domain of choice, and a seed summary for each document, which can be human-written or model generated. An annotator answers two questions about each (document, seed summary) tuple: (a) "Are there any flaws with the summary? (fluency, format, etc.)", (b) "Is the summary factually consistent with the document?". If the annotator identifies a flaw or an inconsistency, the tuple is filtered out (P2), otherwise, it proceeds to Step 2. Editing Summaries. The second step consists in generating multiple minor edits of the summary, which might or might not affect the summary's consistency. This step can be carried out manually, or automatically with an LLM. Proposed edits should be atomic and localized, not entirely rewriting a novel summary. Table Annotation of Edited Summaries. The annotator who completed Step 1 reviews each edited summary, assigning one of three labels: (a) consistent if an edit does not lead to an inconsistency, (b) inconsistent if the edit modifies the seed summary in a way that introduces a factual inconsistency, (c) borderline for any other case such as the edit making the summary unclear, or requiring subjectivity. Crucially, a single annotator should complete both Steps 1 and 3, as once they have invested the time in reading the (document, summary seed) tuple, judging the consistency of edits is a simpler task. We recommend including a large number of edits (e.g., 30 edits) to maximize edit diversity (P4) and encouraging annotators to assign the borderline label if they are unsure about any aspect of an edit to maximize reproducibility (P3). A benchmark can be formed by retaining edited summaries that are labeled as consistent and inconsistent and filtering out borderline cases. 
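A minimal sketch of the benchmark-formation logic just described follows (retain consistent and inconsistent edits, drop borderline ones); the data schema below is illustrative rather than the released format.

```python
# Minimal sketch of the benchmark-formation step: edited summaries labeled
# "borderline" are dropped, the rest keep a binary consistency label.
# The fields below are illustrative, not the released data schema.

from dataclasses import dataclass

@dataclass
class EditedSummary:
    document_id: str
    summary: str
    label: str  # "consistent" | "inconsistent" | "borderline" (from Step 3)

def form_benchmark(edits: list) -> list:
    benchmark = []
    for e in edits:
        if e.label == "borderline":  # filtered out to keep the benchmark reproducible (P3)
            continue
        benchmark.append({
            "document_id": e.document_id,
            "summary": e.summary,
            "is_consistent": e.label == "consistent",
        })
    return benchmark
```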
The procedure requires a small number of documents and seed summaries, from which many edited summaries are derived. This flexibility facilitates the creation of factual consistency benchmarks in application domains that lack such resources. We implemented the SUMMEDITS protocol on ten realistic summarization domains to explore the reliability of the protocol. For five domains, seed summaries are automatically generated due to the lack or low quality of existing reference summaries. In such cases, we used GPT3.5-turbo and domain-specific prompts to generate seed summaries. We note that the quality of seed summaries is ultimately manually confirmed in Step 1 of the protocol. For all domains, we use GPT3.5-turbo for Step 2. We experimented with integrating multiple LLMs in the edit generation process, but preliminary results indicated that many LLMs were not successful at generating minorly edited summaries and often attempted to write entirely novel summaries, which led us to solely use GPT3.5-turbo. More on this choice in Section 7. We hired two professional annotators compensated at a rate of $20/hour to perform Steps 1 and 3. Three authors of the paper also participated in the annotation for quality control purposes. Appendix C has further detail on the annotation protocol and an overview of the annotation interface. We next introduce the ten domains included in the SUMMEDITS benchmark: News, Podcast, BillSum, SamSum, Shakespeare, SciTLDR, QMSum, ECTSum, and two synthetic sales domains. For the News domain, to avoid selecting documents that are in the training corpora of evaluated models, we follow prior work. For the sales domains, we generated 40 fictional sales call transcripts, 40 sales emails, and corresponding seed summaries using ChatGPT. These domains evaluate the protocol's validity with entirely synthetic textual data in targeted domains that lack pre-existing summarization datasets. Table For each domain, at least ten seed summaries were annotated by multiple annotators, corresponding for each domain to at least 20% of the samples in the benchmark. In total, 1,419 of the 6,348 samples in SUMMEDITS received multiple annotations, allowing us to measure agreement levels. When considering all three labels (consistent, inconsistent, borderline), Cohen's Kappa in each domain varies between 0.72-0.90, averaging 0.82. When removing samples annotated as borderline by any annotator, the average Cohen's Kappa rises to 0.92, empirically validating the importance of filtering out borderline samples to create a reproducible benchmark. The edited summaries have an average of 3.6 words inserted and 3.5 words deleted. These edit statistics do not vary widely based on the consistency label, as consistent edited summaries have an average of 3.6 words inserted and 3.7 words deleted, and inconsistent edited summaries have 3.6 words inserted and 3.4 words deleted. These statistics suggest that models could not rely on structural signals to predict the consistency of a summary, and required factual reasoning to accomplish the task. In the final benchmark, 37% of summaries are consistent, approaching our objective of a balanced benchmark to facilitate robust evaluation and minimize metric fluctuations. The total annotation cost of SUMMEDITS is around USD 3,000, representing around 150 hours of annotator work. The average cost of adding a domain to SUMMEDITS is around USD 300, within reach for NLP practitioners looking to evaluate model ability in their domain of choice.
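As a concrete companion to the agreement numbers above, here is a short sketch of how such Cohen's Kappa values can be computed with scikit-learn, before and after dropping samples that either annotator marked borderline; the label lists below are placeholders.

```python
# Companion sketch for the reported agreement: Cohen's kappa between two
# annotators over doubly-annotated samples, before and after removing items
# either annotator marked "borderline". The label lists are placeholders.

from sklearn.metrics import cohen_kappa_score

ann1 = ["consistent", "inconsistent", "borderline", "inconsistent", "consistent"]
ann2 = ["consistent", "inconsistent", "inconsistent", "inconsistent", "consistent"]

kappa_all = cohen_kappa_score(ann1, ann2)

keep = [i for i in range(len(ann1)) if "borderline" not in (ann1[i], ann2[i])]
kappa_filtered = cohen_kappa_score([ann1[i] for i in keep], [ann2[i] for i in keep])
print(round(kappa_all, 3), round(kappa_filtered, 3))
```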
Authors of the FRANK benchmark Table Overall, model performance on the benchmark is low, with only GPT4 getting within 10% of human performance. Larger or more recent LLMs perform better on the benchmark, as is illustrated by the gradual improvements observed with each model generation in the OpenAI model family. PaLM2-Bison, Dav003, ChatGPT, and GPT4 are the only four LLMs that outperform the best non-LLM approach QAFactEval, providing evidence that most LLMs are not yet capable to reason out-of-the-box about the consistency of facts. All three specialized models achieve their highest performance in the news domain, unlike LLM models. The specialized models are likely calibrated to the news domain, which they are most fre- quently tested on We experiment with an oracle setting in which we append the seed summary to the end of the input document and input the concatenation to the model. The seed summary serves as an information scaffold, enabling the model to view modifications between the seed and edited summaries. GPT4 achieves a significant boost under the oracle setting, the model performing within 2% of human performance. This confirms that high model performance on SUMMEDITS is attainable and that the challenge lies in aligning the facts of the edited summary with the document, without knowing that it has been edited. We annotated each inconsistent sample in SUMMEDITS with tags of edit types. The four edit types are: (1) Entity Modification in which an entity or phrase in the summary has been changed in a meaning-altering way, (2) Antonym Swap when a word or phrase is replaced by a word of opposite meaning, (3) hallucinated fact insertion, when a novel fact is introduced in the summary which is not supported by the document, and (4) negation insertion when any negator word (e.g., not, neither) which modifies summary meaning is inserted. Figure To annotate the entire benchmark, one author of the paper first manually annotated 200 samples of the dataset, which was used to evaluate several GPT4-based Zero-Shot and Few-Shot approaches. The best-performing prompt provides the definition of each edit type and a canonical example of each, and it achieved a performance of 0.85 F-1 and 0.92 recall, which was deemed sufficient for analysis purposes. GPT4 was used to annotate all inconsistent summaries in SUMMEDITS. Overall, 78% of inconsistent summaries contain an entity modification, 48% an antonym swap, 22% hallucinated fact insertion, and 18% a negator insertion. The distribution of edit types is highly influenced by the LLM and prompt used to produce the edits in Step 2 of the protocol. Table All models detect inconsistencies due to negator insertions the best, a sign that such errors are more discernable to models. Fact hallucinations are relatively harder to detect for non-LLM models but gradually become more evident to more performant LLMs. Finally, the entity modification and antonym error types generally see the lowest rate of detection by models across the board, perhaps due to such edits modifying an existing consistent fact in a more nuanced way. In SUMMEDITS, it is common for the LLM to introduce multiple edits in each of its candidate summaries, as can be seen in the examples in Table In this work, we explore the capabilities of LLMs to act as factual reasoners through the lens of factual evaluation in text summarization. As part of this analysis, we uncover and discuss shortcomings of existing benchmarks. 
Using those insights we develop a new protocol for creating inconsistency detection benchmarks, which we implement in a 10-domain benchmark called SUMMEDITS. The SUMMEDITS benchmark is highly reproducible and more cost-effective per sample than previous benchmarks. Our experiments show that the benchmark is challenging for most current LLMs, with the best-performing model, GPT-4, still 8% below estimated human performance. We believe that SUMMEDITS can serve as a valuable tool for evaluating LLMs' abilities to reason about facts, detect factual errors and promote more reliable NLG systems. We encourage LLM developers to report their performance on the benchmark. Why not fix existing benchmarks? In Section 3, analysis reveals limitations with existing benchmarks that in theory can be fixed to yield improved versions of known benchmarks. The analysis we performed however only helps us invalidate a subset of samples in an opportunistic way, by looking at samples where benchmark labels and GPT4 disagree. However, this methodology cannot help us efficiently correct or confirm all samples, and improving existing benchmarks would require reannotating a large portion of the benchmarks, and we do not have a guarantee that new annotations would improve on previous ones. By designing a new protocol for sample annotation that relies on clear, atomic edits, we simplify the annotation process, improving reproducibility. Step 2 of the protocol described in Section 4 relies on an LLM to generate many edits of the seed summary, which are subsequently manually annotated and included in the benchmark. The choice of LLM likely has an effect on the benchmark which could favor a subset of LLMs most similar to the one used for benchmark creation. Initial attempts to use a pool of LLMs to produce edits were unsuccessful as we found that only ChatGPT and GPT4 were currently capable of following editing instructions that do not fully rewrite summaries. Future iterations on similar benchmarks should consider including diverse pools of LLMs in benchmark creation processes to avoid model-specific bias. Beside the edit summaries, we leveraged ChatGPT to generate the seed summaries in five of the ten domains in SUMMED-ITS, due to the low-quality or non-existence of human-written summaries. All seed summaries are manually inspected by our annotators, and we did not find a gap in model performance dependent on the origin of the seed summaries. Beyond Binary Classification. SUMMEDITS focuses on a binary classification formulation of factual reasoning (i.e., determining whether a summary is consistent/inconsistent). Binary classification has multiple advantages, including the ability to benchmark both generative and non-generative models, requiring limited adaptation of previous systems, and supporting well-established evaluation metrics such as balanced accuracy. However, the edit-based protocol of SUMMEDITS could be beneficial in instantiating more advanced factual inconsistency tasks. For example, SUMMEDITS could be modified into an "error localization" task which would require models to identify edit spans that render the summary inconsistent, or an "error correction" task, which would require a generative model to undo problematic edits, removing edit spans that lead to factual errors. These more advanced task formulations would require crafting reliable metrics, which was out of the scope of the current project. Evaluating Summarizers. 
Previous annotation efforts in factual consistency of summarization were in part collected to evaluate which summarization models are least likely to generate factual inconsistencies Build Your Own Benchmark. The initial release of SUMMEDITS consists of ten diverse domains we hope span common summarization domains. The current benchmark is however limited, as it only includes documents and summaries in English, and mostly limits document length to below 2,000 words. We have however shown that the protocol can be adapted to widely different textual domainsfrom US legal bills to Shakespeare plays -and produce domain-specific benchmarks at low cost. We hope that others will adopt and adapt the protocol to new domains, languages, and NLP tasks. The models and datasets utilized in the project primarily reflect the culture of the English-speaking populace. Gender, age, race, and other socioeconomic biases may exist in the dataset, and models trained on these datasets may propagate these biases. Text generation tasks such as summarization have previously been shown to contain these biases. In Section 3 and Section 5, we recruited professional annotators to perform labeling with respect to summaries' factual consistency label or LLM reasoning explaining factual inconsistencies. We ensured to remunerate the participants fairly ($20/hour). Participants could communicate with us to voice concerns, could work at their own pace, and choose to stop working on the project at any time. Finally, we ensured to anonymize the annotations by not including personally identifiable information in any version of the dataset (annotator identity is instead marked as annotator1, annotator2, etc.). In our work, we relied on several datasets as well as pre-trained language models. We explicitly verified that all datasets and models are publicly released for research purposes and that we have proper permission to reuse and modify the datasets. Google Models. We experiment with two Google models, the Bard Anthropic Model. We collected outputs of the Claude V1.3 model (model card: claude-v1.3), the latest and largest Anthropic model at the time of publication, using the official API hosted by Anthropic Cohere Model. We collected outputs of Cohere's command-xlarge model, the latest and largest Cohere model at the time of publication, using the official API hosted by Cohere (text-babbage-001), Cur001 (text-curie-001), Dav001 (text-davinci-001), Dav002 (text-davinci-002), and Dav003 (text-davinci-003). We also include GT3.5-turbo (gpt-3.5-turbo) and . All models were accessed through OpenAI's official API We hired a professional annotator to complete the annotation of model-generated explanations for AggreFact. The annotators were compensated at $20/hour. They received onboarding documentation that introduced them to the task, and provided the following definition for each type of explanation: • No Explanation: If the model did not provide any explanation. 
(For example just saying: "The summary is inconsistent"), • Entirely Correct: if the explanation correctly identifies and explains one or more factual inconsistencies in the summary, • Partially Correct: if the explanation provided contains several elements and at least one of them correctly identifies and explains a factual inconsistency in the summary, • Unrelated: if the explanation given does not directly relate to a factual inconsistency between the summary and the document, • Incorrect: if the explanation given does not correctly identify a factual inconsistency in the summary, for example, making a logical error. An example for each type of explanation was provided during onboarding. Annotation was performed in batches, and the first two batches of annotation by the annotator were reviewed by the authors of the paper. Incorrect annotations were discussed, allowing the annotator to better understand edge cases of the task, and modify their annotation in the first batches. Each annotator could communicate with one of the authors to discuss edge cases and maintain a common understanding of the task. Annotators could not communicate with each other. We hired two professional annotators to complete the annotation of Steps 1 and 3 of the SUMMEDITS protocol (see Section 4). The annotators were compensated at $20/hour. They received onboarding documentation that introduced them to the task and used the interface shown in Figure Annotators were first assigned 10 warm-up seed summaries, each with roughly 30 edited summaries, which had been pre-annotated by the authors of the paper. The authors reviewed the completed warmup exercises, and a strong agreement level on the warm-up task with both annotators was observed. Annotators could communicate with one of the authors of the paper to discuss any edge case or domain-specific question. For example, the annotation for the QMSumm domain required additional instructions due to query-focused formulation of the task, and instructions were communicated on how to deal with the "query" element when evaluating summaries. Namely, during Step 1 of the protocol, participants were asked to additionally judge whether the summary accurately responded
| 1,134 | 1,457 | 1,134 |
Don't Mess with Mister-in-Between: Improved Negative Search for Knowledge Graph Completion
|
The best methods for knowledge graph completion use a 'dual-encoding' framework, a form of neural model with a bottleneck that facilitates fast approximate search over a vast collection of candidates. These approaches are trained using contrastive learning to differentiate between known positive examples and sampled negative instances. The mechanism for sampling negatives to date has been very simple, driven by pragmatic engineering considerations (e.g., using mismatched instances from the same batch). We propose several novel means of finding more informative negatives, based on searching for candidates with high lexical overlap, from the dual-encoder model itself, and according to knowledge graph structures. Experimental results on four benchmarks show that our best single model improves consistently over previous methods and obtains new state-of-the-art performance, including the challenging large-scale Wikidata5M dataset. Combining different strategies through model ensembling results in a further performance boost.
|
A Knowledge Graph (KG) is a structured form of human knowledge consisting of entities, facts, relationships between any pair of entities, and semantic descriptions of entities. As important structures that store millions of data records that represent a part of human knowledge, KGs have been proven to bring substantial benefits to a wide range of applications, including commonsense question answering Graph embedding and textual embedding methods are two mainstream techniques for KGC problems. The former typically map entities and relations into fixed dense vectors and maximises the probability of valid triples using specially-designed scoring functions Recently, Therefore, to fill this gap, in this work, we aim to systematically investigate the effects of various hard negative sampling strategies for dual-encoderbased KGC. Specifically, we construct negative samples using three different ways. Our approach first evaluates the utility of negatives that share high lexical similarity with the head entity or the correct tail entity in terms of entity names and text descriptions. Based on the knowledge graph structures, we alternatively search negatives from the head or tail entity's local neighbourhood, hypothesising that the neighbourhoods of a certain entity that are not directly connected to it are highly related, but not so related to be false negatives (i.e., positives). Lastly, we investigate sampling so-called 'hard negatives' from top-k predictions generated by a baseline dual-encoder KGC model, as negatives that receive high scores are believed to be important and difficult to distinguish. In addition, in order to reduce possible false negatives, we also experimented with two variant neural negative sampling strategies according to heuristics. In summary, our contributions are: 1 1. To the best of our knowledge, we are the first to systematically investigate the impacts of different types of negative sampling strategies for dual-encoder-based KGC. 2. We explore how best to combine the benefits of different negative sampling strategies to obtain further performance gains. 3. We compare our proposed negative searching methods on four benchmark datasets of different scales. Experimental results demonstrate that our best model significantly outperforms baselines, establishing a new state-of-the-art on all datasets, while ensembling leads to further performance gains.
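To make the three families of negatives sketched above more concrete, here is a toy Python illustration (not the paper's implementation) of lexical-overlap, structure-aware, and model-based negative selection over a miniature KG; entity names and the scoring function are assumptions.

```python
# Toy illustration (not the paper's code) of the three families of negatives
# described above: lexical-overlap negatives, structure-aware (2-hop) negatives,
# and neural negatives taken from a model's top-scoring incorrect tails.
# Entity names and the `score` function are assumptions.

from typing import Callable

def lexical_negatives(tail: str, entities: list, k: int = 3) -> list:
    """Entities whose names share the most tokens with the gold tail entity."""
    gold_tokens = set(tail.lower().replace("_", " ").split())
    scored = [(len(gold_tokens & set(e.lower().replace("_", " ").split())), e)
              for e in entities if e != tail]
    scored.sort(key=lambda t: -t[0])
    return [e for _, e in scored[:k]]

def structure_negatives(head: str, edges: dict, gold_tails: set) -> list:
    """2-hop neighbours of the head that are not directly connected to it."""
    one_hop = edges.get(head, set())
    two_hop = set()
    for n in one_hop:
        two_hop |= edges.get(n, set())
    return sorted(two_hop - one_hop - gold_tails - {head})

def neural_negatives(head: str, rel: str, entities: list,
                     score: Callable[[str, str, str], float],
                     gold_tails: set, k: int = 3) -> list:
    """Top-k entities by model score, with known correct tails removed."""
    ranked = sorted(entities, key=lambda e: score(head, rel, e), reverse=True)
    return [e for e in ranked if e not in gold_tails][:k]
```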
|
In this paper, we deal with the task of predicting missing entities in knowledge graph completion. Formally, given a knowledge graph G which has a set of entities E and predefined relations R, the tail entity retrieval task (h, r, ?) requires retrieving a list of entities {t 1 , t 2 , . . . , t k } from the entity set E, ranked by their relevance to this head-relation pair (h, r). Following the current state-of-the-art approach to KGC Figure The dual encoders are normally trained to maximise the similarity scores of all positive triples (h, r, t). Here, we use the InfoNCE loss, following where the denominator sums over all N = |E| entities in the KG. Since N is usually very large, the common practice is to use specific negative sampling strategies to select a subset of negative samples to replace the full normalisation term in Equation 2. In practice, the selection of negative samples is crucial to the performance of a trained model This strategy treats as negatives the tail entities of other samples in the same mini-batch, which is a cheap way to obtain a large number of negatives 'Hard' negatives are informative negatives, which are difficult to distinguish as they share similar characteristics with true positives (e.g., high lexical overlap or semantic similarity). They normally receive larger similarity scores from the model, which in turn results in larger loss gradients and thus larger parameter updates in training. Selecting negatives from them can effectively mitigate the diminishing gradient norms when using uninformative negative instances (i.e., in-batch negatives), thus providing more optimal training signals and leading to faster convergence speed Sparse Negatives Hard negatives were first shown to be useful for improving the performance of dense passage retrievers in question answering Step 1: In-batch negatives are used to train a dual-encoder KGC model. Step 2: FAISS Step 3: Employ approximate search to find the top-k retrieved entities using head-relation pair (h, r) as the query, with encoding E hr . Then any positive tail entity to this query are removed based on the training dataset and all other entities will be kept as hard negatives. Although sampled negatives are both difficult and informative, the Neural Negative method may end up including many false negatives. This is because for a good KGC model, if the same headrelation pair (h, r) appears in both training and test graphs, the top-k predictions will likely include correct answers in the test set. Pre-batch negatives extend in-batch negatives by using stale entity embeddings from previous n batches, which can be considered a cheaper way Head Entity land_reform_NN_1: a redistribution of agricultural land (especially by government action) Relation hypernym Tail Entity reform_NN_1: a change for the better as a result of correcting abuses; justice was for sale before the reform of the law courts Sparse 1. landing_NN_3: the act of coming down to the earth (or other surface); "the plane made a smooth landing"; "his landing on his feet was catlike" 2. amphibious_landing_NN_1: a military action of coordinated land, sea, and air forces organized for an invasion; "MacArthur staged a massive amphibious landing behind enemy lines" 3. enderby_land_NN_1: a region of Antarctica between Queen Maud Land and Wilkes Land; claimed by Australia Structure 1. event_planner_NN_1: someone who plans social events as a profession (usually for government or corporate officials) 2. 
price-fixing_NN_1: control (by agreement among producers or by government) of the price of a commodity in interstate commerce 3. lawlessness_NN_1: a state of lawlessness and disorder (usually resulting from a failure of government) Neural 1. reform_NN_3: self-improvement in behavior or morals by abandoning some vice; "the family rejoiced in the drunkard's reform" 2. improvement_NN_1: a change for the better; progress in development 3. reform_NN_2: a campaign aimed to correct abuses or malpractices to expand negatives compared to memory bank approaches During training, we use one of the hard negative sampling strategies proposed in §3.2 together with a combination of in-batch negatives, pre-batch negatives, and self-negatives. Since extra hard negatives are used, the total number of instances used for loss calculation in Eq. 2 will be increased. For fair comparison, we reduce the batch size to ensure the total number of negatives used in training remains identical to our baseline SimKGC method 5 Experiments Our method is evaluated on WN18RR, FB15k-237, DBPedia500k, and Wikidata5M. We report MRR = 1/N Σ_i 1/Rank_i, where Rank_i is the rank of the correct tail entity in the predicted outputs and N is the total number of triples in the test set. Similarly, Hits@k is the proportion of correct tails that appear in the top-k ranked candidates. We follow We replicated the state-of-the-art SimKGC model 7 For reversed triples, a special r⁻¹ relation type is used. For instance, we convert a triple (?, educated at, Cambridge University) to (Cambridge University, reverse educated at, ?). We did not conflate the reversed relations with existing relations, although this may be beneficial (e.g., hyponym ≡ reverse hypernym.) 8 Triples that appear in training, validation and test sets. 9 Most hyperparameters are adopted from Tables By looking into the models trained using different hard negative sampling strategies, the behaviours are quite different in each dataset. More specifically, simply taking the n-hop neighbours as structure-aware negatives results in significant improvement on WN18RR but marginal increases or even negative impacts on the other three datasets. Similar effects are also observed for sparse negatives. Our model achieves the best results on FB15k-237, DBPedia500k, and Wikidata5M when using replaced head-relation negatives, which shows our heuristics are useful and can indeed lead to better performance. Gains over the baseline when using entity similar negatives are also significant, but this strategy lags behind the other two types of neural negatives most of the time. Overall, we can conclude that there is no single negative sampling strategy that outperforms the others on all datasets. We believe this is due in part to the distinct characteristics of each dataset, which we explore in §6.1. Furthermore, by ensembling models trained using different types of negatives at inference time, we can observe performance gains up to 1.3% MRR and 1.8% H@1 over the best single model. For model ensembling methods, we find that embedding fusion is more beneficial when improving the precision (MRR and H@1), while rank fusion is helpful to boost the recall (H@3 and H@10), especially on DBPedia500k. Besides, neither method helps much on FB15k-237, with only marginal gains. However, both ensembling methods come at the cost of increased inference latency with a factor of N, the number of ensembled models, which hinders their utility for real-time deployment.
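Returning to the neural hard-negative search procedure (Steps 1-3) described in the method above, a minimal FAISS-based sketch follows, assuming entity and query embeddings have already been produced by an in-batch-trained dual encoder; the array names and the filtering helper are illustrative rather than the released code.

```python
# Hedged sketch of FAISS-based neural hard-negative mining (Steps 1-3 above).
# Assumes `entity_vecs` (num_entities x d) and `query_vecs` (num_queries x d)
# are L2-normalised float32 embeddings from a dual encoder trained with
# in-batch negatives, and `gold` maps each query index to its correct tails.

import numpy as np
import faiss

def mine_hard_negatives(entity_vecs: np.ndarray,
                        query_vecs: np.ndarray,
                        gold: dict,
                        k: int = 50) -> dict:
    d = entity_vecs.shape[1]
    index = faiss.IndexFlatIP(d)          # inner product == cosine on unit vectors
    index.add(entity_vecs)                # Step 2: index all entity embeddings
    _, ids = index.search(query_vecs, k)  # Step 3: top-k entities per (h, r) query
    negatives = {}
    for q, row in enumerate(ids):
        # drop known positive tails; everything else is kept as a hard negative
        negatives[q] = [int(e) for e in row if int(e) not in gold.get(q, set())]
    return negatives
```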
Next, we analyse why the benefits of different hard negative sampling strategies vary from dataset to dataset. We employ two measurements, namely Difficulty and False Negative Rate, to draw the con-nections to the performance on specific datasets. Difficulty is measured by the average model score between hard negatives and their corresponding head-relation pairs. More specifically, for a given KG triplet (h i , r i , t i ) and its associated hard negative pool N i = {t j |1 ≤ j ≤ k}, the difficulty is computed as follows: where s(h i , r i , t j ) is the score predicted by the SimKGC replicated model. i D i . On the other hand, False Negative Rate is the proportion of hard negatives which are correct answers that appear in development and test graphs: where T is the development and test sets. I( * ) = 1 if the corresponding triple appears in the development or test set; otherwise it is 0. A model that is trained using hard negatives with high difficulty and low false negative rate is expected to achieve better performance. Figure One natural following question is how much improvement we may achieve if we remove all false negatives when using the most difficult headrelation hard negatives to train a model. As shown in Figure To further understand the behaviour of our model, we follow Textual embedding methods are known to generalise better to unseen entities than graph embedding ones Knowledge Graph Completion KGC has been extensively studied for many years as a popular research topic. Conventional KGC methods adopt graph embedding methods to map entity and relation into low-dimensional dense vectors and design various scoring functions to measure the plausibility of KG triples, including TransE Dual Encoder for Contrastive Learning A Dual Encoder, or Bi-Encoder, which adopts two encoders without weight sharing for feature encoding, has been widely used in many tasks, including image learning Inspired by previous work, we decouple the encoding of (h, r) and t by dual encoder and use the contrastive learning framework to learn effective knowledge embeddings. Hard Negatives for Contrastive Learning Hard negatives have been identified to be extremely helpful in learning better representations, including image learning Zhang and Stratos (2021) also found doing negative sampling from the model being optimised leads to better performance, as they argued that contrastive learning is a biased estimator and sampling negatives from the model itself can reduce such bias. We follow this direction and propose various hard negative search methods in this paper and show that they can substantially improve KGC. Negative Sampling Strategies for KGC Most KGC works employ a simple negative sampling strategy by corrupting the head entity h or tail entity t of a correct KG triplet (h, r, t) with uni-formly sampled random entities from the whole knowledge graph Moreover, adapting the dual-encoder-based KGC model and our proposed negative sampling methods to multilingual KGs, e.g., for a KG with concepts in different languages, or multi-modal settings, for a KG with concepts in the form of images, videos, or audios would further test the generalisation ability, which can be a promising research direction for future work. The learning rates are set to 5 × 10 -5 on WN18RR, 3 × 10 -5 on Wikidata5M and 1 × 10 -5 on the remaining datasets. All models are trained using Adam optimizer (Kingma and Ba, 2015) with a warmup learning rate scheduler. 
The model is trained for 50, 10, 5, and 1 epochs on WN18RR, FB15k-237, DBPedia500k, and Wikidata5M, respectively. For each hard negative sampling strategy, we generate 30 negatives for each training example. During each training step, we uniformly sample a subset of hard negatives from the pool for each training example, and the best number is chosen from For both rank fusion and embedding fusion methods, their weights are shared and are tuned based on the performance on development sets. A summary of training details and hyperparameters is shown in Table For each training example in a mini-batch, we uniformly sample N negatives from its associated hard negative pool. We also treat the hard negatives and self-negatives of other examples in the same minibatch as in-batch negatives. Suppose the batch size is B and the number of pre-batches is M; then the total number of instances used for loss calculation in Eq. 2 will be (N + M + 2) × B for each training example. By contrast, the number of negatives used by SimKGC is (M + 2) × B. If we keep the same batch size, the number of negatives used in our experiment will increase by N × B, which would potentially weaken our claims as more negatives are used for contrastive learning. To ensure fair comparison, we reduced the batch size so that the number of negatives used in our experiment is the same as that used by SimKGC. For example, if we take B = 768, N = 1, M = 1 on WN18RR, the number of negatives equals 3072; while for SimKGC with B = 1024, the number is also 3072. Thus, our methods will not be affected by including more negatives. Similar settings apply to other datasets. Another possible way to benefit from all kinds of hard negatives is to train a model on their combinations. We experiment with training a model by uniformly sampling negatives from the union of all types of hard negatives generated from §3.2. As shown in Table We analyse the performance difference between using head entity and tail entity as the query in the sparse, structure-aware and entity similar negative sampling strategies. Figure
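For completeness, a small sketch of the ranking metrics used throughout the evaluation (MRR and Hits@k), following the definitions given in the experiments section; the `ranks` list would hold the rank of the correct tail entity for each test triple.

```python
# Minimal sketch of the evaluation metrics: MRR averages the reciprocal rank of
# the correct tail entity over test triples, and Hits@k is the fraction of
# triples whose correct tail appears in the top-k predictions.

def mrr(ranks: list) -> float:
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks: list, k: int = 10) -> float:
    return sum(1 for r in ranks if r <= k) / len(ranks)

print(mrr([1, 2, 5]), hits_at_k([1, 2, 5], k=3))  # 0.5666..., 0.6666...
```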
| 1,026 | 2,410 | 1,026 |
Transferable and Efficient: Unifying Dynamic Multi-Domain Product Categorization
|
As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plugin modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of "meta concepts", which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5% increase on seasonal purchase revenue. Related datasets are released 1 .
Product categorization In real-world businesses, e-commerce platforms usually maintain multiple business lines with relatively independent taxonomies. These business lines are catering for different customer demands or specific domain applications, for example, one provides express delivery while another specializes in low-price bargains. Multiple business domains correspond to different category taxonomy structures, with various depths and distinct literal expressions of category names. Conventional industry approaches train separate classifiers on each domain, which under-utilize the cross-domain data and their shared knowledge while raising the expenses of maintenance. Meanwhile, with the expansion and reorganization of businesses, each category taxonomy keeps evolving as well, where old categories might be deleted or integrated and new categories are possibly added. Conventional multi-class classifiers need to be re-trained every time taxonomy changes, which disrupts the operation and further diminishes the maintenance efficiency. To mitigate taxonomy evolving issues, intuitively, we reformulate the canonical text classification problem as a text relevance matching problem. Moreover, to ensure both accuracy and online efficiency, we propose a two-stage Taxonomy-agnostic Label Retrieval (TaLR) framework (see Figure To leverage cross-domain data in multi-domain taxonomies challenge, we devise two plug-in modules in both stages to enhance TaLR's domain transferability. These modules are centralized with "meta concepts" that appear in the product titles, which represent fine-grained keyword knowledge shared across domains (Appendix B). As is shown in Figure In summary, our contributions are: (1) For the first time, we address the DMPC problem and release the corresponding multi-domain datasets in Chinese. (2) We propose a unified TaLR framework equipped with two well-designed plug-in modules empowered with meta concepts, which is robust and efficient against the two challenges in DMPC problem. (3) Offline experiments on our annotated real-world DMPC datasets show TaLR's ability to effectively transfer knowledge across domains and generalize to new domains. The unified TaLR outperforms three separately-trained SOTA classifiers by 1.65% on overall accuracy and maintains satisfactory accuracy in taxonomy evolving conditions. Online experiments reaffirm its efficacy with a 5% increase in seasonal purchase revenue.
∀i ∈ [1, n] domains, given a taxonomy G i with depth of d i and m leaf nodes, the path from root to leaf node forms the text which is regarded as hierarchical category label y (j) i (j ∈ [1, m]). For an input product title X i along with its meta concept labels {λ k }, our task is to output the correct category label it belongs to. Note that only one leaf category will be the correct answer. Detailed task formulation refers to Appendix A. Our TaLR framework is structured into two stages: Retrieval and Reranking, as illustrated in Figure We first train a dual-encoder to represent both categories and product titles in the vector space. Negative sampling In the original text classification problem, each product title X i has exactly one positive category label y i . However in our reformulation, text relevance matching models need negative category labels during training, otherwise they would not succesfully converge. For each (X i , y i ) pair, we prepare to construct the training examples S from multiple taxonomies by sampling (N -1) negative categories. Instead of randomly chosen, "hard" negative examples are more informative for better convergence. Inspired by teacher-student paradigm For each training dataset S i of taxonomy G i , we split it in k-fold manner, then take turns to train k BERT classifiers on every k-1 k data , with the remain 1 k data as the development set. The m-class classifiers are optimized with the typical m-class cross-entropy loss. The k classifiers would inference (N -1) most possible but not correct category labels concurrently in their corresponding development sets, and their results with ground truth positive labels constitutes the point-wise training set for the following dual-encoder training. We adopt a siamese network architecture (1) where α is the hyper-parameter, and +, -denotes the positive and negative samples in S respectively. We also compare this loss function with other alternatives in Appendix D.1. Figure product title embedding, with one-vs-all similarity measurement like cosine-similarity implemented by Approximate Nearest Neighbor (ANN) techniques targeting time efficiency. Based on this, we can readily collect top-k candidate list C vec . Dense scorer usually prioritizes semantic relatedness of literal expressions, neglecting the commonsense co-occurrence probability that lies within cross-domain training data. For example, "Sunrise Roses 500g" is often recognized as [Flower] by semantic matching algorithms, however, it is actually a variety of Mapping algorithm The shared meta concept set M is constructed by hybrid NER-related techniques. Details are in Appendix B. We can regard "meta concept" as a kind of keyword knowledge because they usually contain very concrete and accurate information. In our released datasets, one product title X is tagged with one or more meta concepts Λ = {λ 1 , λ 2 , ...λ k } from M. For example, "Haagen-Dazs Red Wine Flavor Ice Cream" is tagged with ⟨RedWine⟩, ⟨Icecream⟩, ⟨HaagenDazs⟩ as meta concepts. Given product title X and a category label ŷ, our heuristic strategy establishes X → ŷ mapping as conditional co-occurrence probability P (ŷ|X). First, we model this conditional probability for each category ŷ as: (2) Here we aggregate P (ŷ|λ 1 , λ 2 , ...λ k ) with the max-imum value among multiple λ i referring to the same ŷ. Each P (ŷ|λ i ) is collected from training data distributions: where ν denotes the frequency in training data. 
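As one possible reading of the heuristic mapping scorer above, the sketch below estimates P(ŷ|λ) from training-data co-occurrence counts and aggregates over a title's meta concepts with a maximum, as in Eq. 2. The frequency-ratio estimate ν(λ, ŷ)/ν(λ) is our reading of the truncated Eq. 3, and all names and the toy data are illustrative.

```python
# Sketch of the heuristic mapping scorer described above. The frequency-ratio
# estimate P(y | lambda) = count(lambda, y) / count(lambda) is an assumed
# reading of the (truncated) frequency equation; names are illustrative.
from collections import Counter
from typing import Dict, List, Tuple

def fit_concept_category_stats(train: List[Tuple[List[str], str]]):
    """train: list of (meta_concepts_of_title, gold_category)."""
    concept_count = Counter()
    joint_count = Counter()
    for concepts, category in train:
        for lam in concepts:
            concept_count[lam] += 1
            joint_count[(lam, category)] += 1
    return concept_count, joint_count

def mapping_scores(concepts: List[str], categories: List[str],
                   concept_count: Counter, joint_count: Counter) -> Dict[str, float]:
    """P(y | X) aggregated with a max over the title's meta concepts (Eq. 2)."""
    scores = {}
    for y in categories:
        per_concept = [joint_count[(lam, y)] / concept_count[lam]
                       for lam in concepts if concept_count[lam] > 0]
        scores[y] = max(per_concept, default=0.0)
    return scores

if __name__ == "__main__":
    # toy training data: (meta concepts of a title, gold category)
    train = [(["RedWine"], "Alcohol"), (["RedWine"], "Snack"),
             (["Icecream"], "Frozen Dessert"), (["Icecream"], "Frozen Dessert")]
    cc, jc = fit_concept_category_stats(train)
    print(mapping_scores(["Icecream", "RedWine"],
                         ["Frozen Dessert", "Alcohol", "Snack"], cc, jc))
```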
Then, we collect candidate list C rule by empirically setting a threshold of P (ŷ|X) > 0.5 to ensure both retrieval quantity and quality. Candidates merging When retrieved candidates from the dense scorer and mapping scorer are prepared, we need to combine the two lists of candidates. Our concept-first strategy prioritizes candidates from C rule . It puts at most 10 top candidates (usually less than 10) from C rule into C union , then keeps filling it with top candidates from C vec until its size reaches 10. To further measure the relatedness of product titles and category names with mutual interactions, we train a matching scorer in Reranking stage. During training, given a product title X and its retrieved candidates C union = {c 1 , c 2 , ...c l }, we concatenate tokenized sequences of X and each of these c i ∈ C union with a [SEP] token as the input to BERT-based model. The ground truth label is 1 if c i is the correct candidate otherwise 0. Optimization is followed with binary cross-entropy loss. During inference, the model gives similarity scores for each (X, c i ) pair, and the candidate with the highest similarity score would be our predicted category. For multi-domain taxonomies, category classes vary from one taxonomy to another. Despite the assorted expressions of category classes among different domain taxonomies, we find their finegrained concepts of products seldom shift. While previous retrieval stage pursues the recall of candidates and focuses less on class discrimination, the cross-encoder in Reranking stage possibly suffers from indistinguishable categories. Inspired by the supervised derivative of contrastive learning where y ′ , Λ ′ denotes samples with either different label y ′ with y or non-overlapping meta concept set Λ ′ with Λ. The BERT model after contrastive pretraining can be used in matching scorer during Reranking stage in Section 2.3. 3 Dynamic Multi-Domain Datasets We select 3 business lines from our e-commerce platform: QuickDelivery (QD, targeting fast delivery), BargainHunters (BH, targeting low price), FreshGrocery (FG, targeting fresh vegetables). These data instances are collected from the realworld business, where the product titles are mostly assigned by sellers from the platform and the category labels stem from three pre-defined business taxonomies. We recruit experienced annotators to manually classify the products X i into assorted categories y i , with 1% sampling to guarantee annotation accuracy. Data groups with over 95% accuracy in quality checking are used in our final datasets. Meanwhile, X i is tagged with concepts {λ k } following the Appendix B. Statistics of three datasets are listed in Table To verify the generalizability of TaLR on zeroshot scenarios, we further construct two taxonomy evolving derivatives of the QD test set. (ii) QD-integrate: During a production business adjustment, 127 classes in the original taxonomy are integrated or replaced by similar categories, which affects 1371 samples in the original test set to form this subset. (i) QD-divide: 22 category nodes from the original QD taxonomy are divided into two or more nodes. 495 samples in the original test set suffer from this evolution. Beyond the category labels, each product title is associated with a list of meta concepts from a set M including over 30k entities covering the most fine-grained concepts in product titles. The tagging step X → {λ 1 , λ 2 , ...λ k } is accomplished by an industrial Label Tagging System that exploits heterogeneous approaches. 
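Returning to the candidates-merging step of the Retrieval stage, the sketch below illustrates the concept-first strategy described above: rule-based candidates clearing the 0.5 threshold come first, and dense-retrieval candidates fill the list up to ten entries. Function names and the toy categories are illustrative assumptions.

```python
# Sketch of the concept-first candidate merging described above: rule-based
# candidates (P(y|X) > 0.5) are placed first, then dense-retrieval candidates
# fill the list up to 10 entries. Names are illustrative.
from typing import Dict, List

def merge_candidates(rule_scores: Dict[str, float],
                     vec_ranked: List[str],
                     threshold: float = 0.5,
                     max_candidates: int = 10) -> List[str]:
    # C_rule: categories whose co-occurrence score clears the threshold
    c_rule = [y for y, p in sorted(rule_scores.items(), key=lambda kv: -kv[1])
              if p > threshold][:max_candidates]
    c_union = list(c_rule)
    # fill the remaining slots with top dense-retrieval candidates (C_vec)
    for y in vec_ranked:
        if len(c_union) >= max_candidates:
            break
        if y not in c_union:
            c_union.append(y)
    return c_union

if __name__ == "__main__":
    rule = {"Sports Drink": 0.8, "Soft Drink": 0.3}
    vec = ["Soft Drink", "Juice", "Sports Drink", "Water"]
    print(merge_candidates(rule, vec))
    # ['Sports Drink', 'Soft Drink', 'Juice', 'Water']
```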
Details are in Appendix B. In this section, we discuss experimental results under static multi-domain settings and dynamic (taxonomy evolving & new taxonomy) conditions. A brief comparison of time efficiency between TaLR and simple Reranking is also included. We implement several baseline methods based on single-domain, multi-domain, and dynamic scenarios. To ensure fair comparisons, we also experiment concatenating product titles with meta concept text as input for some competitive baselines. Note that all the strong baselines are practicable in our online production environment, and those with unbearable space or time complexity are not considered. Works holding different assumptions (e.g. necessitate multi-label or not support Chinese) with us are not considered either. Finally, we deploy and benchmark the following common baselines: Flat Classifier TF-IDF&LR represents product titles with TF-IDF weighted dense vectors, and executes classification with Logistic Regression. FastText Hierarchical Classifier HMCN We mix up training data from three datasets to train the unified TaLR. We use accuracy score as the evaluation metric to meet real-world business demands. Accuracy mathematically equals to Micro-F1 score in a single-label multi-class classification problem. More details can be found in Appendix C. The overall accuracy score is shown in Table For our proposed framework TaLR, variant (a) already outperforms other baselines in separate model training paradigm, while TaLR (b) further achieves even higher accuracy when jointly trained on the mixed multi-domain data where the multitask BERT fails, verifying TaLR's efficacy on multi-domain taxonomies. We assume that the measurement of semantic relatedness is transferable on either business domain, and their shared knowledge could be integrated via contrastive pretraining as well. Therefore, the unified training helps improving the performance on each respective domain instead of conflicting each other as BERT multi-task does. From the ablation tests, we can observe the effectiveness of the two plug-in modules in our TaLR framework from row (c) and (d), and the contribution of these two modules are orthogonal. Removing the mapping scorer in (d) drops the overall accuracy most, while removing contrastive pretraining in (c) results in its inferior performance than (a) as well. This indicates both modules are indispensable for the enhancement of exploiting multi-domain data. From (e)→(f), concatenating meta concepts somehow improves the overall performance, but (f) still loses to (b). This reaffirms our above assumption that our usage of meta concepts is superior to simple concatenation. To further analyze the effects of the two plug-in modules, we conduct Case Study in Appendix D.2. To meet online deployment requirement, the inference time consumption (seconds cost for each instance) needs to be considered. We compare TaLR with the vanilla model (single BERT cross-encoder) on the three datasets in Figure In order to evaluate the ability of our framework on taxonomy evolving challenge, we use TaLR trained on the original multi-domain datasets to directly infer on two dynamic test sets. The vanilla BERT without any finetuning is a naive baseline BERT-matching. The BERT fine-tuned with fewshot new data (1%) is a strong baseline BERTfew-shot. Here "before" denotes the subset from the original test set and "after" denotes the subset with the same product titles but evolved categories. 
From the listed accuracy "before" and "after" taxonomy evolving in Table We conduct online experiments on one downstream task where TaLR's domain-independent category recognition ability helps transfer user preferences from other domains and contributes to a more accurate recommendation. When TaLR is incorporated in the recommendation system, customer seasonal purchase revenue increases significantly over 5%. To tackle DMPC problem, we propose a unified TaLR framework with two plug-in modules empowered with cross-domain meta concepts. With comprehensive experiments on real-world DMPC datasets, results under both multi-domain and taxonomy evolving conditions exhibit the transferability and maintenance efficiency of TaLR. We clarify the DMPC problem as follows. Given a set G of n relatively independent label taxonomies at initial time t 0 {G 1 , G 2 , G 3 , ..., G n }, each of which correlates with a domain-specific product categorization task. The taxonomy of product categories G i is tree-structured with depth d i , and it contains m i category leaf nodes: i , y (3) i , ..., y Part of the nodes is enrolling in a dynamic trending. As time goes t >0 , the category node y with corresponding product titles is also possible. In addition, an emerging taxonomy G n+1 may sprout when a new business is cultivated. A single product categorization task on taxonomy G i (i = 1) is a traditional classification task, in which the training data and test data are organized in tuples Each X i in S represents the title of one product and y i is the corresponding class node in the categorical taxonomy tree. In DMPC problem, when i ≥ 2, to unify the training data and the inference procedure cross G i , we reformulate classification as the matching between X i and y i . While traditional classifiers regard y i as meaningless label ordinals, we instead treat them along the path of top-bottom taxonomy nodes equivalently with the product title as free text. In this reformulated text semantic similarity matching task, the data samples are: where Y 1 ∈ {0, 1} is an indicator denoting whether the text pair X i and y i is matched (Y 1 = 1) or not (Y 1 = 0). Meta concepts are fine-grained tags that have been widely used in industrial knowledge graphs (e.g. Amazon Concept set construction is conducted in a semisupervised manner. First, we use a domain-specific named entity recognition (NER) model to mine fine-grained entities from product titles. These entities are complemented with queries from search engine and cumulated knowledge from experts to form the initial pool of concepts. Based on that, we use a naive classifier to pick-up high-quality concepts with high search frequency or broad product coverage. Then, manual annotation is performed on the remaining 20k entities, achieving 95% accuracy in quality checking. Finally, we collect over 30k concepts covering the most fine-grained knowledge in product titles. Concept tagging is comprised of two stages. The first stage is concept recall. In order to find candidate concepts for each product, we adopt three approaches: NER, knowledge reduction and semantic recall. First, seed candidates are found by NER on product titles. Second, we extend seed candidates with their neighbors in commonsense knowledge graphs, such as synonyms and brandconcept relations (some brands sell specific products). Third, for those products without seed candidates, we use Sentence-BERT to retrieve concepts by textual semantics. 
The low-quality concepts recalled will be filtered in the next stage, i.e. concept classification. The second stage is concept classification. Based on the candidates collected in the previos stage, we train a binary classifier to filter out concepts which attain low relevance score with product titles. The classifier is fine-tuned with knowledge integration which will be introduced in our successive work. For fair comparisons, all the "BERT" abbreviations mentioned in this work are Google BERT-base pre-trained on Chinese corpus. For TF-IDF and Fast-Text baselines, We use jieba where Y 1 is the binary class. For the sake of the alignment between embedding u x and v y , we also refer to the classification objective function in SBERT where W o ∈ R 3l×2 is the weighting parameter to project the concatenation of u x , v y and the elementwise difference |u xv y | to binary classes. l is the dimension of embeddings. The second element in vector o can be regarded as the probability whether u x and v y are matched or not, hence we can adopt the same binary cross entropy loss function in Eq. ( For product "New Farmer ® walnut flavored sunflower seed 160g" which should be categorized into [Sunflower Seed], TaLR without contrastive learning wrongly assign it to [Walnuts]; When concept "sunflower seed" is incorporated in contrastive pretraining, TaLR is capable of distinguishing the right answer. For product "CELSIUS ® cola flavored 300ml" which should belong to [Sports Drink], TaLR without mapping scorer wrongly label it as Text classification with a large hierarchy of classes attracts attention and has been studied with the evolving of LSHTC Product categorization is a hierarchical text classification task assigning categories to product instances. Approaches in early times are centralized with text features and basic machine learning algorithms. Neural network based methods prevail since 2013. Recent studies follow the pretrain-finetune paradigm since the great success of BERT Apart from end-to-end classification approaches, Class incremental learning resolves the problem that the classes increase progressively in a stream, and the classifier should continuously learn the incoming classes while sustaining accuracy on the seen classes as well. iCaRL
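Returning to the SBERT-style matching objective described above, the snippet below is a minimal PyTorch rendering of the classification head: the concatenation of u_x, v_y, and |u_x − v_y| is projected by W_o ∈ R^{3l×2} to two classes (the binary cross-entropy in the text is written here as a two-class cross-entropy). Encoder details are omitted and all module names are assumptions.

```python
# Minimal PyTorch sketch of the SBERT-style matching head described above:
# the concatenation (u_x, v_y, |u_x - v_y|) is projected by W_o in R^{3l x 2}
# to a binary matched / not-matched decision. Module names are illustrative.
import torch
import torch.nn as nn

class PairClassificationHead(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(3 * dim, 2)   # W_o

    def forward(self, u_x: torch.Tensor, v_y: torch.Tensor) -> torch.Tensor:
        features = torch.cat([u_x, v_y, torch.abs(u_x - v_y)], dim=-1)
        return self.proj(features)          # logits over {not matched, matched}

if __name__ == "__main__":
    head = PairClassificationHead(dim=768)
    u = torch.randn(4, 768)   # product-title embeddings
    v = torch.randn(4, 768)   # category-name embeddings
    labels = torch.tensor([1, 0, 1, 0])
    loss = nn.CrossEntropyLoss()(head(u, v), labels)
    loss.backward()
    print(loss.item())
```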
PETALS: Collaborative Inference and Fine-tuning of Large Models
Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits. In this work, we propose PETALS -a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs with ≈ 1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, PETALS also natively exposes hidden states of served models, allowing to train and share custom model extensions based on efficient fine-tuning methods. The system, its source code, and documentation are available at
In recent years, the NLP community has found that pretrained language models can solve many practical tasks, through either fine-tuning While the public availability of 100B+ parameter models makes them easier to access, they remain difficult to use for the majority of researchers and practitioners due to memory and computational costs. For instance, OPT-175B and BLOOM-176B need over 350 GB accelerator memory for inference and significantly more for fine-tuning. As a result, these LLMs usually require multiple highend GPUs or multi-node clusters to be run. Both of these options are extremely expensive, which limits research and potential applications of LLMs. Several recent works aim to democratize LLMs by "offloading" model parameters to slower but cheaper memory (RAM or SSD), then running them on the accelerator layer by layer Another way to make LLMs more accessible is through public inference APIs, where one party hosts the model and lets others query it over the Internet (OpenAI; AI21; Forefront). Since most of the engineering work is done by the API owner, this is a relatively user-friendly option. However, APIs are often not flexible enough for research use: there is no way to change the model control flow or access internal states. On top of that, current API pricing can make some research projects prohibitively expensive
Practical usage of large language models can be broadly divided into two main scenarios: inference and parameter-efficient adaptation to downstream tasks. In this section, we outline the design of PETALS, showing how it handles both scenarios and also allows easily sharing trained adapters between the users of the system. When generating tokens, a client stores the model's token embeddings (which typically comprise a small fraction of the total parameter count and can fit in RAM in most modern laptops, servers, and workstations) locally and relies on servers to run Transformer blocks. Each server holds several consecutive blocks, the number of which depends on the server's available GPU memory. Before each inference session, the client finds a chain of servers that collectively hold all model layers. Once the chain is formed, the client uses the local embedding layer to look up embedding vectors for prefix tokens, then sends those vectors to servers and receives new representations. Once the client obtains the outputs of the final block, it computes next token probabilities and repeats this process. While the session is active, servers store attention keys and values from past client inputs and use them for subsequent inference steps. Clients also store past inputs to each server so that if any server fails or goes offline, another one can quickly take its place. The procedure for finding servers and recovering from failures is detailed in Section 3.2. Client-side API. To generate tokens with PETALS, one first creates an inference session. An inference session iteratively takes inputs as Py-Torch tensors, runs them through all Transformer blocks and returns final representations as PyTorch tensors. Under the hood, sessions form server chains, hold cache, and recover from server failures in a way that is transparent to the user. An example of using an inference session is shown in Figure System requirements. For BLOOM-176B inference, clients need at least 12 GB RAM, most of which is used to store 3.6B embedding parameters. We recommend at least 25 Mbit/s bidirectional bandwidth to avoid bottlenecks in network transfers. Simple greedy inference can use any CPU that runs PyTorch, but more advanced algorithms (e.g., beam search) may require a GPU. In turn, servers need at least 16 GB of CPU RAM, 100 Mbit/s bandwidth and a GPU with at least 8 GB of memory. Chat application. We also provide an example application that lets users chat with LLMs in a messenger-like user interface (see Figure While LLMs achieve high quality on many problems with simple prompt engineering To combat this issue, the NLP community has developed parameter-efficient fine-tuning methods that keep most of the pretrained model intact. Some of them Distributed fine-tuning. The core principle of fine-tuning in a distributed network is that clients "own" trained parameters while servers host original pretrained layers. Servers can run backpropagation through their layers and return gradients with respect to activations, but they do not update the server-side parameters. Thus, clients can simultaneously run different training tasks on the same set of servers without interfering with one another. To illustrate this principle, we first review an example of soft prompt-tuning for text classification and then generalize it to other methods and tasks. Similarly to Section 2.1, clients store the embedding layers locally and rely on servers to compute the activations of Transformer blocks. 
In this finetuning scenario, a client needs to store trainable soft prompts (task-specific input embeddings) and a linear classification head. For each training batch, the client routes its data through a chain of remote servers to compute sentence representations, then obtains predictions with the classifier head and computes the cross-entropy This interface can also support other popular parameter-efficient fine-tuning algorithms, such as LoRA Although most fine-tuned extensions for pretrained models can be easily shared as-is, simplifying the workflow for sharing these extensions enables users to more easily adapt the model to their target scenario. Indeed, existing model hubs One of the primary considerations for distributed inference is its performance. It can be broken down into three main aspects: computation speed (5-yearold gaming GPU vs. new data center GPU), communication delay due to distance between nodes (intercontinental vs. local), and communication delay due to bandwidth (10 Mbit/s vs. 10 Gbit/s). In terms of raw FLOPs, even consumer-grade GPUs like GeForce RTX 3070 could run a complete inference step of BLOOM-176B in less than a second (NVIDIA, 2020). However, the GPU memory can only hold a small fraction of model layers: running naïvely would require 44 RTX 3070 GPUs and 44 communication rounds. To make this more efficient, we use quantization to store more parameters per GPU, reducing the number of consecutive devices and communication rounds (Section 3.1). On top of that, each client prioritizes nearby servers to make communication rounds faster (Section 3.2). We assume that each server has at least 16 GB of CPU RAM, 8 GB of GPU memory. From this assumption, one of the primary considerations is to reduce the model memory footprint, so that each device can hold more Transformer blocks. For example, BLOOM has 176B parameters, which takes 352 GB of GPU memory in 16-bit precision. Thus, in the worst case, the model is distributed among 352 GB / 8 GB (per server) = 44 nodes. We can reduce both frequency and amount of data transfer in two ways. First, we can achieve this by compressing the hidden states exchanged between nodes. Second, we can compress the weights to 8-bit precision, reducing the number of nodes required to hold all layers. For BLOOM, this changes the number of required nodes from 44 to 22, which reduces latency in half and decreases the probability of a failure. Compressing communication buffers. To send less data between subsequent pipeline stages, we use dynamic blockwise quantization Compressing model weights. We use 8-bit mixed matrix decomposition for matrix multiplication to quantize the weights to 8-bit precision and reduce the memory footprint compared to 16-bit weights, as suggested in As shown in Table Another challenge is to provide reliable inference and training despite nodes joining, leaving or failing at any time. To address this, PETALS uses the hivemind library (Learning@home, 2020) for decentralized training with custom fault-tolerant algorithms for servers and clients detailed below. Fault-tolerant generation. During inference, clients rely on servers to store attention keys and values for previous tokens. This introduces a potential problem if one or more servers disconnect (or fail) while generating a long sequence. To combat this, PETALS needs a way to recover from server failures transparently to the user. A naive solution would be to restart the generation procedure, treating previously generated tokens as part of the prompt. 
This approach has two scaling issues. When generating longer sequences, the inference would have to restart more often, increasing the inference time superlinearly. Also, the more participants take part in the generation procedure, the higher the chance that one of them fails and the entire procedure needs to restart. To reduce the time spent re-running computations Petals uses a special generation algorithm that supports partial restarts. To enable this, we make both clients and servers store previous activations. While each server stores past keys and values for its local blocks, each client remembers intermediate activations at every "junction" between servers (i.e., the activations it receives from the previous server and sends to the next one). If one of the servers fail, the client only needs to replace the activations from that server. To do so, the client finds other servers holding the same blocks, then resends the cached activations that were sent to the previous (failed) server. Once this recovery is complete, the replacement server is in the same "inference state" as the rest of the chain, and the client can continue generating tokens. Communication pattern. The algorithm above implies that clients send requests and receive responses from servers one by one, while servers do not directly pass activations to each other. This is suboptimal for sequential inference, where performance is bounded by the network latency. To address this, we can make intermediate servers send the output activations both (a) directly to the next server and (b) back to the client. This way, the next server will start computations as soon as possible (after only one network hop instead of two hops), while the client will still be able to reuse the activations in case of server failures. Note that, in this case, sending two times more data does not worsen performance since, typically, sequential inference is not bounded by network bandwidth. Server load balancing. First, we ensure that servers are distributed evenly among Transformer blocks. Formally, servers maximize the total model throughput by choosing the blocks with the lowest throughput, thus eliminating potential bottlenecks. Here, the block throughput is the sum of throughputs of all servers hosting this block, while the server throughput is the minimum of its network and compute throughputs (in requests/sec), measured empirically before a server joins the system. Each active server periodically announces its active blocks to a distributed hash table Since peers may leave or fail at any time, all nodes periodically check if launching a rebalancing procedure would significantly improve the overall throughput. If it is the case, they switch layers until the throughput becomes near-optimal. In particular, if all peers serving certain blocks suddenly leave the system, this procedure quickly redistributes the remaining resources to close the emerged gaps. Client-side routing. Next, we want clients to be able to find a sequence of servers that run the model in the least amount of time. During generation, clients process one or few tokens at a time; in practice, the inference time is mostly sensitive to the network latency. Thus, clients have to ping nearby servers to measure latency and then find the path with minimal time via beam search. Conversely, during fine-tuning one needs to process a batch of examples in parallel. 
Here, clients can split their batches between multiple servers using the algorithm from We evaluate the performance of PETALS by running BLOOM-176B in emulated and real-world setups. Our first setup consists of 3 local servers, each running on an A100 80GB GPU. This is an optimistic scenario that requires the least amount of communication. In the second setup, we simulate 12 weaker devices by partitioning each A100-80GB into several virtual servers (3 large and 1 small). We evaluate the above setups with three network configurations: 1 Gbit/s with < 5 ms latency, 100 Mbit/s with < 5 ms latency and 100 Mbit/s with 100 ms latency Next, we benchmark BLOOM in a real-world distributed setting with 14 smaller servers holding 2× RTX 3060, 4×2080Ti, 2×3090, 2×A4000, and 4×A5000 GPUs. These are personal servers and servers from university labs, spread across Europe and North America and connected to the Internet at speeds of 100-1000 Mbit/s. Four of the servers operate from under firewalls In Table We also test the effect of having multiple clients. For 12 servers with 100 Mbit/s bandwidth and 100 ms latency, if 8 clients run inference concurrently, each of them gets ≈ 20% slowdown compared to the case when it runs inference alone. Additionally, we compare PETALS with parameter offloading to run large models with limited resources We calculate the maximum throughput for offloading as follows. In 8-bit, the model uses 1 GB of memory per billion parameters while PCIe 4.0 with 16 lanes has a throughput of 256 Gbit/s (or 128 Gbit/s if two GPUs are behind a PCIe switch). As such, offloading 176B parameters takes 5.5 seconds for a regular setup and 11 seconds for a multi-GPU setup. We assume an offloading latency of zero for the upper bound estimation. These results are also shown in Table This paper introduces PETALS, a system for efficient collaborative inference and fine-tuning of large language models. We offer a user-friendly generation interface and a flexible API to access models served over the Internet. We use 8-bit compression that reduces the resource requirements to run very large models. In addition, we develop algorithms for reliable routing and load balancing. With the release of this system, we hope to broaden access to LLMs and pave the road to applications, studies or research questions that were previously not possible or simply too expensive. Running LLMs over the Internet raises a broad range of related questions. One of them is privacy: how to avoid revealing private data to outside peers. Another challenge is to ensure that participants can benefit from this system equitably, i.e. in proportion to their contribution. We discuss future problems such as privacy, security, and incentive structures in Appendix A. An important limitation of our work is data privacy: the intermediate activations of the model for given inputs are sent to the servers without any encryption. As such, it might be possible for people hosting the servers to recover the user's input data. Another limitation is security: while there are ways to detect and penalize peers sending faulty outputs, still there is a chance that peers may do that due to faulty hardware or a malicious intent. Thus, we recommend users working with sensitive data to only use servers hosted by institutions trusted to process this data or set up an isolated PETALS swarm. 
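Returning to the server load-balancing rule described earlier in this section, the sketch below illustrates it in isolation: a server's throughput is the minimum of its network and compute throughputs, block throughput is the sum over the servers hosting that block, and a joining server picks the weakest contiguous span of blocks. The data structures and the greedy span search are illustrative simplifications, not the actual PETALS implementation.

```python
# Sketch of the load-balancing rule described above: block throughput is the
# sum of throughputs of servers hosting it, a server's throughput is
# min(network, compute), and a joining server picks the contiguous span of
# blocks with the lowest aggregate throughput. All structures are illustrative.
from typing import List, Tuple

def server_throughput(network_rps: float, compute_rps: float) -> float:
    return min(network_rps, compute_rps)

def choose_blocks(block_throughput: List[float], span: int) -> Tuple[int, int]:
    """Return [start, end) of the weakest contiguous span a new server should serve."""
    best_start, best_total = 0, float("inf")
    for start in range(0, len(block_throughput) - span + 1):
        total = sum(block_throughput[start:start + span])
        if total < best_total:
            best_start, best_total = start, total
    return best_start, best_start + span

if __name__ == "__main__":
    # per-block aggregate throughput (requests/sec) announced in the DHT
    blocks = [12.0, 12.0, 3.0, 3.0, 3.0, 9.0, 9.0, 9.0]
    new_server = server_throughput(network_rps=8.0, compute_rps=5.0)
    start, end = choose_blocks(blocks, span=3)
    print(f"serve blocks [{start}, {end}) at {new_server} rps")  # [2, 5)
```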
We discuss these limitations in more detail in Appendix A and acknowledge that the development of methods for privacy-preserving and secure decentralized inference without performance penalties remains an open research problem. This work introduces a general-purpose algorithm for decentralized inference of large models, aiming to simplify access to the latest research in deep learning. Thus, we do not envision any direct negative impacts from our research aside from granting the broader public an ability to interact with LLMs trained on uncurated web-crawled data. However, all models we serve are already in open access and thus can be exposed via APIs or other means. A Discussion and future work Incentives for peers to contribute. In PETALS, peers using the client are not required to run a server. This may lead to an imbalance between supply (peers who dedicate GPUs to serve model layers) and demand (peers using the servers to perform inference or fine-tuning for their own needs) in the network. One way to encourage users to serve model layers is to introduce a system of incentives: peers running servers would earn special points, which can be spent on high-priority inference and fine-tuning or exchanged for other rewards. A key limitation of our approach is that peers serving the first layers of the model can use their inputs to recover input tokens. Thus, clients working with sensitive data should only use the servers hosted by institutions trusted to process this data. This can be achieved with the allowed_servers parameter that limits the set of servers a client can use. Alternatively, users can set up their own isolated Petals swarm. This limitation may be addressed in future work, leveraging the fields of secure multi-party computing Security. We assume that servers in our system are run by many independent parties. In practice, some of them may turn out to be faulty and return incorrect outputs instead of the actual results of forward and backward passes. This may happen due to a malicious intent to influence other people's outputs or, when rewards are introduced (as described above), to earn a reward for serving layers without actually performing the calculations. A possible way to address these issues would be to use an economically motivated approach. Some servers may vouch for the correctness of their outputs (e.g., in exchange for increased inference price) by depositing a certain number of points as a pledge. Then, for each request, they announce a cryptographic hash of the input and output tensors, so anyone having the inputs can check whether the outputs are correct. If someone finds a mismatch confirmed by a trusted third party, they can claim the server's pledge as a reward. In practice, it may be a client who suspects that they received wrong outputs or a "bounty hunter" sending requests to different servers in the hope of catching errors. While this approach still leaves a chance of receiving wrong outputs, it makes cheating costly and creates an incentive to quickly expose the malicious servers. Making changes to the main model. As discussed in Section 2.2, distributed parameterefficient fine-tuning makes it easy for users to apply the base model to new tasks. In Section 2.3, we also described how these updates can be easily shared and reused by others. This capability provides a meaningful step towards collaborative improvement of machine learning models Furthermore, we might expect the model parameters that perform best on a specific task to change over time. 
Similarly to version control systems for code, it would be useful to track versions of fine-tuned model parameters as they change. A system for rapidly testing the performance of a set of parameters on "living benchmarks" Apart from adaptation to new tasks, it would also be useful to eventually update the main model. Ideally, such updates could be tracked in a principled way. Users of PETALS could specify the versions of the model they want to use, and servers could indicate which versions they support. Introducing a newer version of the model then reduces to adding a new group of layers, which then naturally supersedes older parameters based on the approach from Section 3.2. Similarly, fine-tuned adapters could be annotated with tags denoting the model version they are applicable for. Such fine-grained model versioning is currently uncommon but would be straightforward to add to PETALS.
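To make the partial-restart mechanism described above easier to follow, here is a deliberately simplified, self-contained sketch of the client-side recovery loop: the client caches the activations it sent at each server junction and, on failure, replays only the inputs of the failed span to a replacement server. The server objects, the error handling, and the replacement lookup are stand-ins, not the actual PETALS code.

```python
# Sketch of the partial-restart idea described above: the client caches the
# activations it sent at each server "junction", so after a failure it only
# replays the inputs for the failed span to a replacement server.
from typing import Callable, List
import random

def run_chain_with_recovery(inputs: List[float],
                            servers: List[Callable[[List[float]], List[float]]],
                            find_replacement: Callable[[int], Callable]) -> List[float]:
    junction_cache: List[List[float]] = []   # activations sent to each server
    hidden = inputs
    i = 0
    while i < len(servers):
        junction_cache.append(hidden)         # remember what this server received
        try:
            hidden = servers[i](hidden)
            i += 1
        except RuntimeError:                  # server failed or disconnected
            servers[i] = find_replacement(i)  # new peer serving the same blocks
            hidden = junction_cache[i]        # resend only the cached activations
            junction_cache = junction_cache[:i]
    return hidden

if __name__ == "__main__":
    def make_server(scale: float, flaky: bool = False):
        def server(h):
            if flaky and random.random() < 0.5:
                raise RuntimeError("server went offline")
            return [x * scale for x in h]
        return server

    chain = [make_server(2.0), make_server(3.0, flaky=True), make_server(0.5)]
    print(run_chain_with_recovery([1.0, 1.0], chain, lambda i: make_server(3.0)))
    # [3.0, 3.0] regardless of whether the flaky server fails
```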
GammaE: Gamma Embeddings for Logical Queries on Knowledge Graphs
Embedding knowledge graphs (KGs) for multihop logical reasoning is a challenging problem due to massive and complicated structures in many KGs. Recently, many promising works projected entities and queries into a geometric space to efficiently find answers. However, it remains challenging to model the negation and union operator. The negation operator has no strict boundaries, which generates overlapped embeddings and leads to obtaining ambiguous answers. An additional limitation is that the union operator is non-closure, which undermines the model to handle a series of union operators. To address these problems, we propose a novel probabilistic embedding model, namely Gamma Embeddings (GammaE), for encoding entities and queries to answer different types of FOL queries on KGs. We utilize the linear property and strong boundary support of the Gamma distribution to capture more features of entities and queries, which dramatically reduces model uncertainty. Furthermore, Gam-maE implements the Gamma mixture method to design the closed union operator. The performance of GammaE is validated on three large logical query datasets. Experimental results show that GammaE significantly outperforms state-of-the-art models on public benchmarks.
Most important advances encode knowledge into large-scale graph data to model real-world knowledge graphs (KGs), such as Wikidata Knowledge graph reasoning can be represented by the first-order logic (FOL) queries with basic operators, such as the existential quantifier (∃), conjunction (∧), disjunction (∨), and negation (¬). One regular set of such graph queries is the conjunctive query, which only consists of existential quantifiers (∃) and conjunctions (∧) Current methods Other approaches apply the density function to encode entities and relations Here we propose a Gamma Embedding (Gam-maE) probabilistic model to encode entities and relations for multi-hop reasoning on KGs. In the Gamma density space, all FOL operations, such as existential quantifier (∃), intersection (∧), union (∨), and negation (¬), are closed and follow Gamma distributions. The linear property of the Gamma density can dramatically improve the computation efficiency for discovering answers. The contributions of our work are summarized as follows: 1. GammaE provides a closed solution for all FOL operators, including a projection, intersection, union, and negation. 2. GammaE firstly implements the Gamma mixture method to alleviate the non-closure problem on union operators, which significantly reduces computation steps. 3. GammaE enables query embeddings to have strict boundaries on the negation operator, which can effectively avoid finding ambiguous answers. 4. GammaE outperforms state-of-the-art models on multi-hop reasoning over three benchmark datasets. Our results have implications for encoding entities and relations, advancing the science of multihop reasoning, and improving our understanding of general knowledge graph. The rest of the paper is organized as follows: Section 2 shows the related work in multi-hop reasoning over KGs. Next, sections 3 and 4 theoretically demonstrate Gamma embeddings and define its FOL operations. The experimental setup and results are explicitly shown in section 5. Finally, section 6 makes a clear conclusion and section 7 briefly presents its limitations.
This work is closely related to query embedding approaches. Another line of work encodes entities as probabilistic densities to perform multi-hop logical reasoning.

A knowledge graph (KG) is a directed graph G = (V, E, R), where V is the set of entities, E is the set of triplets, and R denotes the set of relations. A direct triplet of a KG is represented as (e_1, r, e_2) ∈ E, that is, a relation r linking the entity e_1 to the entity e_2, where e_1, e_2 ∈ V and r ∈ R. Each triplet has a relational binary function, i.e., r(e_1, e_2) = True if and only if (e_1, r, e_2) exists.

First-order Logic Queries. First-order logic (FOL) queries contain four basic operators, namely the existential quantifier (∃), conjunction (∧), disjunction (∨), and negation (¬). A first-order logic query q consists of a non-variable anchor entity set V_a ⊆ V, an existentially quantified variable set {V_1, V_2, ..., V_k}, and a target set V_t, i.e., the query answers. The logical form of a disjunctive query q can be written in disjunctive normal form as

q[V_t] = V_t . ∃ V_1, ..., V_k : c_1 ∨ c_2 ∨ ... ∨ c_n,

where each c_i is a conjunctive query with one or more literals b_ij, i.e., c_i = b_i1 ∧ b_i2 ∧ ... ∧ b_im, and each b_ij is an atomic formula or its negation.

Computation Graphs. Each query generates a corresponding computation graph, where entities are mapped to nodes and relations with atomic formulas are computed by logical operators. These logical operators are defined as follows:
1. Relation Projection. Given a set of entities S ⊆ V and a relation type r ∈ R, the neighbours of S are S′ = ∪_{v∈S} {v′ ∈ V : r(v, v′) = True}.
2. Intersection. Given sets of entities {S_1, S_2, ..., S_k}, their intersection is ∩_{i=1}^{k} S_i.
3. Union. For sets of entities {S_1, S_2, ..., S_k}, their union is ∪_{i=1}^{k} S_i.
4. Negation. For a set of entities S ⊆ V, its complement is S̄ ≡ V \ S.

To address multi-hop reasoning on incomplete KGs, we propose a novel model, GammaE, which encodes both entities and queries as Gamma distributions. The probabilistic logical operators are then realised as relation projection, intersection, union, and negation, giving GammaE an efficient way to handle arbitrary FOL queries. The schematics of GammaE answering graph queries are illustrated in the corresponding figure. A Gamma distribution is defined as

f(x; α, β) = β^α x^(α−1) e^(−βx) / Γ(α),

where x > 0, α > 0 is the shape, β > 0 is the rate, and Γ(·) is the Gamma function. Thus, the uncertainty of the distribution can be obtained from its information entropy,

H = α − ln β + ln Γ(α) + (1 − α) ψ(α),

where ψ(·) is the digamma function (see Appendix A).

Given a set of entities' embeddings S, the probabilistic projection operator maps S to another set S′ depending on the relation type r. This operator can be defined as S′ = MLP_r(S), where MLP_r is a multi-layer perceptron for the given relation type r that transforms the Gamma parameters of S into those of the transformed set S′. It is essential that the projection operator represents a relation type r mapping one set of entities to another fuzzy set. To avoid obtaining a huge number of answers, the Gamma embeddings are limited to a fixed size, which helps GammaE scale. A visualization of the projection operator is shown in the corresponding figure.

For two input embeddings of two entities S_1, S_2, their intersection operator is defined as

P_Inter = (1/Z) · (P_{S_1})^{w_1} (P_{S_2})^{w_2},

where Z is a normalization constant and w_1 + w_2 = 1. Since the weighted product of Gamma densities f(x; α, β) corresponds to a linear combination of the parameters (α, β), Eq. 5 reduces to a Gamma density with parameters (w_1 α_1 + w_2 α_2, w_1 β_1 + w_2 β_2). Thus, given k input embeddings S_1, S_2, ..., S_k, the intersection of Gamma embeddings P_{S_Inter} can be calculated, up to a normalization constant Z, as

P_{S_Inter} ∝ f(x; Σ_{i=1}^{k} w_i α_i, Σ_{i=1}^{k} w_i β_i), with Σ_{i=1}^{k} w_i = 1,

where the proof is presented in Appendix B and the operation is illustrated in the corresponding figure.
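As an illustration of the operators just defined, the PyTorch sketch below implements a relation projection that maps Gamma parameters through a relation-specific MLP, and an intersection that takes attention-weighted sums of the shape and rate parameters. Layer sizes, the softplus used to keep the parameters positive, and all names are assumptions for exposition, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of the projection and intersection operators
# described above; sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationProjection(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.ReLU(),
                                 nn.Linear(2 * dim, 2 * dim))

    def forward(self, alpha: torch.Tensor, beta: torch.Tensor):
        out = self.mlp(torch.cat([alpha, beta], dim=-1))
        a, b = out.chunk(2, dim=-1)
        return F.softplus(a) + 1e-4, F.softplus(b) + 1e-4   # keep alpha, beta > 0

def gamma_intersection(alphas: torch.Tensor, betas: torch.Tensor,
                       attention_logits: torch.Tensor):
    """alphas, betas: [k, dim]; weights sum to 1 over the k input embeddings."""
    w = torch.softmax(attention_logits, dim=0)
    return (w * alphas).sum(dim=0), (w * betas).sum(dim=0)

if __name__ == "__main__":
    k, dim = 3, 8
    alphas, betas = torch.rand(k, dim) + 0.5, torch.rand(k, dim) + 0.5
    a_int, b_int = gamma_intersection(alphas, betas, torch.randn(k, dim))
    proj = RelationProjection(dim)
    print(proj(a_int, b_int)[0].shape)   # torch.Size([8])
```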
For learning the parameters w 1 , w 2 , ..., w k , we realize it with the self-attention mechanism. A single attention parameter is to The union operator is implemented by Gamma mixture models. For k input embeddings S 1 , S 2 , ..., S k , the union results can be calculated as where θ i = exp((P S i ) j exp(P S i ) , and Here, θ i ∈ Θ is the learned weight for each Gamma density in the Gamma mixture model and also uses the self-attention mechanism. Its operation is plotted in Fig. A probabilistic negation operator takes Gamma embedding S as the input, and then obtains an embedding of the complement ¬ S(S). A desired property of the negation operator N is to reverse in the sense where regions of high density in P S should have low probability density in N P S and vice versa (Fig. where P S = f (x; α, β) and ϵ ∈ (0, 1) is the elasticity and set to 0.05. The approach has one important advantage, that is, the pair of two Gamma embeddings has no intersection points. Furthermore, the elasticity ϵ can effectively increase the distance of two opposite embeddings. To avoid the identity problem in the negation, we design two labels (0 and 1) to mark the original Gamma embedding (0) and its complement embedding (1). The label vector can effectively record this status for each entity. Distance Function. In our work, entities and queries are encoded into m-dimensional Gamma space. A entity e is embedded by P e = [f (x; α e 1 , β e 1 ), ..., f (x; α e m , β e m )], and a query embedding q is represented by P q = [f (x; α q 1 , β q 1 ), ..., f (x; α q m , β q m )]. According to Kullback-Leibler (KL) divergence, the distance between two Gamma distributions is given by KL(f (x; α e , β e ), f (x; α q , β q )) = (α eα q )ψ(α e ) log Γ(α e ) + log Γ(α q ) + α q (log β elog β q ) where ψ( * ) is the digamma function. Its proof is shown in Appendix C. Consequently, KL divergence of the entity e and the query q is obtained dist(e; q) = m i=1 KL(P e , i : P q , i), where P e , i (P q , i) represent the i-th Gamma distribution f (x; α e i , β e i ) with parameters α e i and β e i (f (x; α q i , β q i ) with parameters α q i and β q i ) in the entity (query) embedding vector. By this method, query embeddings will theoretically cover all answer entity embeddings Training Objective. In the training process, our objective is to minimize the KL divergence between Gamma embeddings of a query and its answer entity while maximizing the distance between that of this query and wrong answers via negative sampling method where v ∈ [q] represents the answer entity of q, v ′ j / ∈ |q| means a random negative sample, k is the number of negative samples, γ > 0 is the margin, and σ( * ) denotes the sigmoid function. For inference, GammaE aims to find the answers of a query q, and ranks all the entities based on the KL divergence defined in Eq. 12 in constant time using Locality Sensitive Hashing GammaE is evaluated on three large-scale KG benchmark datasets, including FB15k Datasets. For multi-hop reasoning, GammaE is studied on three standard benchmark datasets (details are shown in Appendix D.1): • FB15K Baseline. We compare GammaE with state-ofthe-art models, including GQE Implementation Details. In the training process, the weight w and θ i are calculated by the self-attention mechanism. For updating parameters, Adam is used as the optimizer (Kingma and In the experiments, we have run GammaE model 20 times with different random seeds, and reported the mean values of GammaE 's MRR results on EPFO and negation queries. 
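Returning to the distance function and the mixture-based union operator defined above, the sketch below evaluates the per-dimension KL divergence between Gamma embeddings (Eq. 11-12) and the density of an attention-weighted Gamma mixture, using SciPy. The softmax parameterisation of θ and all names are illustrative renderings of the text.

```python
# Sketch of the KL-divergence distance and the Gamma-mixture union operator
# described above; all names are illustrative.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.stats import gamma as gamma_dist

def gamma_kl(alpha_e, beta_e, alpha_q, beta_q):
    """KL( Gamma(alpha_e, beta_e) || Gamma(alpha_q, beta_q) ), elementwise."""
    return ((alpha_e - alpha_q) * digamma(alpha_e)
            - gammaln(alpha_e) + gammaln(alpha_q)
            + alpha_q * (np.log(beta_e) - np.log(beta_q))
            + alpha_e * (beta_q - beta_e) / beta_e)

def dist(entity, query):
    """Sum of per-dimension KL divergences between entity and query embeddings."""
    (a_e, b_e), (a_q, b_q) = entity, query
    return float(gamma_kl(a_e, b_e, a_q, b_q).sum())

def gamma_mixture_pdf(x, alphas, betas, theta_logits):
    """Union operator: density of the attention-weighted Gamma mixture."""
    theta = np.exp(theta_logits - theta_logits.max())
    theta /= theta.sum()
    pdf = gamma_dist.pdf(x[:, None], a=alphas, scale=1.0 / betas)  # rate -> scale
    return pdf @ theta

if __name__ == "__main__":
    entity = (np.array([2.0, 3.0]), np.array([1.0, 0.5]))
    query = (np.array([2.1, 2.8]), np.array([0.9, 0.6]))
    print(dist(entity, query))            # small for well-matched pairs
    xs = np.linspace(0.1, 10.0, 5)
    print(gamma_mixture_pdf(xs, np.array([2.0, 5.0]),
                            np.array([1.0, 1.0]), np.array([0.0, 0.5])))
```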
For the error bars of main results, we report them in Appendix E.1. And the computational costs of GammaE are listed in Appendix E.2. Modeling EPFO (containing only ∃, ∧, and ∨) Queries. First, we compare GammaE with baselines that can only model queries with conjunction and disjunction without negation. Table Table Modeling Queries with Negation (¬). Next, the performance of GammaE is evaluated to model queries with negation. Since GQE and Q2B cannot handle the negation operator, they won't be compared in the experiments. Table To study the uncertainty of GammaE, we need to investigate the cardinality difference between a predicted set and an answer set. The cardinality can efficiently represent the uncertainty of an embedding model. For capturing the cardinality difference, we calculate the correlations between the differential entropy of the Gamma embedding P [q] and the cardinality of the answer set Existing models compute the union operator by De Morgan's laws (DM) and disjunctive normal form (DNF). Due to De Morgan's laws, the union ∪ k i=1 S i can be rewritten as ∪ k i=1 S i = ∩ k i=1 S i . The disjunctive normal form (DNF) is to move all "union" edges to the last step of the computation graph Table GammaE with DM 37.7 29.9 GammaE with DNF 53.5 30.9 GammaE with MM 57.1 57.1 57.1 34.5 34.5 34.5 GammaE with DM 13.5 10.1 GammaE with DNF 13.9 10.3 GammaE with For the negation operator, BETAE takes the reciprocal of the parameter α and β, i.e., N ([(α, β)] = One advantage of our negation operator is to design the elasticity. Since the negation operator aims to maximize the distance between the original embedding and its complement embedding, the elasticity could effectively increase the distance to obtain good performance. Table Interestingly, since these queries are first-order queries, they only contain one negation operator. Therefore, our experiments didn't need to process two negation operators in a query, not facing identity cases. In this paper, we propose a novel embedding model, namely GammaE, to handle arbitrary FOL queries and efficiently realize multi-hop reasoning on KGs. Given a query q, GammaE can map it onto the Gamma space for reasoning it by probabilistic logical operators on the computation graph. Compared to previous methods, its union operator uses the Gamma mixture model to avoid the disjunctive normal form and De Morgan's laws. Furthermore, GammaE significantly improves the perfor-mance of the negation operator due to alleviating the boundary effect. Extensive experimental results show that GammaE outperforms state-of-the-art models on multi-hop reasoning over arbitrary logical queries as well as modeling the uncertainty. Overall, GammaE aims to promote graph embeddings for logical queries on KGs. GammaE can handle all logical operators on largescale KGs. Besides, all logical operators are closed in the Gamma space. It will significantly increase the capability and robustness of multi-hop reasoning on massive KGs. One potential risk is that the model could effectively model basic logical operators, not for more complicated operators or cycle graphs. If a query has many loops, the operations become harder. We will continue to work on this problem to design more effective logical operators. Importantly, we will continue to study this problem in the future. The information entropy of the Gamma distribution is defined as + βE(x) . Eq. A.1 can be solved where ψ( * ) is the digamma function. 
For two dimensions of gamma embeddings, the intersection operator is Due to w 1 + w 2 = 1, Eq. A.4 can be approximated as where K = (w 1 β 1 ) w 1 α 1 (w 2 β 2 ) w 2 α 2 ZΓ(w 1 α 1 )Γ(w 2 α 2 ) . Based on Eq. A.5, the intersection operator of k Gamma embeddings can be obtained x αq -1 e -βq x β αq q Γ(αq) dx = -β2 ∞ 0 x αq e -βq x β αq q Γ(αq) dx To solve the right integral in Eq. A.7, one obtains -(α e -1) log β q , (A.9) Eq. A.7 can be solved Γ(αq) = ψ(α q ), Eq. A.10 can be rewritten as I(α e , β e , α q , β q ) = -α q β e β q log Γ(α e ) β αe e + (α e -1)ψ(α q ) -(α e -1) log β q . (A.11) The KL divergence of (f (x; α e , β e )) and f (x; α q , β q ) can be obtained = I(α e , β e , α e , β e ) -I(α q , β q , α e , β e ) = (α eα q )ψ(α e )log Γ(α e ) + log Γ(α q ) + α q (log β elog β q ) + α e β qβ e β e . (A.12) Three datasets are used in the experiments, namely FB15k The training dataset is composed of five conjunctive structures (1p/2p/3p/2i/3i) and five queries with negation (2in/3in/inp/pni/pin). The validation dataset and test dataset contain all logical structures, which never occurred during training. Table To obtain best results, we finetune these hyperparameters, such as embedding dimensions To evaluate the training speed, we calculated the average time per 100 training steps. We ran all models with the same number of embedding parameters on a Tesla V100. The results are shown Table Pearson Correlation Coefficient measures the linear correlation of the two variables. Table
Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment
Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, is still challenging because: (i) it is hard to extract informative features to represent human affects from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at abstract levels, ignoring time-dependent interactions between modalities. Addressing such issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterancelevel sentiment and emotion from text and audio data. Our introduced model outperforms state-of-the-art approaches on published datasets, and we demonstrate that our model's synchronized attention over modalities offers visual interpretability.
With the recent rapid advancements in social media technology, affective computing is now a popular task in human-computer interaction. Sentiment analysis and emotion recognition, both of which require applying subjective human concepts for detection, can be treated as two affective computing subtasks on different levels A basic challenge in sentiment analysis and emotion recognition is filling the gap between extracted features and the actual affective states Another challenge is the fusion of cues from heterogeneous data. Most previous works focused on combining multimodal information at a holistic level, such as integrating independent predictions of each modality via algebraic rules We evaluated our model on four published sentiment and emotion datasets. Experimental results show that the proposed architecture outperforms state-of-the-art approaches. Our methods also allow for attention visualization, which can be used for interpreting the internal attention distribution for both single-and multi-modal systems. The contributions of this paper are: (i) a hierarchical multimodal structure with attention mechanism to learn informative features and high-level associations from both text and audio; (ii) three wordlevel fusion strategies to combine features and learn correlations in a common time scale across different modalities; (iii) word-level attention visualization to help human interpretation. The paper is organized as follows: We list related work in section 2. Section 3 describes the proposed structure in detail. We present the experiments in section 4 and provide the result analysis in section 5. We discuss the limitations in section 6 and conclude with section 7.
|
Despite the large body of research on audio-visual affective analysis, there is relatively little work on combining text data. Early work combined human transcribed lexical features and low-level handcrafted acoustic features using feature-level fusion Our architecture is inspired by the document classification hierarchical attention structure that works at both the sentence and word level We introduce a multimodal hierarchical attention structure with word-level alignment for sentiment analysis and emotion recognition (Figure The forced alignment between the audio and text on the word-level prepares the different data for feature extraction. We align the data at the wordlevel because words are the basic unit in English for human speech comprehension. We used aeneas For the text input, we first embedded the words into 300-dimensional vectors by word2vec where T i is the embedded word vector. For the audio input, we extracted Melfrequency spectral coefficients (MFSCs) from raw audio signals as acoustic inputs for two reasons. Firstly, MFSCs maintain the locality of the data by preventing new bases of spectral energies resulting from discrete cosine transform in MFCCs extraction To extract features from embedded text input at the word level, we first used bidirectional GRUs, which are able to capture the contextual information between words. It can be represented as: where bi GRU is the bidirectional GRU, t h → i and t h ← i denote respectively the forward and backward contextual state of the input text. We combined t h → i and t h ← i as t h i to represent the feature vector for the ith word. We choose GRUs instead of LSTMs because our experiments show that LSTMs lead to similar performance (0.07% higher accuracy) with around 25% more trainable parameters. To create an informative word representation, we adopted a word-level attention strategy that generates a one-dimensional vector denoting the importance for each word in a sequence Determine time interval of each word 3: Text Attention Module 7: Frame-Level Attention Module 18: end for 23: Word-Level Attention Module 25: w e i ← getEnergies(w h i ) w α i ← getDistribution(w e i ) end for return w h i , w α i 30: end procedure 2014), we compute the textual attentive energies t e i and textual attention distribution t α i by: where W t and b t are the trainable parameters and v t is a randomly-initialized word-level weight vector in the text branch. To learn the word-level interactions across modalities, we directly use the textual attention distribution t α i and textual bidirectional contextual state t h i as the output to aid word-level fusion, which allows further computations between text and audio branch on both the contextual states and attention distributions. We designed a hierarchical attention model with frame-level acoustic attention and word-level at-tention for acoustic feature extraction. Frame-level Attention captures the important MFSC frames from the given word to generate the word-level acoustic vector. Similar to the text attention module, we used a bidirectional GRU: where f h → ij and f h ← ij denote the forward and backward contextual states of acoustic frames. A ij denotes the MFSCs of the jth frame from the ith word, i ∈ [1, N ]. f h ij represents the hidden state of the jth frame of the ith word, which consists of f h → ij and f h ← ij . We apply the same attention mechanism used for textual attention module to extract the informative frames using equation 3 and 4. 
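To make the word-level attention module concrete, here is a minimal PyTorch sketch of the bidirectional GRU plus additive attention described by Eqs. (2)-(4). The paper's implementation is in Keras; only the 100-unit GRU size and the 300-dimensional word2vec input come from the text, everything else (names, shapes) is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAttention(nn.Module):
    """Bidirectional GRU + additive attention over a word sequence (sketch of Eqs. 2-4)."""
    def __init__(self, input_dim, hidden_dim=100):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)      # W_t, b_t
        self.context = nn.Parameter(torch.randn(2 * hidden_dim))   # v_t, randomly initialised

    def forward(self, x):                         # x: (batch, seq_len, input_dim)
        h, _ = self.gru(x)                        # h_i = [h_fwd; h_bwd], (batch, seq, 2*hidden)
        e = torch.tanh(self.proj(h)) @ self.context   # attentive energies e_i, (batch, seq)
        alpha = F.softmax(e, dim=1)               # attention distribution alpha_i
        return h, alpha                           # both are passed on to word-level fusion

# usage: 300-d embedded words for a batch of 8 utterances of length 20
emb = torch.randn(8, 20, 300)
h, alpha = WordAttention(input_dim=300)(emb)
print(h.shape, alpha.shape)   # torch.Size([8, 20, 200]) torch.Size([8, 20])
```

The same module can serve both the textual branch and the frame/word levels of the acoustic branch, since the paper applies the identical attention mechanism in each case.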
As shown in Figure Word-level Attention aims to capture the word-level acoustic attention distribution w α i based on formed word vector f V i . We first used equation 2 to generate the word-level acoustic contextual states w h i , where the input is f V i and w h i = (w h → i , w h ← i ). Then, we compute the word-level acoustic attentive energies w e i via equation 3 as the input for equation 4. The final output is an acoustic attention distribution w α i from equation 4 and acoustic bidirectional contextual state w h i . Fusion is critical to leveraging multimodal features for decision-making. Simple feature concatenation without considering the time scales ignores the associations across modalities. We introduce word-level fusion capable of associating the text and audio at each word. We propose three fusion strategies (Figure 3: end for 8: Vertical Fusion (VF) 9: 11: 12: V i ← weighted(h i , s α i ) 13: end for 14: Fine-tuning Attention Fusion (FAF) 15: u e i ← getEnergies(h i ) 17: u α i ← getDistribution(u e i , s α i ) 18: end for 20: Decision Making 21: return E 23: end procedure Horizontal Fusion (HF) provides the shared representation that contains both the textual and acoustic information for a given word (Figure where t V i and w V i are word-level representations for text and audio branches, respectively; (ii) concatenating them into a single space and further applying a dense layer to create the shared context vector V i , and The HF combines the unimodal contextual states and attention weights; there is no attention interaction between the text modality and audio modality. The shared vectors retain the most significant characteristics from respective branches and encourages the decision making to focus on local informative features. Vertical Fusion (VF) combines textual attentions and acoustic attentions at the word-level, using a shared attention distribution over both modalities instead of focusing on local informative representations (Figure Fine-tuning Attention Fusion (FAF) preserves the original unimodal attentions and provides a fine-tuning attention for the final prediction (Figure2 (c)). The averaging of attention weights in vertical fusion potentially limits the representational power. Addressing such issue, we propose a trainable attention layer to tune the shared attention in three steps: (i) computing the shared attention distribution s α i and shared bidirectional contextual states h i separately using the same approach as in vertical fusion; (ii) applying attention fine-tuning: where W u , b u , and v u are additional trainable parameters. The u α i can be understood as the sum of the fine-tuning score and the original shared attention distribution s α i ; (iii) calculating the weight of u α i and h i to form the final shared context vector V i . The output of the fusion layer V i is the ith shared word-level vectors. To further make use of the combined features for classification, we applied a CNN structure with one convolutional layer and one max-pooling layer to extract the final representation from shared word-level vectors where k is the width of the convolutional filters, f i represents the features from window i to i + k -1. W c and b c are the trainable weights and biases. We get the final representation c by concatenating all the feature maps. A softmax function is used for the final classification. 
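The three word-level fusion strategies can be sketched as follows. This is a hypothetical PyTorch re-implementation: the 200-dimensional inputs follow the bidirectional GRU size, but the averaging used for the shared attention in VF and the exact way the FAF fine-tuning score is combined with the shared attention are assumptions based on the description:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordLevelFusion(nn.Module):
    """Sketch of the three word-level fusion strategies (HF / VF / FAF)."""
    def __init__(self, dim=200, mode="FAF"):
        super().__init__()
        self.mode = mode
        self.dense = nn.Linear(2 * dim, dim)        # projects [text; audio] to a shared space
        self.proj = nn.Linear(dim, dim)             # W_u, b_u (FAF only)
        self.v_u = nn.Parameter(torch.randn(dim))   # v_u (FAF only)

    def forward(self, t_h, t_alpha, w_h, w_alpha):
        if self.mode == "HF":                       # fuse locally weighted unimodal vectors
            t_v = t_alpha.unsqueeze(-1) * t_h
            w_v = w_alpha.unsqueeze(-1) * w_h
            return self.dense(torch.cat([t_v, w_v], dim=-1))
        s_alpha = 0.5 * (t_alpha + w_alpha)         # shared attention over both modalities
        h = self.dense(torch.cat([t_h, w_h], dim=-1))
        if self.mode == "VF":
            return s_alpha.unsqueeze(-1) * h
        u_e = torch.tanh(self.proj(h)) @ self.v_u   # FAF: trainable fine-tuning energies
        u_alpha = F.softmax(u_e, dim=1) + s_alpha   # sum of fine-tuning score and s_alpha
        return u_alpha.unsqueeze(-1) * h            # shared word-level vectors V_i

# shared word-level vectors for a batch of 8 utterances, 20 words, 200-d features
V = WordLevelFusion()(torch.randn(8, 20, 200), torch.rand(8, 20),
                      torch.randn(8, 20, 200), torch.rand(8, 20))
print(V.shape)   # torch.Size([8, 20, 200])
```

The resulting vectors V_i would then feed the convolutional decision-making layer described above.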
We evaluated our model on four published datasets: two multimodal sentiment datasets (MOSI and YouTube) and two multimodal emotion recognition datasets (IEMOCAP and EmotiW). MOSI dataset is a multimodal sentiment intensity and subjectivity dataset consisting of 93 reviews with 2199 utterance segments YouTube dataset is an English multimodal dataset that contains 262 positive, 212 negative, and 133 neutral utterance-level clips provided by IEMOCAP is a multimodal emotion dataset including visual, audio, and text data EmotiW We compared the proposed architecture to published models. Because our model focuses on extracting sentiment and emotions from human speech, we only considered the audio and text branch applied in the previous studies. BL-SVM extracts a bag-of-words as textual features and low-level descriptors as acoustic features. An SVM structure is used to classify the sentiments LSTM-SVM uses LLDs as acoustic features and bag-of-n-grams (BoNGs) as textual features. The final estimate is based on decision-level fusion of text and audio predictions TFN uses a tensor fusion network to extract interactions between different modality-specific features LSTM(A) introduces a word-level LSTM with temporal attention structure to predict sentiments on MOSI dataset SVM Trees extracts LLDs and handcrafted bagof-words as features. The model automatically generates an ensemble of SVM trees for emotion classification GSV-eVector generates new acoustic representations from selected LLDs using Gaussian Supervectors and extracts a set of weighed handcrafted textual features as an eVector. A linear kernel SVM is used as the final classifier C-MKL 2 extracts textual features using a CNN and uses openSMILE to extract 6373 acoustic features. Multiple kernel learning is used as the final classifier H-DMS uses a hybrid deep multimodal structure to extract both the text and audio emotional features. A deep neural network is used for feature-level fusion Utterance-level Fusion (UL-Fusion) focuses on fusing text and audio features from an entire utterance Decision-level Fusion (DL-Fusion) Inspired by We implemented the model in Keras with Tensorflow as the backend. We set 100 as the dimension for each GRU, meaning the bidirectional GRU dimension is 200. For the decision making, we selected 2, 3, 4, and 5 as the filter width and apply 300 filters for each width. We used the rectified linear unit (ReLU) activation function and set 0.5 as the dropout rate. We also applied batch normalization functions between each layer to overcome internal covariate shift The experimental results of different datasets show that our proposed architecture achieves state-of-the-art performance in both sentiment analysis and emotion recognition (Table For sentiment analysis, the proposed architecture with FAF strategy achieves 76.4% weighted accuracy, which outperforms all the five baselines (Table For emotion recognition, our model with FAF achieves 72.7% accuracy, outperforming all the baselines. The result shows the proposed model brings a significant accuracy gain to emotion recognition, demonstrating the pros of the finetuning attention structure. It also shows that wordlevel attention indeed helps extract emotional features. Compared to C-MKL 2 and SVM Trees that require feature selection before fusion and prediction, our model does not need an additional architecture to select features. We further evaluated our models on 5 emotion categories, including frustration. 
Our model shows 4.2% performance improvement over H-DMS and achieves 0.644 weighted-F1. As H-DMS only achieves 0.594 F1 and also uses low-level handcrafted features, our model is more robust and efficient. From Table From Table We further tested the generalizability of the proposed model. For sentiment generalization testing, we trained the model on MOSI and tested on the YouTube dataset (Table Our model allows us to easily visualize the attention weights of text, audio, and fusion to better understand how the attention mechanism works. We introduce the emotional distribution visualizations for word-level acoustic attention (w α i ), word-level textual attention (t α i ), shared attention (s α i ), and fine-tuning attention based on the FAF structure (u α i ) for two example sentences (Figure Based on our visualization, the textual attention distribution (t α i ) denotes the words that carry the most emotional significance, such as "hell" for anger (Figure There are several limitations and potential solutions worth mentioning: (i) the proposed architecture uses both the audio and text data to analyze the sentiments and emotions. However, not all the data sources contain or provide textual information. Many audio-visual emotion clips only have acoustic and visual information. The proposed architecture is more related to spoken language analysis than predicting the sentiments or emotions based on human speech. Automatic speech recognition provides a potential solution for generating the textual information from vocal signals. (ii) The word alignment can be easily applied to human speech. However, it is difficult to align the visual information with text, especially if the text only describes the video or audio. Incorporating visual information into an aligning model like ours would be an interesting research topic. (iii) The limited amount of multimodal sentiment analysis and emotion recognition data is a key issue for current research, especially for deep models that require a large number of samples. Compared large unimodal sentiment analysis and emotion recognition datasets, the MOSI dataset only consists of 2199 sentence-level samples. In our experiments, the EmotiW and MOUD datasets could only be used for generalization analysis due to their small size. Larger and more general datasets are necessary for multimodal sentiment analysis and emotion recognition in the future. In this paper, we proposed a deep multimodal architecture with hierarchical attention for sentiment and emotion classification. Our model aligned the text and audio at the word-level and applied attention distributions on textual word vectors, acoustic frame vectors, and acoustic word vectors. We introduced three fusion strategies with a CNN structure to combine word-level features to classify emotions. Our model outperforms the state-ofthe-art methods and provides effective visualization of modality-specific features and fusion feature interpretation.
| 784 | 1,700 | 784 |
SentiRec: Sentiment Diversity-aware Neural News Recommendation
|
Personalized news recommendation is important for online news services. Many news recommendation methods recommend news based on their relevance to users' historical browsed news, and the recommended news usually have similar sentiment with browsed news. However, if browsed news is dominated by certain kinds of sentiment, the model may intensively recommend news with the same sentiment orientation, making it difficult for users to receive diverse opinions and news events. In this paper, we propose a sentiment diversity-aware neural news recommendation approach, which can recommend news with more diverse sentiment. In our approach, we propose a sentiment-aware news encoder, which is jointly trained with an auxiliary sentiment prediction task, to learn sentiment-aware news representations. We learn user representations from browsed news representations, and compute click scores based on user and candidate news representations. In addition, we propose a sentiment diversity regularization method to penalize the model by combining the overall sentiment orientation of browsed news as well as the click and sentiment scores of candidate news. Extensive experiments on real-world dataset show that our approach can effectively improve the sentiment diversity in news recommendation without performance sacrifice.
|
Online news websites such as Google news 1 have gained huge popularity for consuming digital news In this paper, we propose a sentiment diversityaware news recommendation approach named Sen-tiRec, which can improve the sentiment diversity of news recommendation by considering the sentiment orientation of candidate and browsed news. In our approach, we propose a sentiment-aware news encoder, which is jointly trained with an auxiliary news sentiment prediction task, to incorporate sentiment information into news modeling and generate sentiment-aware news representations. We learn user representations from the representations of browsed news, and compute the click scores of candidate news based on their relevance to the user representations. In addition, to enhance the sentiment diversity of news recommendation, we propose a sentiment diversity regularization method to penalize our model during model training, which is based on the overall sentiment orientation of browsed news as well as the sentiment scores and click scores of candidate news. We conduct extensive experiments on a real-world benchmark dataset, and the results show that our approach can achieve better sentiment diversity and recommendation accuracy than many baseline methods. The contributions of this paper are summarized as follows: • To the best of our knowledge, this is the first work that explores to improve the sentiment diversity of news recommendation. • We propose a sentiment-aware news encoder that incorporates an auxiliary news sentiment prediction task to encode sentiment-aware news representations. • We propose a sentiment diversity regularization method to encourage the model to recommend news with diverse sentiment from the browsed news. • Extensive experiments on real-world benchmark dataset verify that our approach can recommend news with diverse sentiment without performance loss.
|
News recommendation is an important technique for online news websites to provide personalized news reading services In recent years, several news recommendation methods based on deep learning techniques are proposed In this section, we first present the formal definitions of the problem explored in this paper, then introduce the details of our sentiment diversity-aware news recommendation (SentiRec) approach. The problem studied in this paper is defined as follows. Given a user u with her news browsing history 1 , D c 2 , ..., D c P ] (N and P respectively denote the number of browsed news and candidate news), the goal of the news recommendation model is to predict the personalized click scores [ŷ 1 , ŷ2 , ..., ŷP ] of these candidate news, which are further used for ranking and display. We denote the sentiment labels of the browsed news and candidate news as [s 1 , s 2 , ..., s N ] and [s c 1 , s c 2 , ..., s c P ], respectively. In this paper we assume the sentiment labels are real values from -1 to 1, which indicate the sentiment polarity of news articles. We denote the overall sentiment orientation of browsed news as s. The sentiment diversity is defined as the differences between the sentiment orientation of recommended news and the overall sentiment of browsed news. In this section, we introduce the general news recommendation framework of our SentiRec approach, as shown in Fig. In this section, we introduce the details of the sentiment-aware news encoders in our SentiRec approach. Its architecture is shown in Fig. where V s and v s are parameters. The loss function of sentiment prediction we use is the mean absolute error (MAE), which is formulated as follows: where ŝi and s i respectively stand for the predicted sentiment score and sentiment label of the i-th news, and S denotes the number of news. The sentiment labels are obtained by the sentiment analyzer modules in Fig. To further improve the sentiment diversity of news recommendation, we propose a sentiment diversity regularization method to penalize the recommendation model according to the overall sentiment score of browsed news, the sentiment score of candidate news, and its predicted click score. As shown in Fig. A positive s indicates that the user has read news with more positive sentiment and a negative s indicates the negative sentiment is dominant in the browsed news. If the news recommender intensively recommends news with the same sentiment polarity with the overall sentiment s of a user's browsed news, it is difficult for this user to receive diverse news information. Thus, it is important to recommend news with diverse sentiment to users. To solve this problem, we propose a sentiment diversity regularization method. We first propose to compute a sentiment diversity score p with a sentiment monitor, which is formulated as follows: where a larger score of p indicates less sentiment diversity. In this formula, for a candidate news that shares the same sentiment polarity with s, the score p is larger if the model assigns it a higher click score or its sentiment and the overall browsed news sentiment are more intense, which indicate that the recommendation is less diverse in sentiment. Then, we propose a sentiment diversity loss function to regularize our model as follows: where S is the data set for model training, and p i denotes the sentiment diversity score of the i-th sample in S. In this section, we introduce how to train the models in our SentiRec approach. Following where S is the data set for model training. 
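A minimal sketch of the sentiment diversity regularizer described above is given below. The exact formula for p is not reproduced in the text, so the form p = ŷ · max(0, s̄ · s^c) is an assumption consistent with the stated properties (p grows with the click score and with same-polarity sentiment intensity), and averaging the browsed sentiment labels to obtain s̄ is likewise an assumption:

```python
import torch

def sentiment_diversity_loss(click_scores, cand_sentiments, browsed_sentiments):
    """Sketch of the sentiment-diversity regularizer L_div.

    click_scores:       (batch, n_candidates) predicted click scores y_hat
    cand_sentiments:    (batch, n_candidates) candidate sentiment scores s_c in [-1, 1]
    browsed_sentiments: (batch, n_browsed)    sentiment labels of the browsed news
    """
    s_bar = browsed_sentiments.mean(dim=1, keepdim=True)           # overall browsed sentiment
    # p is large when a candidate shares the browsed polarity, is ranked highly,
    # and both sentiments are intense -- i.e. when the recommendation is less diverse
    p = click_scores * torch.clamp(s_bar * cand_sentiments, min=0.0)
    return p.mean()                                                # added to the loss with weight mu
```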
We jointly train the news recommendation model with the auxiliary sentiment prediction task and meanwhile regularize it using the sentiment diversity loss. The final unified loss function of our approach is a weighted summation of the three loss functions, which is formulated as follows: where λ and µ are coefficients to control the relative importance of the sentiment prediction loss and sentiment diversity regularization loss. Our experiments were conducted on a real-world news recommendation dataset provided by Following Since there is no off-the-shelf metric to evaluate the sentiment diversity of news recommendation, motivated by the MRR and hit ratio metrics, we propose three metrics named Senti M RR , Senti@5 and Senti@10 to quantitatively measure sentiment 8 The relevance grade is binary, i.e., 0 for non-clicked news and 1 for clicked news. diversity. They are computed as follows: Senti@5 = max(0, s 5 i=1 Senti@10 = max(0, s 10 i=1 where C is the number of candidate news in an impression, s c i denotes the sentiment score of the candidate news with the i-th highest click score. In these metrics, higher scores indicate that the recommendation results are less diverse from the browsed news in their sentiment. We evaluate the recommendation performance and sentiment diversity of our approach by comparing it with several baseline methods, including: (1) LibFM In this section, we conduct ablation studies to verify the influence of the auxiliary sentiment prediction task in the sentiment-aware news encoder and the sentiment diversity regularization method on the recommendation performance and sentiment diversity. The results are shown in Fig. In this section, we will explore the influence of two important hyperparameters on our approach, i.e., the loss coefficients λ and µ in Eq. ( Then, we vary the value of µ under λ = 0.4 to evaluate the recommendation performance and sentiment diversity of our approach. In this section, we present several case studies to better demonstrate the effectiveness of our approach in improving sentiment diversity of news recommendation. The clicked news of a randomly selected user as well as the top ranked candidate news recommended by a state-of-the-art method NRMS and our SentiRec approach are shown in Table 4. We can see that the historical browsed news of this user are mainly about negative topics such as crime, which usually convey negative sentiment. However, the NRMS method still intensively recommends news with negative sentiment such as "Sheriff: California officer's killer...". It indicates that NRMS tends to recommend news with similar sentiment to the browsed news, which is not suitable for users to acquire diverse news information. Different from NRMS, our approach can effectively recommend news with diverse sentiment from browsed news, and the recommended news also has some inherent relatedness with browsed news in their content (e.g., both the first candidate news and the third browsed news mention "fashion"). It shows that our approach can improve the sentiment diversity of news recommendation and meanwhile keep recommendation accuracy. In this paper, we propose a sentiment diversityaware neural news recommendation approach which can effectively recommend news with diverse sentiment from browsed news. We propose a sentiment-aware news encoder to learn sentimentaware news representations by jointly training it with an auxiliary sentiment prediction task. 
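The Senti@K and Senti_MRR metrics can be computed roughly as follows. Since the exact normalisation is not recoverable from the text, the scaling by s̄, the 1/K average, and the 1/i discount are assumptions motivated by the hit-ratio and MRR analogies mentioned above:

```python
import numpy as np

def senti_at_k(ranked_candidate_sentiments, browsed_sentiment, k):
    """Sketch of Senti@K: sentiment alignment between the top-K recommended news and the
    overall browsed sentiment s_bar (higher = less sentiment-diverse)."""
    top_k = np.asarray(ranked_candidate_sentiments[:k], dtype=float)
    return max(0.0, browsed_sentiment * top_k.sum() / k)

def senti_mrr(ranked_candidate_sentiments, browsed_sentiment):
    """MRR-style variant: discount the i-th ranked candidate's sentiment by 1/i."""
    s = np.asarray(ranked_candidate_sentiments, dtype=float)
    ranks = np.arange(1, len(s) + 1)
    return max(0.0, browsed_sentiment * np.sum(s / ranks))

# candidates ranked by predicted click score, with sentiment scores s_c in [-1, 1]
print(senti_at_k([0.8, 0.5, -0.2, 0.9, 0.1], browsed_sentiment=0.6, k=5))
```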
We learn user representations from representations of browsed news, and compute click scores based on user and candidate news representations. In addition, we propose a sentiment diversity regularization method to regularize the model according to the overall sentiment orientation of browsed news as well as the click scores and sentiment scores of candidate news. Extensive experiments on realworld benchmark dataset validate that our approach can effectively enhance the sentiment diversity of news recommendation without hurting the recommendation performance. In our future work, we plan to analyze the sentiment on the entities in news and explore to improve the entity-level sentiment diversity of news recommendation. In addition, we plan to extend sentiment polarities to more kinds of emotions, such as angry, happiness, sad and surprise, to enhance the emotion diversity of news recommendation.
| 1,321 | 1,892 | 1,321 |
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
|
Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content and to which extent LLMs are apt to hallucinate, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval), a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination. To generate these samples automatically, we propose a two-stage framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate the hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content related to specific topics by fabricating unverifiable information (in about 19.5% of responses). Moreover, existing LLMs face great challenges in recognizing the hallucinations in texts. However, our experiments also show that providing external knowledge or adding reasoning steps can help LLMs recognize hallucinations. Our benchmark can be accessed at
|
The advent of large language models (LLMs) Despite these prominent capabilities of LLMs trained on large text corpora, recent work has shown that LLMs are prone to hallucination across various applications. To facilitate research in this direction, we present the Hallucination Evaluation benchmark for Large Language Models (HaluEval): a large collection of 35,000 hallucinated/normal samples for LLM analysis and evaluation. HaluEval includes 5,000 general user queries with ChatGPT responses and 30,000 task-specific examples from three tasks, i.e., question answering, knowledge-grounded dialogue, and text summarization. The construction pipeline of HaluEval is depicted in Figure
|
Furthermore, for the task-specific examples, we design an automatic two-stage approach to generate hallucinated samples. First, based on existing task datasets (e.g., HotpotQA) as seed data, we employ ChatGPT to generate hallucinated samples with two styles of task-specific instructions, i.e., one-pass and conversational. We expect that these two methods will generate diverse hallucinated samples from different aspects. Second, to select the most plausible and difficult hallucinated sample for LLM evaluation, we elaborate the filtering instruction enhanced by ground-truth examples and leverage ChatGPT for sample selection. Through the proposed sampling-then-filtering approach, we can generate a hallucinated counterpart for each specific task example. These hallucinated samples are designed to challenge the ability of LLMs in hallucination recognition and to analyze the information blind spots of LLMs. To better understand the performance of LLMs in HaluEval, we conduct experiments with several existing powerful LLMs (e.g., ChatGPT, GPT-3). Our key findings can be summarized as follows: • First, ChatGPT is likely to generate hallucinated content by fabricating unverifiable information in its responses (in about 19.5% of responses). The hallucinated texts from ChatGPT cover topics including language, climate, and technology. • Second, existing LLMs face significant challenges in identifying the hallucinations in generated text, even ChatGPT, which is used to generate these hallucinated samples (e.g., only 62.59% accuracy for ChatGPT in question answering). • Finally, the deficient performance of LLMs in recognizing hallucinations can be improved by providing explicit knowledge and adding intermediate reasoning steps, whereas contrasting hallucinated samples with the ground truth makes LLMs more confused and leads to worse performance. As the goal of HaluEval is to understand what types of content and to which extent LLMs tend to hallucinate, the benchmark contains a myriad of correct samples and their hallucinated counterparts. This collection is created in two ways, i.e., automatic generation and human annotation. Our generation pipeline includes two steps: 1) diverse hallucination sampling, and 2) high-quality hallucination filtering. We employ ChatGPT to execute the creation pipeline automatically. Diverse Hallucination Sampling. Since a factual text can be hallucinated from different aspects, we propose two different hallucination sampling methods to generate diverse samples. [Table 2: Instruction of hallucination sampling for question answering — "I want you act as a hallucination answer generator. Given a question, right answer, and related knowledge, your objective is to write a hallucinated answer that sounds plausible but is factually incorrect. You SHOULD write the hallucinated answer using the following method (each with some examples): You are trying to answer a question but there is a factual contradiction between the answer and the knowledge. You can fabricate some information that does not exist in the provided knowledge." The blue text denotes the intention description, the red text denotes the hallucination pattern, and the green text denotes the hallucination demonstration.] For each method, ChatGPT follows the instruction of hallucination sampling in a different manner, as shown in Figure . Instruction Design. In our approach, the key is to design an effective instruction for ChatGPT to generate hallucinated samples.
In our design, the hallucination sampling instruction consists of three important parts, including intention description, hallucination pattern, and hallucination demonstration, which have been shown in Table High-quality Hallucination Filtering. To construct a challenging benchmark for LLMs, we aim to select the most plausible and difficult hallucinated samples from the above two sampling methods. As shown in Table Through the sampling-then-filtering process, we end up generating a total of 30, 000 hallucinated samples for the three tasks. Our approach can also be adapted to other tasks and datasets. Besides generating hallucinated samples, we also invite human labelers to annotate whether ChatGPT responses contain hallucinated content. We annotate the general user queries and Chat-GPT responses from the 52K instruction tuning dataset from Alpaca According to recent work Labeler Details. Annotating the hallucination in ChatGPT responses is a very challenging task, which requires good reading comprehension skills and using search engine to look up relevant information for judgement. Thus, from an initial pool of labeler candidates, we select labelers who are good at English passage reading with at least an undergraduate-level education. Besides, following With the automatic two-step generation process in Section 2.1, we produce a total of 30, 000 hallucinated samples with 10, 000 examples for each task of QA, dialogue, and summarization. We show the number of generated samples for each hallucination pattern in Table To use our benchmark, users can run the code in our project repository to conduct the corresponding evaluation and analysis. Users can use our provided instructions on their own datasets to evaluate LLMs on hallucinations. Implementation Details. We execute the generation process of hallucinated samples using Azure OpenAI ChatGPT API. We use a temperature of 1.0 to generate samples and set the maximum number of tokens for generation to 256. Moreover, we set the frequency penalty to zero and top-p to 1.0. For evaluation, we set the temperature to zero for all models to reduce output randomness and ensure more focused and deterministic outputs. In the following, we first conduct hallucination recognition experiments, then propose several potentially useful strategies to improve the recognition, and finally we perform qualitative analysis to understand the hallucination in LLMs. To evaluate the ability of LLMs to recognize hallucinations, we randomly select the hallucinated or normal output (e.g., an answer) of each sample for classification. The evaluation instructions of QA, dialogue, and summarization are presented in Table Table With respect to the hallucinated samples where ChatGPT fails to recognize, we present the number of each hallucination pattern in Table In this part, we design several strategies to improve the ability of LLMs to recognize hallucination. The results are shown in Table Knowledge Retrieval. Retrieving relevant knowledge is a widely used strategy to eliminate hallucination CoT Reasoning. In previous work Compared to retrieving knowledge, adding chainof-thought before output might interfere with the final judgement. While, in text summarization, generating reasoning steps improve the accuracy from 58.53 to 61.21. The reason might be that the factual contradiction between document and summary can be identified through logic reasoning. Sample Contrast. 
We further provide ground-truth examples for ChatGPT to test whether it can distinguish the right sample from the hallucinated sample. As we can see from Table In the above, we have observed that providing external knowledge can be beneficial for LLMs to mitigate and recognize hallucinations. To demonstrate the effectiveness of knowledge retrieval in mitigating hallucinations, we present two hallucinated responses from ChatGPT and refined responses after augmented with retrieved knowledge in Table 7. In the first example, the generated span (i.e., "July 4, 1776 -Declaration of Independence signing") contains hallucinated information because it gives a wrong time of Declaration of Independence signing. By providing retrieved information about Declaration of Independence signing, ChatGPT is able to correct the hallucinated span and give the right information. Analogously, in the second example, ChatGPT gives incorrect GDP growth rates of China and India, which is due to that API-based ChatGPT cannot access the web to obtain the official data. After providing official information retrieved from World Bank, the refined span displays answers that contain the correct information. The above two examples illustrate that retrieving knowledge related to queries can help ChatGPT significantly reduce the hallucinations in the response, especially those factual errors. Hallucination in LLMs. Hallucination in LLMs is concerning since it hinders performance and raises safety risks in real-world application. To alleviate this issue, prior studies have proposed to use a verification system to identify non-factual entities in text summarization Hallucination Evaluation. Another line of work focusing on evaluating the hallucination of models in different NLP tasks We introduce HaluEval, a large-scale collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucinations. To automatically generate large-scale samples, we propose a two-step approach, i.e., sampling-then-filtering. We first introduce two different sampling methods to generate diverse samples using instructions and then filter and select the difficult one. Besides, we invite qualified human labelers to annotate the hallucinations of ChatGPT responses given user queries. We find that, existing LLMs mostly fail to recognize the hallucinations in text and tend to generate hallucinated content. Finally, we suggest several strategies to help LLMs recognize hallucinations. Our benchmark can facilitate research in understanding what types of content and to which extent LLMs tend to hallucinate, ultimately paving the way for building more effective and reliable LLMs in the future. In our approach, we leverage a LLM, i.e., ChatGPT, to automatically generate the hallucinated samples. Therefore, the quality of our hallucinated samples is limited by the capacity of ChatGPT in following the complex instruction of hallucination sampling. Although we design the high-quality hallucination filtering process, it is still necessary to apply quality control to the generation of hallucinated samples. Besides, our benchmark focuses on evaluating the ability of LLMs in recognizing the hallucinations in text but does not investigate the underlying reasons behind the appearance of hallucinations like prior work As for the potential issue, since the hallucinated samples in our benchmark looks highly similar to the ground-truth samples, which might be misused for an unexpected purpose than we planned. 
To alleviate this issue, we should monitor and regulate the spread and usage of our benchmark. I want you act as an assistant in a conversation with human. Given a dialogue history, the true response, and related knowledge, your objective is to write a hallucinated response that sounds plausible but is factually incorrect. You SHOULD write the hallucinated response using the following method (each with some examples): You are trying to write a response to human but you replace the true entity with a highly similar entity. #Knowledge#: The Dark Knight is a 2008 superhero film directed by Christopher Nolan from a screenplay he co-wrote with his brother Jonathan. Christopher Nolan is a film director. Table I want you act as a hallucination summary generator. Given a document and the right summary, your objective is to write a hallucinated summary that sounds plausible but is factually incorrect. You SHOULD write the hallucinated summary using the following method (each with some examples): You are trying to write a summary which is factual but some information cannot be directly inferred or entailed from the document. #Document#: The panther chameleon was found on Monday by a dog walker in the wooded area at Marl Park. It had to be put down after X-rays showed all of its legs were broken and it had a deformed spine. RSPCA Cymru said it was an "extremely sad example of an abandoned and neglected exotic pet". Inspector Selina Chan said: "It is a possibility that the owners took on this animal but were unable to provide the care he needs and decided to release him to the wild. "We are urging potential owners of exotic animals to thoroughly research what is required in the care of the particular species before taking one on. "Potential owners need to make sure they can give their animal the environment it needs and they have the facilities, time, financial means and longterm commitment to maintain a good standard of care, as required under the Animal Welfare Act 2006." She added it was illegal to release non-native species into the wild. #Right Summary#: Owners of exotic animals have been urged to do research before having them as pets after a seriously neglected chameleon was found in Cardiff Bay. #Hallucinated Summary#: A chameleon that was found in a Cardiff park has been put down after being abandoned and neglected by its owners. or You are trying to write a summary but there exist some non-factual and incorrect information. You can fabricate some information that does not exist in the provided document. <Demonstrations> or You are trying to write a summary but there is a factual contradiction between the summary and the document. <Demonstrations> You should try your best to make the summary become hallucinated. #Hallucinated Summary# can only have about 5 more words than #Right Summary#. #Document#: <Here is the test document> #Right Summary#: <Here is the right summary of the test document> #Hallucinated Summary#: I want you act as a summary judge. Given a document and two summaries, your objective is to select the best and correct summary without hallucination and non-factual information. Here are some examples: #Document#:The panther chameleon was found on Monday by a dog walker in the wooded area at Marl Park. It had to be put down after X-rays showed all of its legs were broken and it had a deformed spine. RSPCA Cymru said it was an "extremely sad example of an abandoned and neglected exotic pet". 
Inspector Selina Chan said: "It is a possibility that the owners took on this animal but were unable to provide the care he needs and decided to release him to the wild. "We are urging potential owners of exotic animals to thoroughly research what is required in the care of the particular species before taking one on. "Potential owners need to make sure they can give their animal the environment it needs and they have the facilities, time, financial means and long-term commitment to maintain a good standard of care, as required under the Animal Welfare Act 2006." She added it was illegal to release non-native species into the wild. #Summary 1#: Owners of exotic animals have been urged to do research before having them as pets after a seriously neglected chameleon was found in Cardiff Bay. #Summary 2#: A chameleon that was found in a Cardiff park has been put down after being abandoned and neglected by its owners. #Your Choice#: The best summary is Summary 1. ... <Demonstrations> ... You should try your best to select the best and correct summary. If both summaries are incorrect, choose the better one. You MUST select a summary from the provided two summaries. #Document#: <Here is the test document> #Summary 1#: <Here is the hallucinated summary generated by the first channel> #Summary 2#: <Here is the hallucinated summary generated by the second channel> #Your Choice#: I want you act as an answer judge. Given a question and an answer, your objective is to determine if the provided answer contains non-factual or hallucinated information. You SHOULD give your judgement based on the following hallucination types and the world knowledge. You are trying to determine if there is a factual contradiction between the answer and the world knowledge. Some information in the answer might be fabricated. I want you act as a summary judge. Given a document and a summary, your objective is to determine if the provided summary contains non-factual or hallucinated information. You SHOULD give your judgement based on the following hallucination types and the world knowledge. You are trying to determine if the summary is factual but some information cannot be directly inferred or entailed from the document. #Document#: The panther chameleon was found on Monday by a dog walker in the wooded area at Marl Park. It had to be put down after X-rays showed all of its legs were broken and it had a deformed spine. RSPCA Cymru said it was an "extremely example of an abandoned and neglected exotic pet". Inspector Selina Chan said: "It is a possibility that the owners took on this animal but were unable to provide the care he needs and decided to release him to the wild. "We are urging potential owners of exotic animals to thoroughly research what is required in the care of the particular species before taking one on. "Potential owners need to make sure they can give their animal the environment it needs and they have the facilities, time, financial means and longterm commitment to maintain a good standard of care, as required under the Animal Welfare Act 2006." She added it was illegal to release non-native species into the wild. #Summary#: A chameleon that was found in a Cardiff park has been put down after being abandoned and neglected by its owners. #Your Judgement#: Yes You are trying to determine if there exists some non-factual and incorrect information in the summary. <Demonstrations> You are trying to determine if there is a factual contradiction between the summary and the document. 
<Demonstrations> You should try your best to determine if the summary contains non-factual or hallucinated information according to the above hallucination types. The answer you give MUST be "Yes" or "No". #Document#: <Here is the test document> #Summary#: <Here is the hallucinated summary or right summary> #Your Judgement#:
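The evaluation instructions above can be wrapped in a simple recognition loop. The following is a minimal sketch: the prompt is a condensed version of the summarization judge instruction, and `ask_llm` is a hypothetical callable standing in for whatever API call sends the prompt to the model under test (with temperature 0, as in the paper's setup) and returns its text response:

```python
JUDGE_TEMPLATE = (
    "I want you act as a summary judge. Given a document and a summary, your objective is "
    "to determine if the provided summary contains non-factual or hallucinated information. "
    "The answer you give MUST be \"Yes\" or \"No\".\n"
    "#Document#: {document}\n#Summary#: {summary}\n#Your Judgement#:"
)

def evaluate_recognition(samples, ask_llm):
    """Sketch of the hallucination-recognition evaluation loop.

    `samples` is an iterable of dicts with keys "document", "summary" and a boolean
    "is_hallucinated" label; accuracy is the fraction of correct Yes/No judgements."""
    correct = 0
    for sample in samples:
        prompt = JUDGE_TEMPLATE.format(document=sample["document"], summary=sample["summary"])
        says_hallucinated = ask_llm(prompt).strip().lower().startswith("yes")
        correct += int(says_hallucinated == sample["is_hallucinated"])
    return correct / len(samples)

# example with a trivial stub model that always answers "No"
samples = [{"document": "d", "summary": "s", "is_hallucinated": True}]
print(evaluate_recognition(samples, ask_llm=lambda prompt: "No"))   # 0.0
```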
| 1,134 | 708 | 1,134 |
SemFace: Pre-training Encoder and Decoder with a Semantic Interface for Neural Machine Translation
|
While pre-training techniques are working very well in natural language processing, how to pre-train a decoder and effectively leverage it for neural machine translation (NMT) still remains a tricky issue. The main reason is that the cross-attention module between the encoder and decoder cannot be pre-trained, and the combined encoder-decoder model cannot work well in the fine-tuning stage because the inputs of the decoder cross-attention come from unknown encoder outputs. In this paper, we propose a better pre-training method for NMT by defining a semantic interface (SemFace) between the pre-trained encoder and the pre-trained decoder. Specifically, we propose two types of semantic interfaces, including CL-SemFace which regards cross-lingual embeddings as an interface, and VQ-SemFace which employs vector quantized embeddings to constrain the encoder outputs and decoder inputs in the same language-independent space. We conduct massive experiments on six supervised translation pairs and three unsupervised pairs. Experimental results demonstrate that our proposed Sem-Face can effectively connect the pre-trained encoder and decoder, and achieves significant improvement by 3.7 and 1.5 BLEU points on the two tasks respectively compared with previous pre-training-based NMT models.
|
In recent years, pre-trained language models The above method essentially pre-trains a BERT-like In parallel to the idea of DALL•E Our contributions are listed as follows: • To the best of our knowledge, this is the first work to investigate and define a semantic interface between encoder and decoder for the MT pre-train-finetune framework. • We design and compare two effective types of semantic interfaces, which utilize cross-lingual embeddings and vector quantized embeddings respectively. • We extensively verify the effectiveness of our proposed model on supervised and unsupervised NMT tasks. Particularly, our proposed CL-SemFace and VQ-SemFace lead to significant improvements of 3.38 and 3.76 BLEU points on low-resource language pairs.
|
The overview of our proposed SemFace is illustrated in Figure The encoder is pre-trained to map the input from the monolingual semantic space into the interface, while the decoder is pre-trained to use the content from the interface via the cross attention module to finish decoding. The parameters of the encoder and the decoder are updated independently, thus their pre-training processes can be either jointly or separately done. Then, we remove the semantic interface, and connect the pre-trained encoder and decoder with the pre-trained cross-attention as a sequence-to-sequence model for the subsequent machine translation fine-tuning. Note that in Figure There are three types of semantic interface. The first is the default output space of pre-trained encoder with the masked language model (MLM) training loss. In fact, previous work CL-SemFace uses the cross-lingual embedding space as the interface between the encoder and the decoder during pre-training. We first concatenate the monolingual corpora of two languages and learn joint BPE, and then train cross-lingual BPE embeddings with VecMap As shown in Figure To stabilize training, we calculate the MSE loss before the last normalization layer of the encoder. Formally, given an input sample x, the encoder pre-training loss function is: where x i is the masked tokens in the input sentence, h i is the activation of the final layer of the encoder but before the final layer normalization LN, W i is the output embedding of the ground-truth token, and p is the output probability of the Softmax. When pre-training the decoder, we attempt to use the content from the semantic interface to simulate encoder outputs. To achieve that, given a monolingual training sample x, we first add some noise (2) or (3) where s is the final output hidden of the decoder and p is the output probability of the Softmax. The CL semantic space is constrained with the cross-lingual word embedding, which is contextindependent, meaning that the different meanings of the same word share the same embedding, and the number of semantic units should be the same with the size of the vocabulary. In order to learn context-dependent semantic units freely, we also propose another interface type, vector quantized embeddings, inspired by the recent success of VQbased speech pre-training 1 . The nearest neighbor search is performed between the encoder outputs and the embedding of the latent code using the L2 distance metric. Formally, given the encoder output h(x), the discrete latent variable assignment is given by where K is the number of codes in the code-book, z j is j-th quantized vector in the code-book. That means, z i is the output of the VQ layer corresponding to h(x). The main issue of this method is that the arg min operation is not differentiable. Following where v = -log(-log(u)) and u are uniform samples from U(0, 1). In the forward pass, only the embedding in the code-book with the largest probability is used, which means the output of the VQ layer is z i , where i = arg max i p i , while in the backward pass, the gradient is passed to all the Gumbel-Softmax outputs. The VQ layer groups the context-aware hidden states into limited semantic units (codes), and the space of these codes can be used as our second language-independent semantic interface. 
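A minimal PyTorch sketch of the Gumbel-Softmax vector quantizer described above is given below. Using negative squared L2 distances to the code-book entries as logits is our reading of the nearest-neighbour assignment; the paper's code-book has 102,400 codes of dimension 1024, reduced here only to keep the example light:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelVectorQuantizer(nn.Module):
    """Sketch of the VQ semantic interface: assign each encoder state to its L2-nearest
    code and estimate gradients with a straight-through Gumbel-Softmax."""
    def __init__(self, hidden_dim, num_codes, tau=1.0):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, hidden_dim)
        self.tau = tau

    def forward(self, h):                                   # h: (batch, seq, hidden_dim)
        sq_dist = (h.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        logits = -sq_dist                                   # nearest code = highest logit
        # forward pass keeps only the argmax code; backward pass spreads gradient
        # over all Gumbel-Softmax outputs (hard=True gives the straight-through estimator)
        one_hot = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        z = one_hot @ self.codebook.weight                  # quantised vectors z_i
        return z, logits.softmax(dim=-1)                    # codes + code probabilities p_k

vq = GumbelVectorQuantizer(hidden_dim=64, num_codes=128)
z, probs = vq(torch.randn(2, 10, 64))
print(z.shape, probs.shape)   # torch.Size([2, 10, 64]) torch.Size([2, 10, 128])
```

The returned code probabilities are what a batch-level diversity term, like the one mentioned in the next section, would operate on.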
As shown in Figure where pk is the averaged probability of choosing the k-th code in the code-book across a batch, and p k is calculated by Eq.( For the decoder, similar to CL-SemFace, we also use the content from the VQ interface to simulate the encoder output during pre-training. To get the VQ output, given a training sample, we first feed it into an embedding layer and then pass the readout embeddings to a two-layer Transformer, which can be regarded as a feature extractor. We use the Transformer output as the representations of each word and find the corresponding codes in the codebook according to Eq.( The semantic interface acts as a bridge to connect the encoder and decoder during pre-training. The encoder is pre-trained to project the input to the features in the semantic interface space, while the decoder is pre-trained to leverage the features from the interface space through the cross-attention to generate outputs. With this method, we can pretrain all the parameters of the whole sequenceto-sequence model, including the cross-attention between the encoder and the decoder. After pretraining, we connect the encoder and the decoder via the cross-attention directly by removing the semantic interface as shown in Figure The languages we choose for our experiments are English (en), French (fr), German (de), Romanian (ro), Finnish (fi), Estonian (et), Latvian (lv), Lithuanian (lt), Gujarati (gu), and Kazakh (kk). The details of the datasets and statistics for each language pair are listed in Table We compare our method with two baselines. The first is XLM comparison, we use their pre-training method on the concatenated corpora of each language pair, i.e., mBART02 in their paper. For the low-resource supervised settings, we also compare our method with the basic Transformer without pre-training. If there is a parallel corpus for a certain language pair, we use the parallel data to fine-tune the pretrained models in the two baselines. If there is only a monolingual corpus, we use the denoising autoencoder and iterative back-translation to fine-tune the pre-trained models. We implement our method based on the code released by During MT fine-tuning, the learning rate is 0.0001 with 4,000 warm-up steps, and then decayed based on the inverse square root of the update number. The loss of the denoising auto-encoder objective is weighted by a coefficient α, and it is linearly decreased to 0.1 in the first 100,000 steps and decreased to 0 in the next 200,000 steps. For VQ-SemFace, the code-book contains 102,400 codes with their dimensions being 1024. In this section, we report the result of our pretraining method fine-tuned with neural machine translation. We have two settings. The first setting is low-resource supervised machine translation, which uses additional parallel corpus to fine-tune the pre-trained encoder and decoder. The second is unsupervised neural machine translation, which uses the two objectives of denoising auto-encoder and back-translation to fine-tune the model. The results on the low-resource language pairs are shown in Table We also report the results of three unsupervised language pairs in Table may be because the cross-lingual embeddings of these rich-resource language pairs are of higher quality, thus the semantic interface is initialized better during the pre-training. In this subsection, we first investigate the influence of the encoder losses (Eq. 1) by removing each of them independently in the encoder pre-training. 
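The fine-tuning schedules mentioned above can be sketched as follows. Only the 0.0001 learning rate, the 4,000 warm-up steps, the inverse-square-root decay, and the 100k/200k decay points for the denoising auto-encoder weight come from the text; the warm-up shape and the initial value of the coefficient α (taken as 1.0) are assumptions:

```python
def inverse_sqrt_lr(step, base_lr=1e-4, warmup_steps=4_000):
    """Inverse-square-root schedule with linear warm-up, as used for MT fine-tuning."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps / step) ** 0.5

def dae_coefficient(step):
    """Weight of the denoising auto-encoder loss: linearly decayed to 0.1 over the first
    100k steps, then to 0 over the next 200k steps (piecewise-linear interpretation,
    starting from an assumed initial weight of 1.0)."""
    if step <= 100_000:
        return 1.0 - 0.9 * step / 100_000
    if step <= 300_000:
        return 0.1 * (1.0 - (step - 100_000) / 200_000)
    return 0.0

print(inverse_sqrt_lr(2_000), inverse_sqrt_lr(64_000))   # 5e-05 2.5e-05
print(dae_coefficient(50_000), dae_coefficient(200_000))  # 0.55 0.05
```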
Besides, note that there are two types of loss used in our decoder pre-training, MLM and CLM, as shown in Eq. ( In this section, we investigate the influence of the data quantity in the experiments. The language pair we choose is de-en, which has a large parallel corpus and makes it possible to conduct our investigation. We compare the performance of the model with our pre-training method and the model without pre-training. Note that we do not use any monolingual data in the training so the result here is not comparable with that in Table As mentioned in Sec.2.3, VQ space could be regarded as a language-independent semantic interface for the encoder and decoder pre-training. To test whether VQ space is trained to contain crosslingual representations, we carry out an analysis with a parallel sample of de-en. For each token pair (w en , w de ) in the two sentences, we collect top-100 codes according to Eq. ( Pre-training has been widely used in NLP tasks to learn better language representations Recently, a prominent line of work has been proposed to improve NMT with pre-training. These techniques can be broadly classified into two categories. The first category usually uses pre-trained models as feature extractors of a source language, or initializes the encoder and decoder with pretrained models separately The second category methods pre-train a whole sequence-to-sequence model for NMT. MASS We propose SemFace, a better pre-training method for neural machine translation. The key point is to use a semantic interface to connect the pre-trained encoder and decoder. By defining this interface, we can pre-train the encoder and decoder separately with the same intermediate language-independent space. The cross-attention can also be pre-trained with our method so that we can naturally combine the pre-trained encoder and decoder for fine-tuning. We introduce and compare two semantic interfaces, e.g., CL-SemFace and VQ-SemFace, which leverage unsupervised cross-lingual embeddings and vector quantized embeddings as the intermediate interfaces respectively. Massive experiments on supervised and unsupervised NMT translation tasks show that our proposed SemFace obtains substantial improvements over the state-of-the-art baseline models. In the future, we will design and test more semantic interface types for extensions.
| 1,295 | 746 | 1,295 |
Measuring and Mitigating Name Biases in Neural Machine Translation
|
Neural Machine Translation (NMT) systems exhibit problematic biases, such as stereotypical gender bias in the translation of occupation terms into languages with grammatical gender. In this paper we describe a new source of bias prevalent in NMT systems, relating to translations of sentences containing person names. To correctly translate such sentences, a NMT system needs to estimate the gender of names. We show that leading systems are particularly poor at this task, especially for female given names. This bias is deeper than given name gender: we show that the translation of terms with ambiguous sentiment can also be affected by person names, and the same holds true for proper nouns denoting race. To mitigate these biases we propose a simple but effective data augmentation method based on randomly switching entities during translation, which effectively eliminates the problem without any effect on translation quality.
|
Natural language processing systems are seeing widespread adoption, prompting careful study into cultural biases they exhibit, and methods for bias mitigation. Gender bias is common in automated systems NMT systems are not only biased for gender, and gender bias is not limited to gender pronouns. Other biases include racial biases, professional biases, and individual biases, among others. In this paper, we focus on two kinds of biases of person name translations by NMT systems: gender biases and sentiment biases. As an important category of named entity, person names are particularly sensitive to translation errors since they refer to realworld individuals, and systematic biases may cause serious distress to users, and reputational damage, libel or other legal consequences for vendors. Gender bias in the translation of person names is a natural extension of gender biases in previous work. For instance, Biases pertaining to sentiment of sentences containing person names have been studied in sentiment analysis To mitigate the above biases against person names in translation, we propose a dataaugmentation method 'switch-entity' (SE), which works by altering training sentences containing named entities by randomly switching the entities for other entities of the same type (e.g., with matching gender). This simple strategy normalises the distribution of named entities, such that all names are observed sufficiently many times and in a diverse range of contexts. This ensures gender signals are learned correctly, and also stops the translation system from associating the name with idiosyncracies of the contexts in which is appears, thus mitigating sentiment bias. Modifying the training data carries the risk of degrading sentence quality, and thus degrading accuracy. Although replacing a named entity with another does change sentence meaning, it is unlikely to compromise grammaticality or render the sentence semantically incoherent. Our results show that SE beneficially mitigates gender bias when translating names into gendered languages, which we show leads to more accurate morphological inflection in sentences with female entities. At the same time, it does not sacrifice accuracy: the BLEU score of the SE-trained model is the same as for standard training.
|
• We show two new biases for person names in NMT, relating to gender and sentiment. In languages with rich grammatical gender, the gender of people referenced in a sentence will often affect the morphology of the other words in the sentence. For example, "[PER] is a Royal Designer" translates into German as either Masc. [PER] ist ein königlicher Designer; or Fem. [PER] ist eine königliche Designerin. where gender agreement holds between the person (PER) and the determiner, adjective and occupation noun. Accordingly, knowing the gender of She is the developer of the company. Sie ist die Entwicklerin der Unternehmens. Gloria is the developer of the company. Gloria ist ::: der ::::::::: Entwickler der Unternehmens. He wants to be an excellent dancer. Er möchte ein ::::::::::: hervorragende Tänzer sein. Reggie wants to be an excellent dancer. ::: the person is critical when translating from a language like English, where gender is rarely marked, into a gendered language. Ignoring this issue will affect the quality of outputs, and consistent mistakes can constitute a form of gender bias. Previous works Here, we propose an evaluation method for assessing whether gender is translated accurately for English→German and English→French. We created a range of templates encoding various syntactic relations which require gender agreement, and assess whether the translation includes the correct morphological inflection (e.g., for the above, the choice between Designer vs. Designerin). Table We conducted similar evaluation progress as gender agreement test (Section 2.2). The labelled translation words shown in the Table Metrics We have two evaluation metrics for names' sentiment tendencies: word-level positiveness t and sentence-level positiveness s. The wordlevel positiveness is evaluated by checking the translations of sentiment ambiguous words, calculating the ratio of the number of sentences that sentiment ambiguous words translated to positive words, to the total number of template sentences. The sentence-level positiveness is scored by a sentiment analysis classifier In order to measure the overall degree of sentiment bias of models, we report the highest and lowest mean scores among all person names, as well as the gap between these values, denoted △t and △s for word and sentence level, respectively. 4 To remove potentially confound bias from the sentiment classier, we masked PERSON names, replacing all names with masculine pronouns "他"[en: he]. For example, when we use sentiment analysis to score translation "爱丽丝很自豪。", we first convert sentence into "他很自豪。" Names For sentiment biases, we used the full names of celebrities, for which we expect sufficient data for NMT systems to learn biases. We selected the top 10 popular male celebrities and 10 female celebrities across 7 different occupations (see list in Table Gender, race and nationality Our templates can be used not only to test names but also to test other sentiment biases, such as gender, race and nationality. We used 8 different races and nationalities to fill the templates, which we minimally adapted to ensure they are grammatically correct. Additionally, we add "man" or "woman" (e.g., "Asian men") to measure intersectional racial and gender bias. and Models: We tested English→German and English→French, chosen based on English not having grammatical gender while German and French both do. In both settings we compare three online translation systems, Overall bias Table Alice's speech is very sensational. 
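The word-level positiveness t and the bias gap described above can be sketched as follows. The helper names, the positive-word set, and the hypothetical translations for "James" (詹姆斯) are illustrative assumptions; the negative renderings shown for "Alice" mirror the examples in the tables above.

```python
def word_level_positiveness(translations, positive_words):
    """t = fraction of template translations in which the sentiment-ambiguous
    word was rendered with a positive target-language word."""
    hits = sum(any(w in t for w in positive_words) for t in translations)
    return hits / len(translations)

def positiveness_gap(score_per_name):
    """Overall bias measure: the highest and lowest mean positiveness over
    all names and the gap between them (the delta-t / delta-s style score)."""
    hi, lo = max(score_per_name.values()), min(score_per_name.values())
    return hi, lo, hi - lo

# Hypothetical system outputs for two names over the same two templates.
t = {"Alice": word_level_positiveness(
         ["爱丽丝的演讲非常耸人听闻。", "爱丽丝很懒散。"], positive_words={"轰动"}),
     "James": word_level_positiveness(
         ["詹姆斯的演讲非常轰动。", "詹姆斯很悠闲。"], positive_words={"轰动"})}
print(t, positiveness_gap(t))   # {'Alice': 0.0, 'James': 0.5} (0.5, 0.0, 0.5)
```

The same gap computation applies to the sentence-level scores produced by the sentiment classifier, after masking the person names as described above.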
爱 丽 丝 的 演 讲 非 常 ::::::::::::::: 耸人听闻[appalling]。 James's speech is very sensational. Alice is slack. 爱丽丝很 :::::::: 懒散[lazy]。 James is slack. Alice concocted this plan. 爱丽丝 ::::::::::: 编造[fabricate]了这个 计划。 James concocted this plan. Table the less the name bias is present. This is because the larger the amount of data, the model is exposed to more names, and can better distinguish their gender. However, obtaining more data is usually not easy, especial for low-resource language. In NMT training corpora, names appear in different contexts, which can result in sentiment biases for specific names. For instance, a popular celebrity is likely to appear in many more positive sentiment contexts than a reviled mafioso, which may mean a NMT system mistakenly associate person names with translation sentiment. We set about measuring whether this manifests in NMT output using templated ambiguous contexts in English in which the ambiguity must be resolved when translating into the target language. To do so we use sentiment ambiguous words: a kind of homograph which has both commendatory and derogatory meanings. This is illustrated in Table [PERSON]'s speech is very 轰动[startling] sensational. [PERSON] used tricks to win the 技巧[skill] game. Table simple since we want to eliminate the influence of context, and thereby assess how person names affect the translations of sentiment ambiguous words. We tested three commercial systems, as before; and two research models: a pretrained model opus.en-zh and a custom transformer model custom.wmt17 trained with wmt17 enzh corpus. We further split the results by occupation and gender, as shown in Figure Biases on race and nationality The results for testing race and nationality terms are shown in Figure Bias in NMT models are mainly caused by the training data, which is typically unbalanced, e.g., females are much rarer than males in the training corpus, leading to gender bias. One simple way to balance out gender biases is to add a number of female data to balance the ratio of male to female sentences. However, obtaining new data can be difficult, especially for low-resource languages. Here, we propose a data augmentation method that does not require additional data, SWITCHENTITY. By switching names in the training corpus, the model can train with more correct translation patterns about female names, so that the model can correctly identify the gender of the name, and achieve the effect of reducing biases. This method can be applied not only to PERSON entities, but also to other classes of named entities. Let ⟨x t , y t ⟩ be the language pair containing the named entity t and ⟨t x ,t y ⟩ be the named entity pair. L e l be the candidate list of named entities, where e is the entity type and l the language. The replacement candidate list L can be obtained from different resources. Here we present a method to extract L from the original corpus, NER models (at least one side) and alignment tool are required: 1. Use NER to identify named entities on both the source and target sentences; Once the candidate list of entities has been computed, the last step in applying SE involves switching each of the named entities identified above with another named entity during each epoch training, which is drawn uniformly from the set of entities of the same type (and gender, when considering persons). To illustrate, in the following we switch out "Al Gore" for "JAY-Z": (1) Candidate Al Gore concedes the US election. Kandidat Al Gore räumt die US-Wahlen ein. 
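The candidate-list extraction step sketched in the numbered procedure above (NER on both sides plus a word alignment) can be approximated as follows. The function `build_candidate_lists`, the toy `ner_en`/`ner_de`/`align` stand-ins, and the span/link conventions are assumptions for illustration, not the exact pipeline, which relies on existing NER models and an alignment tool.

```python
from collections import defaultdict

def build_candidate_lists(parallel_pairs, ner_src, ner_tgt, align):
    """Collect entity-string candidates per (entity type, side) from a parallel
    corpus: run NER on both sides and keep source/target entities of the same
    type whose token spans are linked by the word alignment.
    `ner_*` return lists of (surface_string, entity_type, (start, end)) and
    `align` returns a set of (src_idx, tgt_idx) alignment links."""
    candidates = defaultdict(set)
    for src_tokens, tgt_tokens in parallel_pairs:
        links = align(src_tokens, tgt_tokens)
        for s_text, s_type, (s_lo, s_hi) in ner_src(src_tokens):
            for t_text, t_type, (t_lo, t_hi) in ner_tgt(tgt_tokens):
                aligned = any((i, j) in links
                              for i in range(s_lo, s_hi)
                              for j in range(t_lo, t_hi))
                if aligned and s_type == t_type:
                    candidates[(s_type, "src")].add(s_text)
                    candidates[(s_type, "tgt")].add(t_text)
    return candidates

# Toy stand-ins so the sketch runs end to end.
pairs = [("Al Gore concedes the US election .".split(),
          "Kandidat Al Gore räumt die US-Wahlen ein .".split())]
ner_en = lambda toks: [("Al Gore", "PERSON", (0, 2))]
ner_de = lambda toks: [("Al Gore", "PERSON", (1, 3))]
align = lambda s, t: {(0, 1), (1, 2)}
print(dict(build_candidate_lists(pairs, ner_en, ner_de, align)))
```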
(2) Candidate JAY-Z concedes the US election. Kandidat JAY-Z räumt die US-Wahlen ein. In corpora, the distribution of names is usually skewed such that the majority of names have very low frequency, and these names are not well learned by the model. SE has the effect of flattening the distribution over entity strings, while preserving the natural distribution over entity types, ensuring the model focuses more on learning to translate names in the tail. Switching any parts of a training sentence carries the risk of corrupting the data, both grammatically and semantically, and this will depend on the granularity of named entity labels. Switching named entities with others of the same type is key to maintain the sentences' quality. For instance, if we mistakenly switch male and female names, it will corrupt training and may result in gender agreement mistakes in translation. In the example shown above, we cannot switch "Al Gore" with a female name without changing "Kandidat" from masculine to feminine gender. For this reason we refine the PERSON entity category to include gender, and only switch like-gender entities. We experimented with SE on the three custom models we mentioned in Section 2, use the same training configuration (see Appendix A for details). Quality of translation First, we test whether SE has an effect on translation accuracy. In terms of BLEU score, Table Although SE does not introduce the new female training samples, it does balance the frequency of female names, such that contexts of high-frequency female names are shared with lowfrequency female names, thereby better training the NMT model to learn general gender cues. We also tested SE on sentiment biases, the results show SE can help to mitigate sentiment biases on names, with △t reducing from 0.40 to 0.21. This is because training with SE means PERSON names will have chance to appear in different contexts during training, instead of may only appearing in a specific context like vanilla training, which can help to reduce the model's stereotype of names. We did not attempt to use SE to mitigate race or nationality biases, although in principle this could be possible using the method. Gender bias is a central concern in machine translation research. Other social biases and stereotypes have also been investigated. Our mitigation method SWITCHENTITY is based on data augmentation. Similar methods of entity switching have been proposed for named entity recognition (NER), either for data augmentation in training to increase model coverage over named entities In this paper, we revealed two biases in the NMT systems, gender biases and sentiment biases against names. Our results show that the existing research models and commercial translation systems have serious biases, which not only affects translation quality, but also have ethical implications on fairness and bias. In order to mitigate biases, we proposed SWITCHENTITY, a simple training strategy which can reduce name biases without the need for any additional data. We discuss ethical considerations and limitations of our work. First, we focus solely on binary gender, as this can be directly observed in many languages with grammatical gender. Our use of binary gender is not intended to promulgate an inappropriate binary gender focus, but rather allows the study of gender bias in translation, based on the text contained in translation corpora. 
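The per-epoch switching step itself is simple once the candidate lists exist. The sketch below assumes a gender-refined entity type (the hypothetical PERSON_MALE label stands in for the refinement of the PERSON category described above) and pre-built lists of interchangeable source/target entity strings; all names here are illustrative.

```python
import random

def switch_entities(src, tgt, entity_pairs, candidates, rng=random):
    """Replace each (source, target) named-entity pair in a training sentence
    with another entity of the same refined type, drawn uniformly from the
    candidate list; applied once per epoch to every training sentence.
    `entity_pairs` holds (src_entity, tgt_entity, type) triples and
    `candidates[type]` holds interchangeable (src, tgt) entity strings."""
    for src_ent, tgt_ent, ent_type in entity_pairs:
        new_src, new_tgt = rng.choice(candidates[ent_type])
        src, tgt = src.replace(src_ent, new_src), tgt.replace(tgt_ent, new_tgt)
    return src, tgt

candidates = {"PERSON_MALE": [("JAY-Z", "JAY-Z"), ("Barack Obama", "Barack Obama")]}
print(switch_entities("Candidate Al Gore concedes the US election.",
                      "Kandidat Al Gore räumt die US-Wahlen ein.",
                      [("Al Gore", "Al Gore", "PERSON_MALE")], candidates))
```

Restricting the candidate pool to like-gender entities is what keeps the surrounding gender agreement (e.g. "Kandidat" vs. "Kandidatin") intact after switching.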
Admittedly our method has limitations, for instance, it will not be able to adequately handle trans-gendered and non-binary individuals; to do so would require substantial additional translation corpora, as well as extensions to the technique, which we leave for future research. Second, we evaluate only a small number of language pairs, but we expect similar behaviour for translation into many other gendered languages, the exploration of which we leave for future work. For English→German, we evaluated a range of models, the pre-trained models being: transformer.wmt19, transformer.wmt16 and conv.wmt17 from FairSeq; 7 and custom models: custom.wmt18 and custom.iwslt17, those two models were trained on the WMT18 en-de corpus and the IWSLT17 en-de corpus respectively. For English→French, we compare two pretrained models conv.wmt14 and transformer.wmt14 and a custom model, custom.iwslt16. For all custom models we use the FAIRSEQ To perform SE, we need NER models for both parallel setting and monolingual setting, and need an alignment tool for parallel setting. Here, we used SpaCy to recognized named entities, en_core_web_trf for English and de_core_news_lg for German fr_core_news_lg for French. We used fast align Language
| 934 | 2,289 | 934 |
A Span-based Multimodal Variational Autoencoder for Semi-supervised Multimodal Named Entity Recognition
|
Multimodal named entity recognition (MNER) on social media is a challenging task which aims to extract named entities from free text and incorporate images to classify them into user-defined types. Existing semi-supervised named entity recognition methods focus on the text modality and are used to reduce labeling costs in traditional NER. However, these methods are not effective for semi-supervised MNER, because the MNER task must combine text information with image information and account for the mismatch between the posted text and image. To fuse text and image features for MNER effectively under a semi-supervised setting, we propose a novel span-based multimodal variational autoencoder (SMVAE) model for semi-supervised MNER. The proposed method exploits modal-specific VAEs to model text and image latent features, and utilizes product-of-experts to acquire multimodal features. In our approach, the implicit relations between labels and multimodal features are modeled by the multimodal VAE. Thus, the useful information in unlabeled data can be exploited under the semi-supervised setting. Experimental results on two benchmark datasets demonstrate that our approach not only outperforms baselines under the supervised setting, but also improves MNER performance with less labeled data than existing semi-supervised methods.
|
Multimodal named entity recognition (MNER) has become a fundamental task to extract named entities from unstructured texts and images on social media To reduce labeling costs in MNER, semisupervised learning is widely utilized to exploit the useful information of unlabeled data in text modal. Unlike the supervised setting with adequate labeled data, there are small amount of labeled data and large amount of unlabeled one in semi-supervised setting as shown in Figure To overcome the above disadvantages of the existing methods, we propose the span-based multimodal variational autoencoder (SMVAE) 1 for semi-supervised multimodal named entity recognition. The previous MNER models fused the sentence-level features and image ones for predicting sequence labels and had the difficulty to model mulitmodal features of unlabeled data under semi-supervised setting. Because the semantic correlation between sentences and images should be focused on the specific tokens. Therefore, the proposed method splits the texts into span-level tokens, and combines the span-level features of texts with image features for predicting labels of all spans in each text. SMVAE utilizes modal-specific VAEs to model latent representations of images and span-level texts respectively, and acquires the multimodal features by applying product-of-experts (PoE) 1. We analyze that the existing semi-supervised NER methods are not efficient for MNER under semi-supervised setting. To the best of our knowledge, we are the first one to focus on the semi-supervised MNER problem. 2. For semi-supervised MNER, we propose the span-based multimodal variational autoencoder to implicitly model the correlation between span label and multimodal features which takes advantage of unlabeled multimodal data effectively. 3. We compare the proposed model with the 1 When ready, the code will be published at 2 Related Work 2.1 Multimodal Named Entity Recognition The above studies are under the supervised setting, and we focus on the semi-supervised MNER to reduce the labeling costs. Unlike the supervised learning with adequate labeled data, the semisupervised learning is focused on utilizing the useful information of unlabeled data.
|
For traditional named entity recognition, the labeled data is not always adequate because of the labeling costs. Therefore, semi-supervised learning is an important way to improve NER model performance without enough labeled data. Two widely used semi-supervised learning methods selftraining (ST) Considering to combine virtual adversarial training (VAT) Before getting into the details of the proposed model, we introduce the notations for semisupervised MNER. The labeled and unlabeled datasets are denoted as D l and D u respectively. The unlabeled dataset D u with |D u | samples is formulated as . And the labeled dataset D l with |D l | samples is defined as i=1 where S l i and V l i are the text and image of i-th sample, and y i is the task defined label for MNER. According to the conventional MNER studies The SMVAE model is shown in Figure Given the multimodal data as input, we need to preprocess them and map them into the dense representations for deep neural networks as shown in Figure 2. We denote the input text with N s words as S = {w 1 , w 2 , . . . , w Ns }. With the impressive performance of pre-trained language models, we utilize BERT and H e = BiLSTM(B; θ e ) = {h e i } Ns+1 i=0 where θ g and θ e are trainable weights in BiLSTM networks. As mentioned above, we focus on the span features and exploit them to predict the entities in the text. The spans of the text can be formulated as {S (i,j) |1 ≤ i ≤ j ≤ N s } where S (i,j) = {w i , w i+1 , . . . , w j }. And the global representations of spans are denoted as {c g (i,j) |1 ≤ i ≤ j ≤ N s } where c g (i,j) = 1 j-i+1 j k=i h g k . The edge representations of spans are calculated as {c e (i,j) |1 ≤ i ≤ j ≤ N s } where c e (i,j) = h e i ; h e j ; h e i -h e j ; h e i h e j and is the elementwise vector product. For the visual modality, we utilize ResNet To model the latent representations of the text and image modalities, the proposed SMVAE model consists of two modal-specific VAE networks named text-VAE and image-VAE. The encoders of VAEs contain dense layers to map the input features to the mean vector µ and standard deviation vector σ. For the text modality, the global representations of spans c g are fed into text-VAE to parameterize the mean vector µ s and standard deviation vector σ s . The true posterior p(z s |c g ) can be approximated by the above parameters, and the distribution of z s is formulated as z s ∼ q(z s |c g ) = N (µ s , σ 2 s ). Therefore, µ s and σ s are computed by µ s = FFNN(c g ; θ s µ ), σ s = FFNN(c g ; θ s σ ) where FFNN is short for feed-forward neural networks, and θ s µ and θ s σ are trainable parameters in the encoder of text-VAE. For the visual modality, the global image features V g are also fed into the encoder of the image-VAE. And the mean vector µ v and standard deviation vector σ v for image latent representations are calculated as where θ v µ and θ v σ are trainable weights in the encoder of image-VAE. We exploit the above parameters to approximate the true posterior p(z v |V g ), and the distribution of z v is formulated as To bridge the semantic gap between the text and image representations, we need to calculate the multimodal features for predicting the results. The previous studies treated the text and image features as equals and mapped the concatenated features of the two modalities into the same latent representations . To train the model in an end-to-end way, we utilize the reparameterization strategy (Kingma and Welling, 2014) to sample the latent representations. 
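The span features described above, a mean-pooled global representation and a boundary-based edge representation, can be sketched directly from the two BiLSTM outputs. The helper name and the toy hidden states below are assumptions for illustration.

```python
import numpy as np

def span_features(h_global, h_edge, i, j):
    """Span features for tokens i..j (inclusive): the global feature is the
    mean of the 'global' BiLSTM states over the span, and the edge feature is
    [h_i ; h_j ; h_i - h_j ; h_i * h_j] built from the 'edge' BiLSTM states
    at the span boundaries."""
    c_g = h_global[i:j + 1].mean(axis=0)
    hi, hj = h_edge[i], h_edge[j]
    c_e = np.concatenate([hi, hj, hi - hj, hi * hj])
    return c_g, c_e

# Toy BiLSTM outputs for a 6-token sentence with hidden size 8.
rng = np.random.default_rng(0)
h_g, h_e = rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
c_g, c_e = span_features(h_g, h_e, 1, 3)
print(c_g.shape, c_e.shape)   # (8,) (32,)
```

In the full model, c_g feeds the text-VAE encoder while c_e is concatenated with the sampled multimodal latent variable for span classification.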
The latent variable z m for multimodal representations can be calculated as z m = µ m +σ m where ∼ N (0, I). We utilize the multimodal features to predict the probabilities by ŷ = FFNN([z m ; c e ]; θ o ) where θ o is the trainable weights of the prediction FFNN. Given the annotated entity set y, the all negative instance candidates are defined as where Y is the label space and O is the label for non-entity spans. To confirm the balanced class distribution of the samples in one batch, we randomly select a subset ỹ from the candidate set ỹ with the same size of y. The span-level cross entropy loss for training the model is defined as where ŷ(i,j) is the prediction probability for the phrase S (i,j) . The decoders of SMVAE are trained to reconstruct the representations of samples. For the text modality, the span types are correlated to the representations of spans. Therefore, we combine the true labels of labeled data or prediction probabilities of unlabeled data with the text latent representations and feed them into the decoder of text-VAE. The reconstructed representation of span is calculated as ĉg = FFNN([z s ; ȳ]; θ s d ) for labeled data where z s = µ s + σ s . The latent representations of images are fed into the decoder of image-VAE directly and the reconstructed representation is calculated as According to the evidence lower bound (ELBO) function of VAE (Kingma and Welling, 2014), the training loss for SMVAE on labeled data is formulated as follows: (2) where ĉg (i,j) is the reconstructed representation of the phrase S (i,j) . For the unlabeled data, the reconstructed representation of span is calculated as Considering that there are more non-entity spans than named entity ones in a sample, we only learn the latent representations for the latter. And the training loss for unlabeled data is defined as follows: After acquiring the pre-processed multimodal labeled and unlabeled data, we feed them into the model to learn the latent representations of different modalities and extract the named entities. To train the model with different objectives at once, we introduce the hyper-parameter to sum Equation where λ is the hyper-parameter to balance the different losses. We feed the multimodal data into the model and acquire the loss according to Equation 4. To train the parameter weights of the model, we utilize the stochastic gradient descent (SGD) methods to update them based on the overall loss. We compare the proposed model with the existing methods on the two widely used MNER datasets including: Twitter-2015 In the proposed model, we utilize the BERT-base 2 version of pre-trained language model BERT Considering that there is no previous studies on semi-supervised MNER, we compare the proposed 2 The above baseline methods are only for text modality. Besides, we also combine the effective MNER models with the above semisupervised learning methods as semi-supervised MNER baselines. The uniform multimodal transformer (UMT) We compare SMVAE with the baseline methods on two benchmark datasets under semi-supervised setting, and report the metrics of F1 score (F1) for every single type and overall precision (P), recall (R) and F1 score (F1). The detailed experi- To dig into the model, we conduct the analysis for presenting it in different aspects. We discuss the effect of the labeled data percent to the original training set and latent variable dimension. 
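The sampling and balancing steps above can be sketched as follows. The reparameterization z = mu + sigma * eps and the balanced negative-span sampling follow the description in the text; the product-of-experts fusion shown uses the standard precision-weighted combination of diagonal Gaussians, which is one common formulation and not necessarily the paper's exact parameterization. All function names are illustrative.

```python
import numpy as np

def poe_gaussian(mu_s, sigma_s, mu_v, sigma_v):
    """Fuse the text and image Gaussian posteriors into a multimodal one via
    the precision-weighted product of diagonal Gaussians."""
    prec = 1.0 / sigma_s**2 + 1.0 / sigma_v**2
    sigma_m = np.sqrt(1.0 / prec)
    mu_m = (mu_s / sigma_s**2 + mu_v / sigma_v**2) / prec
    return mu_m, sigma_m

def reparameterize(mu, sigma, rng):
    """z = mu + sigma * eps with eps ~ N(0, I), keeping sampling differentiable."""
    return mu + sigma * rng.normal(size=mu.shape)

def sample_negative_spans(all_spans, entity_spans, rng):
    """Balanced negative sampling: draw as many non-entity (label O) spans as
    there are annotated entity spans."""
    negatives = [s for s in all_spans if s not in entity_spans]
    idx = rng.choice(len(negatives), size=min(len(entity_spans), len(negatives)),
                     replace=False)
    return [negatives[i] for i in idx]

rng = np.random.default_rng(0)
mu_m, sigma_m = poe_gaussian(np.zeros(4), np.ones(4), np.ones(4), 2 * np.ones(4))
z_m = reparameterize(mu_m, sigma_m, rng)
spans = [(i, j) for i in range(4) for j in range(i, 4)]
print(z_m, sample_negative_spans(spans, entity_spans={(0, 1)}, rng=rng))
```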
To demonstrate the effectiveness of SMVAE, we compare it with the superior MNER models under supervised setting and conduct ablation study to verify the usefulness of multimodal VAE. Effect of Labeled Dataset Size. We explore the SMVAE performance with the percent of labeled data to the original training data under different settings. As shown in Figure Ablation Study. To investigate the effectiveness of multimodal VAE (MVAE) module in our model under different settings, we perform comparisons between the full model and the ablation method. The overall results of the models on two datasets are shown in Table Effect of Latent Variable Dimension. The dimension of latent variables in MVAE is the key hyper-parameter to affect the performance of SM-VAE, and we discuss the effect of it to the model under supervised setting. We set the dimension range from 64 to 1024 and take 2 times as an adjustment step. As shown in Figure In this manuscript, we propose the semi-supervised multimodal named entity recognition (MNER) task and pose the critical challenge of it compared with traditional semi-supervised named entity recognition (NER). Further more, we analyze the disadvantage of the existing semi-supervised NER methods that are not sufficient to multimodal data. Therefore, we propose the span-based multimodal variational autoencoder to tackle semi-supervised MNER. The proposed model exploits multimodal VAE including two modal-specific VAEs to learn the multimodal latent representations and jointly model the implicit correlation between labels and multimodal features to make use of unlabeled multimodal data effectively. The experimental results verify that our approach not only outperforms supervised learning baselines, but also gains superior The proposed model is limited to the length of input sentence because it needs to predict the type of all candidate spans during inference time. And the number of spans is proportional to the length of the sentence. Therefore, the inference time is increased with the length of sentence. Besides, our model has poor scalability to process more than one image, and the posted Twitter message may contain more than one image. Therefore, the future MNER model should be able to process the text with more images.
| 1,360 | 2,207 | 1,360 |
A Review of Cross-Domain Text-to-SQL Models
|
WikiSQL and Spider, the large-scale cross-domain text-to-SQL datasets, have attracted much attention from the research community. The leaderboards of WikiSQL and Spider show that many researchers have proposed models to solve the text-to-SQL problem. This paper first divides the top models on these two leaderboards into two paradigms. We then present details not mentioned in their original papers by evaluating the key components, including schema linking, pretrained word embeddings, and reasoning assistance modules. Based on the analysis of these models, we aim to promote understanding of the text-to-SQL field and identify interesting directions for future work: for example, it is worth studying the text-to-SQL problem in environments where schema linking is more challenging to build, and worth studying how to combine the advantages of each model for text-to-SQL.
|
Text-to-SQL is a task to translate the natural language query (input) written by users into the SQL query (output) automatically. For example, in Table 3, we want to input the question in the table into the model to get the SQL output. Early work on text-to-SQL focused on small-scale domainspecific databases such as Restaurants, GeoQuery, ATIS, IMDB, and Yelp In this paper, we discuss the top models for the WikiSQL and Spider benchmarks. Since relatively high generation accuracy has already been achieved for the WikiSQL benchmark, and the SQL structures in Spider cover all SQL structures in Wik-iSQL, we focus more on models designed for Spider. This paper starts from the comparison of the overall paradigms of the models and then discusses the key modules used by most models. Overall, our contributions are as follows: • We divide existing text-to-SQL models into two paradigms: 1) Generate SQL structure ⇒ Fill schema 2) Label the question ⇒ Generate SQL. • We study that pretrained embeddings improve performance by improving schema linking and SQL structure generation. • We evaluate the applicability and advantages of the reasoning assistance modules of previous work. • We suggest three directions for the future. 1) How to generate SQL if it is more challenging to build the schema linking. 2) How to combine the different paradigms (in section 3) toward text-to-SQL. 3) How to use graph neural networks to improve SQL structure generation.
|
We only discuss two paradigms achieving relatively high performance in the text-to-SQL task, shown in Figure 3.1 Paradigm One (Generate SQL structure ⇒ Fill schema) The most common text-to-SQL paradigm is to generate the SQL structure first and then fill the schema items (schema columns and tables). In WikiSQL, because the dataset only contains simple SQL, most models decompose the SQL synthesis into several independent classification sub-tasks. Each sub-task employs an independent classifier taking the entire sentence as input. For example, one classifier would be used to determine which column is the column in SELECT clause, and another separate classifier to determine which aggregation function is correct. These models include: SQLNet Although these later models are based on one unified module, they also treat SQL structure generation and filling the schema items as separate processes. SQL structure generation depends on analysis of the sentence, while filling the schema items depends on the similarity between schema items and sentence tokens. For example, in Table What is the average miles per gallon of the cars with 4 cylinders? Paradigm One: Step 1) Generate SQL Structure 'SELECT avg( ) FROM WHERE = ' Step 2) Fill the schema items mpg cars_data cylinder Paradigm Two: Step 1) Label the question: 'What is the average miles per gallon of the cars with 4 cylinders ?' Step 2) Generate SQL from labels: ' SELECT avg( COL-1 ) FROM Although this approach seems to avoid the problem in Table Schema linking is to establish a link between the question token and schema items. There must be a value or weight that guide a model to choose one schema item but not others. We name this value or weight as schema linking value. Any text-to-SQL model with decent performance needs a schema linking value. In Paradigm One approaches, only the schema items strongly related to the question tokens (with high schema linking value) will be filled into the SQL structure. In Paradigm Two, schema linking helps to generate the schema related labels. There are different ways to construct a schema linking. The most common method is to train a neural network model that gives a higher similarity score to the link between a word token in a question and a schema item when they have the same meaning Besides, we can improve the extra schema linking through the database (DB) contents where the IRNet, RAT-SQL, GNN models all improve their performance by using the DB contents. For example, in Table However, most models in WikiSQL do not implement extra schema linking but achieve good performance. We conjecture that this is because the schema items in WikiSQL are much less than in Spider and top models use BERT To better understand the contribution of BERT, we list the component F1 score of RAT-SQL with and without BERT in Table Although BERT can improve the schema linking and SQL structure generation, boosting the performance by extending BERT is computational resource consuming. For example, in Table Some SQL clauses in Spider need reasoning to generate, but WikiSQL has almost no such clauses. For example, we cannot make out the JOIN ON clause directly from the question in Figure To our knowledge, there is no WikiSQL model using graph neural networks, but some Spider models use it. The reason is that there is only one table in the WikiSQL databases. Every node that came from columns is equivalent in a graph built by only one table. 
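As a concrete, if simplistic, illustration of a schema linking value, the sketch below scores question-token/schema-item pairs with exact and partial string matches plus a database-content match of the kind discussed above. Real models such as IRNet, RAT-SQL, and GNN learn these links; the scoring constants and the function name here are assumptions for illustration only.

```python
def schema_linking_scores(question_tokens, schema_items, db_values=None):
    """Assign a schema linking value to every (question token, schema item)
    pair: 1.0 for an exact name match, 0.5 for a partial match, and 0.8 when
    the token occurs among the column's database contents."""
    db_values = db_values or {}
    scores = {}
    for tok in question_tokens:
        t = tok.lower()
        for item in schema_items:
            words = item.lower().replace("_", " ").split()
            score = 1.0 if t == " ".join(words) else 0.5 if t in words else 0.0
            if t in {v.lower() for v in db_values.get(item, [])}:
                score = max(score, 0.8)         # value match via DB contents
            scores[(tok, item)] = score
    return scores

q = "What is the average miles per gallon of the cars with 4 cylinders ?".split()
s = schema_linking_scores(q, ["mpg", "cylinders", "cars_data"],
                          db_values={"cylinders": ["4", "6", "8"]})
print(s[("cylinders", "cylinders")], s[("4", "cylinders")])   # 1.0 0.8
```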
Graph neural networks cannot assign different information or values to such equivalent nodes, which restricts their usage in WikiSQL. However, if we build a graph from both the table and the question tokens, graph neural networks may work well in WikiSQL, as in RAT-SQL. The models in the Spider leaderboard using graph neural networks include GNN We discuss the existing cross-domain SOTA text-to-SQL models from the overall paradigms down to the detailed modules to give a clear picture of current text-to-SQL research progress. We illustrate through experiments that pretrained embeddings improve the models by constructing a better schema linking and a more accurate SQL structure. This paper also provides many details that are not mentioned in the original papers, such as . However, due to space limitations, this paper cannot cover all the details of these SOTA models. We hope this paper helps readers understand the key connections and differences between the previous models and gain a comprehensive understanding of the text-to-SQL field. Most questions in Spider and WikiSQL directly use words related to schema item names instead of synonyms, which means all existing models can build schema linking by locating identical words. If you want to use these models to implement a natural language interface for database systems, you need to avoid synonyms. However, in some cases synonyms cannot be avoided, so it is worth studying the text-to-SQL problem in an environment where it is more challenging to build schema linking. Although following only the Paradigm Two steps toward text-to-SQL in Spider requires a lot of work, a method combining the advantages of the two paradigms may boost performance. For example, we could assign a label to every word token and then use a machine learning model to learn from the labeled word tokens to generate SQL. To improve text-to-SQL reasoning ability, designing a new IR to simplify SQL structure generation is also a good research topic. Besides, the graph neural networks discussed here all focus on improving schema linking; how to use graph neural networks to improve SQL structure generation is also worth exploring.
| 876 | 1,457 | 876 |
Agreement Prediction of Arguments in Cyber Argumentation for Detecting Stance Polarity and Intensity
|
In online debates, users express different levels of agreement/disagreement with one another's arguments and ideas. Often levels of agreement/disagreement are implicit in the text and must be predicted to analyze collective opinions. Existing stance detection methods predict the polarity of a post's stance toward a topic or post, but don't consider the stance's degree of intensity. We introduce a new research problem, stance polarity and intensity prediction in response relationships between posts. This problem is challenging because differences in stance intensity are often subtle and require nuanced language understanding. Cyber argumentation research has shown that incorporating both stance polarity and intensity data in online debates leads to better discussion analysis. We explore five different learning models: Ridge-M regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP for predicting stance polarity and intensity in argumentation. These models are evaluated using a new dataset for stance polarity and intensity prediction collected using a cyber argumentation platform. The SVR-RF-R model performs best for prediction of stance polarity with an accuracy of 70.43% and intensity with RMSE of 0.596. This work is the first to train models for predicting a post's stance polarity and intensity in one combined value in cyber argumentation with reasonably good accuracy.
|
Many major online and social media and networking sites, such as Facebook, Twitter, and Wikipedia, have taken over as the new public forum for people to discuss and debate issues of national and international importance. With more participants in these debates than ever before, the volume of unstructured discourse data continues to increase, and the need for automatic processing of this data is prevalent. A critical task in processing online debates is to automatically determine the different argumentative relationships between online posts in a discussion. These relationships typically consist of a stance polarity (i.e., whether a post is supporting, opposing, or is neutral toward another post) and the degree of intensity of the stance. Automatically determining these types of relationships from a given text is a goal in both stance detection and argumentation mining research. Stance detection models seek to automatically determine a text's stance polarity Research in Cyber Argumentation has shown that incorporating both stance polarity and intensity information into online discussions improves the analysis of discussions and the various phenomena that arise during a debate, including opinion polarization To that end, in this paper, we introduce a new research problem, stance polarity and intensity prediction in a responsive relationship between posts, which aims to predict a text's stance polarity and intensity which we combine into a single continuous agreement value. Given an online post A, which is replying to another online post B, we predict the stance polarity and intensity value of A towards B using A's (and sometimes B's) textual information. The stance polarity and intensity value is a continuous value, bounded from -1.0 to +1.0, where the value's sign (positive, negative, or zero) corresponds to the text's stance polarity (favoring, opposing, or neutral) and the value's magnitude (0 to 1.0) corresponds to the text's stance intensity. Stance polarity and intensity prediction encapsulates stance detection within its problem definition and is thus a more difficult problem to address. While stance polarity can be identified through specific keywords (e.g., "agree", "disagree"), the intensity is a much more fuzzy concept. The difference between strong opposition and weak opposition is often expressed through subtle word choices and conversational behaviors. Thus, to accurately predict agreement intensity, a learned model must understand the nuances between word choices in the context of the discussion. We explore five machine learning models for agreement prediction, adapted from the topperforming models for stance detection: Ridge-M regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP. These models were adapted from Results from our empirical analysis show that the SVR-RF-R ensemble model performed the best for agreement prediction, achieving an RMSE score of 0.596 for stance polarity and intensity predic-tion, and an accuracy of 70% for stance detection. Further analysis revealed that the models trained for stance polarity and intensity prediction often had better accuracy for stance classification (polarity only) compared to their counterpart stance detection models. This result demonstrates that the added difficulty of detecting stance intensity does not come at the expense of detecting stance polarity. To our knowledge, this is the first time that learning models can be trained to predict an online post's stance polarity and intensity simultaneously. 
The contributions of our work are the following: • We introduce a new research problem called stance polarity and intensity prediction, which seeks to predict a post's agreement value that contains both the stance polarity (value sign) and intensity (value magnitude), toward its parent post. • We apply five machine learning models on our dataset for agreement prediction. Our empirical results reveal that an ensemble model with many hand-crafted features performed the best, with an RMSE of 0.595, and that models trained for stance polarity and intensity prediction do not lose significant performance for stance detection. 2 Related Work
|
Stance detection research has a wide interest in a variety of different application areas including opinion mining (Hasan and Ng, 2013), sentiment analysis For example, stance detection on Twitter often determines the author's stance (for/against/neutral) toward a proposition or target This dataset has many similarities to our data in terms of post length and topics addressed. Approaches to Twitter stance detection include SVMs Argumentation mining is applied to argumentative text to identify the major argumentative components and their relationships to one another The major tasks of argumentation mining include: 1) identify argumentative text from the nonargumentative text, 2) classify argumentation components (e.g., Major Claim, Claims, Premise, etc.) in the text, 3) determine the relationships between the different components, and 4) classify the relationships as supporting, attacking, or neutral Cyber argumentation systems help facilitate and improve understanding of large-scale online discussions, compared to other platforms used for debate, such as social networking and media platforms, online forums, and chat rooms Our research group has developed an intelligent cyber argumentation system, ICAS, for facilitating large scale discussions among many users ICAS implements an IBIS structure In ICAS, arguments have two components: a textual component and an agreement value. The textual component is the written argument the user makes. ICAS does not limit the length of argument text; however, in practice, the average argument This section describes the models we applied to the stance polarity and intensity prediction problem. We applied five different models, adapted from top-performing stance classification models based on their performance and approach on the SemEval 2016 stance classification Twitter dataset Our first two models use a linear ridge regression as the underlying model. We created two ridge regression models using two feature sets. The first ridge model (Ridge-M) used the feature set described in The second ridge model (Ridge-S) used the feature set described in Sobhani, Mohammad, and Kiritchenko's follow-up paper (2016). In that paper, they found the sum of trained word embeddings with 100 dimensions, in addition to the N-gram features outlined by This model (SRV-RF-R) consisted of an averagevoting ensemble containing three different regression models: an Epsilon-Support Vector Regression model, a Random Forest regressor, and a ridge regression model. This model is an adaption of the ensemble model presented by • Linguistic Features: Word 1-3 grams as binary vectors, count vectors, and tf-idf weighted vectors. Character 1-6 grams as count vectors. POS tag 1-3 grams concatenated with their words (ex: word1 pos1 . . . ) and concatenated to the end of the post (ex: word1, word2, . . . , POS1, POS2, . . . ). • Topic Features: Topic membership of each 1 Please refer to the supplemental material for a full description of the feature set. post after LDA topic modeling • Word Embedding Features: The 100dimensional word embedding sums for each word in a post and the cosine similarity between the summed embedding vectors for the target post and its parent post. • Lexical Features: Sentiment lexicon features outlined in We tested using the top 50 features selected using reliefF and reducing the feature size to 50 using Principal Component Analysis (PCA), as well as using the full feature set. 
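The SVR-RF-R average-voting ensemble described above can be sketched with scikit-learn as follows, assuming the hand-crafted feature matrix has already been built. The class name, hyperparameters, and toy data are illustrative, not the exact configuration used in the experiments.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

class AveragingEnsemble:
    """Average-voting ensemble of an epsilon-SVR, a random forest regressor
    and a ridge regression; X is assumed to already hold the linguistic,
    topic, word-embedding and lexical features described above."""
    def __init__(self):
        self.models = [SVR(), RandomForestRegressor(n_estimators=100), Ridge()]

    def fit(self, X, y):
        for m in self.models:
            m.fit(X, y)
        return self

    def predict(self, X):
        return np.mean([m.predict(X) for m in self.models], axis=0)

# Toy data standing in for the full 2855-dimensional feature set.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 50)), rng.uniform(-1, 1, size=200)
preds = AveragingEnsemble().fit(X, y).predict(X)
print(float(np.sqrt(np.mean((preds - y) ** 2))))   # RMSE on the toy data
```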
We found that the full feature set (2855 total) performed significantly better than the reliefF and PCA feature sets. We used the full feature set in our final model. The highest performing CNN model, pkudblab, applied to the SemEval 2016 benchmark dataset, was submitted by The RNN model (T-PAN-PIP) is adapted from the T-PAN framework by The weighted attention application layer combines the attention weighs to their corresponding hidden state output, as shown in (1). Where a s is the attention signal for word s, h s is the hidden layer output of the Bi-LSTM for word s, |s| is the total number of words, and Q is the resulting attention weighted vector of size 256, the size of the output of the hidden units of the Bi-LISTM. The output Q feeds into a fully-connected sigmoid layer and outputs the predicted agreement value. We train the model using a mean absolute error loss function. The dataset was constructed from three separate empirical studies collected in Fall 2017, Spring 2018, and Spring 2019. In each study, a class of undergraduate students in an entry-level sociology class was offered extra credit to participate in discussions in ICAS. Each student was asked to discuss four different issues relating to the content they were covering in class. The issues were: 1) Healthcare: Should individuals be required by the government to have health insurance? 2) Same Sex Adoption: Should same-sex married couples be allowed to adopt children? 3) Guns on Campus: Should students with a concealed carry permit be allowed to carry guns on campus? 4) Religion and Medicine: Should parents who believe in healing through prayer be allowed to deny medical treatment for their child? Under each issue, there were four positions (with the exception of the Healthcare issue for Fall 2017, which had only 3 positions) to discuss. The positions were constructed such that there was one strongly conservative position, one moderately conservative position, one moderately liberal position, and one strongly liberal position. The students were asked to post ten arguments under each issue. The combined dataset contains 22,606 total arguments from 904 different users. Of those arguments, 11,802 are replying to a position, and 10,804 are replying to another argument. The average depth of a reply thread tends to be shallow, with 52% of arguments on the first level (reply to position), 44% on the second level, 3% on the third level, and 1% on the remaining levels (deepest level was 5). When a student posted an argument, they were required to annotate their argument with an agree- The annotated labels in this dataset are selflabeled, meaning that when a user replies to a post, they provide their own stance polarity and intensity label. The label is a reflection of the author's intended stance toward a post, where the post's text is a semantic description of that intention. While these label values are somewhat subjective, they are an accurate reflection of their author's agreement, which we need to capture to analyze opinions in the discussion. Self-annotated datasets like this one have been used in stance detection for argumentation mining in the past (see In this study, we want to evaluate the models' performance on the stance polarity and intensity prediction problem. We separated the dataset into training and testing sets using a 75-25 split. For the neural network models (pkudblab-PIP and T-PAN-PIP), we separated out 10% of the training set as a validation set to detect over-fitting. 
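The weighted attention application layer of T-PAN-PIP can be sketched as below. Since Eq. (1) is not reproduced here, the sketch assumes softmax-normalised attention weights followed by a weighted sum over the Bi-LSTM states; the function name and toy shapes are illustrative.

```python
import numpy as np

def attention_pool(hidden_states, attention_signals):
    """Combine each word's attention signal with its Bi-LSTM hidden state and
    pool them into a single vector Q (size = hidden dimension): softmax-
    normalised weights followed by a weighted sum."""
    a = np.exp(attention_signals - attention_signals.max())
    a = a / a.sum()                                  # normalised attention
    return (a[:, None] * hidden_states).sum(axis=0)  # Q

rng = np.random.default_rng(0)
H = rng.normal(size=(150, 256))        # |s| = 150 words, 256 hidden units
a = rng.normal(size=150)               # attention signal per word
print(attention_pool(H, a).shape)      # (256,)
```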
The split was performed randomly without consideration of the discussion issue. Each issue was represented proportionally in the training and testing data sets with a maximum discrepancy of less than 1%. For evaluation, we want to see how well the regression models are able to predict the continuous agreement value for a post. We report the root-mean-squared error (RMSE) for the predicted results. We wanted to investigate whether training models for agreement prediction would degrade their performance for stance detection. Ideally, these models should learn to identify both stance intensity without impacting their ability to identify stance polarity. To test this, we compared each model to their original stance classification models described in their source papers. Thus, ridge-H is compared with an SVM trained on the same feature set (SVM-H), ridge-S is compared to a Linear-SVM trained on the same feature set (SVM-S), SVR-RF-R is compared to a majority-voting ensemble of a linear-SVM, Random Forest, and Naïve Bayes classifier using the same feature set (SVM-RF-NB), pkudblab-PIP is compared to the original pkudblab model trained using a softmax cross-entropy loss function, and T-PAN-PIP is compared to the original T-PAN model trained using a softmax crossentropy loss function. We trained the classification models for stance detection by converting the continuous agreement values into categorical polarity values. When converted into categorical values, all of the positive agreement values are classified as Favoring, all negative values are classified as Opposing, and zero values are classified as Neutral. In the dataset, 12,258 arguments are Favoring (54%), 8962 arguments are Opposing (40%), and 1386 arguments are Neutral (6%). To assess the stance detection performance of the models trained for agreement prediction, we converted the predicted continuous agreement values output by the models into the categorical values using the same method. For evaluation, we report both the accuracy value of the predictions and the macro-average F1-scores for the Favoring and Opposing classes on the testing set. This scoring scheme allows us to treat the Neutral category as a class that is not of interest The results for agreement prediction are shown in Table We compare the models trained on the agreement prediction task to their classification model counterparts in terms of performance on the stance detection task. Tables The models behaved very similarly on the agreement prediction problem, where the difference between the best performing model and the worst performing model is only 0.061. Overall, the best model received an RMSE of 0.596, which is reasonably good but can be improved. T-PAN-PIP had the worst performance, which is surprising, as it was the only model to include the parent post's information into its prediction, which should have helped improve its performance. It is possible that its architecture is unsuitable for agreement prediction; other architectures have been deployed that include a post's parent and ancestors into a stance prediction, which might be more suitable for agreement prediction. Future model designs should better incorporate a post's parent information into their predictions. The difference in performance between the agreement prediction models and the classification models on the stance detection task was small and sometimes better. 
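The conversion from continuous agreement values to stance classes and the evaluation scheme described above can be sketched as follows; the toy arrays and function names are illustrative.

```python
import numpy as np
from sklearn.metrics import f1_score

def to_stance(agreement_values):
    """Map continuous agreement values to stance polarity classes:
    positive -> Favoring, negative -> Opposing, zero -> Neutral."""
    return np.where(agreement_values > 0, "Favoring",
                    np.where(agreement_values < 0, "Opposing", "Neutral"))

def evaluate(pred_agreement, true_agreement):
    """Accuracy plus macro-averaged F1 over the Favoring and Opposing classes
    only, treating Neutral as a class of no interest."""
    y_pred, y_true = to_stance(pred_agreement), to_stance(true_agreement)
    acc = float((y_pred == y_true).mean())
    f_macro = f1_score(y_true, y_pred, labels=["Favoring", "Opposing"],
                       average="macro")
    return acc, f_macro

print(evaluate(np.array([0.6, -0.2, 0.0, 0.4]),
               np.array([1.0, -0.6, 0.2, -0.4])))
```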
This demonstrates that the models learning to identify stance intensity do so without significant loss of performance in identifying stance polarity. Larger gains in performance will likely require information about the post's author. Some post authors will state strong levels of agreement in their statements, but annotate their argument with weaker agreement levels. For example, one author wrote, "Agree completely. Government should stay out of healthcare." and annotated that argument with an agreement value of +0.6. The authors were instructed on how to annotate their posts, but the annotations themselves were left to the post's author's discretion. Thus including author information into our models would likely improve the stance polarity and intensity prediction results. We introduce a new research problem called stance polarity and intensity prediction in a responsive relationship between posts, which predicts both an online post's stance polarity and intensity value toward another post. This problem encapsulates stance detection and adds the additional difficulty of detecting subtle differences in intensity found in the text. We introduced a new large empirical dataset for agreement prediction, collected using a cyber argumentation platform. We implemented five models, adapted from top-performing stance detection models, for evaluation on the new dataset for agreement prediction. Our empirical results demonstrate that the ensemble model SVR-RF-R performed the best for agreement prediction and models trained for agreement prediction learn to differentiate between intensity values without degrading their performance for determining stance polarity. Research into this new problem of agreement prediction will allow for a more nuanced annotation and analysis of online debate. • Maximum Sentence Length (|s|): 150. Posts longer than 150 words were truncated from the beginning and posts less than 150 words were padded at the end. • LSTM hidden units: 256 total (128 for each direction). The model was trained using a batch size of 64 and used an Adam optimizer.
| 1,406 | 4,186 | 1,406 |
Enhancing Authorship Attribution By Utilizing Syntax Tree Profiles
|
The aim of modern authorship attribution approaches is to analyze known authors and to assign authorships to previously unseen and unlabeled text documents based on various features. In this paper we present a novel feature to enhance current attribution methods by analyzing the grammar of authors. To extract the feature, a syntax tree of each sentence of a document is calculated, which is then split up into length-independent patterns using pq-grams. The most frequently used pq-grams are then used to compose sample profiles of authors, which are compared with the profile of the unlabeled document by utilizing various distance metrics and similarity scores. An evaluation using three different and independent data sets reveals promising results and indicates that the grammar of authors is a significant feature for enhancing modern authorship attribution methods.
|
The increasing amount of documents available from sources like publicly available literary databases often raises the question of verifying disputed authorships or assigning authors to unlabeled text fragments. The original problem was initiated already in the midst of the twentieth century by Mosteller and Wallace, who tried to find the correct authorships of The Federalist Papers In this paper we present a novel feature for the traditional, closed-class authorship attribution task, following the assumption that different authors have different writing styles in terms of the grammar structure that is used mostly unconsciously. Due to the fact that an author has many different choices of how to formulate a sentence using the existing grammar rules of a natural language, the assumption is that the way of constructing sentences is significantly different for individual authors. For example, the famous Shakespeare quote "To be, or not to be: that is the question." (S1) could also be formulated as "The question is whether to be or not to be." (S2) or even "The question is whether to be or not." (S3) which is semantically equivalent but differs significantly according to the syntax (see Figure The rest of this paper is organized as follows: Section 2 sketches the main idea of the algorithm which incorporates the distance metrics explained in detail in Section 3. An extensive evaluation us- ing three different test sets is shown in Section 4, while finally Section 5 and Section 6 summarize related work and discuss future work, respectively.
|
The basic idea of the approach is to utilize the syntax that is used by authors to distinguish authorships of text documents. Based on our previous work in the field of intrinsic plagiarism detection The number of choices an author has to formulate a sentence in terms of grammar is rather high, and the assumption in this approach is that the concrete choice is made mostly intuitively and unconsciously. Evaluations shown in Section 4 reinforce that solely parse tree structures represent a significant feature that can be used to distinguish between authors. From a global view the approach comprises the following three steps: (A) Creating a grammar profile for each author, (B) creating a grammar profile for the unlabeled document, and (C) calculating the distance between each author profile and the document profile and assigning the author having the lowest distance (or the highest similarity, depending on the distance metric chosen). As this approach is based on profiles a key criterion is the creation of distinguishable author profiles. In order to calculate a grammar profile for an author or a document, the following procedure is applied: (1) Concatenate all text samples for the author into a single, large sample document, (2) split the resulting document into single sentences and calculate a syntax tree for each sentence, (3) calculate the pq-gram index for each tree, and (4) compose the final grammar profile from the normalized frequencies of pq-grams. At first the concatenated document is cleaned to contain alphanumeric characters and punctuation marks only, and then split into single sentences Having computed a syntax tree for every sentence, the pq-gram index With the use of the syntax tree profiles calculated for each candidate author as well as for the unlabeled document, the last part is to calculate a distance or similarity, respectively, for every author profile. Finally, the unseen document is simply labeled with the author of the best matching profile. To investigate on the best distance or similarity metric to be used for this approach, several metrics for this problem have been adapted and evaluated 3 : 1. CNG Stamatatos-CNG For the latter, we modified the original SPI score The approach described in this paper has been extensively evaluated using three different English data sets, whereby all sets are completely unrelated and of different types: (1.) CC04: the training set used for the Ad-hoc-Authorship Attribution 3 The algorithm names are only used as a reference for this paper, but were not originally proposed like that Competition workshop held in 2004 • topPQGramCount t c : by assigning a value to this parameter, only the corresponding amount of mostly used pq-grams of a grammar profile are used. • topPQGramOffset t o : based on the idea that all authors might have a frequently used and common set of syntax rules that are predefined by a specific language, this parameter allows to ignore the given amount of mostly used pq-grams. For example if t o = 3 in Table 1, the first pq-gram to be used would be [NP-NNP-* -* -* ]. The evaluation results are depicted in Table
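Profile construction and comparison can be sketched as follows, assuming the pq-grams of each sentence tree have already been produced by a parser and a pq-gram index. The `top_count` and `top_offset` arguments mirror the topPQGramCount and topPQGramOffset parameters described above; the CNG-style distance shown is a common formulation and may differ in detail from the adapted metrics evaluated in the paper.

```python
from collections import Counter

def grammar_profile(sentence_pqgrams, top_count=None, top_offset=0):
    """Build a grammar profile from the pq-grams of all sentence parse trees:
    count every pq-gram, optionally drop the `top_offset` most frequent ones
    and keep only the next `top_count`, then normalise the frequencies.
    `sentence_pqgrams` is a list of pq-gram lists, one per sentence,
    e.g. [['NP-NNP-*-*-*', ...], ...]."""
    counts = Counter(g for sent in sentence_pqgrams for g in sent)
    ranked = counts.most_common()[top_offset:]
    if top_count is not None:
        ranked = ranked[:top_count]
    total = sum(c for _, c in ranked) or 1
    return {g: c / total for g, c in ranked}

def cng_distance(profile_a, profile_b):
    """CNG-style dissimilarity: sum of squared relative frequency differences
    over the union of pq-grams in the two profiles."""
    dist = 0.0
    for g in set(profile_a) | set(profile_b):
        fa, fb = profile_a.get(g, 0.0), profile_b.get(g, 0.0)
        dist += ((fa - fb) / ((fa + fb) / 2.0)) ** 2
    return dist

doc = [["NP-NNP-*-*-*", "S-VP-*-*-VB", "NP-NNP-*-*-*"]]
author = [["NP-NNP-*-*-*", "S-VP-*-*-VB"], ["S-VP-*-*-VB"]]
print(cng_distance(grammar_profile(doc), grammar_profile(author)))
```

The unlabeled document is then simply attributed to the author whose profile has the lowest distance (or highest similarity) to the document profile.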
| 858 | 1,560 | 858 |
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
|
In a spoken dialogue system, an NLU model is preceded by a speech recognition system that can deteriorate the performance of natural language understanding. This paper proposes a method for investigating the impact of speech recognition errors on the performance of natural language understanding models. The proposed method combines the back transcription procedure with a fine-grained technique for categorizing the errors that affect the performance of NLU models. The method relies on the usage of synthesized speech for NLU evaluation. We show that the use of synthesized speech in place of audio recording does not change the outcomes of the presented technique in a significant way.
|
Regardless of the near-human accuracy of automatic speech recognition in general-purpose transcription tasks, speech recognition errors can still significantly deteriorate the performance of a natural language understanding model that follows the speech-to-text module in a conversational system. The problem is even more apparent when an automatic speech recognition system from an external vendor is used as a component of a virtual assistant without any further adaptation. The goal of this paper is to present a method for investigating the impact of speech recognition errors on the performance of natural language understanding models in a systematic way. The method that we propose relies on the use of back transcription, a procedure that combines a text-to-speech model with an automatic speech recognition system to prepare a dataset contaminated with speech recognition errors. The augmented dataset is used to evaluate natural language understanding models and the outcomes of the evaluation serve as a basis for defining the criteria of * The author performed the work while being affiliated with both organizations. NLU model robustness. Contrary to conventional adversarial attacks, which aim at determining the samples that deteriorate the model performance under study The proposed method depends on speech processing models, but it does not rely on the availability of spoken corpora. Therefore, it is suitable for inspecting NLU models for which only textual evaluation data are present. It makes use of the semantic representation of the user utterance, but it does not require any additional annotation of data. Thus, the dataset used for training and testing the NLU model can be repurposed for robustness assessment at no additional costs. For illustration, we decided to apply the presented method to Transformer-based models since they demonstrate state-of-the-art performance in the natural language understanding task, but the method does not depend on the architecture of the underlying NLU model. The limitations of our approach are discussed at the end of the paper.
|
Data augmentation is a commonly employed method for improving the performance of neural models of vision, speech and language. Back translation The first experiments with augmenting ASR data with text-to-speech tools were conducted by Recently, there have been several papers addressing the issue of the robustness of natural language understanding systems to various types of input errors. The proposed method of evaluation consists of three stages: the execution of the back transcription procedure that transfers NLU data between text and audio domains, the automatic assessment of the outcome from the NLU model on a per-sample basis, and the fine-grained method of inspecting the results with the use of edit operations (see Figure The back transcription procedure applied with respect to the NLU dataset consists of three steps. First, textual data are synthesized with the use of a text-to-speech model. Next, the automatic speech recognition system converts the audio signal back to text. In the last step, both the input utterances and the recognized texts are passed to the NLU model to obtain their semantic representations. The NLU commands are tracked in consecutive steps. As a result, we obtain an augmented NLU dataset providing the following data for each sample s: 1. r(s): the reference text that comes from the initial NLU dataset; 2. h(s): the hypothesis, i.e. the r(s) text synthesized with the text-to-speech model and transcribed with the automatic speech recognition system; 3. e(s): the expected outcome of the NLU model for r(s) as given in the initial NLU dataset; A simple method for coarse-grain assessment of NLU robustness relies on measuring performance drop with respect to the commonly used metrics such as accuracy for intent classification or F-score for slot values extraction. This is a widely accepted practice in the case of adversarial attacks A correct result obtained for the reference text is changed to an incorrect one in the case of the back-transcribed text, i.e. b(s) = e(s) ∧ a(s) ̸ = e(s). An incorrect result returned for the reference text is replaced by another incorrect result in the case of the back-transcribed text, i.e. b(s) ̸ = e(s) ∧ a(s) ̸ = e(s) ∧ b(s) ̸ = a(s). An incorrect result obtained for the reference text is changed to a correct result in the case of the back-transcribed text, i.e. b(s) ̸ = e(s) ∧ a(s) = e(s). The first category is always considered to have a negative impact on the robustness of the NLU model. However, with respect to I → I and I → C categories of samples, alternative options can be considered. For I → I samples, it is reasonable to treat them as negative if we want to obtain the definition of robustness that penalizes changes. It is also sensible to consider them to be irrelevant since such samples do not affect the performance of the NLU model. I → C samples, once again, can be considered to be negative if we want to penalize all changes. They can be treated as irrelevant, making the definition of robustness unaffected by the changes that improve the performance of the NLU model. Finally, they can also be considered to have a positive impact on the robustness of the model since they improve the NLU performance. A common practice of measuring the difference in accuracy before and after back transcription treats I → C samples as positive and ignores I → I samples. Such a procedure underestimates the impact of C → I samples on the NLU module due to the performance gain introduced by I → C samples. 
It also does not track I → I changes which can deteriorate the behavior of downstream modules of a dialogue system that consume the outcome of the NLU model. As we show in Section 4, I → I and I → C cases account respectively for up to 30% and 10% of all the changes introduced by back transcription. Thus, the decision to ignore or promote them should be a result of careful planning. The relationship between the outlined categories of changes and the building blocks of the F-score is even more complicated. Let C α → I β denote the change from correct label α to incorrect label β, I α → I β the change from incorrect label α to incorrect label β, and I α → C β the change from incorrect label α to correct label β. Let T P l , F P l , F N l , Table Name P l and R l denote true positives, false positives, false negatives, precision and recall with regard to label l. Table Proper combinations of the aforementioned categories of NLU outcome changes lead to six alternative robustness measures with their own rationale. We present them in Table NLU model TTS model del delete a token "a" → "" replace_{r} replace token with string r "cat" → "hat" insert_before_{w} insert word w before current token "cat" → "a cat" insert_after_{w} insert word w after current token "cat" → "cat that" affix operations add_prefix_{p} prepend prefix p to the token "owl" → "howl" add_suffix_{s} append suffix s to the token "he" → "hey" del_suffix_{n} remove n characters from the end "cats" → "cat" del_prefix_{n} remove n characters from the start "howl" → "owl" replace_suffix_{s} replace last len(s) characters with s "houl" → "hour" sreplace_{s}_{r} replace substring s with string r "may" → "my" split/join operations join_{s} join tokens using character s "run in" → "run-in" split_aftert_{n} split word after n-th character "today" → "to day" split_on_first_{c} split word on first character c "run-in" → "run in" split_on_last_{c} split word on last character c "forenoon" → "for noon" To detect speech recognition errors that deteriorate the robustness of the NLU model in the most significant way, we determine the differences between the reference texts and their back-transcribed counterparts and confront them with the impact caused by the change in the NLU outcome. For identifying the differences between reference and back-transcribed utterances, we align them with the use of the Ratcliff-Obershelp algorithm Afterward, we assess the impact of speech recognition errors on the robustness of the NLU model by extracting the regression coefficients that correspond to the edit operations that transform correct utterances into incorrect ones. Framing the problem as a supervised classification task has several advantages. First, it allows us to incorporate any combination of the criteria outlined in Section 3.2 into the detection process. Second, it allows us to consider different dimensions of the semantic representation of an NLU command, such as domain, intent, and slot values, either separately or in conjunction, enabling the evaluation of joint NLU models. Third, any classification method that quantifies the importance of the features specified at the input can be used to study the impact of speech recognition errors on the robustness of the NLU model. We rely on logistic regression because the regression coefficients are easy to interpret and the logistic model fits well to the provided data. 
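The alignment and regression analysis above can be sketched as follows. Python's difflib.SequenceMatcher implements a Ratcliff-Obershelp-style alignment, which is used here with a deliberately simplified feature set (only coarse delete/insert/replace operations rather than the paper's full operation inventory); the toy samples, the labels (1 marking a change that hurts robustness under a chosen criterion such as C → I) and the use of scikit-learn's logistic regression are illustrative assumptions, not the authors' exact pipeline.

```python
from difflib import SequenceMatcher          # Ratcliff-Obershelp-style alignment
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def edit_operation_features(reference, hypothesis):
    """Align reference and back-transcribed text on the token level and return
    a bag of coarse edit operations (a simplified stand-in for the full set)."""
    ref, hyp = reference.split(), hypothesis.split()
    feats = {}
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if tag == "equal":
            continue
        if tag == "delete":
            for tok in ref[i1:i2]:
                feats[f"del_{tok}"] = feats.get(f"del_{tok}", 0) + 1
        elif tag == "insert":
            for tok in hyp[j1:j2]:
                feats[f"insert_{tok}"] = feats.get(f"insert_{tok}", 0) + 1
        else:  # replace
            feats[f"replace_{' '.join(ref[i1:i2])}_{' '.join(hyp[j1:j2])}"] = 1
    return feats

# Each sample: (reference text, back-transcribed text, label); label = 1 marks a
# change that hurts robustness under the chosen criterion (e.g. C -> I).
samples = [
    ("turn on the lights", "turn on the light", 1),
    ("play some jazz music", "play some jazz music", 0),
    ("set an alarm for nine", "set an alarm for wine", 1),
    ("what is the weather", "what is the whether", 0),
]
X_dicts = [edit_operation_features(r, h) for r, h, _ in samples]
y = [label for _, _, label in samples]

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_dicts), y)

# Large positive coefficients point at the edit operations most associated with
# turning a correct NLU outcome into an incorrect one.
for name, coef in sorted(zip(vec.feature_names_, clf.coef_[0]),
                         key=lambda t: -t[1])[:5]:
    print(f"{coef:+.2f}  {name}")
```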
However, a more elaborate model such as gradient boosted trees Given that the back transcription technique does not require spoken data on the input, we decided to use the MASSIVE dataset We trained three separate XLM-RoBERTa (Conneau et al., 2020) models for three separate tasks: domain classification Table Our evaluation method relies on a combination of speech synthesis and automatic speech recognition models. For speech synthesis, we use two models. The first, FastSpeech 2 We report the robustness scores determined using the metrics proposed in Section 3.2 for the NLU models under study in Table A qualitative comparison of the top 20 most frequent speech recognition errors demonstrated in Table We also checked if the overall quality of the synthesized audio is acceptable. For this purpose, we back-transcribed the dataset with both TTS models. Then, for each TTS model, we randomly sampled 10% of the prompts for which the result of back transcription differed from the input. The backtranscribed prompts were presented to the annotator along with the original prompts and the recording of the TTS output. The goal of the annotator was to choose which of the two transcripts was closer to the content of the recording. The order of the options was randomized so that the annotator did not know which was the original prompt and which was the back-transcribed one. If both options were equally viable, the annotator was allowed to choose both as the answer. Table To confirm that TTS-generated speech samples can be used in place of voice recordings, we verified that the robustness scores obtained for the synthesized samples are similar to the scores obtained for the recordings. For this purpose, we conducted an experiment using audio samples from the SLURP dataset First, we applied the back transcription procedure to the text prompts extracted from SLURP. Next, we ran the ASR model on the audio recordings corresponding to the extracted prompts and applied the NLU models to the transcribed texts. Finally, we compared the robustness scores calculated for back-transcribed and transcribed texts. As shown in Table In this paper, we proposed a method for assessing the robustness of NLU models to speech recognition errors. The method repurposes the NLU data used for model training and does not depend on the availability of spoken corpora. We introduced criteria for robustness that rely on the outcome of the NLU model but do not assume any particular semantic representation of the utterances. We showed how these criteria can be used to formulate summary metrics and constructed an analytical model that prioritizes individual categories of speech recognition errors on the basis of their impact on the (non-)robustness of the NLU model. Finally, we performed an experimental evaluation of the robustness of Transformer-based models and investigated the impact of using text-to-speech models in place of audio recording. The presented method compares input utterances with the same input synthesized by TTS and processed by ASR. This setting introduces two limitations for the NLU component. First, the architecture, the training data, and finally, the quality of TTS and ASR systems impact generated data
| 689 | 2,096 | 689 |
How much complexity does an RNN architecture need to learn syntax-sensitive dependencies?
|
Long short-term memory (LSTM) networks and their variants are capable of encapsulating long-range dependencies, which is evident from their performance on a variety of linguistic tasks. On the other hand, simple recurrent networks (SRNs), which appear more biologically grounded in terms of synaptic connections, have generally been less successful at capturing long-range dependencies as well as the loci of grammatical errors in an unsupervised setting. In this paper, we seek to develop models that bridge the gap between biological plausibility and linguistic competence. We propose a new architecture, the Decay RNN, which incorporates the decaying nature of neuronal activations and models the excitatory and inhibitory connections in a population of neurons. Besides its biological inspiration, our model also shows competitive performance relative to LSTMs on subject-verb agreement, sentence grammaticality, and language modeling tasks. These results provide some pointers towards probing the nature of the inductive biases required for RNN architectures to model linguistic phenomena successfully.
|
For the last couple of decades, neural networks have been approached primarily from an engineering perspective, with the key motivation being efficiency, consequently moving further away from biological plausibility. Recent developments Recurrent Neural Networks (RNNs) have been used to analyze the principles and dynamics of neural population responses by performing the same tasks as animals The decaying nature of the potential in the neuron membrane after receiving signals (excitatory or inhibitory) from the surrounding neurons is also well-studied Subject-verb agreement, where the main noun and the associated verb must agree in number, is considered as evidence of hierarchical structure in English. This is exemplified using a sentence taken from the dataset made available by 1. *All trips on the expressway requires a toll. 2. All trips on the expressway require a toll. The effect of agreement attractors (nouns having number opposite to the main noun; expressway in the above example 1 ) between the main noun and main verb of a sentence has been well-studied • A chair created by a hobbyist as a gift to someone is not a commodity. 2 In the number prediction task, if a model correctly predicts the grammatical number of the verb (singular in case of 'is'), it might be due to the (helpful) interference of non-attractor intervening nouns ('hobbyist', 'gift', 'someone') rather than necessarily capturing its dependence the main noun ('chair'). From our investigation in Section 6.2, we find that the linear recurrent models take cues present in the vicinity of the main verb to predict its number, apart from the agreement with the main noun. In the subsequent sections, we investigate the performance of the Decay RNN and other recurrent networks, showing that no single sequential model generalizes well on all (grammatical) phenomena, which include subject-verb agreements, reflexive anaphora, and negative polarity items as described in 1. Designing a relatively simple and bio-inspired recurrent model: the Decay RNN, which performs on-par with LSTMs for linguistic tasks such as subject-verb agreement and grammaticality judgement. 2. Pointing to some limitations of analyzing the intervening attractor nouns alone for the subject-verb agreement task and attempting joint analysis of non-attractor intervening nouns and attractor nouns in the sentence. 3. Showing that there is no linear recurrent scheme which generalizes well on a variety of sentence types and motivating research in better understanding of the nature of biases induced by varied RNN structures.
|
There has been prior work on using LSTMs 2 Sentence taken from the dataset made available by From the biological point of view, According to Dale's principle, a neuron is either excitatory or inhibitory In the postsynaptic neuron, the integration of synaptic potentials is realized by the addition of excitatory (+ve) and inhibitory (-ve) postsynaptic potentials (PSPs). PSPs are electronic voltages, that decay as a function of time due to spontaneous reclosure of the synaptic channels. The decay of the PSPs is controlled by the membrane constant τ , i.e., the time required by the PSP to decay to 37% of its peak value Here we present our proposed architecture, which we call the Decay RNN (DRNN). Our architecture aims to model the decaying nature of the voltage in a neuron membrane after receiving impulses from the surrounding neurons. At the same time, we incorporate Dale's principle in our architecture. Thus, our model captures both the microscopic and macroscopic properties of a group of neurons. Adhering to the stated phenomena, we define our model with the following update equations for given input x (t) at time t: Here f is a nonlinear activation function, W and U are weight matrices, b is the bias and h (t) represents the hidden state (analogous to voltage). We define α ∈ (0,1) as a learnable parameter to incorporate a decay effect in the hidden state (analogous to the decay in the membrane potential). Here α acts as a balancing factor between the hidden state h (t-1) and c (t) . First, the presence of α acts as a coupled gating mechanism to the flow of information (Figure Second, our model also has an intrinsic skip connection deriving out of its formulation. To examine the importance of Dale's principle in the learning process, we made a variant of our Decay RNN without Dale's principle, which we call the Slacked Decay RNN (SDRNN), with updates to c (t) made as follows: To understand the role of the correlation between the hidden states in the Decay RNN formulation, we devised an ablated version of our architecture, which we refer to as the Ab-DRNN. With the following update equation, we remove the mathematical factor (Wh (t-1) ) that gives rise to a correlation between hidden states: For the number prediction (Section 6.1) and grammaticality judgment (Section 6.3) tasks, we used a corpus of 1.57 million sentences from Wikipedia Despite having a large number of training points, these datasets have certain drawbacks, including the lack of a sufficient number of syntactically challenging examples leading to poor generalization over the sentences out of the training data distribution. Therefore, we construct a generalization set as described in Here we will describe our experiments The number prediction task was proposed by 1. The path to success is not straight forward. The model will take the second sentence as input and has to predict the number of the verb (here, singular). Table So far in the literature, when looking at intervening material in agreement tasks, the research has tended to focus on agreement attractors, the intervening nouns with the opposite number to the main noun Table The previous objective was predicting the grammatical number of the verb after providing the model an input sentence only up to the verb. However, this way of training may give the model a cue to the syntactic clause boundaries. In this section, we describe the grammaticality judgment task. Given an input sentence, the model has to predict whether it is grammatical or not. 
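Returning to the Decay RNN itself: since its update equations are only described in prose above, the following PyTorch sketch is a reconstruction under stated assumptions, not the authors' implementation. It assumes c_t = f(W h_{t-1} + U x_t + b) with h_t = α h_{t-1} + (1 − α) c_t, keeps α in (0, 1) via a sigmoid, and approximates Dale's principle by fixing the sign of each hidden unit's outgoing recurrent weights; dropping that sign constraint gives an SDRNN-like variant, and dropping the W h_{t-1} term would give the Ab-DRNN ablation.

```python
import torch
import torch.nn as nn

class DecayRNNCell(nn.Module):
    """Sketch of a Decay RNN update (assumed form):
        c_t = tanh(W h_{t-1} + U x_t + b)
        h_t = alpha * h_{t-1} + (1 - alpha) * c_t, with alpha in (0, 1) learnable.
    Dale's principle is approximated by giving each hidden unit a fixed sign
    (excitatory or inhibitory) and using the magnitude of the recurrent weights."""

    def __init__(self, input_size, hidden_size, excitatory_frac=0.8, dale=True):
        super().__init__()
        self.U = nn.Linear(input_size, hidden_size, bias=True)
        self.W = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.1)
        self.alpha_raw = nn.Parameter(torch.tensor(0.0))   # sigmoid -> alpha in (0, 1)
        self.dale = dale
        signs = torch.ones(hidden_size)
        signs[int(excitatory_frac * hidden_size):] = -1.0   # remaining units inhibitory
        self.register_buffer("signs", torch.diag(signs))

    def forward(self, x_t, h_prev):
        W = self.W.abs() @ self.signs if self.dale else self.W  # SDRNN-like: drop sign constraint
        c_t = torch.tanh(h_prev @ W.T + self.U(x_t))            # Ab-DRNN would drop h_prev @ W.T
        alpha = torch.sigmoid(self.alpha_raw)
        return alpha * h_prev + (1.0 - alpha) * c_t

# Run the cell over a toy sequence (batch=2, steps=5, input dim=4, hidden dim=8).
cell = DecayRNNCell(input_size=4, hidden_size=8)
x = torch.randn(2, 5, 4)
h = torch.zeros(2, 8)
for t in range(x.size(1)):
    h = cell(x[:, t, :], h)
print(h.shape)  # torch.Size([2, 8])
```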
To perform well on this task, the model would presumably need to allocate more resources to determine the locus of ungrammaticality. For example, consider the following pair of sentences 2 : 1. The roses in the vase by the door are red. 2. *The roses in the vase by the door is red. The model has to decide, for input sentences such as the above, whether each one is grammatically correct or not. Table Word-level language modeling is a task that helps in the evaluation of the model's capacity to capture the general properties of language beyond what is tested in specialized tasks focused on, e.g., subjectverb agreement. We use perplexity to compare our model's performance against standard sequential recurrent architectures. Table Targeted syntactic evaluation In this paper, we proposed the Decay RNN, a bioinspired recurrent network that emulates the decaying nature of neuronal activations after receiving excitatory and inhibitory impulses from upstream neurons. We have found that the balance between the free term (h (t) ) and the coupled term (Wh (t) ) enabled the model to capture syntax-level dependencies. As shown by From the cognitive neuroscience perspective, it would be interesting to investigate if the proposed Decay RNN can capture some aspects of actual neuronal behaviour and language cognition. Our results here do at least indicate that the complex gating mechanisms of LSTMs (whose cognitive plausibility has not been established) may not be essential to their performance on many linguistic tasks, and that simpler and perhaps more cognitively plausible RNN architectures are worth exploring further as psycholinguistic models. In this section, we present the trends in the testing performance of the LSTM and the Decay RNN (DRNN) for the grammaticality judgment task. Figure In Section 6.3, we saw that in terms of testing accuracy for grammaticality judgment, the Slacked Decay RNN (SDRNN) outperformed the Decay RNN (DRNN). For a robust investigation of this behaviour, we tested our models on the generalization set and mentioned a subset of our results on grammaticality judgment in Table In the main text, we describe the balancing effect of α in the Decay RNN model. We present the trend in the learned value of α throughout training for the grammaticality task for various initializations in Figure
| 1,107 | 2,587 | 1,107 |
CLIReval: Evaluating Machine Translation as a Cross-Lingual Information Retrieval Task
|
We present CLIReval, an easy-to-use toolkit for evaluating machine translation (MT) with the proxy task of cross-lingual information retrieval (CLIR). Contrary to what the project name might suggest, CLIReval does not actually require any annotated CLIR dataset. Instead, it automatically transforms translations and references used in MT evaluations into a synthetic CLIR dataset; it then sets up a standard search engine (Elasticsearch) and computes various information retrieval metrics (e.g., mean average precision) by treating the translations as documents to be retrieved. The idea is to gauge the quality of MT by its impact on the document translation approach to CLIR. As a case study, we run CLIReval on the "metrics shared task" of WMT2019; while this extrinsic metric is not intended to replace popular intrinsic metrics such as BLEU, results suggest CLIReval is competitive in many language pairs in terms of correlation to human judgments of quality. CLIReval is publicly available at https: //github.com/ssun32/CLIReval.
|
Machine translation (MT) is the task of automatically translating sentences from a source language to a target language. A natural question that arises is how do we determine whether an MT system is translating sentences well? One answer is that we can engage human translators to evaluate the translated sentences manually. Unfortunately, evaluating translations can be relatively time-consuming and worse, the fact that the quality of translation is inherently subjective can lead to variations among different human translators. The desire for fast and consistent evaluation has led to the emergence of a plethora of automatic evaluation metrics such as BLEU There are also some proposals to evaluate the quality of translations with the help of extrinsic proxy tasks. One downstream task that relies heavily on MT but has not been used as a method to evaluate MT systems is the task of Cross-Lingual Information Retrieval (CLIR). CLIR is a task in which search queries are issued in one language, and the retrieved relevant documents are written in a different language. Two commonly used methods in CLIR are query translation, where queries are translated into the same language as the documents and document translation where documents are translated into the same language as the queries CLIR is an active field of research, and previ-ous works suggest that the performance of CLIR correlates highly with the quality of the MT CLIReval is a lightweight python-based MT evaluation toolkit that consumes the same inputs as other automatic MT evaluation tools such as multibleu.perl and SacreBLEU As a case study, we test CLIReval on the metrics shared task of WMT2019 Our key contributions in this work can be summarized as follows: 1. We release CLIReval, 2. We demonstrate that CLIReval can perform as well as popular intrinsic MT metrics on recent WMT metrics shared task, without supervision from external datasets and domain-based parameter tuning. Results suggest that CLIR is a feasible proxy task for MT evaluation and is worth further exploration in future research.
|
Given a set of source documents S, an MT system φ converts S into a set of translated documents, T = φ(S) . Intrinsic MT metrics directly calculate an aggregated score between the sentences in T and sentences in R, where R is a set of reference documents. This approach makes several assumptions. First, CLIReval implements the document translation approach to CLIR and evaluates MT quality in that context; additionally, we assume that ρ is a robust and reasonable IR engine that can be used across a wide range of situations. Second, we assume R contains the "correct" translations of S, and that ρ(Q, R) is a good approximation of the optimal search results. Third, we assume that automatically-generated Q can mimic that actual information needs of manually-crafted queries. If these caveats are acknowledged, then CLIReval is a reasonable tool for MT evaluation. 5. Finally, CLIReval evaluates the search results from MT-IR and relevance judgment labels from REF-IR with trec eval, 3 a standard evaluation toolkit used by the information retrieval community. We emphasize that the above steps are achieved with a single easy-to-use script: CLIReval is as simple as executing the following command: where the inputs are standard text files that 3 CLIReval ingests a system output translation (MT) file which contains documents translated by an MT system and a reference (REF) file, which contains reference translations of the same source documents. Our system supports two input file formats: 1. The SGML format commonly used by the news translation shared task from the annual conference on machine translation 2. A text file where each line contains a sentence. A user can supply an optional mapping file that maps a line number to a (document id and, segment id) tuple. If a mapping file is not specified, CLIReval will create an artificial document boundary every N sentences. The query generator module ingests data in the REF file and automatically generates search queries. CLIReval has two modes for query generation, which can be specified with the query mode argument: 1. In sentences mode, the query generator extracts all reference sentences from the input 2. In unique terms mode, the query generator treats all unique terms as queries. For Elasticsearch, these terms can be obtained from the term vectors of all indexed documents. We recognize that using sentences or unique terms as queries might be less ideal than using real search queries, but getting relevant humangenerated queries can be time-consuming and expensive. Our query generation methods are cheap and fast, which enables quick experimentation. Examples of R and T are shown in Figure To ensure consistent and reproducible results, we choose Elasticsearch First, Elasticsearch has built-in analyzers for a wide variety of languages, which allows CLIReval to support many translation tasks beyond English as the target language. Analyzers are Elasticsearch modules that preprocess and tokenize queries and documents according to language-specific rules. It also implements stopwords removal and stemming. These are important operations that affect the quality of search results. Second, Elasticsearch implements many competitive retrieval models used by IR researchers and practitioners. By default, CLIReval uses the Okapi BM25 Third, Elasticsearch is a widely used search engine solution that is supported on various platforms. This increases the ease of installation for users of CLIReval. 
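As an illustration of the two query-generation modes, the sketch below builds queries from reference documents in pure Python. In the toolkit itself the unique terms come from Elasticsearch term vectors and language-specific analyzers; the regex tokenization, function name and toy documents here are simplifications for illustration only.

```python
import re
from collections import OrderedDict

def generate_queries(reference_docs, query_mode="sentences"):
    """Turn reference documents into synthetic CLIR queries.
    'sentences' mode: every reference sentence becomes a query.
    'unique_terms' mode: every unique term across the references becomes a query."""
    if query_mode == "sentences":
        return [sent for doc in reference_docs for sent in doc]
    if query_mode == "unique_terms":
        terms = OrderedDict()                       # preserves first-seen order
        for doc in reference_docs:
            for sent in doc:
                for tok in re.findall(r"\w+", sent.lower()):
                    terms.setdefault(tok, None)
        return list(terms)
    raise ValueError(f"unknown query_mode: {query_mode}")

# Each reference document is a list of sentences (the REF side of the MT evaluation).
refs = [
    ["The parliament approved the budget.", "The vote was close."],
    ["Scientists discovered a new species of frog."],
]
print(generate_queries(refs, "sentences"))
print(generate_queries(refs, "unique_terms")[:8])
```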
CLIReval separately indexes the documents from MT and REF files into two instances of Elasticsearch. It then queries the Elasticsearch instances with the generated query strings. For every query, Elasticsearch returns the top 100 documents ranked by BM25 scores. Since trec eval only accepts discrete relevance judgment labels, the relevance label converter module is used to convert search scores from REF-IR into discrete labels. We implement three ways of converting raw BM25 scores of REF-IR into discrete relevance judgment labels: The query in document method The percentile method assigns 1 to documents with BM25 scores in the top 25 percentile of all document scores returned by the IR system and 0 otherwise. The cutoff percentile value can be adjusted with the n percentile argument. Th Jenks methods uses Jenks natural breaks optimization 9 to automatically break a list of BM25 scores into different classes. This is achieved by minimizing the variance of BM25 scores within a class and at the same time maximize the variance of average BM25 scores between classes To summarize: after the queries and relevance labels are prepared (as in Section 3.2 and 3.4), the MT output T (e.g. Figure The trec eval toolkit returns a large number of IR metrics but CLIReval is configured to return only two of the most popular IR metrics by default: • Mean average precision (MAP) is the mean of the average precision scores for each query (Buckley and Voorhees, 2005). • Normalized discounted cumulative gain (NDCG) is a metric that measures the usefulness of documents based on their ranks in the search results We choose MAP because it is a widely understood metric, and NDCG because it allows for multiple levels of relevance labels. We follow standard practice in IR benchmark datasets such as CLIReval is written in Python 3 and works on Python 3.5 and later. Elasticsearch requires at least Java 8. We provide a shell script that automatically downloads and installs Elasticsearch 6.5.3 and the latest version of trec eval. It also installs additional Elasticsearch plugins that support additional languages. In total, CLIReval has built-in support for 36 languages and for unsupported languages, it will fall back to the default standard analyzer, which is based on the Unicode text segmentation algorithm. To demonstrate the utility of CLIReval, we test it on the metrics shared task of WMT2019. In total, there are 18 language directions, and for every language direction, a reference file and 11 to 22 system generated translation files are provided. In every reference file, there are around 1000 to 2000 sentences in 70 to 140 documents. The only exceptions are French-German and German-French, where all sentences are placed in the same document. Since document boundaries are not clearly defined in these language directions, we are excluding them from this case study. We used an Intel Xeon E5 Linux server with 64GB RAM. For every language direction, CLIReval runs consistently at the rate of around 0.2 to 0.3 seconds per document and it takes less than a minute to get results. We use the official evaluation scripts Table We present CLIReval, an open-source pythonbased evaluation toolkit for machine translation. LD BLEU NIST TER BEER MAP@10 NDCG@10 MAP@10 NDCG@10 de→cs 0.941 0. Rather than directly evaluating translated sentences against reference sentences, CLIReval transforms the inputs into the closely related task of CLIR, without the need for annotated CLIR dataset. 
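The percentile conversion and the per-query scoring can be illustrated as follows. This is a hedged sketch rather than the toolkit's code: it interprets n_percentile as the percentile threshold (75 ⇒ the top 25% of REF-IR BM25 scores become relevant), and it computes average precision for a single query by hand, whereas CLIReval delegates metric computation to trec_eval and averages over all queries to obtain MAP.

```python
import numpy as np

def percentile_relevance_labels(bm25_scores, n_percentile=75):
    """Percentile method: documents whose REF-IR BM25 score reaches the given
    percentile get relevance label 1, all others 0."""
    scores = np.asarray(bm25_scores, dtype=float)
    threshold = np.percentile(scores, n_percentile)
    return (scores >= threshold).astype(int)

def average_precision(ranked_doc_ids, relevant_ids):
    """Average precision of one query, given the MT-IR ranking and the
    relevance judgments derived from REF-IR."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(relevant_ids), 1)

# REF-IR scores for five candidate documents of one query ...
ref_scores = {"d1": 12.3, "d2": 9.8, "d3": 2.1, "d4": 0.4, "d5": 7.6}
labels = percentile_relevance_labels(list(ref_scores.values()))
relevant = {doc for doc, lab in zip(ref_scores, labels) if lab == 1}
# ... and the ranking that MT-IR returned for the same query.
mt_ranking = ["d2", "d1", "d5", "d3", "d4"]
print(relevant, average_precision(mt_ranking, relevant))
```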
The aim of this project is not to replace current automatic evaluation metrics or fix the limitations in those metrics, but to bridge the gap between machine translation and cross-lingual information retrieval and to show that CLIR is a feasible proxy task for MT evaluation. Our case study on the WMT2019 metrics shared task further highlights the potential of CLIR as a proxy task for MT evaluation, and we hope that CLIReval can facilitate future research in this area.
| 1,036 | 2,080 | 1,036 |
Language Independent Probabilistic Context-Free Parsing Bolstered by Machine Learning
|
Unlexicalized probabilistic context-free parsing is a general and flexible approach that sometimes reaches competitive results in multilingual dependency parsing even if a minimum of language-specific information is supplied. Furthermore, integrating parser results (good at long dependencies) and tagger results (good at short range dependencies, and more easily adaptable to treebank peculiarities) gives competitive results in all languages.
|
Unlexicalized probabilistic context-free parsing is a simple and flexible approach that nevertheless has shown good performance
|
For development, we chose the initial sentences of every treebank, where is the number of the sentences in the test set. In this way, the sizes were realistic for the task. For parsing the test data, we added the development set to the training set. All the evaluations on the test sets were performed with the evaluation script supplied by the conference organizers. For development, we used labelled Fscore computed from all tokens except the ones employed for punctuation (cf. section 3.2). Basically, we investigated the performance of a straightforward unlexicalized statistical parser, viz. BitPar In order to determine the grammar rules required by the context-free parser, the dependency trees in the CONLL format have to be converted to constituency trees. Finally the placement of punctuation signs has a major impact on the performance of a parser The most important language-specific information that we made use of was a classification of dependency relations into complements, coordinators/conjuncts, and other relations (adjuncts). Given knowledge about complement relations, it is fairly easy to construct subcategorization frames for word occurrences: A subcategorization frame is simply the set of the complement relations by which dependents are attached to the word. To give the parser access to these lists, we annotated the category of a subcategorizing word with its subcategorization frame. In this way, the parser can learn to associate the subcategorization requirements of a word with its local syntactic context Coordination constructions are marked either in the conjuncts (CH, CZ, DA, DU, GE, PO, SW) or the coordinator (AR, SL). If conjuncts show coordination, a common representation of asyndetic coordination has one conjunct point to another conjunct. It is therefore important to distinguish coordinators from conjuncts. Coordinators are either singled out by special dependency relations (DA, PO, SW) or by their POS tags (CH, DU). In German, the first conjunct phrase is merged with the whole coordinated phrase (due to a conversion error?) so that determining the coordinator as a head is not possible. We also experimented with attaching the POS tags of heads to the categories of their adjunct dependents. In this way, the parser could differentiate between e.g. verbal and nominal adjuncts. In our experiments, the performance gains achieved by this strategy were low, so we did not incorporate it into the system. Possibly, better results could be achieved by restricting annotation to special classes of adjuncts or by generalizing the heads' POS tags. As the treebanks provide a lot of information with every word token, it is a delicate question to de- Figure Whereas the dep-rel information is submitted to the parser directly in terms of the categories, the information in the lemma, POS tag and morphosyntactic features slot was used only for back-off smoothing when associating lexical items with cate- Instead of using the category generalizations supplied with the treebanks directly, manual labour can be put into discovering classifications that behave better for the purposes of statistical parsing. Another strategy that is often used in statistical parsing is Markovization . Generic symbols designate beginning (¡ £B @7 CB ) and end (¡ !¢ ED 9¦7 FD 9¨) of the sibling lists. 
The method can be transferred to plain unlexicalized PCFG: a long rule C → C_1 … C_n is binarized into a chain of rules over intermediate bigram symbols that record the parent category together with the local sibling context, so that the siblings are attached one at a time. If a bigram symbol occurs in less than a certain number of rules (50 in our case), we smooth to the corresponding unigram symbol, which records the parent category and the current sibling only. We used a script of For time reasons, Markovization was not taken into account in the submitted results. We refer to Figures In a last step, we converted the constituent trees back to dependency trees, using the algorithm of While the results coming from the statistical parser are not really competitive, we believe that they nevertheless present valuable information for a machine learner. To give some substance to this claim, we undertook experiments with Zhang Le's Max-Ent Toolkit In a second experiment we added parsing results (obtained by 10-fold cross validation on the training set) in two features: proposed dependency relation and proposed head. Results of the extended learning approach are shown in Figure We have presented a general approach to parsing arbitrary languages based on dependency treebanks that uses a minimum overhead of language-specific information and nevertheless supplies competitive results in some languages (Da, Du). Even better results can be reached if POS tag classifications are used in the categories that are optimized for specific languages (Ge). Markovization usually brings an improvement of up to 2%; a higher gain is reached in Slovene (where many new rules occur in the test set) and Chinese (which has the highest number of dependency relations). Comparable results in the literature are Our second result is that context-free parsing can also boost the performance of a simple tagger-like machine learning system. While a maximum-entropy learner on its own achieves competitive results for only three languages (Ar, Po, Sl), competitive results in basically all languages are produced with access to the results of the probabilistic parser. Thanks go to Helmut Schmid for providing support with his parser and the Markovization script.
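Because the original Markovization formulas are not recoverable from the damaged text above, the following sketch only illustrates the general scheme as described: long rules are binarized into chains of intermediate symbols that carry the parent category plus local sibling context ("bigram symbols"), and a symbol is backed off to a parent-plus-current-sibling ("unigram") form when it occurs in fewer than 50 rules. The concrete symbol format and the helper names are our assumptions.

```python
from collections import Counter

def markovize_rule(parent, children, bigram_counts, min_rules=50):
    """Binarize parent -> c1 ... cn into a chain of binary rules whose
    intermediate symbols carry the parent plus local sibling context (bigram),
    backing off to parent-plus-sibling (unigram) symbols for rare contexts."""
    def symbol(prev, cur):
        bigram = f"<{parent}:{prev},{cur}>"
        return bigram if bigram_counts[bigram] >= min_rules else f"<{parent}:{cur}>"

    rules, prev_sym = [], parent
    for i, child in enumerate(children[:-1]):
        nxt = symbol(child, children[i + 1])
        rules.append((prev_sym, (child, nxt)))
        prev_sym = nxt
    rules.append((prev_sym, (children[-1],)))
    return rules

# Toy frequency table of bigram symbols gathered from the whole treebank.
counts = Counter({"<S:NP,VP>": 120, "<S:VP,PP>": 3})
for lhs, rhs in markovize_rule("S", ["NP", "VP", "PP"], counts):
    print(lhs, "->", " ".join(rhs))
```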
| 444 | 127 | 444 |
Is the Brain Mechanism for Hierarchical Structure Building Universal Across Languages? An fMRI Study of Chinese and English
|
Evidence from psycholinguistic studies suggests that the human brain builds a hierarchical syntactic structure during language comprehension. However, it is still unknown whether the neural basis of such structures is universal across languages. In this paper, we first analyze the differences in language structure between two diverse languages: Chinese and English. By computing the working memory requirements when applying parsing strategies to different language structures, we find that top-down parsing generates less memory load for the right-branching English and bottom-up parsing is less memory-demanding for Chinese. Then we use functional magnetic resonance imaging (fMRI) to investigate whether the brain has different syntactic adaptation strategies in processing Chinese and English. Specifically, for both Chinese and English, we extract predictors from the implementations of different parsing strategies, i.e., bottom-up and top-down. Then, these predictors are separately associated with fMRI signals. Results show that for Chinese and English, the brain utilizes bottom-up and top-down parsing strategies, respectively. These results suggest that the brain adopts parsing strategies with less memory load according to different language structures.
|
A hallmark of human language ability is combining linear sequential word inputs into a hierarchical structure using abstract syntactic rules. This ability enables us to create infinite expressions from finite words. Previous studies have shown that several brain regions are involved in building the hierarchical syntactic structure Therefore, we have two hypotheses about how the brain processes different languages: • H1:The brain mechanism of syntactic processing is universal across different languages. Even though Chinese and English have different dominant structures, the brain uses the same parsing strategy no matter what structure of the language they are processing. • H2: The brain mechanism of syntactic processing is relatively flexible across different languages. The parsing strategy adopted by the brain is regulated by cognitive resources and the strategy with less cognitive load would be preferred. That is, the brain uses different strategies when processing Chinese and English. To test these two hypotheses, we associate the complexity predictors derived from different parsing strategies with the brain imaging data collected when native speakers were listening to stories. The complexity predictors are the number of parsing operations when using a parsing strategy to integrate each word into the tree. The key assumption is that brain regions engaged in syntactic structurebuilding would show increased activity as the number of parsing operations increases. Therefore, if a brain region builds trees following a parsing strategy, then the complexity predictors of this strategy would be able to predict the activation of this brain region. By comparing the prediction performances of different predictors, we can evaluate which parsing strategy better accounts for the brain activity in Chinese and English. From the comparative study, we have the following interesting findings: the dominant predictor for Chinese is bottom-up but for English it is topdown, which is consistent with the less-memorydemanding strategies for each language. The brain regions with significant effects are also different between Chinese and English. However, in further analysis, we find that the data size gap and the correctness of constituency trees both contribute to the brain-region differences. These results support the second hypothesis that the brain adopts parsing strategies with less cognitive load for different languages. In conclusion, our main contributions include: • We investigated the brain mechanism of hierarchical structure building for Chinese and English by exploring the relationship between parsing strategies, language branching directions, and brain activation. • We found that the processing load of parsing strategies correlates with the branching directions and the brain adopts the less-demanding parsing strategies for each language. • Our results help to further understand how the brain processes language and would hopefully inspire artificial neural models to process or represent language more efficiently.
|
Building hierarchical syntactic structures is an important sub-process of language understanding. Existing work that has investigated this sub-process can be categorized into two groups. One is often called controlled experiments that design artificial stimuli to separate the brain activation, such as comparing structured complex sentences or phrases with word lists As a complement to controlled experiments, another line of work used naturalistic experimental paradigms and explored the brain mechanism using encoding models This paper follows the naturalistic experimental paradigm and aims to explore the structural differences between Chinese and English and whether these differences drive the brain to use different parsing strategies in structure-building. The English and Chinese fMRI datasets we use were both collected when native speakers were listening to narrative stories. All these audio stories are naturalistic stimuli and highly representative of the language that humans encounter in everyday life. The English fMRI data we use comes from We collected the Chinese fMRI data from 12 Chinese native speakers when they were listening to a total of 60 stories. Each of the subjects listened to all 60 stories, and each story was listened to once by one subject. During the scanning of fMRI, subjects were instructed to stay still and pay attention to the story they were hearing. All stories were downloaded from the Renmin Daily Review website 3 and each of them lasts from 4 to 7 minutes. The 60 stories contain 52,269 words, forming a vocabulary of 9,153 words. This fMRI dataset is publicly available at Both the Chinese and English fMRI data were preprocessed following the HCP pipeline Chinese and English are two very diverse languages and differ in many aspects. We focus on the branching direction, which is directly correlated with the tree structure. The branching direction of language is about the presented order of the head and the modifier in sentences. In English, sentences are largely left-headed and right-branching, which means the heads usually come before the modifiers. Whereas Chinese is more mixed with rightbranching and left-branching categories We first computed the proportion of the left and right branching structures of the stimuli to see whether there is a real branching-direction difference. To better classify the branching direction of a syntactic tree, we define a subtree as complete if it has at least two nodes and the children of each node in this subtree are also included in this subtree (see Figure With this definition, we computed the branching direction of each phrase node in the English and Chinese stimuli corpora. The results, as shown in Figure Strategies and Branching Directions In the building process of a hierarchical constituency tree, each word in the sentence is parsed following the syntactic rules. A parsing strategy defines the specific parsing directions, whether moving from the words to abstract structures such as phrases and sentences, or starting at the abstract level and working down to the words. Here, we adopt two parsing strategies: top-down parsing, where the parsing begins from the most abstract level (root) to the word level (leaves); and bottomup parsing, where the parsing begins from the word level to the abstract level. 
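A simplified sketch of the branching-direction computation, using NLTK trees: a child counts as "expanding" when it dominates a complete subtree of at least two further nodes, and a phrase node is classified by which of its outermost children expand. The paper's exact per-node criterion is not fully specified in the excerpt, so this heuristic (and the toy sentence) should be read as an approximation.

```python
from nltk import Tree

def expands(node):
    """True for nodes that dominate a complete subtree with at least two further
    nodes (i.e. real phrases rather than preterminals or words)."""
    return isinstance(node, Tree) and node.height() > 2

def branching_direction(node):
    """Classify one phrase node: 'right' if only its rightmost child expands into a
    further phrase, 'left' if only its leftmost child does, 'mixed' if both do."""
    if not isinstance(node, Tree) or len(node) < 2:
        return None
    left, right = expands(node[0]), expands(node[-1])
    if right and not left:
        return "right"
    if left and not right:
        return "left"
    return "mixed" if (left and right) else None

def branching_proportions(trees):
    counts = {"left": 0, "right": 0, "mixed": 0}
    for tree in trees:
        for node in tree.subtrees():
            direction = branching_direction(node)
            if direction:
                counts[direction] += 1
    total = sum(counts.values()) or 1
    return {k: round(v / total, 2) for k, v in counts.items()}

english = Tree.fromstring(
    "(S (NP (DT the) (NN dog)) (VP (VBD chased) (NP (DT the) (NN cat))))")
print(branching_proportions([english]))   # e.g. {'left': 0.0, 'right': 0.5, 'mixed': 0.5}
```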
The parsing process of top-down and bottom-up parsing is illustrated in Figure To further investigate the correlation between the branching direction and the parsing strategy, we computed the required working memory space when applying different parsing strategies to the stimuli corpus of the two languages. We used the incomplete node defined in The results are shown in Table To test the two hypotheses, whether the brain uses the same parsing strategy regardless of language structures or whether the brain chooses a strategy with less cognitive load when processing different structures, we conducted an fMRI experiment as follows. The overall framework is shown in Figure In the building process of a constituency tree, node count is the number of paring operations needed to integrate each word into the tree structure. Therefore, the syntactic node count is directly related to the process of syntactic structure building Apart from the node count predictors, we also compute two low-level linguistic features: sound envelope and word rate, to control for the confounding effects. The sound envelope is computed to represent how the amplitude and frequency of speech sound change over time. And the word rate, following the For the English stimuli, only the transcribed text for all the stories is provided in the dataset. To annotate the constituency trees of the story text, we used the Stanford CoreNLP parser The node count predictors were extracted from these annotated constituency trees using different parsing strategies. We investigate the mapping between the structurebuilding process and brain activation using voxelwise encoding models; that is, using node count features x to predict brain activation y. In practice, fMRI measures the blood-oxygen-level-dependent (BOLD) signal, which changes slowly after the neurons fire. Besides, the frequency of fMRI collection is comparatively slow compared to the speech rate of words. To account for the influence of these two factors, the node count values of words are convolved with a canonical hemodynamic response function (HRF) 5 and then down-sampled to the same sampling rate as the fMRI collection. To control the low-level linguistic effects represented by the word rate wr and the sound envelope snd, we adopt a stepwise ridge regression method as the formalization of encoding models. Specifically, we perform a two-step regression. In the first step, we train the encoding models with word Specifically, for both two steps of ridge regression, we run a nested cross-validation training which contains two loops: the inner loop and the outer loop. Both the inner-loop and the outer loop are 10-fold standard cross-validation. The inner loop chooses the best hyper-parameters (uniformly selected from the log-space from 10 -5 to 10 5 ) and computes the regression weights for each outer loop, and the outer loop tests the computed regression weight. After the two-step regression, we conduct a paired t-test on the outer loop results to extract voxels where adding each parsing node count can significantly improve the prediction accuracy. After brain encoding and the significance test, Figure In both Chinese and English, only one parsing strategy has significant effects, and the significant parsing strategies are different between the two languages. For Chinese, bottom-up parsing involves significant brain regions in the left temporal lobe and the left frontal lobe. Whereas for English, only the top-down parsing shows significant effects. 
The significant parsing strategy for each language is consistent with the one of less working-memory load as described in section 4.2. These results of fMRI experiments support the second hypothesis that the brain adopts parsing strategies with less cognitive load during the hierarchical structurebuilding process. The memory constraint during language understanding has been discussed in existing work. As a sentence unfolds, new words rapidly obliterate previous words As shown in Figure For Chinese, the bottom-up predictor shows significant effects in the LSTG, the LpSTS, and the LIFG, as shown in Figure In this section, we investigate the possible reasons for the cross-language differences in the correlated brain regions. Our analysis is conducted on two experimental aspects, including the data size and the correctness of constituency trees between Chinese and English. As a data-driven method, the results of encoding models would inevitably be affected by data size. The size of our Chinese fMRI data is remarkably larger than the English fMRI data. Therefore, we tested whether the cross-language brain-region differences related to the gap in data size by reducing the size of the Chinese fMRI data to the same level as the English data. As described in section 3, the English fMRI data includes 51 naturalistic stories and 19 subjects, with each subject listening to a subset of the audio stories, and each story being listened to twice. To reduce the Chinese fMRI data to a similar size as the English fMRI data, we randomly divided all subjects into 6 groups with 2 subjects in each group. The fMRI response to each story is averaged within each group. Then, we randomly chose the averaged fMRI response of 55 stories across all groups to form a reduced fMRI dataset, which is approximately the size of the English fMRI data. The same voxel-encoding and significance test were conducted on this Chinese-small dataset. The results are shown in Figure Apart from the data size, the correctness of constituency trees may also influence the encoding results of node count predictors. As mentioned in section 5.2, the constituency trees are manuallylabelled for Chinese stimuli but annotated by the trained Stanford CoreNLP parser for English stimuli. Therefore, the Chinese trees are correct, and the English trees inevitably have mistakes. These mistakes may further affect the encoding performance of node count predictors. To test whether the correctness of constituency trees affects the encoding results, we conduct encoding for Chinese with node count values extracted from the constituency trees annotated by the Stanford CoreNLP parser. Results are shown in Figure In conclusion, the size and quality of data both affect the significant brain regions that the encoding models can find, which also highlights the importance of large-scale high-quality data. Reducing the data size or quality of Chinese data makes the significant brain regions more similar to the significant brain regions in English top-down parsing. However, none of these experimental factors affects the dominant parsing strategy for Chinese, which further supports that the different branching directions are the reason for the different dominant parsing strategies between Chinese and English. To investigate whether the brain mechanism for hierarchical structure building is universal across languages, this work investigated the correlation between language branching directions, parsing strategies, and brain activation. 
By comparing the fitness of the complexity metrics extracted from different parsing strategies in two diverse languages, i.e., Chinese and English, we find experimental results supporting the hypothesis that the language structure may play an important role in determining the parsing strategy that the brain uses. That is, the brain may use different parsing strategies for different language structures to reduce the cognitive load. Our results demonstrate the flexibility of the brain mechanism for language processing and highlight the importance of cross-language studies in studying the brain language comprehension. This work has several limitations, which may restrict the generalization of our findings. Although we speculate that the language branching direction affects the parsing strategy the brain uses and try to prove it through working memory demand, we cannot directly verify it using an encoding framework. Because the Chinese experimental stimuli are rather mixed, the fMRI response of left-branching phrases can hardly be separated from the right ones. Future research can carefully design language stimuli with left-branching and right-branching structures separated, or use a metric other than node count to study the relationship between the brain parsing strategy and the language branching direction. In addition, node count is only associated with parsing difficulty. More detailed information during the tree-building process, such as the phrase nodes to be generated, or the specific parsing operation to be performed, cannot be represented by such a simple metric. Therefore, more powerful representations of the parsing process and the information in the hierarchical tree are needed if we wish to further uncover the mechanism of bran syntactic computation. Future work can use neural language models like BERT to generate more powerful representations.
| 1,265 | 3,055 | 1,265 |
Continual Named Entity Recognition without Catastrophic Forgetting
|
Continual Named Entity Recognition (CNER) is a burgeoning area, which involves updating an existing model by incorporating new entity types sequentially. Nevertheless, continual learning approaches are often severely afflicted by catastrophic forgetting. This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type. In this paper, we introduce a pooled feature distillation loss that skillfully navigates the trade-off between retaining knowledge of old entity types and acquiring new ones, thereby more effectively mitigating the problem of catastrophic forgetting. Additionally, we develop a confidence-based pseudo-labeling for the non-entity type, i.e., predicting entity types using the old model to handle the semantic shift of the non-entity type. Following the pseudo-labeling process, we suggest an adaptive re-weighting type-balanced learning strategy to handle the issue of biased type distribution. We carried out comprehensive experiments on ten CNER settings using three different datasets. The results illustrate that our method significantly outperforms prior state-of-the-art approaches, registering an average improvement of 6.3% and 8.0% in Micro and Macro F1 scores, respectively. 1 * Equal contributions.
|
Named Entity Recognition (NER) is a essential research area in Natural Language Understanding (NLU). Its purpose is to assign each token in a sequence with multiple entity types or non-entity type Deep learning approaches to CNER encounter two primary challenges. The first one is common to all continual learning methods, known as catastrophic forgetting The second one is specific to CNER, involving the semantic shift of the non-entity type. In the conventional NER paradigm, tokens are marked as the non-entity type, indicating that they do not belong to any entity type. In contrast, in the CNER paradigm, tokens marked as the non-entity type imply that they do not belong to any of the current entity types. This implies that the non-entity type may encompass: the true non-entity type, previously learned old entity types, or future ones not yet encountered. As depicted in Figure In this paper, we present a novel approach named CPFD, an acronym for Confidence-based pseudolabeling and Pooled Features Distillation, which utilizes the old model in two significant ways to address the aforementioned challenges inherent in CNER. Firstly, we introduce a pooled features distillation loss that strikes a judicious trade-off between stability and plasticity, thus effectively alleviating catastrophic forgetting. These features are grounded in the attention weights learned by PLMs, capturing crucial linguistic knowledge necessary for the NER task, including coreference and syntax information • We design a pooled features distillation loss to alleviate catastrophic forgetting by retaining linguistic knowledge and establishing a suitable balance between stability and plasticity. • We develop a confidence-based pseudolabeling strategy to better recognize previous entity types for the current non-entity type tokens and deal with the semantic shift problem. To cope with the imbalanced type distribution, we propose an adaptive re-weighting type-balanced learning strategy for CNER. • Extensive results on ten CNER settings of three datasets indicate that our CPFD achieves remarkable improvements over the existing State-Of-The-Art (SOTA) approaches with an average gain of 6.3% and 8.0% in Micro and Macro F1 scores, respectively.
|
Continual Learning learns continuous tasks without reducing performance on previous tasks CNER Traditional NER focuses on the development of various deep learning models aimed at extracting entities from unstructured text CNER aims to train a model across t = 1, ..., T steps, progressively learning an expanding set of entity types. Each step has its unique training set D t , comprising multiple pairs (X t , Y t ), where X t represents an input token sequence with a length of |X t | and Y t represents the corresponding ground truth label sequence encoded in a one-hot format. Notably, Y t only includes labels for the current entity types E t , with all other labels (for example, future entity types E t+1:T or potential old entity types E 1:t-1 ) collapsed into the non-entity type e o . At step t (t>1), considering the old model M t-1 as well as the current training set D t , our objective is to update a novel model M t capable of recognizing entities from all types seen thus far, represented by t i=1 E i . In the above formulation, we pinpoint two significant challenges in CNER. The first one is the issue of catastrophic forgetting Recent studies where A t ℓ and A t-1 ℓ ∈ R K×|X t |×|X t | correspond to the attention weights of layer ℓ for M t-1 and M t respectively, ℓ = 1, ..., L, with K standing for the count of attention heads. However, both the output probabilities distillation found in prior CNER methods FeedForward Network Inputs: Bin was in Japan yesterday Current ground-truth labels Refined pseudo labels Union ℒ !"#"$%&'()*+,- Figure 2021; To this goal, we incorporate a pooling operation into our proposed loss, thus permitting a level of plasticity by consolidating the pooled dimensions By pooling the sequence dimensions, we derive a more permissive loss that preserves only the head dimension, formulated as follows: (2) Our proposed pooling-based framework facilitates the formulation of a more flexible feature distillation loss, striking an improved balance between plasticity and stability. Based on Equation (2), we can moderately sacrifice plasticity to enhance stability through less aggressive pooling, achieved by aggregating statistics across only one of the head and sequence dimensions: L PFD achieves an appropriate balance between excessive rigidity (as demonstrated by Equation ( As previously noted, tokens marked as the nonentity type at step t might actually belong to the authentic non-entity type, previous entity types, or future entity types. Simply categorizing these tokens as the non-entity type could intensify catastrophic forgetting. To tackle this issue of semantic shift linked to the non-entity type, we design a pseudolabeling strategy Formally, we represent the cardinality of the current entity types with E t = card(E t ). We denote the current model's predictions, encompassing the true non-entity type, all the old entity types, and the current ones, with as the target at step t, calculated using the one-hot ground-truth label sequence Y t ∈ R |X t |×(1+E 1 +...+E t ) in step t and pseudo-labels extracted from the predictions of the old model Y t-1 ∈ R |X t |×(1+E 1 +...+E t-1 ) . The process is described as follows: In other words, if a token is not marked as the nonentity type e o , we replicate the ground truth label. Otherwise, we utilize the label predicted by the old model. This pseudo-labeling strategy enables the assignment of the actual semantic label to each token labeled as the non-entity type, provided the token belongs to any of the previous entity types. 
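As a concrete reference for the pooled features distillation term introduced above, a minimal PyTorch sketch is given below. It pools away both sequence dimensions of each layer's attention weights and matches only a per-head statistic between the old and the current model; the exact pooling variant, normalization, and weighting used in CPFD may differ.

```python
import torch
import torch.nn.functional as F

def pooled_feature_distillation(attn_new, attn_old):
    """attn_new / attn_old: lists with one tensor per PLM layer, each of shape
    (K heads, T, T), holding attention weights of the current model and of the
    frozen old model on the same input sequence.

    Pooling both sequence dimensions keeps only a K-dimensional per-head
    statistic, which is less rigid than matching full attention maps and so
    leaves room for plasticity on the new entity types.
    """
    losses = []
    for a_new, a_old in zip(attn_new, attn_old):
        p_new = F.normalize(a_new.sum(dim=(1, 2)), dim=0)  # (K,)
        p_old = F.normalize(a_old.sum(dim=(1, 2)), dim=0)  # (K,)
        losses.append(((p_new - p_old) ** 2).sum())
    return torch.stack(losses).mean()
```

At training time such a term would be weighted by a hyper-parameter and added to the type-balanced classification loss; less aggressive pooling (over only one of the two dimensions) trades some plasticity back for stability.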
Nevertheless, labeling all non-entity type tokens as pseudo-labels can be unproductive, e.g., on uncertain tokens where the old model is likely to falter. Thus, this Vanilla Pseudo-Labeling (VPL) strategy inadvertently propagates errors from the previous model's incorrect predictions to the new model. Inspired by where u denotes the uncertainty of token i as well as τ e is a type-specific confidence threshold. Equation (5) only retains pseudo-labels where the previous model is "confident" enough (u < τ e ). Following the pseudo-labeling process, we find that the count of new-type tokens present in current sequences generally exceeds the count of pseudolabeled old-type tokens. This type-imbalance issue typically skews the updated classifier towards new types. Inspired by where η i denotes the weight of the token at the location i in the sequence X t , computed as follow: where N old , N new and σ(•) are the number of tokens belonging to old entity types E 1:t-1 , the number of tokens belonging to the new entity types E t and the sigmoid function, respectively. Finally, the total loss in CPFD is: with λ a hyper-parameter for balancing losses, and Θ t is the set of learnable parameters for M t . Datasets We conduct the evaluation of CPFD using three widely adopted NER datasets: CoNLL2003 (Sang and De Meulder, 1837), I2B2 In terms of training, we introduce entity types in the same alphabetical order as CFNER Performance Metrics Consistent with CFNER, we utilize Micro F1 (Mi-F1) and Macro F1 score to evaluate the model's performance, taking into account the issue of entity type imbalance in NER. We present the mean result across all steps, encompassing the first, as the ultimate performance. Further, to offer a more detailed analysis, we introduce step-wise performance comparison line plots. To assess the statistical significance of the improvements, we perform a paired t-test with a significance level of 0.05. Baseline Methods We benchmark our CPFD against the recent CNER methods, namely Extend-NER Implementation Details In alignment with prior CNER methods Comparisons with Baselines To substantiate the efficacy of our CPFD method across various CNER settings, we conduct exhaustive experiments on the CoNLL2003, I2B2, and OntoNotes5 datasets. The results obtained from the I2B2 and OntoNotes5 datasets are presented in Table As indicated in Tables In this paper, we lay the foundation for future research in CNER, an emerging field in NLU. We pinpoint two principal challenges in CNER: catastrophic forgetting and the semantic shift problem of the non-entity type. To address these issues, we first introduce a pooled feature distillation loss that carefully establishes the balance between stability and plasticity, thereby better alleviating catas-trophic forgetting. Subsequently, we present a confidence-based pseudo-labeling strategy to explicitly extract old entity types contained in the current non-entity type, better reducing the impact of label noise and dealing with the semantic shift problem. We evaluate CPFD on ten CNER settings across three datasets and demonstrate that CPFD significantly outperforms the previous SOTA methods across all settings. Our pooled features distillation loss necessitates additional computational effort to align with the intermediary features of the old model. 
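For reference, the confidence-based pseudo-labeling step described above reduces to a few lines: a token currently labeled as the non-entity type receives the old model's prediction only if the old model's predictive entropy falls below a per-type threshold. The per-type thresholds are assumed to be precomputed (e.g., median entropies), which is also the source of the extra pre-calculation cost noted below; tensor shapes and the loop are illustrative rather than the exact implementation.

```python
import torch

def confidence_pseudo_labels(y_cur, old_logits, non_entity_id, thresholds):
    """y_cur: (T,) current ground-truth ids, with old entity types collapsed
    into non_entity_id.
    old_logits: (T, C_old) scores of the frozen old model over the true
    non-entity type and all previously learned entity types.
    thresholds: dict mapping an old type id to its entropy threshold
    (assumed to be precomputed per type, e.g. the median entropy).
    """
    probs = old_logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    old_pred = probs.argmax(dim=-1)
    refined = y_cur.clone()
    for i in range(y_cur.size(0)):
        p = old_pred[i].item()
        if (y_cur[i].item() == non_entity_id and p != non_entity_id
                and entropy[i].item() < thresholds[p]):
            refined[i] = p  # keep only confident old-type pseudo-labels
    return refined
```

The refined labels can then be combined with per-token weights that down-weight whichever of the new-type or pseudo-labeled old-type tokens dominates the sequence, in the spirit of the adaptive re-weighting strategy above.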
Our confidencebased pseudo-labeling strategy, which employs median entropy as the confidence threshold, necessitates pre-calculation for each old entity type based on the current training set and the old model, thus extending the training duration. Moreover, although our confidence-based pseudo-labeling strategy helps reduce the prediction errors of the old model, it is not entirely foolproof, and some mislabeled instances may still persist. In relation to ethical considerations, we provide the following clarifications: (1) Our research does not engage with any sensitive data or tasks. ( (3) We offer comprehensive descriptions of the dataset statistics and the hyper-parameter configurations for our method. Our analyses align with the experimental results. (4) In the interest of promoting reproducibility, we plan to make our code accessible via GitHub.
Logic-driven Indirect Supervision: An Application to Crisis Counseling
|
Ensuring the effectiveness of text-based crisis counseling requires observing ongoing conversations and providing feedback, both labor-intensive tasks. Automatic analysis of conversations, at both the full-chat and utterance levels, may help support counselors and provide better care. While some session-level training data (e.g., a rating of patient risk) is often available from counselors, labeling utterances requires expensive post hoc annotation. Yet utterance-level labels not only provide insights into conversation dynamics, but can also support quality assurance efforts for counselors. In this paper, we examine whether inexpensive, and potentially noisy, session-level annotation can help improve utterance labeling. To this end, we propose a logic-based indirect supervision approach that exploits declaratively stated structural dependencies between the two levels of annotation to improve utterance modeling. We show that adding these rules yields an improvement of 3.5% F-score over a strong multi-task baseline for utterance-level predictions. We demonstrate via ablation studies how indirect supervision via logic rules also improves the consistency and robustness of the system.
|
Text-based crisis counseling services like Crisis Text Line Addressing the twin problems of managing counselor workload and ensuring quality requires training new counselors and providing feedback to existing ones. In particular, understanding suicide risk in client utterances may help counselors learn to prioritize high-risk client situations, especially when dealing with multiple chats simultaneously or when fatigued. As Utterance-level risk labeling requires post hoc annotation by experts who follow a coding manual; the process can be slow and expensive. In contrast, session-level risk data is relatively easier to obtain. At the end of a session, in their standard workflow, counselors can tag the risk level (e.g., low-or highrisk) for record keeping requirements. Sessionlevel assessments are undeniably useful In this paper, we ask: Can the easy-to-obtain session-level risk data help improve utterance risk classifiers? These two tasks have structural dependencies between them: session-level classification of risk should be dependent on utterance-level classification, such that a session containing any highrisk utterances should be deemed high risk. This connection paves the way to extract auxiliary signal Overview of the problem and our approach. We have two disjoint datasets of annotated crisis sessions: one set is labeled at the session level (with Higher or Lower risk) by the counselor immediately after the chat ends, the other set is labeled post hoc at the utterance level (with one or more risk status codes shown in Table from the easily obtained session labels to indirectly supervise utterance models. Prior work on indirect supervision with structured prediction We show that the auxiliary supervision via constraints significantly improves utterance risk prediction over both direct supervision and strong multitask baselines. Our analysis reveals that the rules also improve model consistency and robustness. In summary, our contributions are: We introduce a framework for indirect supervision that uses relaxed logic. We instantiate it to the problem of using cheap, abundant, but noisy annotation (sessionlevel risk labels) as auxiliary signal to improve the performance on a low-resource task (utterance-level labels). We show that structural dependencies across tasks help outperform a directly supervised and a strong multi-task baselines.
|
In text-based crisis intervention, a client starts a chat session (also called an encounter) by typing a message, and the first available counselor replies to it. The session goes on till either the client finishes the conversation, or a certain amount of time elapses with no client response. The volume of messages to text-based crisis services presents quality assurance challenges and demands increased counselor training. NLP-based tools can help both with quality control and for counselor feedback during training Once a session concludes, counselors tag the conversation as being higher or lower suicide risk as part of their routine reporting requirements. Consequently, we can organically obtain session-level Table annotation, but perhaps with some noise due to provider fatigue. In contrast, labeling the suicide risk status of client utterances needs careful post hoc analysis over the session. For this process, a group of expert annotators label each utterance using a standard coding system for risk. In this work, we use the crisis chat coding scheme of With these two types of counseling annotationthe cheap and noisy session-level data, and the expensive and slow utterance-level data-we seek to use the naturally occurring session risk assessment signal to improve an utterance risk status model. Datasets. We use two datasets from the regional suicide crisis hotline SafeUT. Both contain encounters consisting of text messages between the client and possibly multiple counselors. Since they were created in different development stages of SafeUT, they are disjoint: one with client utterances labeled for risk status and the other with labeled encounters. No encounter is annotated at both levels. The first dataset, denoted as U , contains 425 sessions labeled by seven annotators: six graduate students and a psychology professor. The average session has 23 utterances, with 13 from the client. Each annotator independently labeled client utterances with a nine-dimensional label indicating a no-code or a combination of risk status Dcodes (Table The two types of labels are tied by structural dependencies. From the definitions of the D-codes, and their associated coding manual Beyond the cross-task dependencies, the definition of the D-codes also entails that the occurrence of certain labels logically necessitates the occurrence of certain others. For example, a client who has attempted suicide in the past had (at least) one lifetime suicide ideation. Hence, an utterance coded with Prior attempt(s) (D 9 ) must also be coded with Lifetime ideation (D 1 ). The structural dependencies between the tasks open the possibility of using encounter-level annotation as indirect supervision for utterance-level risk status coding. Moreover, the dependencies between D-codes can be used to guide models to-wards more consistent and robust utterance risk status prediction despite the paucity of data. We ask: Can we exploit the structural dependencies between the two kinds of annotation and within the D-codes to aid utterance-level prediction? In this work, we introduce a logic-guided indirect supervision framework that uses cross-task dependencies to transfer signal from the session data to the utterance models. The declarative nature of the structural dependencies between the two tasks allows us to express them as predicate logic rules. 
The question of indirect supervision with structural constraints has been studied in the structured prediction literature Instead, we build on the approach presented in The rest of this section expands on this intuition to present a declarative formulation of the problem. The next section focuses on using the formulation to design a loss function for learning. We denote by e = {m 1 , m 2 , . . . , m n } an encounter with n utterances where each m i represents a client or a counselor utterance. We denote by R = {Lower, Higher} the set of risk labels at the session level, and by D the set of all risk status utterance labels in Table We represent the fact that an encounter e has risk r ∈ R as the predicate Risk(e, r). Similarly, we define the predicates HasCode(m, d) to denote the fact that an utterance m has the label d ∈ D, and NoCode(m) to denote that the label of m is No-D. For the declarative loss learning approach, we first need to represent the labeled data and structural constraints in predicate logic. Data Constraints. The dataset of encounters E sets the Risk for each session it contains: ∀(e, r) ∈ E, Risk(e, r). (1) To represent the fact that a client utterance m is labeled with a set of D-codes D * ⊂ D, we need to ensure that (a) the labels of m are in D * , and (b) neither the No-D, nor other D-codes should apply for the message. For notational convenience, we will call these M 1 and M 2 respectively. Using these helper predicates, we can represent a session in the utterance labeled data U . Each client utterance in a session e ∈ U either has a set of D codes associated with it, or has the No-D label. Joint constraint. A session assessed with Lower risk must not contain a client utterance with a high-risk D-code from the set H. This constraint applies for every utterance in the session. Importantly, the rule applies to all sessions, whether they are labeled or not, and in particular, to sessions in both datasets E and U . We can write: ∀e ∈ E ∪ U, ∀m ∈ e, ∀d ∈ H Risk(e, Lower) → ¬HasCode(m, d). (5) D-Code constraints. For a set of pairs of Dcodes (d i , d j ), if the former applies to a message, so should the latter. We will refer to the full set of pairs (Table NoCode constraint. Our final constraint enforces structural consistency among the utterance risk predictions. In the multi-label setting, every utterance either has the No-D label or a combination labels in D, but never both. The constraint holds for all encounters in our data. We write: Full declarative specification. We can state the desired properties involving our predicates as a formula composed by the conjunction of the expressions (1), ( In our declarative formulation, we have three atomic predicates: HasCode, NoCode and Risk. We model the truth value of these predicates as the output probabilities of a transformer-based classifier. We denote the relaxed truth value of the predicate classifiers with square brackets. For instance, given a session e, we denote the predicted probability that the fact Risk(e, Higher) holds as [Risk(e, Higher)]. All the constraints we have encountered will be relaxed into differentiable forms, such that the truth values of the atomic predicates define the truth value of the entire loss under the relaxation. Consequently, learning the three predicates will require optimizing their parameters to maximize the truth value of the relaxed declarative loss. We use a joint neural model for the relaxed truth values of the predicates NoCode, HasCode and Risk. 
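To make the learning signal concrete before describing the network, note that the joint constraint Risk(e, Lower) → ¬HasCode(m, d) can be relaxed with the product-logic (R-Product) implication, whose negative log is a ReLU of a difference of logs; this is the form of loss derived in the next section. The sketch below is illustrative only: tensor shapes, variable names, and the reduction over utterances are assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def joint_constraint_loss(p_lower, p_high_codes, eps=1e-8):
    """Relaxation of  Risk(e, Lower) -> not HasCode(m, d)  for one session.

    p_lower: scalar tensor, predicted probability that the session is Lower risk.
    p_high_codes: (n_client_utts, n_high_risk_codes) tensor of predicted
    probabilities that each client utterance carries a high-risk D-code.

    Under product logic, [a -> b] = min(1, b / a); minimizing -log of this
    truth value gives relu(log a - log b), summed over utterances and codes.
    """
    log_a = p_lower.clamp_min(eps).log()
    log_b = (1.0 - p_high_codes).clamp_min(eps).log()
    return F.relu(log_a - log_b).sum()
```

Because such a penalty uses no gold labels, it can be evaluated on sessions from both E and U, which is precisely how the session-level signal reaches the utterance predictors.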
The network receives an input session and predicts the probabilities of risk for the entire session, and client risk status for each utterance. Our models are based on RoBERTa Given a session, we obtain representations for each utterance by averaging its token RoBERTa embeddings. We input the utterance representations into a 2-layer transformer encoder to obtain session-contextualized utterance embeddings. The average of the utterance embeddings is used to represent the entire session. The session embedding is the input of a linear layer with two outputs, whose softmaxed values serve as the Lower and Higher risk probabilities of the session. These probabilities model the truth values of the Risk predicate. To each utterance embedding in the session, we apply a linear layer with |D| + 1 outputs followed by an element-wise sigmoid activation. These give us the utterance risk status probabilities and the No-D probability, which model the truth value of the HasCode and NoCode predicates. Appendix C gives additional details about the model architecture. Note that since the output probabilities share a common session-contextualized embedding model, they represent a simple multitask model where each one task has the opportunity to influence and improve the other. The key idea behind our relaxation approach is that each boolean operator can be softened into a sub-differentiable function. We follow the recommendations of Medina Grespan et al. ( Applying the relaxation to rules in section 3.2, we can construct loss functions that we then optimize. In other words, every loss defined below has an analogue in section 3.2. Data losses. The expression (1) requires all the predicates representing the labeled sessions in E should hold. This is equivalent to asking the conjunction of Risk(e, r) facts for all (e, r) pairs in E to hold, which is relaxed as the product of its conjuncts. Equivalently, we can minimize the negative log the expression, and recover the standard cross-entropy loss for encounter risk classification. Analogously, we can write the losses for the helper predicates in expressions (2) and (3), These helper losses let us write the loss of the utterance labeled data U , thus relaxing the Boolean expression (4) to recover the binary cross entropy loss for multi-label classification: Joint constraint loss. For the joint constraint (5), using the R-Product definition of implication, we obtain a loss composed of the sum of ReLU functions: where, D-Code constraints loss. In a similar fashion as above, we can derive the D-code dependencies (6). where, NoCode constraint loss. Following the structure of the NoCode constraint (7), we can write the NoCode loss as However, unlike the cases we have seen so far, naively applying the conversion rules gives us a loss that is not stable for learning. This was also observed by Full logic-based loss. Just as the full declarative specification is the conjunction of individual components, the problem of learning the predicate models requires minimizing the total loss: Here, the λ's are non-negative hyper-parameters that regulate the signal from each loss term. Importantly, the unsupervised losses L Joint , L D and L NoCode apply to encounters in both datasets E and U ; they are not defined over ground truth labels. The joint loss serves to transfer signal from the encounter data to the utterance predictors, while the other two unsupervised losses enforce structural consistency in the utterance predictors. 5 Experiments and Results Data. 
We partition the utterance-level dataset U with stratified splits of 135, 144 and 146 encounters for training, development and testing respectively. We split the encounter-level dataset E into 5,393 encounters for training and 597 encounters for testing. Tables Baselines. Our proposed approach optimizes the total loss that includes all the relaxed rule components. We compare our system against two baselines with the same architecture but simplified rule-less losses: predicates HasCode and NoCode, and Risk. Since our goal is to build a better utterance predictor, we use the development set from U for hyperparameter tuning and model selection using the microaverage of the F1 score in multi-label utterance classification. We train the models for 150 epochs with early stopping after 50 epochs using AdamW optimizer Evaluation. For the utterance and chat labels, we report the precision, recall and F 1 micro-averages. Further, we measure the consistency of model predictions by analyzing how much they violate the declarative rules we are incorporating. We report the average performance of the models on the test splits across five different training random seeds. Utterance results. Table We expect the baseline to already have some domain tuning because the RoBERTa embeddings were additionally pre-trained on counseling text. Standard multi-task (MT) classification improves the F 1 score by 3.3% with respect to the baseline, corresponding to a 6.1% increase in recall. We can attribute this improvement to the shared feature space in the transformer encoder layers becoming better from the encounter labeled data. Finally, we observe that introducing the relaxed rules loss components (MT+Rules) produces a F 1 gain of 3.5% over the already improved multi-task system (corresponding to a 10.2% improvement in recall). Each subsequent F 1 improvement is statistically significant at p < 0.05 using the paired t-test. Related to the recall increase, we observe that the F 1 for the majority label No-D dropped. Compared to the baseline's 95.2%, the full and multi-task systems' scores dropped to 91.5% and 88.9% respectively. Importantly, in this domain, the recall improvements are desired. False positive D-code predictions are preferable to missing any important suicide-related cues. Session results. Table The baseline is unsurprisingly as good as random; it does not have any access to session-level risk supervision. Compared to the multi-task baseline, we observe a drop in F 1 performance in our system. We discover that this difference corresponds to a 10.7% drop in precision, but also to a significant gain of 23.5% in recall. These results show that incorporating indirect signal from the rules prioritizes recall which aligns with the goals of suicide risk detection application: Improved recall for the Higher risk label can help focus counselors attention to such clients. For the NoCode constraint, which introduces a mutual exclusion between the No-D label and any D-code for every utterance, we find that the multitask system has more violations than the baseline. This implies that multi-task model's gain in utterance D-code recall over the baseline (Table For the D-Code rules, which enforce dependencies between D-codes, even the baselines have only few violations. Nevertheless, our system recovers perfect consistency with respect to these rules. Lastly, the joint rule prohibits all client utterances from Lower risk encounters from having any high-risk D-code. 
Given that we have a random risk classifier in the baseline, we only compare system violation performance against the multi-task system for this rule. We observe that our system improves in terms of violations for the joint rule implying that it successfully incorporates the knowledge from the L Joint loss during training. To better understand the impact of each rule, we perform an ablation study with respect to the multitask baseline. Tables 7, 8 and 9 report the impact of each rule individually added during training. Adding only NoCode rule. As expected, we see that NoCode rule violations drop when adding only the NoCode rule loss (Table We observe a more dramatic effect on the en- counter risk classifier with a big improvement in recall at the cost of a significant drop in precision, resulting in an overall F 1 drop of 4.8% (Table Adding only D-Code rules. For D-code classification, precision increases at the cost of recall (Table The system has perfect consistency for the D-Code rules as expected. Analyzing the effect of the D-Code loss on the risk classifier, we observe a similar behaviour as using only the NoCode loss. This similarity implies that adding constraints at the utterance level affects the weights in the shared feature space to make the risk classifier more sensitive to risk, i.e. more recall at the cost of precision. Adding only the joint rule. We observe a significant 3.2% gain in F 1 performance corresponding to precision and recall gains of 1.9% and 3.8% respectively (Table Analyzing the performance on the risk classifier we observe a comparable F 1 performance with respect to the multi-task baseline with a considerable 7.5% drop in precision offsetting a significant 17% gain in recall (Table We manually examined false positive and false negative predictions of the MT+Rules model on the development split of U . For this analysis, we used the model corresponding to the random seed that provided the best micro F1 performance on Table Passive [D 2b ] vs Current [D 2 ] Ideation. Confusion between passive and current ideation accounts for 27% of the total errors. We observe that half of these mistakes are edge cases which can be hard to discern even for a human. For example, the D 2b utterance "I am having those thoughts again. Being better off dead" is classified with both D 2b and D 2 . Lifetime [D 1 ] vs Current [D 2 ] Ideation. The inability to distinguish lifetime and current suicidal ideation (perhaps related to deficiencies in temporal reasoning) accounts for 15% of the errors. For example, the D 2 utterance "I'm worried. She has sent me a text saying she was going to commit suicide" is classified with both D 2 and D 1 . Excessive No-code Commonsense Knowledge. We observe that 4% of the errors come from poor commonsense reasoning. For example, our model does not predict D 2 and D 2b for the utterances "a kid on social media posted bloody cuts, the caption said bye bye!...", and "I know I am only alive for my friends and food! LOL" respectively. The table in Appendix E shows additional examples of these errors. Mental Health NLP-based methods have proven useful to detect risk in mental health counseling. Indirect supervision. Our work is conceptually related to an indirect supervision joint inference paradigm (e.g., Logic-driven learning. 
Among a variety of logic-driven learning approaches (e.g., In this work, we study the problem of predicting utterance-level labels in a suicide crisis chat with the goal of better understanding such sessions and providing better feedback to fatigued counselors. We propose a fully declarative framework that integrates different data sources with a logic-guided loss. We experiment with two text-based crisis counseling datasets from the same source, but with different and disjoint annotations. One level of annotation-the session level-occurs naturally but is noisy, while the other level of annotationthe utterance level-is expensive but precise. Our results show that exploiting the structural dependencies among the sources of annotations allows the session labels to help improve the utterance model. Our experiments reveal that simultaneously incorporating more rules into the loss produces better performance in the task of interest (Table Due to hardware limitations of the protected environment server that stores the datasets we use, RoBERTa-base was the best model that could fit in the available GPUs. Although other pre-trained embeddings could provide better performance, we argue that this is orthogonal to our contribution of incorporating indirect supervision under a fully declarative learning framework. Moreover, integrating logic-driven frameworks and prompt-based models like T5 is an interesting future line of work. Choosing RoBERTa as the underlying embedding foundation of our system introduces all the inherent limitations of large language models Hovy and Spruit (2016) list several ethical issues in the study and application of natural language processing, and advocate increased awareness of possible adverse social impacts. This is especially true in mental health care in general, and in crisis services in particular. The data was anonymized following HIPAA compliance guidelines. We use special mask tokens for identifiable information, including names, locations, ZIP codes, ages, phone numbers, related entities (e.g., school, hospital, etc.), and any other numbers. All the data are stored in a HIPAAcompliant cloud folder. Only staff signed under the IRB approval of this project (IRB_00131153) were allowed to have access to the folder. The staff have all been trained with basic knowledge on data confidentiality, privacy, and protection. We pre-process both datasets U and E by prepending special tokens indicating the originator of each utterance in a session: we added the token [#COUNSELOR] or [#CLIENT] to counselor and client utterances accordingly. Each utterance is then encoded with a domain-adapted RoBERTa model of 768-dimensional outputs. Before the utterance encoding, we add the originator tokens to the matrix embedding of RoBERTa. We respectively initialize these tokens by averaging the pre-trained embeddings of the words "client", "counselor" with corresponding direct synonyms (e.g.,"patient"/"therapist"). Similarly, we add and initialize the special anonymization mask tokens (e.g., [#SCHOOL], [#ZIP-CODE], [#PERSON]). Following On top of the RoBERTa utterance embeddings, we use two trasformer encoder layers. Each transformer layer has 8 heads, 2048 feedforward dimension, ReLU activation on the intermediate layer and 1e-5 eps stability value at the normalization layer. We applied a positional encoding layer with dropout probability of 0.2 and a eps value of 1e-12 to the input utterance embeddings before the transformer block. In all, our system has 275 million parameters. 
To obtain an entire session embedding s we average (as described in C.1) the transformer utterance embeddings average({u 1 , u 2 , . . . , u n }) = s We apply a linear layer P u of length 9 and an element-wise sigmoid activation to each client utterance u c ∈ u obtaining a nine-dimensional vector σ(P u (u c )). Each entry in σ(P u (u c )) represents the probability that the utterance u c having each of the D-codes is True. For instance, the first and second coordinates of σ(P u (u c )) are the probabilities that the facts NoCode(u c ) and HasCode(u c , D 1 ) respectively hold. This is, Similarly, we apply a linear layer P s of length 2 and a softmax activation to the session embedding s obtaining a two-dimensional vector softmax(P s (s)). Here, we have that softmax(P s (s))[1] = Risk(e, Lower) and softmax(P s (s))[2] = Risk(e, Higher) We use the relaxed truth-values in the utterance and encounter vectors-σ(P u (u c )) and softmax(P s (s))-to compute all the loss components in (15) using the R-product logic. We do not fine-tune the underlying domain adapted RoBERTa model due to hardware limitations. The data for this project is housed in a secure compute infrastructure whose GPUs size do not allow us to load entire input sessions and their gradients in memory. Multiple runs We train the system using the training splits of the utterance U and session E datasets using 5 different seeds (0,1,2,3,4). We randomly select batches from U (we denote B U ) and E (denoted B E ) until completing each epoch. For B U batches we have the labels to compute the utterance multi-label loss L U and not the session binary loss L E , therefore the latter does not contribute during back-propagation. Similarly, input B E batches update the L E loss but not the L U loss. Importantly, the unsupervised losses L Joint , L D and L NoCode can be computed from both B U and B E batches. We use rescaling weights on L U and L E to compensate label imbalance. In this setting, the size of a batch is defined by the number of sessions, and sessions can have different sizes in terms of contained utterance. Hence, we normalize the loss for B U batches (also for B E batches for implementation convenience) by averaging the utterance losses from all sessions in the batch. This strategy makes the system performance more stable across epochs. Training with rules The MT+Rules system reported in the tables from section 5.2 is obtained from training the baseline Multi-Task (MT) system for 75 epochs until convergence and then continue training adding the rules for 75 epochs more. We found that this strategy mitigates high variance in performance across different runs. Evaluation and model selection We run hyperparameter tuning for 75 epochs, and then train with the best combination for 150 epochs (using seed 1). We select the model from the epoch with best micro averaged F 1 over the client utterances labeled with at least one D-code in the development split of the set U . We stop training after 50 epochs of nonincrease in F 1 and keep the model from the latest best epoch. The hyperparameter search space is the following: • Learning rate (lr): 1e-4, 2e-4, 5e-4, 1e-5, 2e-5, 5e-5, 1e-6, 2e-6, 5e-6 • λ's (eq. 15) : 0.0001, 0.001, 0.01, 0.1, 1, 5, 10 • Batch size (bs): 4, 8, 16 Due to the size of the search space we do not perform full grid hyper-parameter search for all the systems reported. 
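Pulling the pieces of this appendix together, the sketch below shows the shape of the prediction heads that produce the relaxed truth values: a nine-way sigmoid head per utterance for NoCode/HasCode and a two-way softmax head on the averaged session embedding for Risk. The upstream encoder (domain-adapted RoBERTa plus the two transformer layers) is assumed to be computed elsewhere, and the class name and defaults are illustrative rather than the released code.

```python
import torch
import torch.nn as nn

class PredicateHeads(nn.Module):
    """Maps session-contextualized utterance embeddings to relaxed truth values."""

    def __init__(self, dim=768, n_codes=8):
        super().__init__()
        self.utt_head = nn.Linear(dim, 1 + n_codes)  # NoCode plus the D-codes
        self.risk_head = nn.Linear(dim, 2)           # [Lower, Higher]

    def forward(self, utt_emb):
        # utt_emb: (n_utts, dim) embeddings from the assumed upstream encoder.
        # In the paper only client utterances are scored for D-codes; here all
        # rows are scored for simplicity.
        utt_probs = torch.sigmoid(self.utt_head(utt_emb))                 # (n_utts, 9)
        session_emb = utt_emb.mean(dim=0)                                 # (dim,)
        risk_probs = torch.softmax(self.risk_head(session_emb), dim=-1)   # (2,)
        return utt_probs, risk_probs
```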
We first select the best hyperparameters exploring the search space for the baseline models that only includes learning rate, batch size, and λ E (for the multi-task baseline). From this process we discover values for which the baselines do not converge, and discard them for the subsequent search-when adding the rules into the system. For instance, we discard the learning rate values 1e -4, 2e -4, 5e -4, 1e -6, 2e -6, 5e -6, and the λ E values 0.1, 1, 5, 10. We further reduce the search space by incrementally adding rules into the system and exploring the influence of different λ values. For instance, we observe that the multi-task baseline system trained using only the NoCode rule under-performs with λ NoCode values smaller below 1. Due to the running time of each hyper-parameter combination, this aggressive pruning strategy was necessary to make the Code and computing infrastructure We implemented all our experiments in Python, using the PyTorch, Pandas, and scikit-learn libraries. We used a server located in an IRB approved HIPAA protected environment with the following configuration: • CPU: Intel (R) Xeon (R), E5-2640, 2.40 GHz • GPU: NVIDIA TITAN X (Pascal) • RAM 12GB As discussed in we also add their respective contrapositive in the learning loss. In the declarative definition of the loss, we can incorporate each constraint along its contrapositive in two logically equivalent ways -as a conjunction or as a disjunction. For instance, let F 1 and F 2 be Boolean formulas. A constraint of the form F 1 → F 2 , can be added along its contrapositive into a declarative boolean statement as the conjunctive term (F 1 → F 2 ) ∧ (¬F 2 → ¬F 1 ) or as the disjunctive term (F 1 → F 2 ) ∨ (¬F 2 → ¬F 1 ). Although the latter equivalent expressions also generate different relaxation signals, we found through preliminary experiments that adding the constraintcontrapositive disjunction terms accelerates system convergence. As an example, by adding the contrapositive to the joint constraint (5) we obtain: We use the S-Gödel over the R-Product logic (Table Table D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? A D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
Facilitating Terminology Translation with Target Lemma Annotations
|
Most of the recent work on terminology integration in machine translation has assumed that terminology translations are given already inflected in forms that are suitable for the target language sentence. In day-to-day work of professional translators, however, it is seldom the case as translators work with bilingual glossaries where terms are given in their dictionary forms; finding the right target language form is part of the translation process. We argue that the requirement for apriori specified target language forms is unrealistic and impedes the practical applicability of previous work. In this work, we propose to train machine translation systems using a source-side data augmentation method 1 that annotates randomly selected source language words with their target language lemmas. We show that systems trained on such augmented data are readily usable for terminology integration in real-life translation scenarios. Our experiments on terminology translation into the morphologically complex Baltic and Uralic languages show an improvement of up to 7 BLEU points over baseline systems with no means for terminology integration and an average improvement of 4 BLEU points over the previous work. Results of the human evaluation indicate a 47.7% absolute improvement over the previous work in term translation accuracy when translating into Latvian.
|
Translation into morphologically complex languages involves 1) making a lexical choice for a word in the target language and 2) finding its morphological form that is suitable for the morphosyntactic context of the target sentence. Most of the recent work on terminology translation, however, has assumed that the correct morphological forms are apriori known For terminology translation to be viable for translation into morphologically complex languages, terminology constraints have to be soft. That is, terminology translation has to account for various natural language phenomena, which cause words to have more than one manifestation of their root morphemes. Multiple root morphemes complicate the application of hard constraint methods, such as constrained-decoding We propose a necessary modification for the method introduced by systems that are capable of applying terminology constraints: instead of annotating source-side terminology with their exact target language translations, we annotate randomly selected source language words with their target language lemmas. First of all, preparing training data in such a way relaxes the requirement for access to bilingual terminology resources at the training time. Second, we show that the model trained on such data does not learn to simply copy inline annotations as in the case of Our results show that the proposed approach not only relaxes the requirement for apriori specified target language forms but also yields substantial improvements over the previous work
|
To train NMT systems that allow applying terminology constraints Our work is similar to work by Languages and Data. As our focus is on morphologically complex languages, in our experiments we translate from English into Latvian and Lithuanian (Baltic branch of the Indo-European language family) as well as Estonian (Finnic branch of the Uralic language family). For comparability with the previous work, we also use English-German (Germanic branch of the Indo-European language family). For all language pairs, we use all data that is available in the Tilde Data Libarary with an exception for English-Estonian for which we use data from WMT 2018. The size of the parallel corpora after pre-processing using the Tilde MT platform To prepare data with TLA, we first lemmatise and part-of-speech (POS) tag the target language side of parallel corpora. For lemmatisation and POS tagging, we use pre-trained Stanza For validation during training, we use development sets from the WMT news translation shared tasks. For EN-ET and EN-DE, we used the data from WMT 2018, for EN-LV -WMT 2017, and for EN-LT -WMT 2019. MT Model and Training. For the most part, we use the default configuration of the Transformer Evaluation Methods and Data. In previous work, methods were tested on general domain data We compare our work with an NMT system without means for terminology integration (Baseline) and the previous work by Similarly to the previous work, we use two auto- matic means for evaluation: BLEU We use BLEU as an extrinsic evaluation metric as we expect that, when successful, the methods for terminology translation should yield substantial overall translation quality improvements due to correctly translated domain-specific terms. For significance testing, we use pairwise bootstrap resampling We are aware that the automatic evaluation methods are merely an approximation of translation quality. For example, we use lemmatised term exact match accuracy to measure term use in target language translations; however, it does not capture whether the term is inflected correctly. Thus human evaluation is in place. We use the EN-LV language pair to compare TLA against baseline and ETA. We use a 100 sentences large randomly selected ATS subset that contains 147 terms of the original test suite. We employ four professional translators and Latvian native speakers to compare each system's translations according to their overall translation quality and judge individual term translation quality. Specifically, given the original sentence and its two translations (in a randomised order), raters are asked to answer "which system's translation is better overall?". Raters are also given a list of the terms being evaluated and their reference translations (from the term collection) and are asked to classify translations as either "Correct", "Wrong lexeme", "Wrong inflection", or "Other". Figure Automatic Evaluation. We first validate our reimplementation of ETA by testing on the English-German WMT 2017 test set annotated with terms from IATE as used by When evaluated on the ATS, systems using TLA always yield results that are better than the baseline both in terms of BLEU scores (+1.4-7 BLEU) and term translation accuracy (29.8%-47.8%) (see columns 4-11 of Table Results also confirm the finding of the previous work by Human Evaluation. Results of human evaluation of EN-LV systems are summarised in Table The overall sentence translation quality judgements (Table Productivity of NMT models. 
Terminology translation frequently involves the translation of niche lexemes with rare or even unseen inflections. Thus the model's ability to generate novel wordforms is critical for high-quality translations. To verify if our NMT models are lexically and morphologically productive, we analysed Latvian translations of ATS produced by the system using TLA and looked for wordforms that are not present in either source or target language side of the training data. We found 72 such wordforms. Of those 45 or 62.5% were valid wordforms that were not present in training data, of which 28 were novel inflections related to ATS terminology use, while the remaining 17 where novel forms of general words. We interpret this as some evidence that the NMT model, when needed, generates novel wordforms. The remaining 27 or 37.5% were not valid, albeit sometimes plausible, Latvian language words, common types of errors being literal translations and transliterations of English words as well as words that would have been correct, if not for errors with consonant mutation. We proposed TLA-a flexible and easy-toimplement method for terminology integration in NMT. Using TLA does not require access to bilingual terminology resources at system training time as it annotates ordinary words with lemmas of their target language translations. This simplifies data preparation greatly and also relaxes the requirement for apriori specified target language forms during the translation, making our method practically viable for terminology translation in reallife scenarios. Results from experiments on three morphologically complex languages demonstrated substantial and systematic improvements over the baseline NMT systems without means for terminology integration and the previous work both in terms of automatic and human evaluation judging term and overall translation quality.
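For concreteness, the source-side annotation step underlying TLA can be sketched as below: randomly selected, word-aligned source tokens are followed by the lemma of their aligned target word. The inline markers, the alignment format, and the sampling probability are illustrative assumptions; the actual pipeline (Stanza lemmatization and POS tagging, subword segmentation, handling of multi-word alignments, etc.) involves more steps.

```python
import random

def annotate_with_target_lemmas(src_tokens, tgt_lemmas, alignment, p=0.1, seed=0):
    """src_tokens: source-language words.
    tgt_lemmas: lemmas of the target-language words (e.g. produced by Stanza).
    alignment: list of (src_idx, tgt_idx) word-alignment pairs; if a source
    word has several alignment points, only one is kept here for simplicity.
    p: probability of annotating an aligned source word.
    """
    rng = random.Random(seed)
    aligned = dict(alignment)
    out = []
    for i, tok in enumerate(src_tokens):
        out.append(tok)
        if i in aligned and rng.random() < p:
            # <t> ... </t> are illustrative inline markers, not necessarily
            # the exact annotation scheme used in the experiments.
            out += ["<t>", tgt_lemmas[aligned[i]], "</t>"]
    return out

# e.g. annotate_with_target_lemmas(["the", "small", "engine"],
#                                  ["mazs", "dzinējs"], [(1, 0), (2, 1)], p=1.0)
```

At training time the annotated source sentences are paired with unchanged target sentences, so the model must learn to inflect the provided lemma rather than copy it verbatim.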
STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework *
|
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective "wait-k" policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en. * M.M. and L.H. contributed equally; L.H. conceived the main ideas (prefix-to-prefix and wait-k) and directed the project, while M.M. led the implementations on RNN and Transformer. See example videos, media reports, code, and data at
|
Simultaneous translation aims to automate simultaneous interpretation, which translates concurrently with the source-language speech, with a delay of only a few seconds. This additive latency is much more desirable than the multiplicative 2× slowdown in consecutive interpretation. With this appealing property, simultaneous interpretation has been widely used in many scenarios including multilateral organizations (UN/EU), and international summits (APEC/G-20). However, due to the concurrent comprehension and production in two languages, it is extremely challenging and exhausting for humans: the number of qualified simultaneous interpreters worldwide is very limited, and each can only last for about 15-30 minutes in one turn, whose error rates grow exponentially after just minutes of interpreting Unfortunately, simultaneous translation is also notoriously difficult for machines, due in large part to the diverging word order between the source and target languages. For example, think about simultaneously translating an SOV language such as Japanese or German to an SVO language such as English or Chinese: We instead present a very simple yet effective solution, designing a novel prefix-to-prefix framework that predicts target words using only prefixes of the source sentence. Within this framework, we study a special case, the "wait-k" policy, whose translation is always k words behind the input. Consider the Chinese-to-English example in Figs. • Our prefix-to-prefix framework is tailored to simultaneous translation and trained from scratch without using full-sentence models. • It seamlessly integrates implicit anticipation and translation in a single model that directly predicts target words without explictly hallucinating source ones. • As a special case, we present a "wait-k" policy that can satisfy any latency requirements. • This strategy can be applied to most sequence-to-sequence models with relatively minor changes. Due to space constraints, we only present its performance the Transformer • Experiments show our strategy achieves low latency and reasonable BLEU scores (compared to full-sentence translation baselines) on 4 directions: zh↔en and de↔en. 2 Preliminaries: Full-Sentence NMT We first briefly review standard (full-sentence) neural translation to set up the notations. Regardless of the particular design of different seq-to-seq models, the encoder always takes the input sequence x = (x 1 , ..., x n ) where each x i ∈ R dx is a word embedding of d x dimensions, and produces a new sequence of hidden states h = f (x) = (h 1 , ..., h n ). The encoding function f can be implemented by RNN or Transformer. On the other hand, a (greedy) decoder predicts the next output word y t given the source sequence (actually its representation h) and previously generated words, denoted y <t = (y 1 , ..., y t-1 ). The decoder stops when it emits <eos>, and the final hypothesis y = (y 1 , ..., <eos>) has probability At training time, we maximize the conditional probability of each ground-truth target sentence y given input x over the whole training data D, or equivalently minimizing the following loss: 3 Prefix-to-Prefix and Wait-k Policy In full-sentence translation (Sec. 2), each y i is predicted using the entire source sentence x. But in simultaneous translation, we need to translate concurrently with the (growing) source sentence, so we design a new prefix-to-prefix architecture to (be trained to) predict using a source prefix. 3.1 Prefix-to-Prefix Architecture Definition 1. 
Let g(t) be a monotonic nondecreasing function of t that denotes the number of source words processed by the encoder when deciding the target word y t . For example, in Figs. 1-2, g(3) = 4, i.e., a 4word Chinese prefix is used to predict y 3 ="met". We use the source prefix (x 1 , ..., x g(t) ) rather than the whole x to predict y t : p(y t | x ≤g(t) , y <t ). Therefore the decoding probability is: and given training D, the training objective is: Generally speaking, g(t) can be used to represent any arbitrary policy, and we give two special cases where g(t) is constant: (a) g(t) = |x|: baseline full-sentence translation; (b) g(t) = 0: an "oracle" that does not rely on any source information. Note that in any case, 0 ≤ g(t) ≤ |x| for all t. Definition 2. We define the "cut-off" step, τ g (|x|), to be the decoding step when source sentence finishes: For example, in Figs. 1-2, the cut-off step is 6, i.e., the Chinese sentence finishes right before y 6 ="in". Training vs. Test-Time Prefix-to-Prefix. While most previous work in simultaneous translation, in particular Using the example in Figs. 1-2, the anticipation of the English verb is possible because the training data contains many prefix-pairs in the form of (X zài Y ..., X met ...), thus although the prefix x ≤4 ="Bùshí zǒngtǒng zài Mòsikē" (lit. "Bush president in Moscow") does not contain the verb, it still provides enough clue to predict "met".
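To make the prefix-to-prefix decoding probability p(y_t | x_≤g(t), y_<t) concrete, here is a minimal greedy read/write loop driven by an arbitrary policy g. Here predict_next and the token stream are assumed interfaces rather than the API of a particular toolkit, and beam search, subwords, and batching are omitted.

```python
def prefix_to_prefix_greedy_decode(predict_next, source_tokens, g,
                                   eos="</s>", max_len=200):
    """predict_next(src_prefix, tgt_prefix) -> next target token; a placeholder
    for any model that scores the next target word given a source prefix.
    g(t): number of source words the policy wants before emitting target word t
    (1-based)."""
    stream = iter(source_tokens)        # simulates the incoming source stream
    src, tgt, exhausted = [], [], False
    t = 1
    while len(tgt) < max_len and (not tgt or tgt[-1] != eos):
        # READ until g(t) source words are available, or the source ends
        while not exhausted and len(src) < g(t):
            try:
                src.append(next(stream))
            except StopIteration:
                exhausted = True
        # WRITE one target word from the current prefixes
        tgt.append(predict_next(src, tgt))
        t += 1
    return tgt

# With g = lambda t: min(k + t - 1, n) this reduces to a wait-k policy.
```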
|
As a very simple example within the prefix-toprefix framework, we present a wait-k policy, which first wait k source words, and then translates concurrently with the rest of source sentence, i.e., the output is always k words behind the input. This is inspired by human simultaneous interpreters who generally start translating a few seconds into the speakers' speech, and finishes a few seconds after the speaker finishes. For example, if k = 2, the first target word is predicted using the first 2 source words, and the second target word using the first 3 source words, etc; see Fig. For this policy, the cut-off point τ g wait-k (|x|) is exactly |x|k + 1 (see Fig. Test-Time Wait-k. As an example of testtime prefix-to-prefix in the above subsection, we present a very simple "test-time wait-k" method, i.e., using a full-sentence model but decoding it with a wait-k policy (see also Fig. 4 New Latency Metric: Average Lagging Beside translation quality, latency is another crucial aspect for evaluating simultaneous translation. We first review existing latency metrics, highlighting their limitations, aand then propose our new latency metric that address these limitations. Consecutive Wait (CW) The CW of a sentence-pair (x, y) is the average CW over all consecutive wait segments: In other words, CW measures the average source segment length (the best case is 1 for wordby-word translation or our wait-1 and the worst case is |x| for full-sentence MT). The drawback of CW is that CW is local latency measurement which is insensitive to the actual lagging behind. Another latency measurement, Average Proportion (AP) AP has two major flaws: First, it is sensitive to input length. For example, consider our wait-1 policy. When |x| = |y| = 1, AP is 1, and when |x| = |y| = 2, AP is 0.75, and eventually AP approaches 0.5 when |x| = |y| → ∞. However, in all these cases, there is a one word delay, so AP is not fair between long and short sentences. Second, being a percentage, it is not obvious to the user the actual delays in number of words. Inspired by the idea of "lagging behind the ideal policy", we propose a new metric called "average lagging" (AL), shown in Fig. We can infer that the AL for wait-k is exactly k. When we have more realistic cases like the right side of Fig. where τ g (|x|) denotes the cut-off step, and r = |y|/|x| is the target-to-source length ratio. We observe that wait-k with catchup has an AL k. While RNN-based implementation of our wait-k model is straightforward and our initial experiments showed equally strong results, due to space constraints we will only present Transformerbased results. Here we describe the implementation details for training a prefix-to-prefix Transformer, which is a bit more involved than RNN. We first briefly review the Transformer architecture step by step to highlight the difference between the conventional and simultaneous Transformer. The encoder of Transformer works in a self-attention fashion and takes an input sequence x, and produces a new sequence of hidden states z = (z 1 , ..., z n ) where z i ∈ R dz is as follows: Here P W V (•) is a projection function from the input space to the value space, and α ij denotes the attention weights: ) where e ij measures similarity between inputs. Here P W Q (x i ) and P W K (x j ) project x i and x j to query and key spaces, resp. We use 6 layers of self-attention and use h to denote the top layer output sequence (i.e., the source context). 
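For reference, the Average Lagging metric defined earlier in this section can be computed from any fixed policy as sketched below; the helper reproduces the observation above that wait-k has an AL of exactly k when source and target lengths match. Edge cases (e.g., a policy that never reads the full source) are handled only crudely here.

```python
def average_lagging(g, src_len, tgt_len):
    """Average Lagging of a fixed policy g on one sentence pair.
    g(t): number of source words read before emitting target word t (1-based)."""
    r = tgt_len / src_len                 # target-to-source length ratio
    # cut-off step: first target step at which the whole source has been read
    tau = next((t for t in range(1, tgt_len + 1) if g(t) >= src_len), tgt_len)
    return sum(g(t) - (t - 1) / r for t in range(1, tau + 1)) / tau

def wait_k(k, src_len):
    return lambda t: min(k + t - 1, src_len)

print(average_lagging(wait_k(3, 10), src_len=10, tgt_len=10))   # -> 3.0
```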
On the decoder side, during training time, the gold output sequence y * = (y * 1 , ..., y * m ) goes through the same self-attention to generate hidden self-attended state sequence c = (c 1 , ..., c m ). Note that because decoding is incremental, we let α ij = 0 if j > i in Eq. 11 to restrict self-attention to previously generated words. In each layer, after we gather all the hidden representations for each target word through selfattention, we perform target-to-source attention: similar to self-attention, β ij measures the similarity between h j and c i as in Eq. 11. Simultaneous translation requires feeding the source words incrementally to the encoder, but a naive implementation of such incremental encoder/decoder is inefficient. Below we describe a faster implementation. For the encoder, during training time, we still feed the entire sentence at once to the encoder. But different from the self-attention layer in conventional Transformer (Eq. 11), we constrain each source word to attend to its predecessors only (similar to decoder-side self-attention), effectively simulating an incremental encoder: Then we have a newly defined hidden state sequence z (t) = (z When a new source word is received, all previous source words need to adjust their representations. 6 Experiments We evaluate our work on four simultaneous translation directions: German↔English and Chinese↔English. For the training data, we use the parallel corpora available from WMT15 Our implementation is adapted from PyTorchbased OpenNMT Tab. 1 shows the results of a model trained with wait-k but decoded with wait-k (where ∞ means full-sentence). Our wait-k is the diagonal, and the last row is the "test-time wait-k" decoding. Also, the best results of wait-k decoding is often from a model trained with a slightly larger k . Figs. 5-8 plot translation quality (in BLEU) against latency (in AL and CW) for full-sentence baselines, our wait-k, test-time wait-k (using fullsentence models), and our adaptation of Eventually, both wait-k and test-time wait-k approaches the full-sentence baseline as k → ∞. These results are consistent with our intuitions. We next compare our results with our adaptation of Tab. 2 shows human evaluations on anticipation rates and accuracy on all four directions, using 100 examples in each language pair from the dev sets. As expected, we can see that, with increasing k, the anticipation rates decrease (at both sentence and word levels), and the anticipation accuracy improves. Moreover, the anticipation rates are very different among the four directions, with en→zh > de→en > zh→en > en→de Interestingly, this order is exactly the same with the order of the BLEU-score gaps between our wait-9 and full-sentence models: en→zh: 2.7 > de→en: 1.1 > zh→en: 1.6 † > en→de: 0.3 Figure ( † : difference in 4-ref BLEUs, which in our experience reduces by about half in 1-ref BLEUs). We argue that this order roughly characterizes the relative difficulty of simultaneous translation in these directions. In our data, we found en→zh to be particularly difficult due to the mandatory long-distance reorderings of English sentencefinal temporal clauses (such as "in recent years") to much earlier positions in Chinese; see Fig. We showcase some examples in de→en and zh→en from the dev sets and online news in Figs. 9 to 12. In all these examples except Fig. 
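To make the incremental-encoder trick described in this section concrete, the toy attention layer below applies a lower-triangular mask so that each source position attends only to itself and its predecessors; it is a single-head sketch under simplified shapes, not the authors' implementation.

```python
# Sketch of encoder self-attention with left-to-right masking, simulating an
# incremental encoder while still feeding the whole source at training time.
import torch
import torch.nn.functional as F


def causal_self_attention(q, k, v):
    """q, k, v: (seq_len, d). Each position attends only to positions <= itself."""
    n, d = q.shape
    scores = q @ k.transpose(0, 1) / d ** 0.5           # similarity e_ij
    idx = torch.arange(n)
    future = idx[None, :] > idx[:, None]                 # True where j > i
    scores = scores.masked_fill(future, float("-inf"))   # alpha_ij = 0 for j > i
    return F.softmax(scores, dim=-1) @ v


# Row t of the output is then a valid encoder state for the source prefix x_{<=t}.
x = torch.randn(5, 8)
out = causal_self_attention(x, x, x)
```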
The work of In a parallel work, Press and Smith (2018) propose an "eager translation" model which also outputs target-side words before the whole input sentence is fed in, but there are several crucial differences: (a) their work still aims to translate full sentences using beam search, and is therefore, as the authors admit, "not a simultaneous translation model"; (b) their work does not anticipate future words; and (c) they use word alignments to learn the reordering and achieve it in decoding by emitting the token, while our work integrates reordering into a single wait-k prediction model that is agnostic of, yet capable of, reordering. In another recent work, Alinejad et al. ( We have presented a prefix-to-prefix training and decoding framework for simultaneous translation with integrated anticipation, and a wait-k policy that can achieve arbitrary word-level latency while maintaining high translation quality. This prefixto-prefix architecture has the potential to be used in other sequence tasks outside of MT that involve simultaneity or incrementality. We leave many open questions to future work, e.g., adaptive policy using a single model As mentioned in Sec. 3, the wait-k decoding is always k words behind the incoming source stream. In the ideal case where the input and output sentences have equal length, the translation will finish k steps after the source sentence finishes, i.e., the tail length is also k. This is consistent with human interpreters who start and stop a few seconds after the speaker starts and stops. However, input and output sentences generally have different lengths. In some extreme directions such as Chinese to English, the target side is significantly longer than the source side, with an average gold tgt/src ratio, r = |y |/|x|, of around 1.25 To address this problem, we devise a "wait-k+catchup" policy so that the user is still k word behind the input in terms of real information content, i.e., always k source words behind the ideal perfect synchronization policy denoted by the diagonal line in Fig. More formally, with catchup frequency c, the new policy is: g wait-k, c (t) = min{k + t -1ct , |x|} (13) and our decoding and training objectives change accordingly (again, we train the model to catchup using this new policy). We also evaluate our work using Average Proportion (AP) on both de↔en and zh↔en translation comparing with full sentence translation and
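The catchup-policy formula above is garbled in the extracted text; the sketch below assumes the intended form is min{k + t − 1 − ⌊c·t⌋, |x|}, with c the catchup frequency, and treats that reconstruction as an assumption.

```python
# Sketch of the wait-k-with-catchup policy: every roughly 1/c steps the reader
# pauses, so the decoder emits extra target words per source word on average.
import math


def g_wait_k_catchup(t: int, k: int, c: float, src_len: int) -> int:
    return min(k + t - 1 - math.floor(c * t), src_len)


# Example with k = 2, c = 0.25 on an 8-word source: the schedule occasionally
# stays put, which suits a target side ~1.25x longer than the source.
print([g_wait_k_catchup(t, k=2, c=0.25, src_len=8) for t in range(1, 11)])
# -> [2, 3, 4, 4, 5, 6, 7, 7, 8, 8]
```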
A Multi-Modal Context Reasoning Approach for Conditional Inference on Joint Textual and Visual Clues
|
Conditional inference on joint textual and visual clues is a multi-modal reasoning task in which the textual clues provide a prior permutation or external knowledge that is complementary to the visual content and pivotal to deducing the correct option. Previous methods utilizing pretrained vision-language models (VLMs) have achieved impressive performance, yet they show a lack of multi-modal context reasoning capability, especially for text-modal information.
|
Cross modal reasoning is a hot research topic both in natural language processing and computer vision communities. Most cross modal reasoning tasks, such as Visual Question Answering Previous methods To verify the effectiveness of ModCR, we conduct extensive experiments on two cross modal reasoning data sets: PMR Our contributions can be summarised as follows: • We propose a multi-modal in-context reasoning framework for conditional inference on joint textual and visual clues, utilizing the incontext learning capability of PLMs. • To the best of our knowledge, we are the first to introduce the multi-view alignment information between vision and language into the language model to perform cross modal reasoning, bridging the semantic gap between vision and language in PLMs. • Experimental results show that ModCR achieves state-of-the-art performance on two corresponding data sets. It significantly outperforms previous vision-aided language models and pretrained VLMs-based approaches.
|
Pretrained VLMs for Cross Modal Reasoning. Cross modal reasoning Vision-aided Language Models. Images can provide explicit and diverse visual information to improve the imaginative representation of language. Recent works show that vision-aided language models have achieved promising performance on natural language understanding 3 Methodology ModICR focuses on infusing the given multi-modal information: premise, image, and answer, into the language model to make conditional inferences based on textual and visual clues. The overview of ModICR is illustrated in Figure Considering a semantic gap between visual prefixes and text when the language model performs context learning, we devise an alignment mapping network based on a multi-grained vision-language semantic alignmenter to gain the cross-modal align-ment prefix. Finally, the two-type prefixes, premise text, and answer candidate are fed to the language model via the instruction learning way to perform multi-modal context reasoning. Previous methods We denote the obtained sequence representation of the image and the text aligned with the image features to where W dr and b dr are learnable parameters. cross represents the cross-attention calculation process. After obtaining two types of the prefix, we infuse them into an context reasoner to conduct cross modal reasoning, where we adopt the pretrained language model RoBERTa where x i is the output probability on i th answer candidate and q is the label. To make Eq. 2 in the alignment mapping network capture pivotal multi-view alignment information, we will first train it about one epoch for alleviating the cold start problem leading to the collapse of the network. Concretely, we use a linear function to project h ag into the confidence score and employ the cross entropy loss to optimize it locally with the golden label q. The training process is regarded as L 1 . Thus, the whole training process could be defined as where steps shows the optimization step during training and N whole represents the start of the whole training. For inference, we input each answer candidate with premise and image into ModICR to obtain the confidence score and adopt the maximum one as the final result. Conditional inference on joint textual and visual clues is a task that the text provides the prior permutation or the complementary information (external knowledge) with the image. There are few data sets that meet the above requirement in the community. To verify the effectiveness of the proposed model, we first adopt the high-quality human-constructed PMR We compare the proposed method to pretrained LMs and VLMs as follows: BERT VL-BERT ERNIE-VL UNITER (Chen et al., 2020) also expands the BERT architecture to incorporate visual information and power heterogeneous downstream visionlanguage tasks with joint multi-modal embeddings. Oscar OFA CALeC PromptFuse We use the Adam (Kingma and Ba, 2014) optimizer to train the above models on 2 A100 GPUs with a base learning rate of 2e-5, a batch size of 32, and a dropout rate of 0.1. For each sample, we set the maximum number of visual regions extracted by BERT-B Overall Performance. We report the performance of models on PMR and VCR (QR→A) data sets, which are shown in Tables Through the above analysis, we can obtain that it is necessary to introduce vision-language semantic alignment information for vision-aided language models. Furthermore, there is still a large room for improvement in the contextual reasoning capability of the pretrained VLMs. 
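For orientation, the prefix-infusion scheme described in the methodology above can be sketched roughly as follows. Every module, dimension, and name here is an illustrative stand-in (a tiny Transformer encoder in place of the pretrained RoBERTa, a linear map in place of the alignment mapping network), not the authors' code.

```python
# Schematic sketch of scoring one answer candidate by prepending a visual
# prefix and an alignment prefix to the text embeddings of "premise + candidate".
import torch
import torch.nn as nn


class PrefixScorer(nn.Module):
    def __init__(self, d_model: int = 768):
        super().__init__()
        self.visual_proj = nn.Linear(2048, d_model)      # image regions -> visual prefix
        self.align_proj = nn.Linear(d_model, d_model)    # aligned features -> align prefix
        self.encoder = nn.TransformerEncoder(            # stand-in for the pretrained LM
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2
        )
        self.cls_head = nn.Linear(d_model, 1)            # confidence score per candidate

    def forward(self, region_feats, aligned_feats, text_embeds):
        # region_feats: (B, R, 2048); aligned_feats: (B, A, d); text_embeds: (B, T, d)
        visual_prefix = self.visual_proj(region_feats)
        align_prefix = self.align_proj(aligned_feats)
        seq = torch.cat([visual_prefix, align_prefix, text_embeds], dim=1)
        hidden = self.encoder(seq)
        return self.cls_head(hidden[:, 0])               # one logit per candidate
```

Training would then apply a cross-entropy loss over the logits of all candidates for a question, and inference would pick the candidate with the highest score, mirroring the procedure described above.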
To analyze the effectiveness of ModCR in detail, we design multiple model variants; the experimental results are shown in Tables From the performance of the visual prefix and alignment prefix at different lengths in Table Strategies. We present the detailed performance of ModCR with different training strategies in Table 5. By comparing the experimental results of "frozen VLM" and "fine-tune VLM" on the two data sets, we observe that the performance of the proposed method is further improved when all parameters of ModCR are updated during training. Although training is slower, this could further integrate the complementary reasoning capabilities of the VLM and the LM. In addition, fine-tuning only MappNet yields inferior performance, which may be addressed via pretraining on an external large-scale image-text corpus. We report two cases in Figure In this paper, we propose a multi-modal context reasoning approach named ModCR for the scenario of conditional inference on joint visual and textual clues. It regards the given image and text as two types of pre-context states and infuses them into the language model via instruction learning to perform such multi-modal reasoning. The experimental results on two data sets show the effectiveness of ModCR. In the future, we will explore two research directions: 1) how to improve the context learning capability of pretrained VLMs; and 2) conditional inference on complex visual and textual clues, where multiple clues lie in more modalities. The proposed method has several limitations: 1) The current approach achieves strong context reasoning performance in the cross-modal setting of a single text clue and image, but the context reasoning capability in settings containing multiple textual and visual clues, such as video and long text, still needs to be further explored. 2) From the experimental results, we observed that the visual prefix length greatly impacts the stability of language models infused with visual information. Hence, we still need to explore effective and stable vision-aided language models for natural language processing and multi-modal scenarios. 3) We also hope this work could spark further research on improving the long-context reasoning capability of pretrained vision-language models.
M2D2: A Massively Multi-Domain Language Modeling Dataset
|
We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation in language models (LMs). M2D2 consists of 8.5B tokens and spans 145 domains extracted from Wikipedia and Semantic Scholar. Using ontologies derived from Wikipedia and ArXiv categories, we organize the domains in each data source into 22 groups. This two-level hierarchy enables the study of relationships between domains and their effects on in-and out-of-domain performance after adaptation. We also present a number of insights into the nature of effective domain adaptation in LMs, as examples of the new types of studies M2D2 enables. To improve in-domain performance, we show the benefits of adapting the LM along a domain hierarchy; adapting to smaller amounts of fine-grained domainspecific data can lead to larger in-domain performance gains than larger amounts of weakly relevant data. We further demonstrate a tradeoff between in-domain specialization and outof-domain generalization within and across ontologies, as well as a strong correlation between out-of-domain performance and lexical overlap between domains. 1 * Currently at Google Research 1 We release our dataset publicly at
|
Even though they can contain a wide variety of different types of domains, the texts that make up the corpora used to train and evaluate language models (LMs) are often treated as if they are all the same. This makes it challenging to characterize LM performance under diverse data distributions and understand how to effectively adapt LMs to new ones. To address these challenges, we develop M2D2, a Massively Multi-Domain Dataset, with 145 subdomains and a human-curated hierarchy for studying fine-grained domain adaptation. Prior work on domain transfer focuses on a small number of broad domains
|
Art Using M2D2, we investigate the following questions, as examples of the broad classes of new questions that can be asked: (1) how well do coarse and fine domains transfer to each other across the hierarchy? (2) which features and aspects of a domain are important for transfer? (3) how important is domain specificity versus breadth? We perform preliminary experiments analyzing transfer between similar domains, disparate domains, and hierarchically related domains. Moreover, we explore how to select source domains to improve transfer performance. We present baseline experiments using a GPT2 Given the importance of fine granularity domains in language modeling, we hope that M2D2 will encourage the community to further study domain transfer: how do we identify hierarchical finegrained domains in naturally occurring text, and how do we leverage this fine-grained domain hier-archy to improve domain transfer. M2D2 consists of a large quantity of fine-grain domains. Unlike prior work that defines the domain of a corpus using its source (e.g. the web text domain; One of the unique properties of M2D2 is its hierarchical nature, enabling the study of transfer at different levels of domain granularity. We assume a particular corpus to have L 0 , . . . , L K levels of hierarchy, where L 0 refers to the lowest or most coarsegrained/broad level (i.e. the whole dataset), and L K refers to the highest or most fine-grained/specific level. A given level of hierarchy D i j is composed of multiple subdomains {D i+1 0 , . . . , D i+1 N i+1 }, which are represented in the next level of the hierarchy L i+1 . Similarly, we assume that a given subdomain is contained within a larger domain. For the rest of the paper, we use L1 and L2 to represent the two levels of a K level hierarchy that we consider in this paper. We collect M2D2 from two resources, Wikipedia and Semantic Scholar. This allows us to explore domain adaptation in a massively multi-domain setting among domains of varying granularity, while also allowing us to test whether our findings hold across different data sources. Wikipedia We crawl the Wikipedia ontology, M2D2 has the following major unique properties when compared to previous domain adaptation datasets. First, it is massively multi-domain: we have 145 L2 domains grouped into 22 L1 domains, which allows us to test domain adaptation for language modeling on a variety of axes (such as hierarchy, subject matter, and ontology) that would be more difficult with more coarse-grained datasets. Second, M2D2 is hierarchical: this al-lows us to also test the performance of domain specificity versus domain breadth in more flexible adaptation settings. We describe dataset statistics in Table We split each domain into the respective train, validation, and test sets. To prevent data leakage between the domains when pages belong to two or more domains, we construct validation and test sets from pages that are not contained within any other domains on the same level of hierarchy. For example, the page for "Biotechnology" overlaps in domain with both Biology ∈ Natural and Physical Sciences and Engineering ∈ Technology and Applied Sciences so this would not be included in any evaluation set due to the potential for direct leakage. However, the page for "Computer" is only in Computing ∈ Technology and Applied Sciences and therefore could be included in an evaluation set. We include at least 1 million tokens in the validation and test sets, respectively. 
This enables us to have a precise evaluation set of texts that only belong to a single fine-grained domain. As examples of the types of new studies M2D2 enables, we explore a number of key questions about the nature of effective domain adaptation in language models. For example, how does one best specialize a language model to a domain, given an ontology? How well can adapted models be applied out-of-domain, within and across ontologies? What features of target domains are predictive of out-ofdomain transfer? In this section, we present a set of experiments that begin to answer these questions. First, we study the impact of adapting to the L1 and L2 domains of our dataset on in-domain ( §3.2) and outof-domain ( §3.3) language modeling performance. Then, we perform an analysis of lexical features in domains that are predictive of out-of-domain performance ( §3.4). In all experiments, we use the 112M GPT2 model When adapting our GPT2 model to domains in M2D2, we use one of three settings: L1 Adaptation We continue training on a given L1 domain (e.g. Computer Science). We continue training on a given L2 domain (e.g. Machine Learning). L1-to-L2 Adaptation Given a L2 domain (e.g. Machine Learning), we first perform L1 adaptation on its corresponding L1 domain (e.g. Computer Science), and then we further perform L2 adaptation. This setting similar to multi-stage adaptive pretraining approaches used for supervised tasks For all techniques, we evaluate test perplexity on L2 domains validation sets. Due to the large quantity of L2 domains, we aggregate L2 results by their corresponding L1. For each ontology, we report the average and standard deviation (average s.d. ) of perplexities across L2 domains in each L1. The first set of experiments in this study considers the impact of adapting the language model to different levels of the M2D2 ontologies. We only consider in-domain perplexity, or the perplexity of model on the domain it is adapted to. Adaptation improves in-domain performance despite pretraining. Table been exposed to during pretraining (as is the case with Wikipedia; L1 adaptation results in a 5.8 decrease in perplexity). For domains which the language model is less likely to have been exposed to during pretraining, this is more pronounced (as is the case with S2ORC; L1 adaptation results in a 12.7 decease in perplexity). Specificity and hierarchy is more important than broad coverage in adaptation. Next, we observe that in most cases, adapting to L2 domains is more beneficial to in-domain performance than adapting to L1 domains. Adaptation to finer-grained domains better specializes a language model, even though these domains are much smaller than their L1 counterparts. Finally, we observe that using L1-to-L2 adaptation further benefits in-domain performance over L2 adaptation in all cases. Our results suggest that adapting to smaller amounts of domain-specific data leads to more effective in-domain specialization than adapting to large quantities of data that may be more weakly domain-relevant. Moreover, the best results may be achieved by organizing the target domain into subsets of broader and fine-grained data, and adapting along this hierarchy. However, this approach has increased memory and computational requirements relative to solely relying on L1 Adaptation. We also study the effects of our adaptation techniques on out-of-domain performance, by performing zero-shot inference with adapted models on domains (e.g. Art) other than the ones they are adapted to (e.g. 
Machine Learning). We first transfer models between domains in the same ontology (e.g. Wikipedia → Wikipedia), and then across ontologies (e.g. Wikipedia → S2ORC). L2 Adaptation decreases out-of-domain performance. We show out-of-domain performance for each adaptation technique in Table L2 and L1-to-L2 settings when compared to L1 Adaptation. Specific adaptation transfers better to related categories across ontology. Although the two data sources in M2D2 differ considerably in style and content, their ontological categories partially overlap. For example, Mathematics and Art appear in both Wikipedia and Semantic Scholar. Is it possible to transfer between corresponding categories across ontologies? To answer this question, we first manually align L1 domains from Wikipedia and Semantic Scholar with similar ontological categories (e.g., grouping Mathematics from Wikipedia and Mathematics from S2ORC). We then apply a model adapted to an L1 domain in a source ontology onto its corresponding L1 domain in a target ontology. We compare this cross-ontology performance with two baselines: 1) the average out-of-domain performance of other L1 adapted models in the target ontology and 2) the in-domain performance of a model adapted to the target L1 domain. Our results are displayed in Table Summary Our investigations into the out-ofdomain performance of adapted language models reveals a tradeoff between specialization and generalization. The more fine-grained the specialization of the language model, the less one can expect it to be applicable outside of the domain it was trained on. This effect size increases as we move outside the ontology: models trained on one ontology are not useful in other ontologies, despite being trained on similar categories of data. These findings lead us to believe that domain adaptation should be studied from a multi-faceted perspective to exploit specific aspects of domain (e.g. style, content). Future work may look at reducing the tradeoff between highly domain specialized models and out of domain performance, perhaps through ensembling or other approaches. Looking closer at the out-of-domain performance of L1 models, we see intuitive relationships be-tween subject similarity and zero-shot out-ofdomain transfer performance (Table Vocabulary overlap strongly correlates with transfer regardless of part-of-speech. Figure Related domains mostly transfer domainspecific tokens. We analyse domain adaptation at a token-level to characterize what different adaptation settings transfer. Specifically, we measure which tokens are most impacted in terms of perword perplexity when we finetune on a domainspecific corpus. We do this by taking the difference between the softmax-normalized probability of pre- dicting a given word in a given domain when comparing two models adapted to different corpora. We compare S2ORC adapted models in four settings: two best-transferred domains (a proxy for similar domains; easy transfers), two worst transferred L1 domains (a proxy for distant domains; difficult transfers), L1-to-L2 Adaptation (hierarchical domain transfer), and no adaptation (zero-shot performance of the base LM). We show the distribution between domain-specific (terms that appear less than 0.00001% of the time in any other domain) and non-domain-specific terms in Table Summary Our preliminary analyses suggest that simple lexical characteristics of domains are strong indicators of how well an adapted model may generalize. 
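The exact overlap statistic behind this analysis is not given in the text above, so the sketch below uses Jaccard overlap of word types as one reasonable, computationally cheap choice for comparing two domain corpora.

```python
# A simple lexical-overlap indicator between two domain corpora.
from collections import Counter
from typing import Iterable


def vocab(corpus: Iterable[str], min_count: int = 1) -> set:
    counts = Counter(tok.lower() for doc in corpus for tok in doc.split())
    return {w for w, c in counts.items() if c >= min_count}


def lexical_overlap(corpus_a: Iterable[str], corpus_b: Iterable[str]) -> float:
    va, vb = vocab(corpus_a), vocab(corpus_b)
    return len(va & vb) / len(va | vb) if va | vb else 0.0


print(lexical_overlap(["deep learning for physics"], ["physics of deep networks"]))
# -> 0.333...
```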
Developing computationally inexpensive indicators of transfer (as lexical overlap is), is important for domain transfer to find the best out of a large set of candidate corpora to perform adaptation to a target domain. This would allow one to approximately find the best corpus, without the computational overhead of adapting to all candidate corpora. 4 Related Work Domain Adaptation Techniques One approach toward improved pre-trained language models includes building large-scale pre-training datasets that contain a diverse set of domains, such as the Pile We developed M2D2, a new massively multidomain language modeling dataset for studying domain adaptation in language models. M2D2 consists of 145 fine-grained domains (curated from Wikipedia and Semantic Scholar) that are hierarchically organized using domain-specific ontologies. Using M2D2, we find that domain precision is more important than data quantity to improve in-domain performance, a tradeoff between specialization and out-of-domain generalization. We release M2D2 publicly to spur further research on building effective language models on highly heterogeneous data. In this work, we only consider adaptation techniques that assume domains are monolithic and non-overlapping. Future work may instead explore modeling the data as a mixture of domains, which may improve out-of-domain performance. In addition, M2D2 only covers two data sources (Wikipedia and Semantic Scholar). Future work could expand this corpus with ontologies from other data sources, such as Reddit, which have a fine-grained and hierarchical domains. Moreover, data sourced from the web may contain hate speech and other harmful content, which may be reproduced by language models adapted to such data. The data sources we use adhere to research-friendly data licenses, but training models on web-curated data while maintaining the rights of authors as data subjects and creators remains an open problem.
NeuInfer: Knowledge Inference on N-ary Facts
|
Knowledge inference on knowledge graphs, which aims to find connotative (implicit) valid facts and is very helpful for improving the performance of many downstream applications, has attracted extensive attention. However, research has mainly focused on knowledge inference over binary facts; studies on n-ary facts are relatively scarce, even though such facts are also ubiquitous in the real world. This paper therefore addresses knowledge inference on n-ary facts. We represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s), and we propose a neural network model, NeuInfer, for knowledge inference on n-ary facts. Besides handling the common task of inferring an unknown element in a whole fact, NeuInfer can cope with a new type of task, flexible knowledge inference, which aims to infer an unknown element in a partial fact consisting of the primary triple coupled with any number of its auxiliary description(s). Experimental results demonstrate the remarkable superiority of NeuInfer.
|
With the introduction of connotative valid facts, knowledge inference on knowledge graph improves the performance of many downstream applications, such as vertical search and question answering In existing studies for knowledge inference on nary facts, each n-ary fact is represented as a group of peer attributes and attribute values. In practice, for each n-ary fact, there is usually a primary triple (the main focus of the n-ary fact), and other attributes along with the corresponding attribute values are its auxiliary descriptions. Take the above 5-ary fact for example, the primary triple is (John Bardeen, award-received, N obel P rize in P hysics), and other attribute-value pairs including point-in-time : 1956 , together-with : W alter Houser Brattain and together-with : W illiam Shockley are its auxiliary descriptions. Actually, in The above 5-ary fact is a relatively complete example. In the real-world scenario, many n-ary facts appear as only partial ones, each consisting of a primary triple and a subset of its auxiliary description(s), due to incomplete knowledge acquisition. For example, (John Bardeen, awardreceived, N obel P rize in P hysics) with pointin-time : 1956 and it with {together-with : W alter Houser Brattain, together-with : W illiam Shockley} are two typical partial facts corresponding to the above 5-ary fact. For differentiation, we call those relatively complete facts as whole ones. We noticed that existing studies on n-ary facts infer an unknown element in a welldefined whole fact and have not paid attention to knowledge inference on partial facts. Later on, we refer the former as simple knowledge inference, while the latter as flexible knowledge inference. With these considerations in mind, in this paper, by discriminating the information in the same n-ary fact, we propose a neural network model, called NeuInfer, to conduct both simple and flexible knowledge inference on n-ary facts. Our specific contributions are summarized as: • We treat the information in the same n-ary fact discriminatingly and represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s). • We propose a neural network model, NeuInfer, for knowledge inference on n-ary facts. NeuInfer can particularly handle the new type of task, flexible knowledge inference, which infers an unknown element in a partial fact consisting of a primary triple and any number of its auxiliary description(s). • Experimental results validate the significant effectiveness and superiority of NeuInfer. 2 Related Works
|
They can be divided into tensor/matrix based methods, translation based methods, and neural network based ones. The quintessential one of tensor/matrix based methods is RESCAL Translation based methods date back to TransE Neural network based methods model the validity of binary facts or the inference processes. For example, As aforesaid, only a few studies handle this type of knowledge inference. The m-TransH method In these methods, the information in the same n-ary fact is equal-status. Actually, in each n-ary fact, a primary triple can usually be identified with other information as its auxiliary description(s), as exemplified in Section 1. Moreover, these methods are deliberately designed only for the inference on whole facts. They have not tackled any distinct inference task. In practice, the newly proposed flexible knowledge inference is also prevalent. Different from the studies that define n-ary relations first and then represent n-ary facts where each a i : v i (i = 1, 2, . . . , m) is an attributevalue pair, also called an auxiliary description to the primary triple. An element of F ct refers to h/r/t/a i /v i ; A F ct = {a 1 , a 2 , . . . , a m } is F ct's attribute set and a i may be the same to a j (i, For example, the representation of the 5-ary fact, mentioned in Section 1, is: Note that, in the real world, there is a type of complicated cases, say, where more than two entities participate in the same n-ary fact with the same primary attribute. We follow Wikidata In this paper, we handle both the common simple knowledge inference and the newly proposed flexible knowledge inference. Before giving their definitions under our representation form of n-ary facts, let us define whole fact and partial fact first. Definition 1 (Whole fact and partial fact). For the fact F ct, assume its set of auxiliary description(s) as S d = {a i : v i |i = 1, 2, . . . , m}. Then a partial fact of F ct is: F ct = (h, r, t), S d , where S d ⊂ S d , i.e., S d is a subset of S d . And we call F ct the whole fact to differentiate it from F ct . Notably, whole fact and partial fact are relative concepts, and a whole fact is a relatively complete fact compared to its partial fact. In this paper, partial facts are introduced to imitate a typical openworld setting where different facts of the same type may have different numbers of attribute-value pair(s). Definition 2 (Simple knowledge inference). It aims to infer an unknown element in a whole fact. Definition 3 (Flexible knowledge inference). It aims to infer an unknown element in a partial fact. The framework of NeuInfer is illustrated in Figure For an n-ary fact F ct, we look up the embeddings of its relation r and the attributes in A F ct from the embedding matrix M R ∈ R |R|×k of relations and attributes, where R is the set of all the relations and attributes, and k is the dimension of the latent vector space. The embeddings of h, t, and the attribute values in V F ct are looked up from the embedding matrix M E ∈ R |E|×k of entities and attribute values, where E is the set of all the entities and attribute values. In what follows, the embeddings are denoted with the same letters but in boldface by convention. As presented in Figure This component estimates the validity of (h, r, t), including the acquisition of its interaction vector and the assessment of its validity, corresponding to "hrt-FCNs" and "FCN 1 " in Figure Detailedly, the embeddings of h, r, and t are concatenated and fed into a fully-connected neural network. 
After layer-by-layer learning, the last layer outputs the interaction vector o hrt of (h, r, t): where f (•) is the ReLU function; n 1 is the number of the neural network layers; {W 1,1 , W 1,2 , . . . , W 1,n 1 } and {b 1,1 , b 1,2 , . . . , b 1,n 1 } are their weight matrices and bias vectors, respectively. With o hrt as the input, the validity score val hrt of (h, r, t) is computed via a fully-connected layer and then the sigmoid operation: where W val and b val are the weight matrix and bias variable, respectively; σ(x) = 1 1+e -x is the sigmoid function, which constrains val hrt ∈ (0, 1). For simplicity, the number of hidden nodes in each fully-connected layer of "hrt-FCNs" and "FCN 1 " gradually reduces with the same difference between layers. This component estimates the compatibility of F ct. It contains three sub-processes, i.e., the capture of the interaction vector between (h, r, t) and each auxiliary description a i : v i (i = 1, 2, . . . , m), the acquisition of the overall interaction vector, and the assessment of the compatibility of F ct, corresponding to "hrtav-FCNs", "min" and "FCN 2 " in Figure Similar to "hrt-FCNs", we obtain the interaction vector o hrta i v i of (h, r, t) and a i : v i : where n 2 is the number of the neural network layers; {W 2,1 , W 2,2 , . . . , W 2,n 2 } and {b 2,1 , b 2,2 , . . . , b 2,n 2 } are their weight matrices and bias vectors, respectively. The number of hidden nodes in each fully-connected layer also gradually reduces with the same difference between layers. And the dimension of the resulting o hrta i v i is d. All the auxiliary descriptions share the same parameters in this sub-process. The overall interaction vector o hrtav of F ct is generated based on o hrta i v i . Before introducing this sub-process, let us see the principle behind first. Straightforwardly, if F ct is valid, (h, r, t) should be compatible with any of its auxiliary description. Then, the values of their interaction vector, measuring the compatibility in many different views, are all encouraged to be large. Therefore, for each dimension, the minimum over it of all the interaction vectors is not allowed to be too small. Thus, the overall interaction vector o hrtav of (h, r, t) and its auxiliary description(s) is: where min(•) is the element-wise minimizing function. Then, similar to "FCN 1 ", we obtain the compatibility score comp F ct of F ct: where W comp of dimension d × 1 and b comp are the weight matrix and bias variable, respectively. The final score s F ct of F ct is the weighted sum ⊕ of the above validity score and compatibility score: where w ∈ (0, 1) is the weight factor. If the arity of F ct is 2, the final score is equal to the validity score of the primary triple (h, r, t). Then, Equation ( Currently, we obtain the final score s F ct of F ct. In addition, F ct has its target score l F ct . By comparing s F ct with l F ct , we get the binary crossentropy loss: Here, T is the training set and T -is the set of negative samples constructed by corrupting the n-ary facts in T . Specifically, for each n-ary fact in T , we randomly replace one of its elements with a random element in E/R to generate one negative sample not contained in T . We then optimize NeuInfer via backpropagation, and Adam We conduct experiments on two n-ary datasets. The first one is JF17K To run NeuInfer on JF17K and WikiPeople, we transform the representation of their n-ary facts. 
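Stepping back to the scoring components defined above, a compact PyTorch sketch is given below: a validity score for the primary triple, a compatibility score from an element-wise minimum over triple/attribute-value interaction vectors, and a weighted combination. Layer counts, layer sizes, and the exact weighting convention (w on the validity score) are assumptions of the sketch.

```python
# Sketch of the NeuInfer scoring function for an n-ary fact.
import torch
import torch.nn as nn


class NeuInferScorer(nn.Module):
    def __init__(self, k: int = 100, d: int = 200, w: float = 0.1):
        super().__init__()
        self.w = w
        self.hrt_fcn = nn.Sequential(nn.Linear(3 * k, d), nn.ReLU())     # "hrt-FCNs"
        self.val_head = nn.Linear(d, 1)                                   # "FCN_1"
        self.hrtav_fcn = nn.Sequential(nn.Linear(5 * k, d), nn.ReLU())    # "hrtav-FCNs"
        self.comp_head = nn.Linear(d, 1)                                  # "FCN_2"

    def forward(self, h, r, t, attrs, values):
        # h, r, t: (B, k); attrs, values: (B, m, k) for m auxiliary descriptions
        o_hrt = self.hrt_fcn(torch.cat([h, r, t], dim=-1))
        val = torch.sigmoid(self.val_head(o_hrt))                         # val_hrt
        m = attrs.shape[1]
        triple = torch.cat([h, r, t], dim=-1).unsqueeze(1).expand(-1, m, -1)
        o_hrtav_i = self.hrtav_fcn(torch.cat([triple, attrs, values], dim=-1))
        o_hrtav = o_hrtav_i.min(dim=1).values                             # element-wise min
        comp = torch.sigmoid(self.comp_head(o_hrtav))                     # comp_Fct
        return self.w * val + (1 - self.w) * comp                         # weighted sum
```

Training would compare this score against the 0/1 target label with binary cross-entropy over positive facts and corrupted negative samples, as described above; for binary facts (no auxiliary descriptions) the score reduces to the validity score alone.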
For JF17K, we need to convert each attribute value sequence of a specific n-ary relation to a primary triple coupled with a set of its auxiliary description(s). The core of this process is to determine the primary triple, formed by merging the two primary attributes of the n-ary relation and the corresponding attribute values. The two primary attributes are selected based on RAE The statistics of the datasets after conversion or reorganization are outlined in Table As for metrics, we adopt the standard Mean Re- ciprocal Rank (MRR) and Hits@N . For each n-ary test fact, one of its elements is removed and replaced by all the elements in E/R. These corrupted n-ary facts are fed into NeuInfer to obtain the final scores. Based on these scores, the n-ary facts are sorted in descending order, and the rank of the n-ary test fact is stored. Note that, except the nary test fact, other corrupted n-ary facts existing in the training/validation/test set, are discarded before sorting. This process is repeated for all other elements of the n-ary test fact. Then, MRR is the average of these reciprocal ranks, and Hits@N is the proportion of the ranks less than or equal to N . Knowledge inference includes entity inference and relation inference. As presented in Table The hyper-parameters of NeuInfer are tuned via grid search in the following ranges: The embedding dimension k ∈ {50, 100}, the batch size β ∈ {128, 256}, the learning rate λ ∈ {5e -6 , 1e -5 , 5e -5 , 1e -4 , 5e -4 , 1e -3 }, the numbers n 1 and n 2 of the neural network layers of "hrt-FCNs" and "hrtav-FCNs" in {1, 2}, the dimension d of the interaction vector o hrta i v i in {50, 100, 200, 400, 500, 800, 1000, 1200}, the weight factor w of the scores in {0.1, 0.2, . . . , 0.9}. The adopted optimal settings are: k = 100, β = 128, λ = 5e -5 , n 1 = 2, n 2 = 1, d = 1200, and w = 0.1 for JF17K; k = 100, β = 128, λ = 1e -4 , n 1 = 1, n 2 = 1, d = 1000, and w = 0.3 for WikiPeople. Simple knowledge inference includes simple entity inference and simple relation inference. For an nary fact, they infer one of the entities/the relation in Method JF17K WikiPeople MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 RAE 0.310 the primary triple or the attribute value/attribute in an auxiliary description, given its other information. Knowledge inference methods on n-ary facts are scarce. The representative methods are m-TransH The experimental results of simple entity inference are reported in Table Since RAE is deliberately developed only for simple entity inference, we compare NeuInfer only with NaLP on simple relation inference. We perform an ablation study to look deep into the framework of NeuInfer. If we remove the compatibility evaluation component, NeuInfer is reduced to a method for binary but not n-ary facts. Since we handle knowledge inference on n-ary facts, it is inappropriate to remove this component. Thus, as an ablation, we only deactivate the validity evaluation component, denoted as NeuInfer -. The experimental comparison between NeuInfer and NeuInfer - is illustrated in Figure The newly proposed flexible knowledge inference focuses on n-ary facts of arities greater than 2. It includes flexible entity inference and flexible relation inference. For an n-ary fact, they infer one of the entities/the relation in the primary triple given any number of its auxiliary description(s) or infer the attribute value/attribute in an auxiliary description given the primary triple and any number of other auxiliary description(s). 
In existing knowledge inference methods on n-ary facts, each n-ary fact is represented as a group of peer attributes and attribute values. These methods have not poured attention to the above flexible knowledge inference. Thus, we conduct this new type of task only on MRR Hits@1 Hits@3 Hits@10 Ablation study of simple entity inference on JF17K NeuInfer. Before elaborating on the experimental results, let us look into the new test set used in this section first. We generate the new test set as follows: • Collect the n-ary facts of arities greater than 2 from the test set. • For each collected n-ary fact, compute all the subsets of the auxiliary description(s). The primary triple and each subset form a new n-ary fact, which is added to the candidate set. • Remove the n-ary facts that also exist in the training/validation set from the candidate set and then remove the duplicate n-ary facts. The remaining n-ary facts form the new test set. The size of the resulting new test set on JF17K is 34,784, and that on WikiPeople is 13,833. The experimental results of flexible entity and relation inference on these new test sets are presented in Table To further analyze the effectiveness of the proposed NeuInfer method, we look into the breakdown of its performance on different arities, as well as on primary triples and auxiliary descriptions. Without loss of generality, here we report only the experimental results on simple entity inference. The test sets are grouped into binary and n-ary (n > 2) categories according to the arities of the facts. Table Where does the above performance improvement come from? Is it from inferring the head/tail JF17K WikiPeople MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 NaLP In this paper, we distinguished the information in the same n-ary fact and represented each n-ary fact as a primary triple coupled with a set of its auxiliary description(s). We then proposed a neural network model, NeuInfer, for knowledge inference on n-ary facts. NeuInfer combines the validity evaluation of the primary triple and the compatibility evaluation of the n-ary fact to obtain the validity score of the n-ary fact. In this way, NeuInfer has the ability of well handling simple knowledge inference, which copes with the inference on whole facts. Furthermore, NeuInfer is capable of dealing with the newly proposed flexible knowledge inference, which tackles the inference on partial facts consisting of a primary triple coupled with any number of its auxiliary descriptive attributevalue pair(s). Experimental results manifest the merits and superiority of NeuInfer. Particularly, on simple entity inference, NeuInfer outperforms the state-of-the-art method significantly in terms of all the metrics. NeuInfer improves the performance of Hits@3 even by 16.2% on JF17K. In this paper, we use only n-ary facts in the datasets to conduct knowledge inference. For future works, to further improve the method, we will explore the introduction of additional information, such as rules and external texts.
Modeling Multi-hop Question Answering as Single Sequence Prediction
|
Fusion-in-decoder (FID) (Izacard and Grave, 2021) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and pushed the state of the art on single-hop QA. However, the complexity of multi-hop QA hinders the effectiveness of the generative QA approach. In this work, we propose a simple generative approach (PATHFID) that extends the task beyond just answer generation by explicitly modeling the reasoning process to resolve the answer for multihop questions. By linearizing the hierarchical reasoning path of supporting passages, their key sentences, and finally the factoid answer, we cast the problem as a single sequence prediction task. To facilitate complex reasoning with multiple clues, we further extend the unified flat representation of multiple input documents by encoding cross-passage interactions. Our extensive experiments demonstrate that PATHFID leads to strong performance gains on two multihop QA datasets: HotpotQA and IIRC. Besides the performance gains, PATHFID is more interpretable, which in turn yields answers that are more faithfully grounded to the supporting passages and facts compared to the baseline FID model.
|
Leveraging knowledge to make complex reasoning has been a fundamental problem of artificial intelligence. Open-domain question answering (QA) Recent work In this work, we propose PATHFID, a generative QA model that learns to generate an answer along with a reasoning path to improve its capability of multi-hop reasoning. PATHFID extends multi-hop QA beyond just answer generation by explicitly modeling the full reasoning path to resolve the answer with a generative sequence-tosequence model. To this end, we cast the problem as a single sequence prediction task that simultaneously models reasoning path consisting of supporting passages and facts, and eventually the factoid answer. Furthermore, we extend PATHFID to allow for cross-passage interactions between the Figure retrieved passages to obtain more expressive representations from the encoder to facilitate modeling a complex reasoning chain by the decoder. Figure
|
In this section, we formally introduce the problem setup and establish the necessary background. We first describe the multi-hop QA task in a general way. We assume that a collection of K passages are given for a question q: D q = {p 1 , p 2 , . . . , p K }, where D q can be a pre-defined set, or it can also be an output from a text retrieval system (e.g., DPR Fusion-in-Decoder (FID) is a generative reader based on a sequence-to-sequence architecture, initialized from pre-trained models such as T5 Then, the overall answer generation is modeled as a conditional generation p θ (a|X) given X consuming the unified input representation X, where θ represents the set of all model parameters. The model is trained to minimize the cross-entropy loss for generating answer tokens on the decoder side. At inference time, FID first computes X based on the retrieved passages, and then decodes the answer token by token following p θ (a i |a <i , X) with the learned model parameters θ. In this section, we introduce a generative reader (PATHFID) for K-hop QA that jointly generates an alternating sequence of passage-level and factlevel clues on the reasoning path by more explicit fusion of evidence from the pool of input passages to arrive at the correct answer. As illustrated in Figure The opaqueness of the FID model, which makes understanding of the reasoning process more difficult, motivated our approach and its emphasis on exposing the reasoning path. Instead of only modeling answer generation, we propose to jointly model it with the full reasoning path in an hierarchical fashion to derive the answer in a unified way using multi-task maximum likelihood training. We utilize the core input encoding architecture from FID approach (Section 2.2) by introducing a new passage representation that will facilitate supporting fact generation on the reasoning path as illustrated in Figure path n := question: q title: t n context: p path n where we redefine the context representation by inserting special tokens (<f i >) before each sentence of the passage as where s n denotes the i-th sentence of passage p n , and l n is the number sentences it contains. Having redefined the input blocks (b path n ) per passage, we then compute the global input representation similar to Eq. 1 by Note that sentence indicators (<f i >) are shared across all passages, encouraging a more hierarchical passage representation by explicitly breaking them down into sentence-level sub-blocks using the same indicator tokens. The hierarchical design of reasoning path is inspired by the human reasoning process for multihop QA task. More precisely, if a question q requires K-hop reasoning, then we process these K passages in a sequential order alternating between their passage-level and sentence-level evidence until we reach the answer. To this end, let R q = {p r 1 , p r 2 , . . . , p r K } with r i ∈ [1, N ] denote the sequence of passages from the larger pool D q reflecting this reasoning process for locating the answer a for question q. As shown in Figure where T r i represents the i-th title block obtained by inserting a special token (<title-i>) before the title t r j and A denotes the answer block derived by prepending a special token (<answer>) to the answer a as illustrated in Figure PATHFID enables more explicit evidence fusion through the reasoning path to guide the model to towards correct answer in a structured way. 
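The linearization described above can be illustrated with the short helper below. The exact spelling and spacing of the special tokens, and the layout of the fact blocks (sentence markers listed after each <facts-k> tag), are assumptions made for the sketch.

```python
# Illustrative construction of PATHFID-style input blocks and target paths.
from typing import Dict, List


def input_block(question: str, title: str, sentences: List[str]) -> str:
    context = " ".join(f"<f{i + 1}> {s}" for i, s in enumerate(sentences))
    return f"question: {question} title: {title} context: {context}"


def target_reasoning_path(hops: List[Dict], answer: str) -> str:
    # hops: [{"title": ..., "fact_ids": [2, 4]}, ...] in reasoning order
    parts = []
    for k, hop in enumerate(hops, start=1):
        facts = " ".join(f"<f{i}>" for i in hop["fact_ids"])
        parts.append(f"<title-{k}> {hop['title']} <facts-{k}> {facts}")
    parts.append(f"<answer> {answer}")
    return " ".join(parts)


print(target_reasoning_path(
    [{"title": "Ada Lovelace", "fact_ids": [1]},
     {"title": "Analytical Engine", "fact_ids": [2, 3]}],
    answer="Charles Babbage",
))
```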
However, it still relies on the decoder to combine all the clues together, which might still struggle due to lack of cross-passage interactions as input blocks are encoded independently. To address this potential limitation, we propose PATHFID+, where we further extend PATHFID in a way that enables crosspassage interaction by redefining the input block consisting of a pair of passages assuming that a set of passage pairs (p n 1 , p n 2 ) are available for model to consume. In particular, we derive a set of pairs of passages from the initial set D q by D + q = {(p * , p 1 ), (p * , p 2 ), . . . , (p * , p N )} where p * corresponds to the first passage that is possible to immediately hop to from question q, which may be determined by another model, or by executing the original PATHFID on D q in our case. Global input representation X path+ q is obtained similarly (Eq. 3) by except encoding the new blocks b path+ n 1 ,n 2 allowing for cross-passage interactions, while the target reasoning path Y path+ q remains the same as Y path q . Note that <title-i> special markers are shared between new input block b path+ n 1 ,n 2 and target reasoning path Y path+ q to provide the model with additional clue regarding the first passage on the reasoning path while still relaying the complete evidence fusion to the decoder via information redundancy encoded in X path+ q . Having defined global input representation X path q , the decoder autoregressively generates the reasoning path Y path q per token at each step by following self-attention, cross-attention on the entire X path q , and feed-forward modules. So, the overall reasoning path generation is modeled as conditional generation p θ path (Y i=1 log p θ (y i |y <i , X path q ) with teacher forcing over a training set of {(q, a, D q )}. In the inference, the decoder consumes the input representation X path q computed by encoder, and generates the full reasoning path token by token. We then post-process the decoded sequence using the answer indicator (<answer>) to first obtain the answer, followed by recursively parsing the remaining sequence using the special separator tokens (<title-k>, <facts-k>) to reconstruct the title and retrieve its relevant sentences at each hop k. As illustrated in Figure We conduct experiments on two multi-hop question answering datasets: HotpotQA and IIRC. HotpotQA We present our main results on the HotpotQA distractor setting in Table Baseline How faithfully grounded are the generated answers on supporting facts? In Table The first row focuses on the passage-level answer grounding computed by the percentage of the answers found in one of the gold supporting passages, while the second row reports the same analysis on sentence-level. We can observe that PATHFID models significantly improves on how faithfully the generated answers are grounded on the supporting facts both at passage-level and sentence-level granularities. The next two rows provide further insight into the quality of the generated supporting facts by PATHFID models by measuring how often the gold answer can be found in them. This analysis shows that the generated supporting facts are of quite high-quality including the gold answer for more than 95.3% and 96.2% at sentence-level and passage-level, respectively. The last two rows measure the faithfulness of the generated answers on the model generated supporting facts, which is not applicable to FID model as it does not perform supporting fact prediction. 
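The inference-time post-processing described above (split on the answer indicator, then recover per-hop titles and sentence markers) can be sketched as below, assuming the same marker layout as the linearization sketch earlier; the regex-based parsing is illustrative.

```python
# Sketch of parsing a decoded PATHFID reasoning path.
import re


def parse_reasoning_path(decoded: str):
    path_part, _, answer = decoded.partition("<answer>")
    hops = []
    pattern = re.compile(r"<title-(\d+)>(.*?)<facts-\1>(.*?)(?=<title-\d+>|$)", re.S)
    for _, title, facts in pattern.findall(path_part):
        fact_ids = [int(i) for i in re.findall(r"<f(\d+)>", facts)]
        hops.append({"title": title.strip(), "fact_ids": fact_ids})
    return hops, answer.strip()


decoded = ("<title-1> Ada Lovelace <facts-1> <f1> "
           "<title-2> Analytical Engine <facts-2> <f2> <f3> <answer> Charles Babbage")
print(parse_reasoning_path(decoded))
```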
We observe that the generated answers are quite faithfully grounded on the predicted supporting facts, showing the path generation not only improves the answer EM performance but also successfully grounds them on the evidence it generates as part of the full reasoning path. It is important emphasize here that extractive reader models can be guaranteed to output perfectly grounded answers simply by locating the answer in their predicted supporting facts. On the other hand, it is difficult for generative models to ensure 100% answer grounding simply due to its generative na- ture. However, we are able to provide additional evidence validating the answers generated by PATH-FID are significantly grounded in the supporting facts it generates, which might implicitly indicate that the generated reasoning path tightly aligns with the model's underlying process for answer generation. Although this is a strong evidence, it is still quite implicit in exposing the model's prediction process, so we see our approach as a step in the right direction rather than a complete solution. Performance breakdown by the number of supporting facts and question types. In Table A more important motivation behind the performance breakdown analysis was to understand how the supporting fact prediction of PATHFID would change as the number of gold supporting facts grows. Although it starts degrading on examples with more than 2 supporting facts, it still achieves more than 25% Support-EM for bridge questions with up to 4 supporting facts. Recalling the average performance on the whole dataset is less than 60%, we conclude this result might be satisfactory enough, especially for a fully generative Analyzing the evolution of sub-tasks during joint training with PATHFID. In Figure In addition to our main experiments presented in greater detail, we also conduct experiments on IIRC dataset to verify the generalization of the proposed approach. To this end, we closely follow the authors' model-free retrieval setting (referred to as Oracle L+C in Table -3) because the model checkpoints for the baseline retrieval model are not available in the public release. We use a python script 2 provided in the open-sourced repository to replicate the same setting for a fair comparison. In Table In Table Multi-hop question answering. Research on multi-hop QA aims to tackle complex questions that require reasoning across multiple pieces of evidence in multiple documents In this work, we propose a generative question answering (QA) approach that models multi-hop QA as a single sequence prediction task. It learns to generate an answer along with a reasoning path to improve its capability of multi-hop reasoning. Our experiments on prominent multi-hop QA benchmarks, HotpotQA and IIRC, validate the promise and effectiveness of our proposed method PATH-FID and its extension PATHFID+. Future work will explore (1) our PATHFID approach more closely with text retrieval models in open-domain QA scenarios and (2) more explicit grounding on the input information to make our approach even more interpretable and controllable. underlying retriever. Table As discussed in Section D, fine-tuning PATHFID+ with T5-large initialization might require significant resources and non-trivial memory efficient optimization (e.g., gradient checkpointing). To provide a baseline with a smaller model for future research, here we include the results of PATHFID+ with T5-base initialization using the same setting reported in Table Hop ordering. 
HotpotQA benchmark provides annotation only for unordered gold passages, without explicitly specifying which passage corresponds to the k-th hop (e.g., first-hop, second-hop, etc.) on the reasoning path. In our implementation, we combine the heuristic strategies applied by GRR Post-processing for passage title reconstruction. Note that PATHFID generates the titles of the passages on the reasoning path token by token including the separator tokens. However, the decoder might fall into some minor errors during the generation process, which may cause the resulting titles to end up slightly different from the original ones. To account for such minor errors, we leverage the set of titles coming from the input passages and find the most similar among them to our generated passage titles based on token-level F1-score. We call this process title reconstruction and apply it while reporting the performance for supporting fact predictions. Table Model selection. For all the models reported in this work, we perform evaluation at every 500 steps during training by decoding the whole development set on a separate machine in a non-blocking fashion. We then select the best model based on the answer exact-match score performance. However, since PATHFID variants generate more than just the answer, it can be leveraged to optimize for a more holistic metric including the supporting fact prediction performance, offering further control on model selection. We leave further exploration of this phenomenon to future work. Scaling to larger evidence pools for full-wiki setting. As briefly noted in Appendix B, we report results in full-wiki setting using only top-25 paths returned by MDR In Tables 9, 8 and 10, we provide the full set of important hyperparameters used for the models reported both in the main paper (HotpotQA-distractor and IIRC) and in the Appendix B (HotpotQAfullwiki), respectively.
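The title-reconstruction step described above is simple enough to spell out: a (possibly slightly corrupted) generated title is mapped to the most similar title among the input passages by token-level F1. The Python below is a minimal version with function names of our own choosing.

from collections import Counter

def token_f1(pred, gold):
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def reconstruct_title(generated_title, candidate_titles):
    # Pick the input-passage title most similar to the generated one.
    return max(candidate_titles, key=lambda t: token_f1(generated_title, t))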
| 1,197 | 926 | 1,197 |
A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space
|
The representation degeneration problem in Contextual Word Representations (CWRs) hurts the expressiveness of the embedding space by forming an anisotropic cone in which even unrelated words have excessively positive correlations. Existing techniques for tackling this issue require re-training models with additional objectives and mostly employ a global assessment of isotropy. Our quantitative analysis of isotropy shows that a local assessment can be more accurate due to the clustered structure of CWRs. Based on this observation, we propose a local cluster-based method to address the degeneration issue in contextual embedding spaces. We show that in clusters containing punctuation and stop words, the local dominant directions encode structural information, and that removing them can improve CWR performance on semantic tasks. Moreover, we find that tense information in verb representations dominates sense semantics. We show that removing the dominant directions of verb representations can transform the space to better suit semantic applications. Our experiments demonstrate that the proposed cluster-based method mitigates the degeneration problem on multiple tasks.
|
Despite their outstanding performance, CWRs are known to suffer from the so-called representation degeneration problem that makes the embedding space anisotropic To better understand the representation degeneration problem in pre-trained models, we analyzed the embedding space of GPT-2 In addition, we provide an analysis on the reasons behind the effectiveness of our cluster-based technique. The empirical results show that most clusters contain punctuation tokens, such as periods and commas. The PCs of these clusters encode structural information about context, such as sentence style; hence, removing them can improve CWRs performance on semantic tasks. A similar structure exists in other clusters containing stop words. The other important observation is about verb distribution in the contextual embedding space. Our experiments reveal that verb representations are separated across the tense dimension in distinct sub-spaces. This brings about an unwanted peculiarity in the semantic space: representations for different senses of a verb tend to be closer to each other in the space than the representations for the same sense that are associated with different tenses of the same verb. Indeed, removing such PCs improves model's ability in downstream tasks with dominant semantic flavor.
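As a concrete illustration of the approach outlined above, the following Python sketch (NumPy/scikit-learn) shows the two ingredients in their simplest form: a partition-function estimate of isotropy and the cluster-based removal of local dominant directions. Evaluating the partition function on the eigenvectors of W^T W is the standard approximation, and the number of clusters and removed principal components are tunable hyperparameters (the appendix reports 10-27 clusters and 12-30 removed directions depending on the model); treat this as an assumption-laden sketch rather than the exact published implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def isotropy(W):
    # I(W) ~= min_c Z(c) / max_c Z(c), with Z(c) = sum_i exp(c^T w_i)
    # evaluated on the eigenvectors of W^T W (standard approximation).
    _, vecs = np.linalg.eigh(W.T @ W)
    Z = np.exp(W @ vecs).sum(axis=0)
    return Z.min() / Z.max()

def cluster_based_removal(W, n_clusters=27, n_pcs=12):
    # Cluster the CWRs, then zero-center each cluster and null out its
    # top principal components (the local dominant directions).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(W)
    W_out = np.zeros_like(W)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        X = W[idx] - W[idx].mean(axis=0)
        k = min(n_pcs, len(idx) - 1, W.shape[1])
        if k > 0:
            pcs = PCA(n_components=k).fit(X).components_
            X = X - (X @ pcs.T) @ pcs
        W_out[idx] = X
    return W_out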
|
Isotropy is a desirable property of word embedding spaces and arguably any other vector representation of data in general We measure the isotropy of embedding space using the partition function of where is a unit vector, is the corresponding embedding for the ℎ word in the embedding matrix W ∈ IR N×D , N is the number of words in the vocabulary, and D is the embedding size. 2.1 Analyzing Isotropy in pre-trained CWRs Using the above metric, we analyzed the representation degeneration problem globally and locally. Global assessment. We quantified isotropy in all layers for GPT-2, BERT, and RoBERTa on the development set of STS-Benchmark Local assessment. In the light of the clustered structure of the embedding space in CWRs Table Tables 2 and 3 report experimental results. As can be seen, globally increasing isotropy can make a significant improvement for all the three pre-trained models. However, our cluster-based approach can achieve notably higher performance compared to the global approach. We attribute this improvement to our cluster-specific discarding of dominant directions. Both global and cluster-based methods null out the optimal number of top dominant directions (tuned separately, cf. Appendix B), but the latter identifies them based on the specific structure of a sub-region in the embedding space (which might not be similar to other sub-regions). In this section, we provide a brief explanation for reasons behind the effectiveness of the clusterbased approach through investigating the linguistic knowledge encoded in the dominant local directions. We also show that enhancing isotropy reduces convergence time. Punctuations and stop words. We observed that local dominant directions for the clusters of punctuations and stop words carry structural and syntactic information about the sentences in which they appear. For example, the two sentences "A man is crying." and "A woman is dancing." from STS-B do not have much in common in terms of semantics but are highly similar with respect to their style. To quantitatively analyze the distribution of this type of tokens in CWRs, we designed an experiment based on the dataset created by Figure nearest neighbours which are in the same group before and after removing local dominant directions. We evaluated this for period and comma, which are the most frequent punctuations, and "the" and "of" as the most contextualized stop words Verb Tense. Our experiments show that tense is more dominant in verb representations than senselevel semantic information. To have a precise examination of this hypothesis, we used SemCor In the previous experiments, we showed that the contextual embeddings are extremely anisotropic and highly correlated. Such embeddings can slow down the learning process of deep neural networks. Figure In this paper, we proposed a cluster-based method to address the representation degeneration problem in CWRs. We empirically analyzed the effect of clustering and showed that, from a local sight, most clusters are biased toward structural information. Moreover, we found that verb representations are distributed based on their tense in distinct sub-spaces. We evaluated our method on different semantic tasks, demonstrating its effectiveness in removing local dominant directions and improving performance. As future work, we plan to study the effect of fine-tuning on isotropy and on the encoded linguistic knowledge in local regions. Table B.1 Dataset details STS. 
In the Semantic Textual Similarity task, the provided labels are between 0 and 5 for each paired sentence. We first calculate sentence embeddings by averaging all word representations in each sentence and then compute the cosine similarity between two sentence representations as a score of semantic relatedness of the pair. RTE. The Recognizing Textual Entailment dataset is a classification task from the GLUE benchmark CoLA. The Corpus of Linguistic Acceptability For the classification tasks, we trained a simple MLP on the features extracted from BERT. The proposed cluster-based approach has two hyperparameters: the number of clusters and the number of PCs to be removed. We selected both of them from range In the cluster-based approach,The optimal number of clusters for GPT-2, BERT, and RoBERTa are respectively 10, 27, and 27. For BERT and RoBERTa, 12 top dominant directions have been removed, while the number is 30 for GPT-2 regarding its extremely anisotropic embedding space. The tuning of the number of PCs to be eliminated in the global method has been done similarly to the cluster-based approach (on the STS-B dev set): 30, 15, and 25 for GPT-2, BERT, and RoBERTa, respectively. In Table CWRs are biased towards their frequency information, and words with similar frequency create local regions in the embedding space
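The STS evaluation protocol described at the start of this appendix is easy to make concrete. The sketch below (Python) averages word representations into sentence embeddings, scores each pair by cosine similarity, and correlates the scores with the gold 0-5 labels; the use of Spearman correlation as the reported statistic is our assumption, since the exact correlation measure is not restated in this excerpt.

import numpy as np
from scipy.stats import spearmanr

def sentence_embedding(word_vectors):
    # word_vectors: array of shape (sentence_length, dim)
    return np.asarray(word_vectors).mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sts_score(pairs, gold_labels):
    # pairs: list of (word_vectors_sent1, word_vectors_sent2)
    sims = [cosine(sentence_embedding(a), sentence_embedding(b))
            for a, b in pairs]
    return spearmanr(sims, gold_labels)[0]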
| 1,197 | 1,299 | 1,197 |
Prefix Lexicalization of Synchronous CFGs using Synchronous TAG
|
We show that an ε-free, chain-free synchronous context-free grammar (SCFG) can be converted into a weakly equivalent synchronous tree-adjoining grammar (STAG) which is prefix lexicalized. This transformation at most doubles the grammar's rank and cubes its size, but we show that in practice the size increase is only quadratic. Our results extend Greibach normal form from CFGs to SCFGs and prove new formal properties about SCFG, a formalism with many applications in natural language processing.
|
Greibach normal form (GNF; By using prefix lexicalized synchronous context-free grammars (SCFGs), This work investigates the formal properties of prefix lexicalized synchronous grammars as employed by
|
An SCFG is a tuple G = (N, Σ, P, S) where N is a finite nonterminal alphabet, Σ is a finite terminal alphabet, S ∈ N is a distinguished nonterminal called the start symbol, and P is a finite set of synchronous rules of the form (1) must be linked to exactly one nonterminal in α 2 , and vice versa. We write these links using numerical annotations, as in (2). (2) An SCFG has rank k if no rule in the grammar contains more than k pairs of linked nodes. In every step of an SCFG derivation, we rewrite one pair of linked nonterminals with a rule from P , in essentially the same way we would rewrite a single nonterminal in a non-synchronous CFG. For example, (3) shows linked A and B nodes being rewritten using ( Note how the 1 and An SCFG derivation is complete when it contains no more nonterminals to rewrite. A completed derivation represents a string pair generated by the grammar. An STAG where t 1 and t 2 are elementary trees as defined in In every step of an STAG derivation, we rewrite one pair of linked nonterminals with a tree pair from T , using the same substitution and adjunction operations defined for non-synchronous TAG. For example, Figure We use synchronous production as a cover term for either a synchronous rule in an SCFG or a synchronous tree pair in an STAG. Following We call a grammar ε-free if it contains no productions whose source or target side produces only the empty string ε. Previous work Formally, this means that every synchronous rule in a prefix lexicalized SCFG (PL-SCFG) is of the form (5) Every synchronous tree pair in a prefix lexicalized STAG (PL-STAG) is of the form (6) We now prove that the class SCFG is not closed under prefix lexicalization. Theorem 1. There exists an SCFG which cannot be converted to an equivalent PL-SCFG. Proof. The SCFG in (7) generates the language L = { a i b j c i , b j a i | i ≥ 0, j ≥ 1}, but this language cannot be generated by any PL-SCFG: Suppose, for the purpose of contradiction, that some PL-SCFG does generate L; call this grammar G. Then the following derivations must all be possible in G for some nontermials U, V, X, Y : i and ii follow from the same arguments used in the pumping lemma for (non-synchronous) context free languages Now we obtain a contradiction. Given that G can derive all of i through iv, the following derivation is also possible: But since n, r ≥ 1, the target string derived this way contains an a before a b and does not belong to L. This is a contradiction: if G is a PL-SCFG then it must generate i through iv, but if so then it also generates strings which do not belong to L. Thus no PL-SCFG can generate L, and SCFG must not be closed under prefix lexicalization. There also exist grammars which cannot be prefix lexicalized because they contain cyclic chain rules. If an SCFG can derive something of the form X 1 , Y 1 ⇒ * xX 1 , Y 1 , then it can generate arbitrarily many symbols in the source string without adding anything to the target string. Prefix lexicalizing the grammar would force it to generate some terminal symbol in the target string at each step of the derivation, making it unable to generate the original language where a source string may be unboundedly longer than its corresponding target. We call an SCFG chain-free if it does not contain a cycle of chain rules of this form. The remainder of this paper focuses on chain-free grammars, like (7), which cannot be converted to PL-SCFG despite containing no such cycles. We now present a method for prefix lexicalizing an SCFG by converting it to an STAG. 
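Before stating the construction, the synchronous derivation step defined earlier in this section can be made concrete with a small Python sketch. The representation below (nonterminals carrying integer link indices, terminals carrying none) is ours and is chosen purely to illustrate how a rule rewrites one pair of linked nonterminals on both sides at once; checking that the rule's left-hand side actually matches the rewritten pair is omitted for brevity.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sym:
    label: str
    link: Optional[int] = None   # None for terminals, an index for linked nonterminals

@dataclass
class Rule:
    rhs_src: List[Sym]           # alpha_1 in <X, Y> -> <alpha_1, alpha_2>
    rhs_tgt: List[Sym]           # alpha_2

def rewrite(src, tgt, link_id, rule, fresh):
    # Replace the pair of nonterminals sharing `link_id` in the sentential
    # form <src, tgt> with the rule's right-hand sides, offsetting the rule's
    # own link indices by `fresh` so they remain unique in the derivation.
    def expand(seq, side):
        out = []
        for s in seq:
            if s.link == link_id:
                out.extend(Sym(t.label, None if t.link is None else t.link + fresh)
                           for t in side)
            else:
                out.append(s)
        return out
    return expand(src, rule.rhs_src), expand(tgt, rule.rhs_tgt)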
Theorem 2. Given a rank-k SCFG G which is εfree and chain-free, an STAG H exists such that H is prefix lexicalized and L(G) = L(H). The rank of H is at most 2k, and |H| = O(|G| Proof. Let G = (N, Σ, P, S) be an ε-free, chainfree SCFG. We provide a constructive method for prefix lexicalizing the target side of G. We begin by constructing an intermediate grammar G XA for each pair of nonterminals X, A ∈ N \ {S}. For each pair X, A ∈ N \ {S}, G XA will be constructed to generate the language of sentential forms derivable from X 1 , A 1 via a target-side terminal leftmost derivation (TTLD). A TTLD is a derivation of the form in Figure we add a tree pair of the form in Figure • For each rule in G of the form • For each rule in G of the form , we add a tree pair of the form in Figure As a special case, if Y = Z we collapse the root node and adjunction site to produce a tree pair of the following form: (9) , we add a tree pair of the form in Figure Figure Proof. This can be shown by induction over derivations of increasing length. The proof is straightforward but very long, so we provide only a sketch; the complete proof is provided in the supplementary material. As a base case, observe that a tree of the shape in Figure which is a TTLD starting from X, A . By construction, therefore, every TTLD of the shape in (10) corresponds to some tree in G XA of shape 3(a); likewise every derivation in G XA comprising a single tree of shape 3(a) corresponds to a TTLD of the shape in (10). As a second base case, note that a tree of the shape in Figure In the other direction, the last step of any TTLD of the shape in (11) will involve some rule of the shape Y → α 1 , B → aα 2 ; by construction G XA must contain a corresponding tree pair of shape 3(b). Together, these base cases establish a one-toone correspondence between single-tree derivations in G XA and the last step of a TTLD starting from X, A . Now, assume that the last n steps of every TTLD starting from X, A correspond to some derivation over n trees in G XA , and vice versa. Then the last n + 1 steps of that TTLD will also correspond to some n + 1 tree derivation in G XA , and vice versa. To see this, consider the step n + 1 steps before the end of the TTLD. This step may be in the middle of the derivation, or it may be the first step of the derivation. If it is in the middle, then this step must involve a rule of the shape The existence of such a rule in G implies the existence of a corresponding tree in G XA of the shape in Figure The existence of such a rule implies the existence of a corresponding tree in G XA of the shape in Figure 5 Although trees in GXA may contain symbols from the nonterminal alphabet of G, these symbols belong to the terminal alphabet in GXA. Only nonterminals in NXA will be involved in this derivation, and by construction there is at most one such nonterminal per tree. Thus a well-formed derivation structure in GXA will never branch, and we can refer to the n + 1th tree pair as the one which is at depth n in the derivation structure. existence of a production in G of the shape in (13). By assumption the first n trees of the derivation in G XA correspond to some TTLD in G; by prepending the rule from (13) to this TTLD we obtain a new TTLD of length n + 1 which corresponds to the entire n + 1 tree derivation in G XA . Taken together, these cases establish a one-toone correspondence between derivations in G XA and TTLDs which start from X, A ; in turn they confirm that G XA generates the desired language L XA . 
Once we have constructed an intermediate grammar G XA for each X, A ∈ N \ {S}, we obtain the final STAG H as follows: 1. Convert the input SCFG G to an equivalent STAG. For each rule , create a tree pair of the form ( where each pair of linked nonterminals in the original rule become a pair of linked substitution sites in the tree pair. The terminal and nonterminal alphabets and start symbol are unchanged. Call the resulting STAG H. 2. For all X, A ∈ N \ {S}, add all of the tree pairs from the intermediate grammar G XA to the new grammar H. Expand N to include the new nonterminal symbols in N XA . 3. For every X, A ∈ N , in all tree pairs where the target tree's leftmost leaf is labeled with A and this node is linked to an X, replace this occurrence of A with S XA . Also replace the linked node in the source tree. 4. For every X, A ∈ N , let R XA be the set of all tree pairs rooted in S XA , and let T XA be the set of all tree pairs whose target tree's leftmost leaf is labeled with S XA . For every s, t ∈ T XA and every s , t ∈ R XA , substitute or adjoin s and t into the linked S XA nodes in s and t, respectively. Add the derived trees to H. 5. For all X, A ∈ N , let T XA be defined as above. Remove all tree pairs in T XA from H. 6. For all X, A ∈ N , let R XA be defined as above. Remove all tree pairs in R XA from H. We now claim that H generates the same language as the original grammar G, and all of the target trees in H are prefix lexicalized. The first claim follows directly from the construction. Step 1 merely rewrites the grammar in a new formalism. From Lemma 1 it is clear that steps 2-3 do not change the generated language: the set of string pairs generable from a pair of S XA nodes is identical to the set generable from X, A in the original grammar. Step 4 replaces some nonterminals by all possible alternatives; steps 5-6 then remove the trees which were used in step 4, but since all possible combinations of these trees have already been added to the grammar, removing them will not alter the language. The second claim follows from inspection of the tree pairs generated in Figure Our conversion generates a subset of the class of prefix lexicalized STAGs in regular form, which we abbreviate to PL-RSTAG (regular form for TAG is defined in Rogers 1994). This section discusses some formal properties of PL-RSTAG. Generative Capacity PL-RSTAG is weakly equivalent to the class of ε-free, chain-free SCFGs: this follows immediately from the proof that our transformation does not change the language generated by the input SCFG. Note that every TAG in regular form generates a context-free language Alignments and Reordering PL-RSTAG generates the same set of reorderings (alignments) as SCFG. Observe that our transformation does not cause nonterminals which were linked in the original grammar to become unlinked, as noted for example in Figure ated by linked nonterminals in the original grammar will still be generated by linked nonterminals in the final grammar, so no reordering information is lost or added. 6 This result holds despite the fact that our transformation is only applicable to chainfree grammars: chain rules cannot introduce any reorderings, since by definition they involve only a single pair of linked nonterminals. Grammar Rank If the input SCFG G has rank k, then the STAG H produced by our transformation has rank at most 2k. 
To see this, observe that the construction of the intermediate grammars increases the rank by at most 1 (see Figure In the general case, rank-k STAG is more powerful than rank-k SCFG; for example, a rank-4 SCFG is required to generate the reordering in following rank-3 STAG: For this reason, we speculate that it is possible to further transform the grammars produced by our lexicalization in order to reduce their rank, but the details of this transformation remain as future work. This potentially poses a solution to an issue raised by Parse Complexity Because the grammar produced is in regular form, each side can be parsed in time O(n 3 ) Grammar Size and Experiments If H is the PL-RSTAG produced by applying our transformation to an SCFG G, then H contains O(|G| 3 ) elementary tree pairs, where |G| is the number of synchronous productions in G. When the set of nonterminals N is small compared to |G|, a tighter bound is given by O(|G| 2 |N | 2 ). Table We also investigated how the proportion of prefix lexicalized rules in the original grammar affects the overall size increase. We sampled grammars with varying proportions of prefix lexicalized rules from the grammar in The LR decoding algorithm from Combined with the transformation in Section 4, this suggests a method for using LR decoding without sacrificing translation quality. Previously, LR decoding required the use of heuristically generated PL-SCFGs, which cannot model some reorderings Note that, since applying our transformation may double the rank of a grammar, this method may prove prohibitively slow. This highlights the need for future work to examine the generative power of rank-k PL-RSTAG relative to rankk SCFG in the interest of reducing the rank of the transformed grammar. Our work continues the study of TAGs and lexicalization (e.g. Other extensions of GNF to new grammar formalisms include Lexicalization of synchronous grammars was addressed by Analogous to our closure result, Aho and Ullman (1969) prove that SCFG does not admit a normal form with bounded rank like Chomsky normal form. We rely on Finally, We have demonstrated a method for prefix lexicalizing an SCFG by converting it to an equivalent STAG. This process is applicable to any SCFG which is εand chain-free. Like the original GNF transformation for CFGs our construction at most cubes the grammar size, though when applied to the kinds of synchronous grammars used in machine translation the size is merely squared. Our transformation preserves all of the alignments generated by SCFG, and retains properties such as O(n 3k ) parsing complexity for grammars of rank k. We plan to verify whether rank-k PL-RSTAG is more powerful than rank-k SCFG in future work, and to reduce the rank of the transformed grammar if possible. We further plan to empirically evaluate our lexicalization on an alignment task and to offer a comparison against the lexicalization due to
| 498 | 200 | 498 |
MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations
|
Emotion Recognition in Conversations (ERC) is an increasingly popular task in the Natural Language Processing community, which seeks to achieve accurate emotion classification of utterances expressed by speakers during a conversation. Most existing approaches focus on modeling speaker and contextual information based on the textual modality, so the complementarity of multimodal information has not been well leveraged, and few current methods sufficiently capture the complex correlations and mapping relationships across different modalities. Furthermore, existing state-of-the-art ERC models have difficulty classifying minority and semantically similar emotion categories. To address these challenges, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO, which effectively integrates multimodal cues by capturing cross-modal mapping relationships across textual, audio and visual modalities with bidirectional multi-head cross-attention layers. The difficulty of recognizing minority and semantically hard-to-distinguish emotion classes is alleviated by our proposed Sample-Weighted Focal Contrastive (SWFC) loss. Extensive experiments on two benchmark ERC datasets demonstrate that our MultiEMO framework consistently outperforms existing state-of-the-art approaches in all emotion categories on both datasets; the improvements in minority and semantically similar emotions are especially significant.
|
Emotion Recognition in Conversations (ERC) is an emerging task in the field of Natural Language Processing (NLP), which aims to identify the emotion of each utterance in a conversation based on textual, audio and visual cues of the speaker. ERC has attracted an enormous amount of attention from both academia and industry, due to its widespread potential in social media analysis. To solve the problem of ERC, numerous approaches have been proposed. The majority of existing works concentrate on modeling speaker dependencies and conversational contexts. To address the above problems, in this paper, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO. Firstly, unimodal feature extraction and context modeling are performed for each modality, in which we introduce a visual feature extractor named VisExtNet based on a Multi-task Cascaded Convolutional Network (MTCNN). The main contributions of this work can be summarized as follows: • We propose a novel visual feature extraction network named VisExtNet, which effectively captures visual cues of interlocutors without modeling redundant scene information. • We design a multimodal fusion model called MultiAttn based on bidirectional multi-head cross-attention layers, which successfully models the complicated correlations across textual, audio and visual modalities. • We innovatively introduce a SWFC loss to address the difficulty of classifying minority and semantically similar emotion classes. • We conduct extensive experiments on MELD and IEMOCAP; the results show that our proposed MultiEMO framework achieves state-of-the-art performances on both datasets, and the improvements in minority and semantically similar emotions are especially notable. 2 Related Work
|
Multimodal Fused Graph Convolutional Network (MMGCN) is proposed by The overall framework of MultiEMO is illustrated in Figure Existing research often adopts two different paradigms to extract contextualized textual features: (1) Two-stage paradigm To be specific, following Audio Feature Extraction: We follow Visual Feature Extraction: Most existing works To illustrate, a large proportion of conversations in MELD take place at home, but the emotions of these conversations vary significantly. In addition, the scene normally remains unchanged throughout the conversation. Therefore, capturing scenerelated visual information for each utterance is unnecessary and may lead to a wrong understanding of the speaker's actual emotional tendency due to the influence of irrelevant scene information. To address this problem, we propose a novel visual feature extractor named VisExtNet, which is made up of a MTCNN and a ResNet-101 For an utterance video u v i , visual feature extraction is performed on 20 frames of the utterance clip, with each frame selected using a step of number of frames 20 . Specifically, each frame is first sent into a MTCNN to accurately detect the faces of all interlocutors present in the scene at that frame, each detected face is then passed through a VGGFace2 pretrained ResNet-101 to extract a emotion-rich visual feature vector. The concatenation of facial expression features from all participants is regarded as the visual representation of that frame. The same process is repeated for each of the 20 frames, after which the output features of all frames are average pooled over the frame axis to obtain a 1000dimensional visual feature vector h v i . Visual Context Modeling: Similar to audio context modeling, after visual feature extraction, we utilize another DialogueRNN to learn a 256dimensional contextualized visual representation c v i for each video clip. Existing literature fails to effectively integrate multimodal information, the complex correlations and mapping relationships across multiple modalities have not been well captured. To tackle this issue, inspired by The architecture of MultiAttn is shown in Figure MultiAttn is made up of three components: MultiAttn text , MultiAttn audio and MultiAttn visual , each of which aims to integrate one modality with complementary information from the other two modalities. As illustrated in Figure Given the Queries of all utterances T , the calculation of MultiAttn text at layer j is illustrated as follows: MH ta (j) = Cat(A ta (j) 1 , . . . , A ta (j) H )W O ta (j) (4) F ta (j) = LayerNorm(F t (j-1) + MH ta (j) ) (5) ], h ∈ {1, . . . , H} (6) Where After multimodal fusion, the learned multimodalfused textual, audio and visual feature representations f t i , f a i and f v i are concatenated and then sent into a fully-connected layer and a subsequent 2layer Multilayer Perceptron (MLP) with a ReLU. Finally, a Softmax layer is utilized to compute a probability distribution over the emotion category set, where the emotion label with the highest probability is chosen as the prediction ŷi for the i-th utterance. The calculation is illustrated as follows: Where ⊕ denotes concatenation, W z , W l and W smax are weight matrices, b z , b l and b smax are bias parameters. The SWFC loss is defined as follows: Where z i,j is the output of the fully-connected layer (Equation Soft-HGR Loss: We utilize a Soft Hirschfeld-Gebelein-Rényi (Soft-HGR) loss Where Expectations and covariances are approximated through sample means and sample covariances. 
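A simplified PyTorch sketch of one MultiAttn component (the text branch) may help make the fusion step concrete: contextualized textual features act as queries that attend over the audio and then the visual features through multi-head cross-attention, with residual connections and layer normalization around each sub-layer. The layer ordering, dimensionality and feed-forward design below are simplifications and should not be read as the exact published architecture.

import torch
import torch.nn as nn

class TextFusionLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn_ta = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_tv = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, text, audio, visual):
        # text/audio/visual: (batch, seq_len, d_model) contextualized features
        h, _ = self.attn_ta(text, audio, audio)    # text queries, audio keys/values
        text = self.norm1(text + h)
        h, _ = self.attn_tv(text, visual, visual)  # then attend over visual cues
        text = self.norm2(text + h)
        return self.norm3(text + self.ff(text))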
Cross-Entropy Loss: In addition, we adopt a Cross-entropy loss to measure the difference between predicted probabilities and true labels: Where p i,j is the probability distribution over the emotion classes for utterance j in dialogue i, y i,j is the ground-truth label of utterance j in dialogue i. Full Loss Function: A linear combination of SWFC loss, Soft-HGR loss and Cross-entropy loss is leveraged as the full loss function: Where µ 1 and µ 2 are tunable hyperparameters, λ is the L 2 regularization weight, θ is the set of all trainable parameters. 4 Experimental Settings 4.1 Datasets IEMOCAP BC-LSTM DialogueRNN DialogueGCN IterativeERC Modality Setting: We utilize textual, audio and visual modalities of utterances to conduct experiments on both MELD and IEMOCAP. Hyperparameter Settings: (1) Dataset-specific settings: Since MELD is significantly more classimbalanced than IEMOCAP, the batch size is designed to be 64 on IEMOCAP and 100 on MELD. (2) Dataset-generic settings: The number of training epochs is 100, the optimizer is Adam (Kingma and Ba, 2015) with β 1 = 0.9 and β 2 = 0.99, the learning rate is initialized with 0.0001 and decays by 0.95 after every 10 epochs, the L 2 regularization weight λ is 0.00001. To avoid overfitting, we apply Dropout 5 Results and Analysis The comparisons between MultiEMO and existing state-of-the-art approaches on IEMOCAP and MELD are shown in Table The comparison of MultiEMO with different modality settings on IEMOCAP and MELD is illustrated in Table To study the contributions of different components in MultiEMO to model performances, we conduct ablation studies on both IEMOCAP and MELD, the results are shown in Table A case study is illustrated in Appendix A.1. In Although our proposed MultiEMO framework has achieved state-of-the-art performances on both IEMOCAP and MELD, there are some limitations with this work: • Our proposed visual feature extractor Vi-sExtNet does not distinguish between speakers and irrelevant people in the scene, which can be problematic in some scenarios. For instance, one scene in MELD is the cafeteria, where a lot of background actors sit and drink coffee. The facial expressions of these background people have no impact on the emotion of the speaker since they do not participant in the conversation. However, VisExtNet captures visual features of everyone appeared in the cafeteria with no differentiation, which may lead to a wrong comprehension of the speaker's emotional tendency due to the effects of facial expressions from irrelevant people. We plan to explore effective ways to distinguish between interlocutors and irrelevant people in the scene in our future work. • The effects of hyperparameters in the SWFC loss (temperature parameter τ , sample-weight parameter α and focusing parameter γ) on model performances have not been fully studied, which will be thoroughly analyzed in our future research. • Due to the class imbalanced issue with MELD, the SWFC loss requires a large batch size on MELD to ensure that for each training sample there exists at least one positive pair in the batch, which can be computationally expensive. We will investigate effective approaches to tackle this challenge in our future research. • Even though MultiEMO has achieved remarkable improvements in minority emotion categories, the performances of MultiEMO in minority emotions are still worse than majority classes. How to further improve performances in low-resource emotion classes will be explored in the future. 
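The limitations above mention the SWFC hyperparameters (temperature τ, sample-weight parameter α and focusing parameter γ); since the full formula is not reproduced in this excerpt, the following PyTorch snippet is only one plausible instantiation of a sample-weighted focal contrastive objective: a supervised contrastive loss in which per-class weights favour minority emotions and a focal factor (1 - p)^γ emphasizes hard positive pairs. It is an illustrative sketch, not the paper's published loss.

import torch
import torch.nn.functional as F

def swfc_loss(z, labels, class_weights, tau=0.1, gamma=2.0):
    # z: (batch, dim) fused representations; labels: (batch,) emotion ids;
    # class_weights: (num_classes,) larger weights for minority emotions.
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float('-inf'))
    log_p = F.log_softmax(sim, dim=1)                 # pairwise match probabilities
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~mask_self
    losses = []
    for i in range(len(z)):
        if pos[i].any():
            p = log_p[i][pos[i]].exp()
            focal = (1.0 - p).pow(gamma)              # focus on hard positives
            losses.append(-(class_weights[labels[i]] *
                            focal * p.clamp_min(1e-12).log()).mean())
    return torch.stack(losses).mean() if losses else z.sum() * 0.0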
Since the one-stage paradigm (Section 3.3.1) simultaneously performs unimodal textual feature extraction and textual context modeling, to better illustrate the role of context modeling in emotion classification, the textual modality of the selected utterance in the case study is processed using a two-stage paradigm.
| 1,520 | 1,729 | 1,520 |
Hypergraph Transformer: Weakly-Supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering
|
Knowledge-based visual question answering (QA) aims to answer a question which requires visually-grounded external knowledge beyond the image content itself. Answering complex questions that require multi-hop reasoning under weak supervision is considered a challenging problem since i) no supervision is given to the reasoning process and ii) high-order semantics of multi-hop knowledge facts need to be captured. In this paper, we introduce the concept of a hypergraph to encode high-level semantics of a question and a knowledge base, and to learn high-order associations between them. The proposed model, Hypergraph Transformer, constructs a question hypergraph and a query-aware knowledge hypergraph, and infers an answer by encoding inter-associations between the two hypergraphs and intra-associations within each hypergraph itself. Extensive experiments on two knowledge-based visual QA and two knowledge-based textual QA datasets demonstrate the effectiveness of our method, especially for the multi-hop reasoning problem. Our source code is available at
|
Visual question answering (VQA) is a semantic reasoning task that aims to answer questions about visual content depicted in images In this paper, we focus on the task which is called knowledge-based visual question answering, where a massive number of knowledge facts from a general knowledge base (KB) is given with an image-question pair. To answer the given question as shown in Figure Under weak supervision, previous studies proposed memory-based methods To address the above limitation, we propose a novel method, Hypergraph Transformer, which exploits hypergraph structure to encode multi-hop relationships and transformer-based attention mechanism to learn to pay attention to important knowledge evidences for a question. We construct a question hypergraph and a knowledge hypergraph to explicitly encode high-order semantics present in the question and each knowledge fact, and capture multi-hop relational knowledge facts effectively. Then, we perform hyperedge matching between the two hypergraphs by leveraging transformer-based attention mechanism. We argue that introducing the concept of hypergraph is powerful for multi-hop reasoning problem in that it can encode high-order semantics without the constraint of length and learn cross-modal high-order associations. The main contributions of this paper can be summarized as follows. i) We propose Hypergraph Transformer which enhances multi-hop reasoning ability by encoding high-order semantics in the form of a hypergraph and learning inter-and intrahigh-order associations in hypergraphs using the attention mechanism. ii) We conduct extensive experiments on two knowledge-based VQA datasets (KVQA and FVQA) and two knowledge-based textual QA datasets (PQ and PQL) and show superior performances on all datasets, especially multi-hop reasoning problem. iii) We qualitatively observe that Hypergraph Transformer performs robust in-ference by focusing on correct reasoning evidences under weak supervision.
|
Knowledge-based visual question answering Multi-hop knowledge graph reasoning is a process of sequential reasoning based on multiple evidences of a knowledge graph, and has been broadly used in various downstream tasks such as question answering To capture high-order semantics inherent in the knowledge sources, we adopt the concept of hypergraph. Formally, directed hypergraph H = {V, E} is defined by a set of nodes V = {v 1 , ..., v |V| } and a set of hyperedges E = {h 1 , ..., h |E| }. Each node is represented as a w-dimensional embedding vector, i.e., v i ∈ R w . Each hyperedge connects an arbitrary number of nodes and has partial order itself, i.e., A hyperedge is flexible to encode different kinds of semantics in the underlying graph without the constraint of length. As shown in Figure Query-aware knowledge hypergraph A knowledge base (KB), a vast amount of general knowledge facts, contains not only knowledge facts required to answer a given question but also unnecessary knowledge facts. Thus, we construct a queryaware knowledge hypergraph H k = {V k , E k } to extract related information for answering a given question. It consists of a node set V k and hyperedge set E k , which represent a set of entities in knowledge facts and a set of hyperedges, respectively. Each hyperedge connects the subset of vertices We consider a huge number of knowledge facts in the KB as a huge knowledge graph, and construct a hypergraph by traversing the knowledge graph. Such traversal, called graph walk, starts from the node linked from the previous module (see section 3.2) and considers all entity nodes associated with the start node. We define a triplet as a basic unit of graph walk to preserve high-order semantics inherent in knowledge graph, i.e., every single graph walk contains three nodes {head, predicate, tail}, rather than having only one of these three nodes. In addition to the triplet-based graph walks, a multihop graph walk is proposed to encode multiple relational facts that are interconnected. Multi-hop graph walk connects multiple facts by setting the arrival node (tail) of the preceding walk as the starting (head) node of the next walk, thus, n-hop graph walk combines n facts as a hyperedge. Question hypergraph We transform a question sentence into a question hypergraph H q consisting of a node set V q and a hyperedge set E q . We assume that each word unit (a word or named entity) of the question is defined as a node, and has edges to adjacent nodes. For question hypergraph, each word unit is used as a start node of a graph walk. The multi-hop graph walk is conducted in the same manner as the knowledge hypergraph. A n-gram phrase is considered as a hyperedge in the question hypergraph (see Figure To consider high-order associations between knowledge and question, we devise structural semantic matching between the query-aware knowledge hypergraph and the question hypergraph. We introduce an attention mechanism over two hypergraphs based on guided-attention Guided-attention To learn inter-association between two hypergraphs, we first embed a knowledge hyperedge and a question hyperedge as follows: •] is a hyperedge in E [•] . Here, f [•] is a hyperedge embedding function and ϕ [•] is a linear projection function. The design and implementation of f We define the knowledge hyperedges E k and the question hyperedges E q as a query and key-value pairs, respectively. We set a query , and a value V q = E q W Vq , where all projection matrices W [•] ∈ R d×dv are learnable parameters. 
Then, scaled dot product at-tention using the query, key, and value is calculated where d v is the dimension of the query and the key vector. In addition, the guided-attention which uses the question hyperedges as query and the knowledge hyperedges as key-value pairs is performed in a similar manner: Self-attention The only difference between guided-attention and self-attention is that the same input is used for both query and key-value within self-attention. For example, we set query, key, and value based on the knowledge hyperedges E k , and the self-attention for knowledge hyperedges is conducted by Attention(Q k , K k , V k ). For question hyperedges E q , self-attention is performed in a similar manner: Following the standard structure of the transformer, we build up guided-attention block and selfattention block where each block consists of each attention operation with layer normalization, residual connection, and a single feed-forward layer. By passing the guided-attention blocks and selfattention blocks sequentially, representations of knowledge hyperedges and question hyperedges are updated and finally aggregated to single vector representation as z k ∈ R dv and z q ∈ R dv , respectively. To predict an answer, we first concatenate the representation z k and z q obtained from the attention blocks and feed into a single feed-forward layer (i.e., R 2dv → R w ) to make a joint representation z. We then consider two types of answer predictor: multi-layer perceptron and similarity-based answer predictor. Multi-layer perceptron as an answer classifier p = ψ(z) is a prevalent for visual question answering problems. For similarity-based answer, we calculate a dot product similarity p = zC T between z and answer candidate set C ∈ R |A|×w where |A| is a number of candidate answers and w is a dimension of representation for each answer. The most similar answer to the joint representation is selected as an answer among the answer candidates. For training, we use only supervision from QA pairs without annotations for ground-truth reasoning paths. To this end, cross-entropy between prediction p and ground-truth t is utilized as a loss function. In this paper, we evaluate our model across various benchmark datasets: Knowledge-aware VQA (KVQA) Each node in the knowledge hypergraph and the question hypergraph is represented as a 300dimensional vector (i.e., w = 300) initialized using GloVe For entity linking for KVQA, we apply the wellknown pre-trained models for face identification: RetinaFace 5 Quantitative Results We compare the proposed model, Hypergraph Transformer, with other comparative state-of-theart methods. We report performances on original (ORG) and paraphrased (PRP) questions according to the number of graph walk. For comparative models, three kinds of methods are considered, which are graph-based, memory-based and attention-based networks. The detailed description about the comparative models is described in Appendix E. To evaluate a pure reasoning ability of the models regardless of the performance of entity linking, we first conduct experiments in the oracle setting which ground-truth named entities in an image are given. As shown in Table Entity linking setting We also present the experimental results on the entity linking setting where the named entities are not provided as the oracle setting, but detected by the module as described in Section 3.2. 
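Before turning to the comparisons, here is a compact PyTorch sketch of the similarity-based answer selection and weakly-supervised training objective described above: the aggregated knowledge and question representations are fused into a joint vector z, scored against every candidate-answer embedding by dot product (p = zC^T), and trained with cross-entropy on question-answer pairs alone. Layer sizes and the module interface are placeholders.

import torch
import torch.nn as nn

class SimilarityAnswerPredictor(nn.Module):
    def __init__(self, answer_embeddings, d_v=256, d_word=300):
        super().__init__()
        self.fuse = nn.Linear(2 * d_v, d_word)        # [z_k ; z_q] -> z
        # answer_embeddings C: (num_answers, d_word) candidate-answer vectors
        self.register_buffer("C", answer_embeddings)

    def forward(self, z_k, z_q):
        z = self.fuse(torch.cat([z_k, z_q], dim=-1))
        return z @ self.C.t()                         # p = z C^T, one score per answer

# Training uses only question-answer supervision (no reasoning-path labels):
# loss = nn.CrossEntropyLoss()(model(z_k, z_q), gold_answer_index)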
As shown in Table We conduct experiments on Fact-based Visual Question Answering (FVQA) as an additional benchmark dataset for knowledge-based VQA. Different from KVQA focusing on world knowledge for named entities, FVQA considers commonsense knowledge about common nouns in a given image. Here, we assume that the performance of entity linking is perfect, and evaluate the pure reasoning ability of our model. As shown in Table We confirm that our model works effectively as a general reasoning framework without considering characteristics of different knowledge sources (i.e., Wikidata for KVQA, DBpedia, ConceptNet, WebChild for FVQA). To required to answer a given question is unknown. The experimental results on diverse split of PQ and PQL datasets are provided in Table Our model shows comparable performances on PQ-{2H, 3H, M} to the state-of-the-art weaklysupervised model, SRN. Especially, Hypergraph Transformer shows significant performance improvement (78.6% → 90.5% for PQL-2H, 78.3% → 94.5% for PQL-M) on PQL. We highlight that PQL is more challenging dataset than PQ in that PQL not only covers more knowledge facts but also has fewer QA instances. We observe that the accuracy on PQL-3H is relatively lower than the other splits. This is due to the insufficient number of training QA pairs in PQL-3H. When we use PQL-3H-More which has twice more QA pairs (1031 → 2062) on the same knowledge base as PQL-3H, our model achieves 95.4% accuracy. We verify the effectiveness of each module in Hypergraph Transformer. To analyze the performances of the variants in our model, we use KVQA which is a representative and large-scale dataset for knowledge-based VQA. Here, we mainly focus on two aspects: i) effect of hypergraph and ii) effect of attention mechanism. To evaluate a pure reasoning ability of the models, we conduct experiments in the oracle setting. To analyze the effectiveness of hypergraph-based input representation, we conduct comparative experiments on the different types of input formats for Transformer architecture. Here, we consider the two types of input format, which are single-wordunit and hyperedge-based representations. Compared to hyperedge-based inputs considering multiple relational facts as a input token, single-wordunit takes every entity and relation tokens as separate input tokens. We note that using single-wordunit-based input format for both knowledge and question is the standard settings for the Transformer network and using hyperedge-based input format for both is the proposed model, Hypergraph Transformer. We set the Transformer (SA+GA) as a backbone model, and present the results in Table We compare the performances with different number of graph walks used to construct a knowledge hypergraph (i.e., 1-hop, 2-hop, and 3-hop). All models except ours show slightly lower performance on the 3-hop graph than on the 2-hop graph. We observe that the number of extracted knowledge facts increases when the number of graph walk increases, and unnecessary facts for answering a given question are usually included. Nonetheless, our model shows robust reasoning performance when a large and noisy knowledge facts are given. To investigate the impacts of each attention block (i.e., GA and SA), ablation studies are shown in Table Figure In this paper, we proposed Hypergraph Transformer for multi-hop reasoning over knowledge graph under weak supervision. 
Hypergraph Transformer adopts hypergraph-based representation to encode high-order semantics of knowledge and questions and considers associations between a knowledge hypergraph and a question hypergraph. Here, each node representation in the hypergraphs is updated by inter-and intra-attention mechanisms in two hypergraphs, rather than by iterative message passing scheme. Thus, Hypergraph Transformer can mitigate the well-known over-smoothing problem in the previous graph-based methods exploiting the message passing scheme. Extensive experiments on various datasets, KVQA, FVQA, PQ, and PQL validated that Hypergraph Transformer conducts accurate inference by focusing on knowledge evidences necessary for question from a large knowledge graph. Although not covered in this paper, an interesting future work is to construct heterogeneous knowledge graph that includes more diverse knowledge sources (e.g. documents on web). Appendix. This supplementary material provides additional information not described in the main text due to the page limit. The contents of this appendix are as follows: In Section A, we show the detailed statistics for the diverse splits of four benchmark datasets, i.e., KVQA, FVQA, PQ and PQL. In Section B and C, we present the additional quantitative and qualitative analyses on KVQA and PQ datasets, respectively. In Section D, we describe the experimental details for each dataset. In Section E, we depict the implementation details of comparative models for KVQA. The diverse split statistics for four benchmark datasets, KVQA Here, we analyze more in-depth on KVQA dataset concerning i) categories of question, and ii) types of answer selector. All models are under the same setting of ORG+3-hop reported in Table We analyze QA performances over different question categories in Table To validate the impact of similarity-based answer selector, we replace the similarity-based answer selector (SIM) with a multi-layer perceptron (MLP). We first note that KVQA dataset includes a large number of unique answers (19,360), and contains a lot of zero-shot and few-shot answers in test phase. As shown in Table We follow the experimental settings suggested in We follow the experimental settings suggested in We follow the same experimental settings suggested in For comparative models for KVQA, three kinds of methods are considered, which are graph-based, memory-based and attention-based networks. Graph-based networks. Graph convolutional networks (GCN) Attention-based networks. Bilinear attention networks (BAN) The knowledge and question graph are encoded separately by two graph convolutional networks (GCN) ) where  = A + I, A is an adjacency matrix of the graph, I is an identity matrix, D is a degree matrix of A, W (l) is the model parameters of l-th layer, and H (l) is the representations of the graph in the l-th layer. Here, H (0) is the word embeddings of each entity in the knowledge and question graph. After propagation and aggregation phase, the knowledge and question graph representations are obtained. Then, the two graph representations are concatenated and fed into a single layer feed-forward layer to get joint representation. As the same as graph convolutional networks, the knowledge and question graph are encoded separately by two gated graph neural networks (GGNN). Each GGNN model consists of three gated recurrent propagation layers and a graphlevel aggregator. 
Motivated by Gated Recurrent Units T where x v is the v-th word embedding of each en-tity in the knowledge and question graph, a ] T + b where the matrix A determines how nodes in the graph communicate each other and b is a bias vector. Then, the update gate and reset gate are computed as follows: ) where σ is a logistic sigmoid function, and W [•] and U [•] are learnable parameters. Finally, the hidden states of nodes in the given graph are updates as h ). After the propagation phase, the nodes in the graph are aggregated to a graph-level representation as h G = tanh( v∈V σ(i(h where i and j are a single layer feed-forward layer, respectively. Then, the two aggregated graph representations are concatenated and fed into another single layer feed-forward layer to get joint representation of question and knowledge graph. We reproduce end-to-end memory networks Bilinear attention networks exploit a multi-head co-attention mechanism between knowledge and question. BAN calculates soft attention scores between knowledge entities and question words as follows: where M q , M k are a row-wise concatenated question words and knowledge entities, W [•] is learn-able matrices, and • is element-wise multiplication. Based on the attention map A, the joint feature is obtained as follows: where the subscript i denotes the i-th index of column vectors in each matrix. For multi-head attention, the attended outputs with different heads are concatenated and fed into a single layer feedforward layer to make a final representation. Here, we use four attention heads as multi-head. The model architecture and detailed operation of hypergraph attention networks are similar to that of BAN. The difference between BAN and HAN is the abstraction level of the input. For HAN, the hyperedges sampled by stochastic graph walk are fed into the co-attention mechanism. What HAN and our model have in common is introducing a hypergraph to consider high-order relationships in question graph and knowledge graph. Both models share the similar motivation, but the core operations are quite different. Especially, HAN employs stochastic graph walk to construct question and knowledge hypergraph. Due to the randomness of the stochasticity, misinformed or incomplete hyperedges can be extracted. The model architectures of Transformer (SA) and Transformer (SA+GA) presented in this paper are the same as Hypergraph Transformer. The only difference is the abstraction level of input. The Transformer (SA) and Transformer (SA+GA) take single-word-unit as input tokens, and Hypergraph Transformer takes hyperedges as input tokens. Following
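As a reference point for the graph-based baselines described earlier in this appendix, the GCN propagation rule H^(l+1) = sigma(D^(-1/2) A_hat D^(-1/2) H^(l) W^(l)) with A_hat = A + I can be written in a few lines of NumPy. Taking D as the degree matrix of A_hat follows the usual formulation, and the activation, initialization and aggregation details of the reproduced baseline are omitted.

import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    # A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out)
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return activation(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)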
| 1,034 | 1,973 | 1,034 |