d883c7bb1e95895cd4f57a8f430e35_2
Second, we combine the cross-lingual projection part-of-speech taggers of <cite>Das and Petrov (2011)</cite> with the grammar induction system of Naseem et al. (2010), which requires a universal tagset, to produce a completely unsupervised grammar induction system for multiple languages that does not require gold POS tags in the target language.
extends differences motivation
d883c7bb1e95895cd4f57a8f430e35_3
In our experiments, we did not make use of refined categories, as the POS tags induced by <cite>Das and Petrov (2011)</cite> were all coarse.
uses
d883c7bb1e95895cd4f57a8f430e35_4
We present results on the same eight Indo-European languages as <cite>Das and Petrov (2011)</cite>, so that we can make use of their automatically projected POS tags.
similarities uses
d8a9f0578e389f8cbe153783dc47b3_0
VQA Datasets: We conduct our analysis on a total of 459,861 visual questions and 4,598,610 answers coming from today's largest freely-available VQA benchmark <cite>[3]</cite>. The remaining 90,000 VQAs are about abstract scenes that were created with clipart and show 100 types of everyday objects often observed in real images <cite>[3]</cite>.
uses
d8a9f0578e389f8cbe153783dc47b3_1
More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video [2], <cite>[3]</cite>, [4].
background
d8a9f0578e389f8cbe153783dc47b3_2
Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question <cite>[3]</cite>, [1], [5].
background
d8a9f0578e389f8cbe153783dc47b3_3
Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question <cite>[3]</cite>, [1], [5]. We instead propose to dynamically solicit the number of human responses based on each visual question.
differences background
d8a9f0578e389f8cbe153783dc47b3_4
Specifically, researchers in fields as diverse as computer vision <cite>[3]</cite>, computational linguistics [2], and machine learning [4] rely on large datasets to improve their VQA algorithms.
background
d8a9f0578e389f8cbe153783dc47b3_5
Current methods to create these datasets assume a fixed number of human answers per visual question <cite>[3]</cite>, [5], thereby either compromising on quality by not collecting all plausible answers or on cost by collecting additional answers when they are redundant.
background
d8a9f0578e389f8cbe153783dc47b3_6
Visual Question Answering Services: Researchers spanning communities as diverse as human computer interaction, machine learning, computational linguistics, and computer vision have proposed a variety of ways to answer questions about images [2], <cite>[3]</cite>, [1], [4].
background
d8a9f0578e389f8cbe153783dc47b3_7
For example, crowd-powered systems aim to supply a prespecified, fixed number of answers per visual question [1] and automated systems return a single answer for every visual question [2], <cite>[3]</cite>, [4].
background
d8a9f0578e389f8cbe153783dc47b3_8
We demonstrate the predictive advantage of our system over relying on the uncertainty of a VQA algorithm in its predicted answer <cite>[3]</cite>.
differences
d8a9f0578e389f8cbe153783dc47b3_9
Other systems ensure a fixed number of answers are collected per visual question <cite>[3]</cite>, [5].
background
d8a9f0578e389f8cbe153783dc47b3_10
Other systems ensure a fixed number of answers are collected per visual question <cite>[3]</cite>, [5]. Unlike prior work, our goal is to collect answers in a way that is both economical and complete in capturing the diversity of plausible answers for all visual questions.
background differences
d8a9f0578e389f8cbe153783dc47b3_11
VQA Datasets: We conduct our analysis on a total of 459,861 visual questions and 4,598,610 answers coming from today's largest freely-available VQA benchmark <cite>[3]</cite>.
uses
d8a9f0578e389f8cbe153783dc47b3_12
The remaining 90,000 VQAs are about abstract scenes that were created with clipart and show 100 types of everyday objects often observed in real images <cite>[3]</cite>.
background
d8a9f0578e389f8cbe153783dc47b3_13
Towards this aim, visual questions were collected by asking three Amazon Mechanical Turk (AMT) crowd workers to look at a given image and generate a text-based question about it that would "stump a smart robot" <cite>[3]</cite>.
uses
d8a9f0578e389f8cbe153783dc47b3_14
Each answer was collected by showing a worker an image with an associated question and asking him/her to respond with "a brief phrase and not a complete sentence" <cite>[3]</cite>.
uses
d8a9f0578e389f8cbe153783dc47b3_15
We pre-process each answer by converting all letters to lower case, converting numbers to digits, and removing punctuation and articles (i.e., "a", "an", "the"), as was done in prior work <cite>[3]</cite>.
uses
d8a9f0578e389f8cbe153783dc47b3_16
In practice, prior work deems answers as 100% valid using blind trust (i.e., m = 1 person) [22] as well as more conservative answer validation schemes (i.e., m = 3 people) <cite>[3]</cite>.
background
d8a9f0578e389f8cbe153783dc47b3_17
We establish valid answers by tallying the number of times each unique answer is given and then only accepting answers observed from at least m people, where m is an application-specific parameter to set. In practice, prior work deems answers as 100% valid using blind trust (i.e., m = 1 person) [22] as well as more conservative answer validation schemes (i.e., m = 3 people) <cite>[3]</cite>.
background differences
d8a9f0578e389f8cbe153783dc47b3_18
We capitalize on today's largest visual question answering dataset <cite>[3]</cite> to evaluate our prediction system, which includes 369,861 visual questions about real images.
uses
d8a9f0578e389f8cbe153783dc47b3_19
Therefore, we employ as a baseline a related VQA algorithm [25], <cite>[3]</cite>, which produces for a given visual question an answer with a confidence score.
uses
d8a9f0578e389f8cbe153783dc47b3_20
Therefore, we employ as a baseline a related VQA algorithm [25], <cite>[3]</cite>, which produces for a given visual question an answer with a confidence score. However, it predicts the system's uncertainty in its own answer, whereas we are interested in the humans' collective disagreement on the answer.
uses differences
d8a9f0578e389f8cbe153783dc47b3_21
Both our proposed classification systems outperform the VQA Algorithm <cite>[3]</cite> baseline; e.g., Ours-RF yields a 12 percentage point improvement with respect to AP.
differences
d8a9f0578e389f8cbe153783dc47b3_22
Our overall finding that most of the predictive power stems from language-based features parallels feature analysis findings in the automated VQA literature <cite>[3]</cite>, [22].
similarities
d8a9f0578e389f8cbe153783dc47b3_23
Today's status quo is to either uniformly collect N answers for every visual question <cite>[3]</cite> or collect multiple answers where the number is determined by external crowdsourcing conditions [1].
background
d8a9f0578e389f8cbe153783dc47b3_24
Today's status quo is to either uniformly collect N answers for every visual question <cite>[3]</cite> or collect multiple answers where the number is determined by external crowdsourcing conditions [1]. Our system instead spends a human budget by predicting the number of answers to collect. Fig. 6: We propose a novel application of predicting the number of redundant answers to collect from the crowd per visual question to efficiently capture the diversity of all answers for all visual questions.
differences background
d8a9f0578e389f8cbe153783dc47b3_26
Baselines: We compare our approach to the following baselines: VQA Algorithm <cite>[3]</cite>:
uses
d8a9f0578e389f8cbe153783dc47b3_27
This predictor illustrates the best a user can achieve today with crowd-powered systems [1], [6] or with current dataset collection methods <cite>[3]</cite>, [5].
background
d8a9f0578e389f8cbe153783dc47b3_28
Figure 6b also illustrates the advantage of our system over a related VQA algorithm <cite>[3]</cite> for our novel application of cost-sensitive answer collection from a crowd.
differences
d8c3d04514c0867d78a7603e49ea9b_0
The rest of the corpus was automatically validated synthetic material using general data from Leipzig (Goldhahn et al., 2012). Engine customization: The data was cleaned using the Bicleaner tool <cite>(Sánchez-Cartagena et al., 2018)</cite>.
uses
d8c3d04514c0867d78a7603e49ea9b_1
Engine customization: The data was cleaned using the Bicleaner tool <cite>(Sánchez-Cartagena et al., 2018)</cite>.
uses
d8e3cdea7f61152ed37395c5f9393e_0
These methodologies evaluate coherence over the top-N topic words, where N is selected arbitrarily: for Chang et al. (2009), N = 5, whereas for <cite>Newman et al. (2010)</cite>, Aletras and Stevenson (2013) and Lau et al. (2014), N = 10.
background
d8e3cdea7f61152ed37395c5f9393e_1
We also test the PMI methodology <cite>(Newman et al., 2010)</cite> and make the same observation.
uses
d8e3cdea7f61152ed37395c5f9393e_2
Although there are existing datasets with human-annotated coherence scores <cite>(Newman et al., 2010</cite>; Aletras and Stevenson, 2013; Lau et al., 2014; Chang et al., 2009), these topics were annotated using a fixed cardinality setting (e.g. 5 or 10).
background
d8e3cdea7f61152ed37395c5f9393e_3
Although there are existing datasets with human-annotated coherence scores <cite>(Newman et al., 2010</cite>; Aletras and Stevenson, 2013; Lau et al., 2014; Chang et al., 2009), these topics were annotated using a fixed cardinality setting (e.g. 5 or 10). We thus develop a new dataset for this experiment.
differences
d8e3cdea7f61152ed37395c5f9393e_4
There are two primary approaches to assessing topic coherence: (1) via word intrusion (Chang et al., 2009); and (2) by directly measuring observed coherence <cite>(Newman et al., 2010</cite>; Lau et al., 2014).
background
d8e3cdea7f61152ed37395c5f9393e_5
There are two primary approaches to assessing topic coherence: (1) via word intrusion (Chang et al., 2009); and (2) by directly measuring observed coherence <cite>(Newman et al., 2010</cite>; Lau et al., 2014). As such, we use the second approach as the means for generating our gold standard, asking users to judge topic coherence directly over different topic cardinalities.
uses
d8e3cdea7f61152ed37395c5f9393e_6
To collect the coherence judgements, we used Amazon Mechanical Turk and asked Turkers to rate topics in terms of coherence using a 3-point ordinal scale, where 1 indicates incoherent and 3 very coherent <cite>(Newman et al., 2010)</cite>.
uses
d91913d0c8e669153e8d477e19aef2_0
The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (<cite>Conneau et al., 2018</cite>), which we examine here.
uses
d91913d0c8e669153e8d477e19aef2_1
Some researchers have even presented unsupervised methods that do not rely on any form of cross-lingual supervision at all (Barone, 2016; <cite>Conneau et al., 2018</cite>; Zhang et al., 2017).
background
d91913d0c8e669153e8d477e19aef2_2
Unsupervised approaches to learning cross-lingual word embeddings are based on the assumption that monolingual word embedding graphs are approximately isomorphic, that is, after removing a small set of vertices (words) (Mikolov et al., 2013b; Barone, 2016; Zhang et al., 2017; <cite>Conneau et al., 2018</cite>).
background
d91913d0c8e669153e8d477e19aef2_3
Contributions We focus on the recent state-of-the-art unsupervised model of <cite>Conneau et al. (2018)</cite>.
uses
d91913d0c8e669153e8d477e19aef2_4
Our results indicate this assumption is not true in general, and that approaches based on this assumption have important limitations. Contributions We focus on the recent state-of-the-art unsupervised model of <cite>Conneau et al. (2018)</cite>.
motivation
d91913d0c8e669153e8d477e19aef2_5
Our contributions are: (a) In §2, we show that the monolingual word embeddings used in <cite>Conneau et al. (2018)</cite> are not approximately isomorphic, using the VF2 algorithm (Cordella et al., 2001), and we therefore introduce a metric for quantifying the similarity of word embeddings, based on Laplacian eigenvalues. (b) In §3, we identify circumstances under which the unsupervised <cite>bilingual dictionary induction (BDI)</cite> algorithm proposed in <cite>Conneau et al. (2018)</cite> does not lead to good performance.
motivation uses
d91913d0c8e669153e8d477e19aef2_6
Our main finding is that the performance of unsupervised <cite>BDI</cite> depends heavily on all three factors: the language pair, the comparability of the monolingual corpora, and the parameters of the word embedding algorithms.
uses
d91913d0c8e669153e8d477e19aef2_7
As mentioned, recent work focused on unsupervised <cite>BDI</cite> assumes that monolingual word embedding spaces (or at least the subgraphs formed by the most frequent words) are approximately isomorphic.
background
d91913d0c8e669153e8d477e19aef2_8
As mentioned, recent work focused on unsupervised <cite>BDI</cite> assumes that monolingual word embedding spaces (or at least the subgraphs formed by the most frequent words) are approximately isomorphic. In this section, we show, by investigating the nearest neighbor graphs of word embedding spaces, that word embeddings are far from isomorphic.
differences motivation
d91913d0c8e669153e8d477e19aef2_9
In §4.7, we correlate our similarity metric with performance on unsupervised <cite>BDI</cite>.
uses
d91913d0c8e669153e8d477e19aef2_10
If we take the top k most frequent words in English, and the top k most frequent words in German, and build nearest neighbor graphs for English and German using the monolingual word embeddings used in <cite>Conneau et al. (2018)</cite> , the graphs are of course very different.
uses
d91913d0c8e669153e8d477e19aef2_11
Eigenvector similarity Since the nearest neighbor graphs are not isomorphic, even for frequent translation pairs in neighboring languages, we want to quantify the potential for unsupervised <cite>BDI</cite> using a metric that captures varying degrees of graph similarity.
motivation
d91913d0c8e669153e8d477e19aef2_12
We discuss the correlation between unsupervised <cite>BDI</cite> performance and approximate isospectrality or eigenvector similarity in §4.7.
uses
d91913d0c8e669153e8d477e19aef2_13
Unsupervised neural machine translation relies on <cite>BDI</cite> using cross-lingual embeddings (Lample et al., 2018a; Artetxe et al., 2018) , which in turn relies on the assumption that word embedding graphs are approximately isomorphic.
background
d91913d0c8e669153e8d477e19aef2_14
The work of <cite>Conneau et al. (2018)</cite> , which we focus on here, also makes several implicit assumptions that may or may not be necessary to achieve such isomorphism, and which may or may not scale to low-resource languages. <cite>The algorithms</cite> are not intended to be limited to learning scenarios where these assumptions hold, but since they do in the reported experiments, it is important to see to what extent these assumptions are necessary for <cite>the algorithms</cite> to produce useful embeddings or dictionaries.
uses
d91913d0c8e669153e8d477e19aef2_15
We focus on the work of <cite>Conneau et al. (2018)</cite> , who present a fully unsupervised approach to aligning monolingual word embeddings, induced using fastText (Bojanowski et al., 2017) . We describe the <cite>learning algorithm</cite> in §3.2.
uses
d91913d0c8e669153e8d477e19aef2_16
<cite>Conneau et al. (2018)</cite> consider a specific set of learning scenarios:
background
d91913d0c8e669153e8d477e19aef2_17
We evaluate <cite>Conneau et al. (2018)</cite> on (English to) Estonian (ET), Finnish (FI), Greek (EL), Hungarian (HU), Polish (PL), and Turkish (TR) in §4.2, to test whether the selection of languages in the original study introduces a bias.
motivation uses
d91913d0c8e669153e8d477e19aef2_18
We evaluate <cite>Conneau et al. (2018)</cite> on pairs of embeddings induced with different hyper-parameters in §4.4.
uses
d91913d0c8e669153e8d477e19aef2_19
We also investigate the sensitivity of unsupervised <cite>BDI</cite> to the dimensionality of the monolingual word embeddings in §4.5.
motivation
d91913d0c8e669153e8d477e19aef2_21
We now introduce the method of <cite>Conneau et al. (2018)</cite> .
uses
d91913d0c8e669153e8d477e19aef2_22
We now introduce the method of <cite>Conneau et al. (2018)</cite>. <cite>The approach</cite> builds on existing work on learning a mapping between monolingual word embeddings (Mikolov et al., 2013b; Xing et al., 2015) and consists of the following steps: 1) Monolingual word embeddings: An off-the-shelf word embedding algorithm (Bojanowski et al., 2017) is used to learn source and target language spaces X and Y.
background
d91913d0c8e669153e8d477e19aef2_23
In the experiments in <cite>Conneau et al. (2018)</cite> , as well as in ours, the iterative Procrustes refinement improves performance across the board.
similarities
d91913d0c8e669153e8d477e19aef2_24
Using this seed dictionary, we then run the refinement step using <cite>Procrustes analysis</cite> of <cite>Conneau et al. (2018)</cite> .
uses
d91913d0c8e669153e8d477e19aef2_25
In the following experiments, we investigate the robustness of unsupervised cross-lingual word embedding learning, varying the language pairs, monolingual corpora, hyper-parameters, etc., to obtain a better understanding of when and why unsupervised <cite>BDI</cite> works.
uses
d91913d0c8e669153e8d477e19aef2_26
We use bilingual dictionaries compiled by <cite>Conneau et al. (2018)</cite> as gold standard, and adopt <cite>their</cite> evaluation procedure: each test set in each language consists of 1500 gold translation pairs.
uses
d91913d0c8e669153e8d477e19aef2_27
Following a standard evaluation practice (Vulić and Moens, 2013; Mikolov et al., 2013b; <cite>Conneau et al., 2018</cite>), we report Precision at 1 scores (P@1): how many times one of the correct translations of a source word w is retrieved as the nearest neighbor of w in the target language.
uses
d91913d0c8e669153e8d477e19aef2_28
Our default experimental setup closely follows the setup of <cite>Conneau et al. (2018)</cite> .
uses
d91913d0c8e669153e8d477e19aef2_29
The third columns lists eigenvector similarities between 10 randomly sampled source language nearest neighbor subgraphs of 10 nodes and the subgraphs of their translations, all from the benchmark dictionaries in <cite>Conneau et al. (2018)</cite> .
uses
d91913d0c8e669153e8d477e19aef2_31
Agglutinative languages with mixed or double marking show more morphological variance with content words, and we speculate whether unsupervised <cite>BDI</cite> is challenged by this kind of morphological complexity.
motivation
d91913d0c8e669153e8d477e19aef2_32
To evaluate this, we experiment with Estonian and Finnish, and we include Greek, Hungarian, Polish, and Turkish to see how their approach fares on combinations of these two morphological traits. The results are quite dramatic. The approach achieves impressive performance for Spanish, one of the languages <cite>Conneau et al. (2018)</cite> include in <cite>their</cite> paper. For the languages we add here, performance is less impressive.
extends motivation
d91913d0c8e669153e8d477e19aef2_34
In general, unsupervised <cite>BDI</cite>, using the approach in <cite>Conneau et al. (2018)</cite>, seems challenged when pairing English with languages that are not isolating and do not have dependent marking.
uses
d91913d0c8e669153e8d477e19aef2_35
Monolingual word embeddings used in <cite>Conneau et al. (2018)</cite> are induced from Wikipedia, a near-parallel corpus.
background
d91913d0c8e669153e8d477e19aef2_36
In order to assess the sensitivity of unsupervised <cite>BDI</cite> to the comparability and domain similarity of the monolingual corpora, we replicate the experiments in <cite>Conneau et al. (2018)</cite> using combinations of word embeddings extracted from three different domains: 1) parliamentary proceedings from EuroParl.v7 (Koehn, 2005), 2) Wikipedia (Al-Rfou et al., 2013), and 3) the EMEA corpus in the medical domain (Tiedemann, 2009).
uses
d91913d0c8e669153e8d477e19aef2_37
For each pair of monolingual corpora, we compute their domain (dis)similarity by calculating the Jensen-Shannon divergence (El-Gamal, 1991), based on term distributions. We show the results of unsupervised <cite>BDI</cite> in Figures 2g-i.
uses
d91913d0c8e669153e8d477e19aef2_38
<cite>Conneau et al. (2018)</cite> use the same hyperparameters for inducing embeddings for all languages.
background
d91913d0c8e669153e8d477e19aef2_39
<cite>Conneau et al. (2018)</cite> use the same hyperparameters for inducing embeddings for all languages. This is of course always practically possible, but we are interested in seeing whether <cite>their</cite> approach works on pre-trained embeddings induced with possibly very different hyper-parameters.
motivation
d91913d0c8e669153e8d477e19aef2_41
<cite>BDI</cite> models are evaluated on a held-out set of query words.
background
d91913d0c8e669153e8d477e19aef2_42
<cite>BDI</cite> models are evaluated on a held-out set of query words. Here, we analyze the performance of the unsupervised approach across different parts-of-speech, frequency bins, and with respect to query words that have orthographically identical counterparts in the target language with the same or a different meaning.
motivation
d91913d0c8e669153e8d477e19aef2_43
Finally, in order to get a better understanding of the limitations of unsupervised <cite>BDI</cite>, we correlate the graph similarity metric described in §2 (right column of Table 2) with performance across languages (left column).
uses
d91913d0c8e669153e8d477e19aef2_44
Since we already established that the monolingual word embeddings are far from isomorphic, in contrast with the intuitions motivating previous work (Mikolov et al., 2013b; Barone, 2016; Zhang et al., 2017; <cite>Conneau et al., 2018</cite>), we would like to establish another diagnostic metric that identifies embedding spaces for which the approach in <cite>Conneau et al. (2018)</cite> is likely to work.
motivation differences
d91913d0c8e669153e8d477e19aef2_46
Cross-lingual word embeddings: Cross-lingual word embedding models typically, unlike <cite>Conneau et al. (2018)</cite>, require aligned words, sentences, or documents (Levy et al., 2017).
background
d91913d0c8e669153e8d477e19aef2_47
Zhang et al. (2017), in addition, use different forms of regularization for convergence, while <cite>Conneau et al. (2018)</cite> use additional steps to refine the induced embedding space.
background
d91913d0c8e669153e8d477e19aef2_48
All unsupervised NMT methods critically rely on accurate unsupervised <cite>BDI</cite> and back-translation.
background
d91913d0c8e669153e8d477e19aef2_49
We investigated when unsupervised <cite>BDI</cite> (<cite>Conneau et al., 2018</cite>) is possible and found that differences in morphology, domains or word embedding algorithms may challenge this approach.
uses
d91913d0c8e669153e8d477e19aef2_50
Further, we found eigenvector similarity of sampled nearest neighbor subgraphs to be predictive of unsupervised <cite>BDI</cite> performance. We hope that this work will guide further developments in this new and exciting field.
future_work
d9877fc29c2e4f20805076392a70d0_0
These two lines of research converge in prior work to show, e.g., the increasing association of the lexical item 'gay' with the meaning dimension of homosexuality (Kim et al., 2014; <cite>Kulkarni et al., 2015)</cite>.
background
d9877fc29c2e4f20805076392a70d0_1
It is thus a continuation of prior work, in which we investigated historical English texts only (Hellrich and Hahn, 2016a), and also influenced by the design decisions of Kim et al. (2014) and <cite>Kulkarni et al. (2015)</cite>, which were the first to use word embeddings in diachronic studies.
uses background
d9877fc29c2e4f20805076392a70d0_2
These models must either be trained in a continuous manner where the model for each time span is initialized with its predecessor (Kim et al., 2014; Hellrich and Hahn, 2016b), or a mapping between models for different points in time must be calculated <cite>(Kulkarni et al., 2015</cite>; Hamilton et al., 2016).
background
d9877fc29c2e4f20805076392a70d0_3
These models must either be trained in a continuous manner where the model for each time span is initialized with its predecessor (Kim et al., 2014; Hellrich and Hahn, 2016b), or a mapping between models for different points in time must be calculated <cite>(Kulkarni et al., 2015</cite>; Hamilton et al., 2016). The first approach cannot be performed in parallel and is thus rather time-consuming, if texts are not subsampled.
motivation
d9877fc29c2e4f20805076392a70d0_4
Following <cite>Kulkarni et al. (2015)</cite>, we trained our models on all 5-grams occurring during five consecutive years for the two time spans, 1900-1904 and 2005-2009; the number of 5-grams for each time span is listed in Table 1.
uses
d9877fc29c2e4f20805076392a70d0_5
The averaged cosine values between word embeddings before and after an epoch are used as a convergence measure c (Kim et al., 2014; <cite>Kulkarni et al., 2015)</cite>.
uses
d9877fc29c2e4f20805076392a70d0_6
The convergence criterion proposed by <cite>Kulkarni et al. (2015)</cite>, i.e., c = 0.9999, was never reached (this observation might be explained by Kulkarni et al.'s decision not to reset the learning rate for each training epoch, as was done by us and Kim et al. (2014)).
differences
d9877fc29c2e4f20805076392a70d0_7
SVD PPMI, which are conceptually not bothered by the reliability problems we discussed here, were not a good fit for the hyperparameters we adopted from <cite>Kulkarni et al. (2015)</cite>.
uses differences
d987872352e4602fd48936cf2fdab8_0
It is becoming more challenging since recent conversational interaction systems such as Amazon Alexa, Google Assistant, and Microsoft Cortana support thousands of domains developed by external developers [4, 3, <cite>5]</cite>.
background
d987872352e4602fd48936cf2fdab8_1
Evaluating on an annotated dataset from the user logs of a large-scale conversational interaction system, we show that the proposed approach significantly improves the domain classification, especially when hypothesis reranking is used [14, <cite>5]</cite>.
differences
d987872352e4602fd48936cf2fdab8_2
We take a hypothesis reranking approach, which is widely used in large-scale domain classification for higher scalability [14, <cite>5]</cite>.
similarities uses
d987872352e4602fd48936cf2fdab8_3
Figure 2 shows the overall architecture of the hypothesis reranker that is similar to <cite>[5]</cite>.
similarities
d987872352e4602fd48936cf2fdab8_4
We set k=3 in our experiments following <cite>[5]</cite>.
similarities
d987872352e4602fd48936cf2fdab8_5
We utilize utterances with explicit invocation patterns from an intelligent conversational system for the model training similarly to <cite>[5]</cite> and [18].
uses