id
stringlengths
32
33
x
stringlengths
41
1.75k
y
stringlengths
4
39
d9c0e641f8ceb61e5d6e416bfc6492_0
It has been done for constituency parsing for example by Collins (1999) but also for dependency parsing for example by <cite>Nilsson et al. (2007)</cite> .
background
d9c0e641f8ceb61e5d6e416bfc6492_1
<cite>Nilsson et al. (2007)</cite> modified the representation of several constructions in several languages and obtained a consistent improvement in parsing accuracy.
background
d9c0e641f8ceb61e5d6e416bfc6492_2
In this paper, we will investigate the case of the verb group construction and attempt to reproduce the study by <cite>Nilsson et al. (2007)</cite> on UD treebanks to find out whether or not the alternative representation is useful for parsing with UD.
uses
d9c0e641f8ceb61e5d6e416bfc6492_3
<cite>Nilsson et al. (2007)</cite> have shown that these same modifications as well as the modification of nonprojective structures helps parsing in four languages.
background
d9c0e641f8ceb61e5d6e416bfc6492_5
<cite>Nilsson et al. (2007)</cite> show that making the auxiliary the head of the dependency as in Figure 2 is useful for parsing Czech and Slovenian.
background
d9c0e641f8ceb61e5d6e416bfc6492_6
We will follow the methodology from <cite>Nilsson et al. (2007)</cite> , that is, to transform, parse and then detransform the data so as to compare the original and the transformed model on the original gold standard.
uses
d9c0e641f8ceb61e5d6e416bfc6492_7
For comparability with the study in <cite>Nilsson et al. (2007)</cite> , and because we used a slightly modified version of their algorithm, we also tested the approach on the versions of the Czech and Slovenian treebanks that they worked on, respectively version 1.0 of the PDT (Hajic et al., 2001 ) and the 2006 version of SDT (Deroski et al., 2006) .
uses differences
d9c0e641f8ceb61e5d6e416bfc6492_8
In this paper, we have attempted to reproduce a study by <cite>Nilsson et al. (2007)</cite> that has shown that making auxiliaries heads in verb groups improves parsing but failed to show that those results port to parsing with Universal Dependencies.
uses background
da2429450c8d1f1f3e72383c86ec73_0
Our approach was to build up on the system of the last year's winning approach by NRC Canada 2013 <cite>(Mohammad et al., 2013)</cite> , with some modifications and additions of features, and additional sentiment lexicons. Furthermore, we used a sparse ( 1 -regularized) SVM, instead of the more commonly used 2 -regularization, resulting in a very sparse linear classifier.
extends
da2429450c8d1f1f3e72383c86ec73_1
Our approach was to build up on the system of the last year's winning approach by NRC Canada 2013 <cite>(Mohammad et al., 2013)</cite> , with some modifications and additions of features, and additional sentiment lexicons.
extends uses
da2429450c8d1f1f3e72383c86ec73_2
Compared to the previous NRC Canada 2013 approach <cite>(Mohammad et al., 2013)</cite> , our main changes are the following three: First we use sparse linear classifiers instead of classical dense ones.
extends
da2429450c8d1f1f3e72383c86ec73_3
We tried to reproduce the same classifier as in <cite>(Mohammad et al., 2013)</cite> as a baseline for comparison.
uses
da2429450c8d1f1f3e72383c86ec73_4
Unfortunately our replica system of<cite> Mohammad et al. (2013)</cite> only achieved an F1-score of 63.25 on the Twitter-2013 test set, while their score in the 2013 competition on the same test set was 69.02, nearly 6 points higher in F1.
differences
da2429450c8d1f1f3e72383c86ec73_5
For each lexicon, the 4 scores were the same as in <cite>(Mohammad et al., 2013)</cite> , i.e. per tweet, we use the number of tokens appearing in the lexicon, the sum and the max of the scores, and the last non-zero score.
similarities
da2429450c8d1f1f3e72383c86ec73_6
All text was transformed to lowercase (except for those features in <cite>(Mohammad et al., 2013)</cite> which use case information).
uses
da2429450c8d1f1f3e72383c86ec73_7
We used the same set of lexicons as in <cite>(Mohammad et al., 2013)</cite> , with one addition:
extends uses
da2429450c8d1f1f3e72383c86ec73_8
To construct the lexicon, we extracted the POS n-grams (as we described in Section 3.1.1 above) from all texts. In comparison,<cite> Mohammad et al. (2013)</cite> used noncontiguous n-grams (unigram-unigram, unigrambigram, and bigram-bigram pairs) . We only used POS n-grams with 2 tokens kept original, and the remaining ones replaced by their POS tag, with n ranging from 3 to 6.
differences
da2429450c8d1f1f3e72383c86ec73_9
While in <cite>(Mohammad et al., 2013)</cite> , the score for each n-gram was computed using point-wise mutual information (PMI) with the labels, we trained a linear classifier on the same labels instead.
differences
da2429450c8d1f1f3e72383c86ec73_10
We used the same 3 existing sentiment lexicons as in <cite>(Mohammad et al., 2013)</cite> .
similarities uses
da2429450c8d1f1f3e72383c86ec73_11
The NRC hashtag sentiment lexicon was generated automatically from a set of 775k tweets containing a hashtag of a small predefined list of positive and negative hashtags <cite>(Mohammad et al., 2013)</cite> .
uses
da2429450c8d1f1f3e72383c86ec73_12
Our system is built up on the approach of NRC Canada <cite>(Mohammad et al., 2013)</cite> , with several modifications and extensions (e.g. sparse linear classifiers,
extends uses
db1fd6f10a3ee22e22093d50395217_0
The other attempt of same 6 way PIBOSO classification on the same dataset is presented by <cite>(Verbeke et al., 2012)</cite> .
background
db1fd6f10a3ee22e22093d50395217_1
The other attempt of same 6 way PIBOSO classification on the same dataset is presented by <cite>(Verbeke et al., 2012)</cite> . Unlike us and Kim et al. (2011) they have used SVM-HMM 2 for learning.
differences
db1fd6f10a3ee22e22093d50395217_2
Please note that the way we categorised an abstract as structured or unstructured might be a bit different from previous approaches by Kim et al. (2011) and<cite> Verbeke et al. 2012</cite> .
differences
db1fd6f10a3ee22e22093d50395217_3
Using sentence ordering labels for unstructured abstracts is the main difference compared to earlier methods (Kim et al., 2011;<cite> Verbeke et al., 2012)</cite> .
differences
db1fd6f10a3ee22e22093d50395217_4
However, we compare our results with (Kim et al., 2011) and <cite>(Verbeke et al., 2012)</cite> using the microaveraged F-scores as in Table 3 .
uses
db1fd6f10a3ee22e22093d50395217_5
However, we compare our results with (Kim et al., 2011) and <cite>(Verbeke et al., 2012)</cite> using the microaveraged F-scores as in Table 3 . Our system outperformed previous works in unstructured abstracts (22% higher than state-of-the-art).
differences
db1fd6f10a3ee22e22093d50395217_6
Our system outperformed earlier existing state-of-art systems (Kim et al., 2011;<cite> Verbeke et al., 2012)</cite> .
differences
db6794da83b12336ab946e5777346d_0
The title of our talk-an implicit reference to the English cliché like a spider weaving her webintends to attract one's attention to the metaphor that can be drawn between the dance of a spider weaving her web and a new lexicographic gesture that is gradually emerging from the work on Net-like lexical resources (Fellbaum, 1998; Baker et al., 2003;<cite> Gader et al., 2012)</cite> .
uses
db6794da83b12336ab946e5777346d_1
The title of our talk-an implicit reference to the English cliché like a spider weaving her webintends to attract one's attention to the metaphor that can be drawn between the dance of a spider weaving her web and a new lexicographic gesture that is gradually emerging from the work on Net-like lexical resources (Fellbaum, 1998; Baker et al., 2003;<cite> Gader et al., 2012)</cite> .
uses
db6794da83b12336ab946e5777346d_2
Work performed on the French Lexical Network <cite>(Gader et al., 2012</cite> ) will serve to demonstrate how the lexicographic process can be made closer to actual navigation through lexical knowledge by the speaker.
future_work
db6794da83b12336ab946e5777346d_3
Computational aspects of the work on the French Lexical Network are dealt with in<cite> (Gader et al., 2012)</cite> .
background
dc6d4eb1870ed5b0bbcbbf6686e5be_0
Understanding the temporal information in natural language text is an important NLP task (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013; Minard et al., 2015; Bethard et al., 2016 Bethard et al., , 2017 . A crucial component is temporal relation (TempRel; e.g., before or after) extraction (Mani et al., 2006; Bethard et al., 2007; Do et al., 2012; Mirza and Tonelli, 2016; <cite>Ning et al., 2017</cite> Ning et al., , 2018a .
background
dc6d4eb1870ed5b0bbcbbf6686e5be_1
Annotators in this setup usually focus only on salient relations but overlook some others. It has been reported that many event pairs in TimeBank should have been annotated with a specific TempRel but the annotators failed to look at them (Chambers, 2013;<cite> Ning et al., 2017)</cite> .
motivation
dc6d4eb1870ed5b0bbcbbf6686e5be_2
It has been reported that many event pairs in TimeBank should have been annotated with a specific TempRel but the annotators failed to look at them (Chambers, 2013;<cite> Ning et al., 2017)</cite> .
background
dc6d4eb1870ed5b0bbcbbf6686e5be_3
Two recent TempRel extraction systems (Mirza and Tonelli, 2016; <cite>Ning et al., 2017</cite> ) also reported their performances on TB-Dense (F) and on TempEval-3 (P) separately. However, there are no existing systems that jointly train on both.
motivation
dc6d4eb1870ed5b0bbcbbf6686e5be_4
Two recent TempRel extraction systems (Mirza and Tonelli, 2016; <cite>Ning et al., 2017</cite> ) also reported their performances on TB-Dense (F) and on TempEval-3 (P) separately.
background
dc6d4eb1870ed5b0bbcbbf6686e5be_5
Note that Algorithm 1 is only for the learning step of TempRel extraction; as for the inference step of this task, we consistently adopt the standard method by solving Eq. (1), as was done by (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012;<cite> Ning et al., 2017)</cite> .
uses
dc6d4eb1870ed5b0bbcbbf6686e5be_6
Then the ILP objective is formulated aŝ where {r m 3 } is selected based on the general transitivity proposed in<cite> (Ning et al., 2017)</cite> .
uses
dc6d4eb1870ed5b0bbcbbf6686e5be_7
A standard way to perform global inference is to formulate it as an Integer Linear Programming (ILP) problem (Roth and Yih, 2004 ) and enforce transitivity rules as constraints. Then the ILP objective is formulated aŝ where {r m 3 } is selected based on the general transitivity proposed in<cite> (Ning et al., 2017)</cite> .
background
dc6d4eb1870ed5b0bbcbbf6686e5be_8
We believe that global inference makes better use of the information provided by P; in fact, as we show in Sec. 4, it does perform better than local inference. A standard way to perform global inference is to formulate it as an Integer Linear Programming (ILP) problem (Roth and Yih, 2004 ) and enforce transitivity rules as constraints. Then the ILP objective is formulated aŝ where {r m 3 } is selected based on the general transitivity proposed in<cite> (Ning et al., 2017)</cite> .
uses
dc6d4eb1870ed5b0bbcbbf6686e5be_9
Results are shown in Table 2 , where all systems were compared in terms of their performances on "same sentence" edges (both nodes are from the same sentence), "nearby sentence" edges, all edges, and the temporal awareness metric used by the TempEval3 workshop. The first part of Table 2 (Systems 1-5) refers to the baseline method proposed at the beginning of Sec. 3, i.e., simply treating P as F and training on their union. The second part (Systems 6-7) serves as an ablation study showing the effect of bootstrapping only. While System 7 can be regarded as a reproduction of<cite> Ning et al. (2017)</cite> , the original paper of<cite> Ning et al. (2017)</cite> achieved an overall score of P=43.0, R=46.4, F=44.7 and an awareness score of P=42.6, R=44.0, and F=43.3, and the proposed System 9 is also better than<cite> Ning et al. (2017)</cite> on all metrics.
differences
dc6d4eb1870ed5b0bbcbbf6686e5be_10
While incorporating transitivity constraints in inference is widely used,<cite> Ning et al. (2017)</cite> proposed to incorporate these constraints in the learning phase as well.
background
dc6d4eb1870ed5b0bbcbbf6686e5be_11
One of the algorithms proposed in<cite> Ning et al. (2017)</cite> is based on Chang et al. (2012) 's constraint-driven learning (CoDL), which is the same as our intermediate System 7 in Table 2 ; the fact that System 7 is better than System 1 can thus be considered as a reproduction of<cite> Ning et al. (2017)</cite> .
uses
dc6d4eb1870ed5b0bbcbbf6686e5be_12
Despite the technical similarity, this work is motivated differently and is set to achieve a different goal:<cite> Ning et al. (2017)</cite> tried to enforce the transitivity structure, while the current work attempts to use imperfect signals (e.g., partially annotated) taken from additional data, and learn in the incidental supervision framework.
similarities differences
dc6d4eb1870ed5b0bbcbbf6686e5be_13
System 7 can also be considered as a reproduction of<cite> Ning et al. (2017)</cite> (see the discussion in Sec. 5 for details).
uses
dcc866dcfb5f9233170d633d052e8b_0
The use of various synchronous grammar based formalisms has been a trend for statistical machine translation (SMT) (Wu, 1997; Eisner, 2003; Galley et al., 2006; <cite>Chiang, 2007</cite>; Zhang et al., 2008) .
background
dcc866dcfb5f9233170d633d052e8b_1
For instance, in our investigations for SMT (Section 3.1), the Formally SCFG based hierarchical phrase-based model (hereinafter FSCFG)<cite> (Chiang, 2007)</cite> has a better generalization capability than a Linguistically motivated STSSG based model (hereinafter LSTSSG) (Zhang et al., 2008) , with 5% rules of the former matched by NIST05 test set while only 3.5% rules of the latter matched by the same test set.
uses background
dcc866dcfb5f9233170d633d052e8b_2
The rule extraction in current implementation can be considered as a combination of the ones in<cite> (Chiang, 2007)</cite> and (Zhang et al., 2008) .
uses
dcc866dcfb5f9233170d633d052e8b_3
For example,<cite> (Chiang, 2007)</cite> adopts a CKY style span-based decoding while (Liu et al., 2006 ) applies a linguistically syntax node based bottom-up decoding, which are difficult to integrate.
background
dcc866dcfb5f9233170d633d052e8b_4
FSCFG An in-house implementation of purely formally SCFG based model similar to<cite> (Chiang, 2007)</cite> .
similarities
ddd23a034c366b62b53d15128edd45_0
Our contributions are summarized as follows: (1) we extend a probablistic model used in the <cite>previous work</cite> which concurrently performs word reordering and dependency parsing; (2) we conducted an evaluation experiment using our semi-automatically constructed evaluation data so that sentences in the data are more likely to be spontaneously written by natives than the automatically constructed evaluation data in the <cite>previous work</cite>.
extends
ddd23a034c366b62b53d15128edd45_1
To solve the problem, we previously proposed a method for concurrently performing word reordering and dependency parsing and confirmed the effectiveness of their proposed method using evaluation data created by randomly changing the word order in newspaper article sentences <cite>(Yoshida et al., 2014)</cite> .
background
ddd23a034c366b62b53d15128edd45_2
This paper proposes a new method on Japanese word reordering based on concurrent execution with dependency parsing by extending the probablistic model proposed by <cite>Yoshida et al. (2014)</cite> , and describes an evaluation experiment using our 1 Bunsetsu is a linguistic unit in Japanese that roughly corresponds to a basic phrase in English.
extends
ddd23a034c366b62b53d15128edd45_3
We use the same search algorithm as one proposed by <cite>Yoshida et al. (2014)</cite> , which can efficiently find the approximate solution from a huge number of candidates of the pattern by extending CYK algorithm used in conventional dependency parsing.
uses
ddd23a034c366b62b53d15128edd45_4
In this paper, we refine the probabilistic model proposed by <cite>Yoshida et al. (2014)</cite> to improve the accuracy.
extends
ddd23a034c366b62b53d15128edd45_5
The structure S is defined as a tuple S = ⟨O, D⟩ where In the probablistic model proposed by <cite>Yoshida et al. (2014)</cite> , P (S|B) was calculated as follows:
uses
ddd23a034c366b62b53d15128edd45_6
We extend <cite>the above model</cite> and calculate P (S|B) as follows:
extends
ddd23a034c366b62b53d15128edd45_7
Therefore, we mix Formulas (3) and (4) by adjusting the weight α depending on the adequacy of word order in an input sentence, instead of using the constant 0.5 in the previous model proposed by <cite>Yoshida et al. (2014)</cite> .
differences
ddd23a034c366b62b53d15128edd45_8
Each factor in Formula (2) is estimated by the maximum entropy method in the same approximation procedure as that of <cite>Yoshida et al. (2014)</cite> .
uses
ddd23a034c366b62b53d15128edd45_9
Therefore, <cite>our previous work</cite> (<cite>Yoshida et al., 2014</cite>) artificially generated sentences which were not easy to read, by just automatically changing the word order of newspaper article sentences in Kyoto Text Corpus 3 based on the dependency structure.
background
ddd23a034c366b62b53d15128edd45_10
Therefore, <cite>our previous work</cite> (<cite>Yoshida et al., 2014</cite>) artificially generated sentences which were not easy to read, by just automatically changing the word order of newspaper article sentences in Kyoto Text Corpus 3 based on the dependency structure. However, just automatically changing the word order may create sentences which are unlikely to be written by a native.
motivation
ddd23a034c366b62b53d15128edd45_11
That is, if a subject judges that a sentence generated by automatically changing the word order in the same way as <cite>the previous work</cite> (<cite>Yoshida et al., 2014</cite> ) may have spontaneously written by a native.
background
ddd23a034c366b62b53d15128edd45_12
We compared our method to <cite>Yoshida</cite>'s method (<cite>Yoshida et al., 2014</cite>) and two conventional sequential methods.
uses
ddd23a034c366b62b53d15128edd45_13
All of the methods used the same training features as those described in <cite>Yoshida et al. (2014)</cite> .
uses
ddd23a034c366b62b53d15128edd45_14
The dependency accuracy of our method was significantly lower than that of the two sequential methods, and was higher than that of <cite>Yoshida's method</cite> although there was no significant difference.
differences
ddd23a034c366b62b53d15128edd45_15
On the other hand, the sentence accuracy of our method was highest among <cite>all the methods</cite> although there were no significant differences in them.
differences
ddd23a034c366b62b53d15128edd45_16
As a result of analysis, especially, our method and <cite>Yoshida's method</cite> tended to improve the sentence accuracy very well in case of short sentences.
similarities
ddd23a034c366b62b53d15128edd45_17
Especially, we extended the probablistic model proposed by <cite>Yoshida et al. (2014)</cite> to deal with sentences spontaneously written by a native.
extends
debdaa202ebd856991e09e5e00a12b_0
This architecture has been empirically shown to perform well at Named Entity Recognition (NER) tasks <cite>(Lample et al., 2016)</cite> .
background
debdaa202ebd856991e09e5e00a12b_1
In the Named Entity Recognition task, we utilized a deep learning approach, given the demonstrated effectiveness of such an architecture in this domain <cite>(Lample et al., 2016)</cite> .
uses motivation
e0b72115e1905226d22876e72aa304_0
The basic structure of our CKB completion model is similar to that of<cite> Li et al. (2016b)</cite> .
similarities
e0b72115e1905226d22876e72aa304_1
The basic structure of our CKB completion model is similar to that of<cite> Li et al. (2016b)</cite> . The main difference between ours and theirs is that our method learns the CKB completion and generation tasks jointly.
differences
e0b72115e1905226d22876e72aa304_2
Previous model<cite> Li et al. (2016b)</cite> defined a CKB completion model that estimates a confidence score of an arbitrary triple ⟨t 1 , r, t 2 ⟩. They used a simple neural network model to formulate score(t 1 , r, t 2 ) ∈ R.
background
e0b72115e1905226d22876e72aa304_3
Our model Our CKB completion model is based on Li et al.'s (2016b) .
extends
e0b72115e1905226d22876e72aa304_4
Li et al. (2016b) formulate the phrase embedding by using attention pooling of LSTM and a bilinear function.
background
e0b72115e1905226d22876e72aa304_5
For the experiments with English, we used the ConceptNet 100K data released by<cite> Li et al. (2016b)</cite> 1 .
uses
e0b72115e1905226d22876e72aa304_7
CKB completion As baselines, we used the DNN AVG and DNN LSTM models<cite> (Li et al., 2016b</cite> ) that were described in Section 3.1.
uses
e0b72115e1905226d22876e72aa304_8
The threshold was determined by using the validation1 data to maximize the accuracy of binary classification for each method, as in<cite> (Li et al., 2016b)</cite> .
uses similarities
e0b72115e1905226d22876e72aa304_9
The bottom two lines show the best performances reported in<cite> (Li et al., 2016b)</cite> . The results indicate that our method improved the accuracy of CKB completion compared with the previous method.
differences
e0b72115e1905226d22876e72aa304_11
In Wiki gen, we used triples extracted by using the POS tag sequence pattern for each relation according to<cite> Li et al. (2016b)</cite> and scored each triple with CKB completion scores.
uses
e0b72115e1905226d22876e72aa304_12
This tendency is similar to the results reported in Li et al.<cite> (Li et al., 2016b)</cite> .
similarities
e0b72115e1905226d22876e72aa304_13
In particular,<cite> Li et al. (2016b)</cite> and Socher et al. (2013) proposed a simple KBC model for CKB.
background
e0b72115e1905226d22876e72aa304_14
The formulations of CKB completion in the two studies are the same, and we evaluated<cite> Li et al. (2016b)</cite> 's method as a baseline.
uses
e0e21b4e473ad6fde28378b2dc4f34_0
Methods to learn sparse word-based translation correspondences from supervised ranking signals have been presented by Bai et al. (2010) and<cite> Sokolov et al. (2013)</cite> .
background
e0e21b4e473ad6fde28378b2dc4f34_1
Our approach extends the work of<cite> Sokolov et al. (2013)</cite> by presenting an alternative learningto-rank approach that can be used for supervised model combination to integrate dense and sparse features, and by evaluating both approaches on cross-lingual retrieval for patents and Wikipedia.
extends differences
e0e21b4e473ad6fde28378b2dc4f34_2
The algorithm of<cite> Sokolov et al. (2013)</cite> combines batch boosting with bagging over a number of independently drawn bootstrap data samples from R. In each step, the single word pair feature is selected that provides the largest decrease of L exp .
background
e0e21b4e473ad6fde28378b2dc4f34_3
The baseline consensus-based voting Borda Count procedure endows each voter with a fixed amount of voting points which he is free to distribute among the scored documents (Aslam and Montague, 2001; <cite>Sokolov et al., 2013)</cite> .
background
e0e21b4e473ad6fde28378b2dc4f34_4
We use BoostCLIR 1 , a Japanese-English (JP-EN) corpus of patent abstracts from the MAREC and NTCIR data<cite> (Sokolov et al., 2013)</cite> .
uses
e0e21b4e473ad6fde28378b2dc4f34_5
A JP-EN system was trained on data described and preprocessed by<cite> Sokolov et al. (2013)</cite> , consisting of 1.8M parallel sentences from the NTCIR-7 JP-EN PatentMT subtask (Fujii et al., 2008) and 2k parallel sentences for parameter development from the NTCIR-8 test collection.
uses
e0e21b4e473ad6fde28378b2dc4f34_6
PSQ on patents reuses settings found by<cite> Sokolov et al. (2013)</cite> ; settings for Wikipedia were adjusted on its dev set (n=1000, λ=0.4, L=0, C=1).
uses
e177758a227506bbf9de48f8f35715_0
The second step is to perform dictionary induction by learning a linear projection, in the form of a matrix, between language vector spaces <cite>(Mikolov et al., 2013b</cite>; Lazaridou et al., 2015) .
background
e177758a227506bbf9de48f8f35715_1
It is one of the most competitive methods for generating word vector representations, as demonstrated by results on a various semantic tasks (Baroni et al., 2014;<cite> Mikolov et al., 2013b)</cite> .
background
e177758a227506bbf9de48f8f35715_2
To induce a bilingual dictionary for a pair of languages, we use the projection matrix approach <cite>(Mikolov et al., 2013b</cite>; Lazaridou et al., 2015) .
uses
e264c45391853fb008c838aa7ccca8_0
This paper proposes an extension of Sumida and Torisawa's method of acquiring hyponymy relations from hierachical layouts in Wikipedia<cite> (Sumida and Torisawa, 2008)</cite> .
extends
e264c45391853fb008c838aa7ccca8_1
We extract hyponymy relation candidates (HRCs) from the hierachical layouts in Wikipedia by regarding all subordinate items of an item x in the hierachical layouts as x's hyponym candidates, while <cite>Sumida and Torisawa (2008)</cite> extracted only direct subordinate items of an item x as x's hyponym candidates.
differences
e264c45391853fb008c838aa7ccca8_2
Many NLP researchers have attempted to automatically acquire hyponymy relations from texts (Hearst, 1992; Caraballo, 1999; Mann, 2002; Fleischman et al., 2003; Morin and Jacquemin, 2004; Shinzato and Torisawa, 2004; Etzioni et al., 2005; Pantel and Pennacchiotti, 2006; Sumida et al., 2006;<cite> Sumida and Torisawa, 2008)</cite> .
background
e264c45391853fb008c838aa7ccca8_3
On the other hand, <cite>Sumida and Torisawa (2008)</cite> have shown that you could easily obtain numerous hyponymy relations from Wikipedia; in particular, they have acquired more than 0.63 million hyponymy relations only from hierarchical layouts in the 2.2GB Japanese version of Wikipedia (e.g., Figure 1 shows a hierarchical structure of a Wikipedia article shown in Figure 2) .
background
e264c45391853fb008c838aa7ccca8_4
Although the above studies extracted hyponymy relations from the English version of Wikipedia, <cite>Sumida and Torisawa (2008)</cite> extracted hyponymy relations from definition sentences, category labels, and hierarchical structures in Wikipedia articles.
background
e264c45391853fb008c838aa7ccca8_5
Although the above studies extracted hyponymy relations from the English version of Wikipedia, <cite>Sumida and Torisawa (2008)</cite> extracted hyponymy relations from definition sentences, category labels, and hierarchical structures in Wikipedia articles. We thus focus on the hierarchical structures to acquire more hyponymy relations.
uses