paper_id
stringlengths 9
13
| conference_year_track
stringclasses 139
values | reviewer_id
stringlengths 13
13
⌀ | messages
listlengths 5
21
⌀ |
---|---|---|---|
0gLzHrE_t3z | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```",
"role": "user"
},
{
"content": "Title: Excellent description of a critical COVID-19 dataset, some questions remaining\nReview Content: This manuscript describes an exemplary effort to address COVID-19 by bringing together much of the relevant literature into one corpus, CORD-19, and increasing its accessibility by providing a harmonized and standardized format convenient for use by automated tools. CORD-19 has been - and is likely to continue being - a critical resource for the scientific community to address COVID-19, and this manuscript not only reflects that importance, but also gives insight into the approach used, the design decisions taken, challenges encountered, use cases, shared tasks, and various discussion points. The manuscript is well-organized and readable, and (overall) an excellent case study in corpus creation. This manuscript is not only important for understanding the CORD-19 corpus and its enabling effect on current COVID-19 efforts, but is possibly also a historically important example of joint scientific efforts to address COVID-19.\n\nDespite the critical importance of this dataset, there are several questions left unanswered by this manuscript, and it would be unfortunate to not address these before publication.\n\nIt would be useful to have a very clear statement of the purpose for CORD-19. The inclusion of SARS and MERS makes intuitive sense, but it is less clear why other coronaviruses that infect humans (e.g. HCoV-OC43) are not explicitly included - I am not a virologist, but neither will be most of the audience for this manuscript. While many of the articles that discuss these lesser known cornaviruses would be included anyway because they would also mention \"coronavirus\", this is not guaranteed. \n\nWhile it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query. The number of name variants in the literature for COVID-19 or SARS-CoV-2 is rather large, and not all of these documents will include other terms that will match, such as \"coronavirus\". For example, how would a document that mentions \"SARS CoV-2\" but none of the query terms listed be handled? This is not a theoretical case: the title and abstract for PMID 32584542 have this issue, and I was unable to locate this document in CORD-19. In addition to minor variations such as this, there are many examples of significant variations such as \"HCoV-19\", \"nCoV-19\" or even \"COIVD\". Are these cases worth considering? If not, can we quantify how much is lost? And if we can't quantify it, this is a limitation.\n\nHow is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the version in source B does not return it (e.g. the source only has title & abstract, and they do not match). From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version?\n\nThere are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193), are there others?\n\nThere are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al. \"PMC text mining subset in BioC: about three million full-text articles and growing.\" Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the \"Call to Action\".\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Thank you - new revision and responses\nThank you for the detailed review. We have uploaded a new revision which addresses most of the comments. More detailed explanations are provided below.\n\n> It would be useful to have a very clear statement of the purpose for CORD-19...\n\n> While it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query...\n\nThe reviewer brings up a good point. For any such text corpus, inclusion/exclusion is defined by the curators of the corpus, dramatically affecting how the corpus can be used. We adopted the initial query from Elsevier after consultation with other groups, including PMC. The query has evolved slightly since the beginning and may continue to evolve as needed. Although we don't explicitly include other coronaviruses in the search query, the current query should be catching most if not all of these papers (we match the search terms over title, abstract, *and* body text). PMC also performs some query expansion, which would increase coverage and also handle some of the lexical variations and misspellings (though not all) brought up by the reviewer.\n\nAs for the other example brought up by the reviewer (PubMed 32584542); this paper is included in CORD-19 as of the latest versions (I checked 2020-07-04). The paper was published on 6/25, and is indexed in Medline. Papers outside of PMC take a longer period of time to make it into CORD-19 because of differences in the ingestion pipeline. PMC papers are queried directly from PMC every day, while papers from other sources are ingested by Semantic Scholar and have to go through our entire paper processing pipeline (which sometimes takes days to a week) before they become available in CORD-19. I hope this addresses some of the reviewer's concerns.\n\n> How is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the version in source B does not return it (e.g. the source only has title & abstract, and they do not match). From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version?\n\nIn the case where source A and source B provide the same paper, and major metadata fields (DOIs and other identifiers) match, we cluster them together (preserving both A and B for provenance). The title and abstract are selected from the source that provides full text and the most unrestrictive license. If major metadata fields do not match, which sounds like the case described by the reviewer, we keep these as separate entries in CORD-19. We prefer underclustering to avoid the situation described by the reviewer (that of lost information).\n\n> There are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193), are there others?\n\nWe have added a citation and description of LitCovid. We place additional emphasis on other areas of missing COVID-19 literature in the discussion (such as foreign language literature, works in the social sciences and other fields of study). Though we are not currently aware of any released corpora in these other areas, we would really appreciate if any of these are brought to our attention.\n\n> There are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al. \"PMC text mining subset in BioC: about three million full-text articles and growing.\" Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the \"Call to Action\".\n\nBioC is very useful for text mining, and the full text representation in CORD-19 is very similar to BioC in some ways (character-level span annotations etc). We have added references and descriptions of BioC to both the \"Design decisions\" and the \"Call to Action\" sections of the paper.\n\nWe adopt the JSON schema we use in CORD-19 from S2ORC (https://arxiv.org/abs/1911.02782), which is essentially an extended version of CORD-19 that covers all of the Semantic Scholar literature corpus. Some differences between CORD-19/S2ORC and BioC is the focus on full paper representation, with fields for bibliography entries, figure and table references, and so on included in S2ORC. \n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had. I stand by my favorable recommendation for this paper.",
"role": "assistant"
}
]
|
0gLzHrE_t3z | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```",
"role": "user"
},
{
"content": "Title: Overview of a highly important Covid-19 dataset\nReview Content: This is a paper that describes an important research dataset that has been produced during the Covid-19 epidemic. The CORD-19 collection is used for much research and some challenge evaluations. Even though this paper does not report any research results per se, and the paper is posted on the ArXiv preprint server, this version will give a citable description of the collection that will likely be widely referenced.\n\nThe authors describe well the process of dealing not only with the technical issues of processing heterogeneous scientific papers but also the non-technical issues, such as copyright and licensing.\n\nThe authors do not make any unreasonable claims, although I do question the value of this collection for non-computational researchers and clinicians. As the authors note, the collection is not complete, which is essential for clinical researchers and certainly for clinicians (who do not typically read primary research papers anyways, and tend to focus more on summations). But the dataset is of tremendous value to computational and informatics researchers, and that should be pronounced.\n\nI appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource.\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Thank you and response\nThank you for your review!\n\n> I appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource.\n\nWe have automated the maintenance of the dataset, so will continue to update it for the foreseeable future. However, the answer to this question depends more on the value of the dataset to the community: whether the pandemic continues, whether we derive further value from mining these texts, whether the publishing community continues to support our efforts, and the costs / resources available for maintenance. We will try to balance all of these considerations.",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. The response covers nearly all of the issues I raised. I maintain my positive recommendation for this work.",
"role": "assistant"
}
]
|
0gLzHrE_t3z | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```",
"role": "user"
},
{
"content": "Title: CORD-19 is an excellent resource with an impressive integration work for the research community to fight COVID-19.\nReview Content: The authors present the CORD-19 data set and describe how it has been developed and continues to be developed. The CORD-19 data set is a valuable resource that provides access to the latest literature about COVID-19 and coronaviruses and it is updated daily with over 200k downloads. The generation of the CORD-19 requires a coordinated integration and processing effort that is significant. The contribution of this corpus is of high significance and will have a strong impact on the biomedical domain and support the development, for instance, of COVID-19 vaccines. The manuscript is clearly written and it is easy to understand.\n\nThe effort in providing a version of the latest literature in formats that can be processed by text analytics methods is excellent, using the latest of the available technology to do so. In the paper, it is mentioned in the manuscript that there are some problems in turning tables into structured format and the authors provide examples of issues that they have found. Table processing is done by IBM, who has as well a method for table processing that seems to be resilient to the problems mentioned and would be relevant to consider it for table processing (https://arxiv.org/abs/1911.10683).\n\nThe authors give an example of conflict from which it can be inferred that the same DOI might be linked to two different PubMed identifiers, the reviewer is curious why this might be the case and if an example could be provided.\n\nWhen you mention “Classification of CORD-19 papers to Microsoft Academic Graph”, is this classification done by a method provided by the authors? is this classification provided as meta-data?\n\nDuring my review the only typo I could find is:\n* “other research activity.” —> “other research activities.”?\n* “by not allowing republication of **an** paper”, an —> a\n\nPlease consider the following guideline for NLM trademarks: https://www.nlm.nih.gov/about/trademarks.html\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Thank you and replies to comments\nThank you for your review! We've addressed most comments in a revision. Detailed responses below.\n\n> The authors give an example of conflict from which it can be inferred that the same DOI might be linked to two different PubMed identifiers, the reviewer is curious why this might be the case and if an example could be provided.\n\nThis happens for a couple hundred papers in the CORD-19 corpus. There can be two PubMed entries that correspond to the same DOI, sometimes due to duplicate entries in PubMed, but more often when a paper's abstract is available in two languages. In these cases, PubMed can have two entries for a paper with the same DOI. For example:\n\nhttps://pubmed.ncbi.nlm.nih.gov/32532586/\n\nhttps://pubmed.ncbi.nlm.nih.gov/32426303/\n\nBoth of these papers have DOI: 10.1016/j.remn.2020.04.004; the first has an English abstract, the second Spanish. We keep these as separate entries in CORD-19.\n\n> When you mention “Classification of CORD-19 papers to Microsoft Academic Graph”, is this classification done by a method provided by the authors? is this classification provided as meta-data?\n\nThis classification is performed in collaboration with the Microsoft Academic Graph research group. The methods used are described in Shen et al 2018. We have added the citation to the sentence to clarify. We provide a mapping to Microsoft Academic Graph identifiers as a supplementary file on the CORD-19 dataset landing page: https://www.semanticscholar.org/cord19/download.\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I keep my recommendation for this paper to be accepted.",
"role": "assistant"
}
]
|
EC1vWkJXpjy | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```",
"role": "user"
},
{
"content": "Title: Novel task with under-developed motivation\nReview Content: This paper presents a BERT-based model for solving the novel task of grounding sentences to 3D locations in a representation of a human male body. The authors induce training labels by automatically extracting a list of organs and their synonymous wordforms using the SciSpacy UmlsEntityLinker and extracting sentences containing at least one mention of an organ on their curated list. The ground-truth voxel coordinates are assigned using organ locations from a 3D atlas, and several BERT models are fine-tuned using a distance-based regression loss. The paper presents a novel task, is clearly written, and thoroughly explains their experimental choices and settings.\n\nIssues:\n\n1. One of the key premises of the paper is that physical proximity in the human body reflects functional and semantic relatedness to a significant degree. This statement needs more justification, especially since it is such an important part of the motivation for this work. Looking at a model of a human torso, it seems that the centroid of the left lung may be closer to the centroid of the stomach than to the centroid of the trachea. Similarly, the centroid of the stomach seems closer to that of the spleen and pancreas than of the large intestine.\n\n2. It's not clear what is gained by performing this as a regression task to directly predict 3D coordinates instead of as a classification task that can then be transformed to 3D coordinates using the atlas (the ClassCenter model). The motivation seems to be that in cases when the classification model gets the wrong organ, the predicted 3D location is significantly worse than in the regression model (i.e. NVD-O for ClassCenter is higher). However, the classification model identifies the correct organ with much significantly higher frequency (i.e. IOR is higher). The overall effect is that even in the masked setting, the average per-sentence voxel distance is much lower for ClassCenter (i.e. taking the linear combination of NVD and NVD-O w.r.t. IOR).\n\n3. The use case needs further justification. In what situations is specifying a position in a 3D atlas preferable to submitting a textual (or UMLS-keyword) query? Since the training data only uses the centroids of organs, is there a meaningful difference between querying for the centroid of the stomach vs. somewhere else within the organ? Are the relative positions of sentences embedded within the same organ semantically meaningful, or mostly a product of noise in the regression?\n\n4. Representing abstracts as a point cloud composed of individual sentences seems like it would cause issues for the information retrieval task presented in section 7.2. What happens to the majority class of sentences that don't contain a mention of an organ, either implicitly or explicitly? Does the frequency of sentences containing an organ mention accurately represent the relevance of the document?\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer1\nHello! Thank you for taking the time to read our paper and provide the feedback. Below, we respond to the issues you pointed out.\n\n1) We do not argue that the spatial proximity is always going imply a higher degree of relatedness between terms than the standard semantic similarity inferred from their distribution within the corpus (shared contexts ect.), but we nevertheless believe that it does reflect similarity between concepts to a significant degree. Additionally, in certain contexts, e.g. surgeries, spatial proximity in itself can be more relevant than semantic relatedness. And given the inherent ability to visualize texts within the human body that comes with the method, we believe it to be interesting to the research community.\n\n2) Our motivation for framing the task as regression is that a sentence concerning a certain organ should be mapped preferably to that organ, and if not, at least to the general vicinity of it. We argue that even though the classification model achieves a lower overall distance than the regression based model, the higher and intractable error that it makes when predicting the wrong organ makes it less suitable for general use, and especially for retrieval tasks that rely on representing larger texts as combinations of their sentence mappings.\n\n3) We proposed the use of a 3D point query as an alternative to textual queries for which many works already exist. The training objective takes into the account all voxels of a given organ, and not just organ centroids, and accommodates the functionality where clicking on the upper region of an organ retrieves sentences referring to it in the context of organs above it, while clicking onto its lower region retrieves sentences that also mention the organs below it.\n\n4) We agree that the retrieval task would benefit from a pre-processing step that would discard medically irrelevant sentences (eg. without any named entities from the medical domain) and from a refined distance measure between clusters, but we believe that this is not essential to the general principle of the method.",
"role": "user"
},
{
"content": "I value the authors' response. They have adequately addressed the majority of my concerns, and I am considering revising the rating.",
"role": "assistant"
}
]
|
EC1vWkJXpjy | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```",
"role": "user"
},
{
"content": "Title: interesting exploration with unclear justification\nReview Content: This paper describes methods for learning a mapping between sentence vectors (as produced by the [CLS] token in BERT) and 3D points in the human body. The idea is that sentences that are about some anatomical site can be mapped to that position in the body for visualization. It is never completely clear from the paper why this needs to be learned, since the training pre-processing already is finding anatomical site mentions and essentially normalizing them to a set of body parts with known coordinates. The only thing I can figure is that they are using their pipeline as a way to generate labeled training data, and they intend for this to be applied to sentences without explicit anatomical site mentions. This is maybe supported by the fact that their training does include 50% masking of anatomical sites. In that case, there is a missing element here, which is that the vast majority of sentences in a document are not \"about\" any organ at all, so mapping them into organ space is meaningless. So for this algorithm to be applicable, one would first need to pre-process sentences somehow for whether they should go onto the organ mapping stage at all. And in any case, their example application presumably is just mapping the sentences that did have anatomical site terms. Overall, I think the logic of the approach needs to be spelled out more clearly. As it is, it feels like an interesting trick that one can learn this mapping but I just don't see why it's necessary.\n\nOther issues:\n- This mapping will surely generate \"point clouds\" around organs (i.e. sentences mentioning lung will be spatially distributed near the lungs rather than all mapping directly to the centroid of the lung). This may give an expert in anatomy some sense that these minor differences are anatomically meaningful where I believe it would be largely noise, and if some finer-grained meaning were there it would be impossible to distinguish from noise.\n- Results tables are never referred to in the text\n- There should be some discussion of what you \"want\" if there are multiple organ mentions. I certainly don't want some \"average\" position, but also don't want it to just show up in 1 position, it should probably show up in all 3 positions. But that is not a function anymore so not really something you can ask a neural network to do.\n- When masking anatomical sites in the training data, do you ever use the same sentence in both the masked and unmasked form? I think it would be preferable with this dataset to randomly choose one form before training, rather than randomly choosing to mask each time an instance is loaded (which could result in different masking behavior for the same instance across epochs) because otherwise it could be memorized.\n- I don't know if self supervised is the right term here. It's more like a clever way of generating supervised instances.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer3\nHello! Thank you for taking the time to read our paper and provide the feedback.\n\nThe main reason for learning a contextual mapping between sentences and parts of the human body is to go beyond the look-up table feature. Therefore, stochastic masking is applied during training. By stochastically masking the keyword references to the organ locations, the model is forced to learn something that is more meaningful than a 1-to-1 correspondence between the keyword and the organ location. This is further motivated by the fact that a sentence in a document may make an implicit reference to an organ, albeit never explicitly mentioning the organ by its name.\nWhen it comes to sentences without any medical relevance, it would indeed be possible to improve the tools via a pre-processing step that would ensure that the model operates more closely within its domain, for example by performing named entity recognition on input sentences and discarding the ones that do not contain any entity types from the medical domain.\n\nOther issues:\n- Our method and its use of softmin in the training objective accommodates the learning of meaningful spatial mappings around the organ from co-occurrences of different organ terms (e.g. bladder and kidney), and, by the virtue of the masking, to the co-occurrence of one organ term and contextual information typical for some other organ (e.g. bladder and nephron).\n- For the moment, the discussion regarding the results is done within the caption of the tables, but we can certainly update the text so that the results in the table are discussed in the main text.\n- One of the main issues with having an approach that does not use any annotated data is indeed the ambiguity when there are multiple mentions of an organ. However, our goal was to quickly prototype a solution that will help in the exploration of Covid-19 related documents, and not go through the cumbersome process of annotating medical documents which is extremely time consuming. Therefore we create a setup where we rely on the model to learn the more appropriate target during training.\n- Stochastically masking the keywords is essential to our approach, and defining the masking upfront will make the problem trivial for every sentence left unmasked. Namely, any sentence that does not have a masked keyword has a direct 1-to-1 correspondence between the organ name and the location in the human atlas. On the other hand, by stochastically masking the keywords, the network has to balance between the 1-to-1 correspondence, which is definitely desirable when there is an organ mention, and the contextual correspondence, which is essential in a real-life scenario. \n- We think that self-supervised is an appropriate term in a sense that the annotation comes with the occurrences of organ terms which come with the text itself, similarly to the way language modeling task is often referred to as self supervised as it relies on automatically available information found in the structure of the language itself.\n",
"role": "user"
},
{
"content": "Thanks for the response provided by the authors. Their response has resolved the majority of my concerns, and I am considering revising the rating.",
"role": "assistant"
}
]
|
EC1vWkJXpjy | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```",
"role": "user"
},
{
"content": "Title: Interesting anatomical exploration of COVID articles using BERT-based grounding\nReview Content: The paper proposes a self-supervised approach to ground sentences in COVID-19 articles to a 3D human atlas. This is the first study to use self-supervision to map sentences in medical articles to anatomical locations in the 3D atlas space. Unlike the prior works that utilized the atlas space, they formulated the grounding as a regression problem, thereby allowing a better idea about the semantic closeness of the sentences to the target body locations. The study has also proposed the potential use of this grounding approach for data visualization and document retrieval use cases.\n\nStrengths:\n-- The study employs a fully self-supervised technique for grounding sentences to a 3D human atlas. This is neat and has a lot of potential applications beyond COVID-19.\n-- This work takes into account the prevalence of multiple organs in the same sentence while calculating the loss while training.\n\nMajor issues:\n1. As discussed as one of the limitations that user study needs to be done to validate the query tools, as it is difficult to verify the actual performance of the document retrieval tool from the current version. Moreover, since the distance between the centroids of the point clouds is used to retrieve the articles relevant to a query, it is difficult to interpret whether this approach is useful as this does not take into account which of the sentences in the abstract are more close to the query. Some sentences in the article might just be providing background information and the corresponding organs might not be of interest to the user. To address this, it may be considered to have a way to identify the sentences of interest.\n\n2. A brief discussion of interpreting the results mainly to indicate any potential relation between the impact of using the masked technique and the evaluation metrics used can be included.\n\n3. Although the proposed method definitely obviates the expensive annotation part, it might be good to further compare the evaluations with a supervised model to validate that self-supervision performs better than a fully supervised approach. At the least, it would be good to compare the performance of these models with supervised models used on other datasets in the previous works for grounding medical text to 3D space.\n\nMinor issues:\n1. As already pointed out in the discussion that the models have only been trained on single sentences, it needs to be investigated whether a similar loss contribution strategy would work well when multiple sentences are included in the training process as organs appearing in different sentences might be very different from each other.\n2. An evaluation of the methods to illustrate their performance on sentences having rare or infrequent organs may be reported.\n3. How is the 8000 sentences mark determined for tackling the imbalance of different organs?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer2\nHello! Thank you for taking the time to read our paper and provide the feedback. Below, we respond to each one of the issues you pointed out.\n\nMajor issues:\n1) The most immediate consideration for improving the approach would be using more sophisticated ways to perform retrieval by using entire point clouds as opposed to using just their centroids, starting with the simple extension of incorporating the point cloud spreads in a way where more compact point clouds (therefore representing more specific documents) are given an advantage. \nAdditionally, in order to mitigate the effect of organ-unrelated sentences to retrieval, we are currently working on projecting such sentences (sentences without organ occurrences) to locations outside of the human body. We are expanding the current training set with sentences that do not mention any organ, and train new models that are more robust compared to the ones currently in the paper. For such sentences, the target used in the loss function is a set of points evenly spread across the outside surface of the convex hull of the body, and train the model such that it learns that any sentence that does not contain mentions of an organ should be mapped to a location outside of the human body. The loss function, using these anchor points just outside of the body in the same way as organ points, would encourage the mappings of such sentences to move towards the nearest location outside of the body. During inference, sentences projected to the outside of the human body could then be removed from document point clouds, effectively allowing us to isolate sentences of interest.\n2) The use of the masking during training is essential for achieving high results on the sentences where the organ keywords do not occur i.e. where we rely on the context. During the training process, the use of masking certainly reduces the result on the non-masked evaluations (distances, ior’s etc.). However, without masking, the model would learn a 1-to-1 correspondence between the organ keywords and the locations in the atlas. Consequently, in the masked evaluations, where we check whether the model has learned to associate the context with its corresponding organs, the obtained result would be dramatically lower. In the updated version, we can include an experiment where the model is trained without masking and report the results for each of the metrics. \n3) To the best of our knowledge, there haven’t been works that ground text in 3D space corresponding to the human body. Therefore, we could not obtain any previously annotated data that would be adequate for our setup, and given the current situation, it was not feasible to create any kind of new annotation that would require medical expertise. Considering the global pandemic, our main goal was to find a cheap and scalable technique that could be deployed rapidly, so we did not consider comparisons with supervised methods.\n\nMinor issues:\n1) Using longer sentences/more than one sentence can definitely be beneficial since it includes more context for the model to learn. However, using longer sentences potentially increases the risk of having too many organs occurring in a sentence which is not ideal for our self-supervised setup. Namely, if we expand our work in that direction (which certainly is interesting) we would aim for using some form of label propagation, instead of just inferring the label according to an organ occurrence within the sentence.\n2) Computing a per-organ IOR/distance i.e. an IOR/distance macro-average for each of the organs is certainly feasible and we can include it in the updated version.\n3) The threshold of 8000 was chosen as roughly the number of occurrences of the second, third, fourth and the fifth most occurring organ. \n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It adequately addresses the points I raised. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
HxIZzQZy_0F | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```",
"role": "user"
},
{
"content": "Title: Engineering white-paper, but regrettably lacking in science\nReview Content: Summary:\nthe article describes the process and end result of creating a chatbot aimed at providing vetted COVID-related information to the public.\n\nGeneral impression:\nJennifer clearly represents a considerable effort, undertaken by a talented group under great time pressure, using vast amounts of information that is highly uncertain and in flux. \n\nNow for the submission: to put it perhaps soberingly, it is 60% helicopter-view engineering white paper, 20% sales pitch, 10% incidental observations, 8% future work, and 2% science. It is the virtual absence of scientific substance that is my main concern. \nAs one illustration: the manuscript has 24 citations, 11 of those look peer-reviewed, 5 of these are provided in the future work (‘ongoing challenges’) section, therefore not supporting the past activities but rather motivating future ones. Several of the other ones are referred to outside of their scope (e.g. the Vargas reference). Related research on chatbox design, the spine of this work, is all but absent. (2 citations, from 30 and 15 years ago). \n\nNote that the submission has no ‘Results’ section. No systematically collected observations or other evaluation materials are introduced. At the very least, end-user usage statistics would have been informative to assess the impact of this work.\nThe engineering decisions that are reported make sense, but are largely not substantiated with supporting literature or first-hand evaluation. \n\nWhile it is claimed, it is not clearly demonstrated that this project addresses misinformation (or disinformation). It is an access point (and repeater of) trustworthy information but not one that specifically points out misinformation, or hunts it down. It is not fully clear how accountability is ensured for the automated provision of information to the general public – can users easily vet the sources of such information, and is there a path of recourse when a user spots incorrect answers? Who is responsible for the answer in such a case, especially so if it led to harm? Are there mechanisms to prevent deliberate infiltration of the expert-crowd or other efforts to sabotage Jennifer (think of Tay AI)? \n\nMore detailed points:\nThe Abstract refers to COQB, but the main article never does.\n\nSection 2.4: \"Answers to questions should be verified against reliable sources in the language of question.\"\n--> That is a debatable statement: why would an English language source suddenly be necessarily inadequate when a person asks a question in French!?\n\nDiscussion: \"Jennifer has successfully demonstrated that, with the right combination of technology and human experts, information from reputable sources can be more quickly and effectively organized and shared at scale.\"\n--> This is a statement that is not supported by anything reported in the submission.\n\n\"Given the evolving tasks and the large number of volunteers with diverse background, putting the right process around tasks, workflow, and sequencing (Norheim-Hagtun and Meier, 2010) is key to ensuring efficient use of the volunteers’ time to the advantage of the project.\"\n--> Unfortunately, full documentation of tasks, workflow, and sequencing is not provided in this manuscript, and use of volunteer time is not assessed.\n\nConclusions: \n\"Jennifer leverages cutting-edge chatbot technology, as well as a diverse network of volunteers from around the globe, to combat misinformation related to the pandemic.\" \n--> Only one of these three claims is supported by the presented material.\n\nAt best, this submission could be included as a discussion article, certainly not a research article. Accepting it as a full peer-reviewed research contribution would lend it more credibility than it deserves.\n\nMinor points, stylistic points:\n\"to best prevention and management practices.\" -->?? \"to prevention and management best-practices.\"\n\nSection 2.3: \" following guidelines and recommendations in (National Academies of Sciences, Engineering, and Medicine, 2017)\". The citation appears to point to a research agenda document, rather than a set of guidelines and recommendations.\n\nThe sentence \"Much of the recent research focusing on automating the task of fact checking (e.g., Adair et al. (2017); Pathak and Srihari (2019)).\" is incomplete.\n\nSeveral citations are incomplete re. sources, most notably the Jurkowitz (Mark, not Marl) citation. \n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to the Concerns Raised. \nWe thank the reviewer for the helpful comments. \n\nAt the onset of COVID-19, our main goal was not so much creating novel scientific knowledge but quickly and effectively disseminating validated information about COVID-19 to help people to better protect themselves against this disease. To race against time, we chose to rely on existing technologies that may not be perfect and add scientific evaluation later during the field work. We believe that sharing our experience of building Jennifer help shred lights on research directions to help fight the current and future crisis. We also release a dataset that can be used to advance NLP research (e.g. QA and semantic similarity). \n\n>> Concerns about citations.\n\nReferences are provided to support information and statements incorporated in the article. We have added more citations to recent work on chatbots and converted some to footnotes. \n\n>> the submission has no ‘Results’ section. \n \nThe usage statistics are provided in Sec. 1. We do intend to conduct formal user studies in the future. \n \n>> The engineering decisions that ... largely not substantiated with supporting literature or first-hand evaluation.\n\nJennifer is a proof-of-principle system. This paper describes the first steps in its design, implementation, and initial evaluation. The chat design is informed by existing literature (e.g. Walker and Whittaker, 1990 and Pan, et al. 2005). To the best of our knowledge, the currently available literature does not report examples of other chatbots that were designed with a similar scope to Jennifer (Miner et al. 2020, Larajano et al., 2018). We do intend to conduct formal user studies in the future.\n>> it is not clearly demonstrated that this project addresses misinformation\n \nSocial media platforms and several websites available to the general public are plagued with misinformation. The UN secretary-general has warned we’re living through “a pandemic of misinformation,” and the head of the WHO has said we are in an “infodemic.” By being easily accessible to the general public, this platform combats misinformation by redirecting users to trusted sources, and answering questions in a way that satisfies the criteria described in section 2.3. \n\nIn addition, the WHO has published a COVID-19 myth busters page with questions such as whether the virus would be a bioweapon. Jennifer provides answers for such questions as well. \n\n>> It is not fully clear how accountability is ensured\n\nThe answers are manually curated by volunteers and validated by trusted medicine experts. Answers always include a link to a trusted source, so that end users can \"learn more\". In general, every answer is evaluated by at least three volunteers: one curator, one tester, and one administrator. As information related to COVID-19 is evolving over time (e.g. the usage of masks), some answers were revised, tested, and validated more than once. We have a dedicated team of volunteers who revise QA pairs as information evolves. In addition, when resources are conflicting, we provide an answer based on the most authoritative resource (e.g. CDC) but also include the alternative answer as appropriate. We rely on our trusted volunteer medicine experts to make the judgement call. \n\n>> The Abstract refers to COQB, but the main article never does.\n \nWe've added a reference to the data to the beginning of Sec. 2.3 and more details to Appendix B.\n \n>> why would an English language source suddenly be necessarily inadequate when a person asks a question in French!?\n\nWhile we agree that the main scientific concepts in an answer do not change with language, there are language style and cultural differences that need to be considered when drafting an answer. We also want to make sure that relevant information is available in language-specific websites, and direct end-users to trusted sources. Furthermore, we do not believe that we could assume that our readers understand English. \n\n>> \"Jennifer has successfully demonstrated that, ....\"\n--> This is a statement that is not supported by anything reported in the submission.\n\nAs we discuss in the text, Jennifer facilitates access to information by providing easy to understand, empathetic, and curated answers to end-user questions, while redirecting them to reputable sources. Rather than browsing independently and potentially accessing unreliable information, end users can rely on Jennifer to provide a trustworthy answer, that is backed up by reliable sources. \n\n>> full documentation of tasks, workflow, and sequencing is not provided in this manuscript, and use of volunteer time is not assessed.\n\nSec. 2.1 describes the workflow and sequencing of tasks. Some details are omitted due to space limit. \n\n>> Only one of these three claims (in conclusion) is supported by the presented material.\n\nWe have modified the text to emphasize areas where these claims are supported.\n\nWe have fixed all other minor issues related to citation and writing as pointed by the reviewer. ",
"role": "user"
},
{
"content": "Thank you for the authors' reply. They have addressed most of the issues I raised, and I will likely adjust the score higher.",
"role": "assistant"
}
]
|
HxIZzQZy_0F | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```",
"role": "user"
},
{
"content": "Title: Impressive system, lacking crucial details\nReview Content: The authors describe a chatbot and a dataset of frequently asked questions regarding the SARS-CoV-2 disease. They enumerate the challenges in building an effective health chatbot. They introduce a dataset of SARS-CoV2 and COVID19 questions.\n\nI would like to congratulate the authors on having an actual deployed system. I have many concerns about the paper and the datasets themselves that hopefully the authors can address.\n\nPros:\n1. The authors collect what might be a high quality dataset harvested from many QA sources\n2. They provide compelling examples of good bot behavior.\n3. They provide a reasonable roadmap forward.\n4. They are displaying great care in deployment of their system, particularly beyond their initial input domain.\n5. I am a huge fan of the ongoing curation of the answers in their dataset.\n\nConcerns:\n1. The authors should provide a brief overview of the capabilities from Juji: Does the question/QA grouping come from Juji itself, or do the authors have some other approach? (From a brief skim of the Juji paper it seems that the authors must have taken some alternative approach)\n2. There is no measure of question/QA grouping. Presumably this should be a very high precision grouping since incorrectly including a question could mean it has a wrong answer.\n3. Similarly, there is no measure of inter-rater agreement on the evaluation of the answers in the crowd sourced context. A measure such as Krippendorf's alpha (or many others) would be appropriate here and essential to knowing how well the system performs overall. A measure like this becomes doubly important when the authors are attempting to fight the spread of misinformation.\n4. How were the automatically generated question generated? If you used Juji templates, what are some examples? Given the constraints of available structured data (e.g. CDC data), what types of questions did you choose to answer, and which ones did you not? \n5. For the questions gathered from multiple sources, how are conflicts resolved? Presumably two sources do not always have consistent answers.\n6. How many people evaluate each question/answer pair?\n7. How is the translation performed? I can't tell if it's automatic with manual curation or entirely manual. If there was an automatic component, what system did this?\n8. I am concerned about the use of \"engagement\" as a metric of success. This becomes even more concerning to me when more time spent interacting with a system is seen as a measure of its success - while I agree that when answering questions one might need further clarification or to know about related questions and answers - encouraging people to spend more time interacting with a system could lead to a perverse incentive where it keeps users \"engaged\" without answering their questions. \n\nThe authors have built an impressive system, but I am reluctant to recommend it for acceptance without further details in data generation and evaluation. If these details were added (even for the system/questions/answers at a particular point in time), I would gladly recommend this paper for acceptance.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to the Concerns Raised\nThank you for your detailed comments and feedback. We have seemed to address all concerns in the revised version as detailed below. \n\n>> The authors should provide a brief overview of the capabilities from Juji\nThe main capabilities Juji provides is Dialog Management, including given a user question, matching the question to a known question based on a pre-trained machine learning model and return the corresponding answer for the question group where the question below to. The ML model also gradually fine-tuned to the specific workspace based on user feedback (e.g. selection on ambiguous questions). We have added a brief description about to the beginning of Sec. 2.1 (this is a 4-page short paper submission, hence the brevity). \n\n>> 2. There is no measure of question/QA grouping. Presumably this should be a very high precision grouping since incorrectly including a question could mean it has a wrong answer.\n\nThe grouping is done manually, so it is very precise. We've clarified this at the beginning of Sec. 2.3. \n\n>> 3. Similarly, there is no measure of inter-rater agreement on the evaluation of the answers in the crowd sourced context. \n\nThe measure of inter-annotator agreement doesn't apply, as we only add/update an answer when it passes both validation stages by trusted medical experts. Similarly an alternative question is added only when it passes validation by trusted admins. We've clarified this further towards the end of the last paragraph of Sec. 2.1. \n\n>> 4. How were the automatically generated question generated? If you used Juji templates, what are some examples? Given the constraints of available structured data (e.g. CDC data), what types of questions did you choose to answer, and which ones did you not? \n\nWe have provided more details in Appendix A. To sumamrized, we focus on answer questions related to case numbers and death rates in US states with data from CDC and different countries with data from WHO. The auto-generated questions are also included in the COQB-19 dataset. We also provided more details in Appendix B about the data. \n\n>> 5. For the questions gathered from multiple sources, how are conflicts resolved? Presumably two sources do not always have consistent answers.\n\n\nThe answers are manually curated by volunteers and validated by trusted medicine experts. When two resources are conflicting with each other, we will provide an answer based on the most authoritative resource (e.g. CDC) but also include the alternative answer as appropriate. We rely on our trusted volunteer medicine experts to make the judgement call. \n\n>> 6. How many people evaluate each question/answer pair?\nThe exact number varies. In general, every answer has been evaluated by at least three volunteers: one curator, one tester, and one admin. As information related to COVID-19 is evolving over time (e.g. recommendation on the usage of masks), some of the answers were revised, tested, and validated more than once. \n\n>> 7. How is the translation performed? I can't tell if it's automatic with manual curation or entirely manual. If there was an automatic component, what system did this?\n\nIt is done manually. As questions and answers are added to the Jennifer QA database, our volunteer translators incorporate the respective QA translations and links to trusted sources. We've clarified this point in the last paragraph of Sec. 2.4. \n\n>> 8. I am concerned about the use of \"engagement\" as a metric of success. This becomes even more concerning to me when more time spent interacting with a system is seen as a measure of its success - while I agree that when answering questions one might need further clarification or to know about related questions and answers - encouraging people to spend more time interacting with a system could lead to a perverse incentive where it keeps users \"engaged\" without answering their questions. \nWe agree that that \"engagement\" should not be the ultimate measure of success of our system. We do plan to conduct formal user studies in the future to evaluate Jennifer with different metrics such as NPS and user sentiment. \n",
"role": "user"
},
{
"content": "Title: Going the right direction!\nOverall, I'm pretty pleased with the updates and will bump my score a little bit; remaining points below. Basically I want a plan for evaluation before I become enthusiastic about accepting this.\n\n> The measure of inter-annotator agreement doesn't apply, as we only add/update an answer when it passes both validation stages by trusted medical experts. Similarly an alternative question is added only when it passes validation by trusted admins. We've clarified this further towards the end of the last paragraph of Sec. 2.1\n\na) each of these stages is still probabilistic, you already understand this as you have multiple stages of QA\nb) if something can be rejected at any stage, you do in fact have something to measure agreement against (it's not a perfect measure, but it would be better than nothing)\nc) if you don't have some measure of agreement, you should at least have a plan for collecting one\n\nHaving something like this, in mind, becomes effectively non-negotiable for full conference publications, but as a work-in-progress I care more about a long-term plan.\n\n> How many people evaluate each question/answer pair? The exact number varies. In general, every answer has been evaluated by at least three volunteers: one curator, one tester, and one admin. As information related to COVID-19 is evolving over time (e.g. recommendation on the usage of masks), some of the answers were revised, tested, and validated more than once.\n\nA less important note - if you track sources and different versions of questions/answers, this could be a valuable resource for investigating the flow/change of information in a relatively fast moving world.\n\n\n> We agree that that \"engagement\" should not be the ultimate measure of success of our system. We do plan to conduct formal user studies in the future to evaluate Jennifer with different metrics such as NPS and user sentiment. \n\nThis should probably be detailed, at least in an appendix\n",
"role": "assistant"
},
{
"content": "Title: Preliminary Plan for Evaluation Added\nThank you for your positive feedback and insightful comments. \n\nWe have added Appendix C. Preliminary Plan for Evaluation with initial thoughts on how we plan to evaluate Jennifer from both users' as well as volunteers' perspective and both quantitively as well as qualitatively. Please do let us know if you have further comments or suggestions. \n\nWe also plan to seek collaborations with experts in human-computer interaction to carry out some of the studies. We welcome any researcher interested in such opportunities to contact us directly. \n\n",
"role": "user"
},
{
"content": "I am very happy to have the above discussion with the author, which has positively impacted the strength of this work.",
"role": "assistant"
}
]
|
HxIZzQZy_0F | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```",
"role": "user"
},
{
"content": "Title: N/A\nReview Content: The paper provides a good description of the COVID-19 Q&A chatbot and the process of creating the crowd-sourced Q/A pairs that the bot uses for question answering. The authors also outline the opportunities and challenges of implementing COVID-19 chatbots in the real world.\nWhile the study authors list examples of lessons learned as well as open challenges of using chatbots in a pandemic setting in which information changes quickly, it would be very helpful to better understand the use of a Jennifer-like chatbot in a situation in which the evidence-base is more stable. It would also be very helpful to understand the authors' perspectives on how chatbot technologies and their use could be effectively evaluated, and how such an evaluation could be better integrated into the chatbot development and validation of the provided responses. In addition, it might be helpful to provide examples of other chatbot technologies that have been used successfully during and prior to the pandemic, as this could put better into perspective the issues and imitations that are being raised in the manuscript. \nAnother important aspect regarding the use of chatbot technologies is their acceptance both by public health authorities as well as the intended users. In using novel technologies, governments and public health organizations typically tend to take a more conservative approach, and accept them only following rigorous testing and evaluation. Equally, there may be also hesitancy from certain population groups with formed/strong opinions to engage with chatbots, particularly when provided answers originate from just one or several sources (i.e. when the whole body of evidence is not provided). It remains unclear who is (are) the intended target audience(s) of the chatbot and, ultimately, its objective. It would be very helpful to have that information stated and discussed up front. Lastly, it is important to recognize the significant limitations of the use of chatbot technologies for \"fact-checking\" in the absence of a robust knowledge-base. What has been presented in the paper may be more applicable to situation in which there is a need to help users identify and navigate through the relevant content, particularly when information is not well structured or ambiguous. It would also be great if the authors could reflect in the sustainability of initiatives that have a very high reliance on volunteers \nI would recommend that the manuscript be accepted, as it provides a good example on the potential usefulness of chatbot technologies in supporting the general public to efficiently find credible information on the web. \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank You and Some Clarifications\nThank you very much for your insightful comments. We completely agree that Jennifer is a great example to demonstrate the potential usefulness of chatbot technologies in supporting the general public to efficiently find credible information on the web. \n\n>> Better understand the use of a Jennifer-like chatbot in a situation in which the evidence-base is more stable. \n>> Examples of other chatbot technologies that have been used successfully during and prior to the pandemic\n\nTo the best of our knowledge, , the currently available literature does not report examples of other chatbots that were designed with a similar scope to Jennifer (Miner et al. 2020, Larajano et al., 2018). We do agree that there is a need to better understand the use of a Jennifer-like chatbot in a situation with more stable evidence-base. There are several studies conducted before that compare Juji against other chatbot technologies. (https://juji.io/publications; we cited the most recent one Xiao etc. 2020). We intend to conduct further study when information for COVID-19 is more stable and comapare different chatbots surfaced to help fight against COVID-19. We welcome researchers interesting in the problem reaching out to us for potential collaboration,. \n\nLaranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25(9):1248-1258.\nMiner, A.S., Laranjo, L. & Kocaballi, A.B. Chatbots in the fight against the COVID-19 pandemic. npj Digit. Med. 3, 65 (2020). \n\n>> How chatbot technologies and their use could be effectively evaluated, and how such an evaluation could be better integrated into the chatbot development and validation of the provided responses.\n\nEvaluation of chatbots technologies is an important topic. \n\nThe design of Jennifer includes three different perspectives, which can be evaluated separately:\n(1) the technical design of Jennifer\n(2) the interaction of human users with Jennifer\n(3) the COVID-19 related information provided by Jennifer\n\nOur core techical design considerations are 1) Rapid development; 2) Ease of access; 3) Ease of maintenance; 4) Quality assurance. Because of (1) and (3), we selected Juji as our platform and was able to build the first version of Jennifer in less than a day. Several studies were conducted before that compare Juji against other chatbot technologies. (https://juji.io/publications; we cited the most recent one Xiao etc. 2020). It would be possible(but take some time) to compare the design of the chatbots reviewed by Larajano et al. 2018 with that chosen for Jennifer. The scope of these chatbots was different, however, as they were designed as assistants for self-diagnosis. \n\nIt would also be of interest to evaluation of the acceptance of Jennifer by human users in real life situations. We hope to conduct formal user studies in the future. \n\nCurrently, our evaluation and updates to the answers provided by Jennifer are mostly manual with the help from our volunteers. It would be interesting to create more automated evaluation methods and allow users of the chatbot to provide feedback more directly and incorporate them into the chatbot. We plan to explore this further in the future.\n>> Acceptance of chatbot technologies.\n\nChatbot technologies have been adapted by such organizations to help fight COVID-19. For instance, both WHO and CDC have used chatbot technology for assisting people in self-diagnosis (Miner et al., 2020.) We agree that it is important to provide evidence when giving answers. This is precisely when when Jennifer provides an answer to each question, references/links to the corresponding reliable resources are provided to help foster trust. \n\n>> Intended target audience(s) of the chatbot and, ultimately, its objective\n\nThe intended audience is general public who has difficulty navigating information about COVID-19 distributed across different resources. \n\n>> it is important to recognize the significant limitations of the use of chatbot technologies for \"fact-checking\" in the absence of a robust knowledge-base. \n\nAs with other scientific disciplines, knowledge on COVID-19 is evolving and,hence,the answers given by Jennifer have to be regularly reviewed and updated. A chatbot like Jennifer would likely be no good solution to navigate through information that is not well structured or ambiguous because it relies on a human team that performs a careful review ofthe questions that it can understand and the answers that it can give.\n\n>> Sustainability of initiatives that have a very high reliance on volunteers }}\n\nAs shared in Sec. 3, we found that we need to support the initiative with more dedicated resources and better infrastructure to manage the large number of volunteers. In general, people are eager to help, but we need to have better support to ensure that their time and efforts are used in the most effective way. ",
"role": "user"
},
{
"content": "I appreciate the authors' reply. The authors have answered almost all of my concerns. I will continue to recommend accepting this work.",
"role": "assistant"
}
]
|
JENSKEEzsoU | ACL 2020 Workshop NLP-COVID | AnonReviewer4 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer4, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```",
"role": "user"
},
{
"content": "Title: Potentially a good resource\nReview Content: This work presents a question-answering dataset created on a subset of CORD-19. Obviously, having a large QA dataset is valuable and if done well, can be a useful research both in the bioNLP field. The dataset is created by finding \"answers\" to potential questions that the text under examination would answer, surely, this effort has taken many hours to create. For what has been done, much less time and effort is put into writing about it. The manuscript is vert fragmented and hard to follow:\n- Abstract: Not clear what exactly is done and what contributions are\n- Introduction: You could reduce explanation on CORD-19 (removing the name of organisations for example) and refer to it in a citation. Instead it was expected to see what the NLP problem here was and what are the contributions of this work. None of that is there.\n- Related work is hardly a literature review. What is the common practice in the NLP community in creating a QA dataset. This is an old area. Surely there would be reputable (not arxiv) papers there to check and compare? Why tools are there and why are they relevant given they are two IR tools mentioned? \n- Section 3 needs a full rewrite. All the important details are missing. What were the annotation instructions and why? Ho w many annotators did you have? Was there any quality control? Any inter-annotator agreements for example?\n- Not sure if the paragraph before 3.1 makes any sense. BioBERT and SciBERT and so on are more representations. What is the relevance here?\n- In Section 3, it says annotators are told they are developing a \"method\". How is this a method? \n- Method section is quite weak but maybe enough for a dataset work. \n- Please check your references, even old ones are arxiv. Maybe they are published somewhere? or maybe you can find a peer-reviewed citation for some of those.\n\nOverall, even with the urgency of publishing COVID-19 related work, the paper should have acceptable quality which I believe this one need revision to get there. \nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Mixed review giving some good suggestions for improvement but lacks foundation in other parts\n> Reviewers Intro\n\nThe reviewer might be lacking important knowledge of the domain because the dataset is not created by “finding answers to potential questions” as stated by the reviewer but instead “The researcher uses the mouse to highlight the answer (pink) and creates a question based on that answer (box on left).” (caption of Figure 1 in paper). This is also the annotation procedure of the prominent SQuAD dataset that the reviewer should be aware of.\n\n> Abstract: Not clear what exactly is done and what contributions are\n\nPlease be more specific, because we believe the abstract is rather clear (“Our dataset consists of 2,019 question/answer pairs annotated by volunteer biomedical experts”) as well as states our contributions (dataset creation + “We found that the additional training on this domain-specific data leads to significant gains in performance”).\n\n> Introduction\n\nAgreed. The first part could be a bit more crisp while the latter part could be more elaborate. We left the first part untouched but added more details to the contribution as well as motivation.\n\n> Related work [usage of arxiv references and why are two IR tools mentioned?]\n\n- Thanks for the hint. We changed references for SQuAD and MRQA to their respective non-arxiv versions. Other references in the related work are, given the recent nature of the crisis, preprints. \n- The last sentence motivates the mention of IR tools: “We view our work as a complementary effort to aid in the evaluation and/or improvement to these existing tools”. Maybe you are aware of the discussion around the submission “Rapidly Deploying a Neural Search Engine for the COVID-19 Open Research Dataset” where a lot of criticism revolves around the lack of proper evaluation. With our QA dataset these existing tools could be potentially evaluated.\n\n> Section 3 needs a full rewrite. All the important details are missing. What were the annotation instructions and why? How many annotators did you have? Was there any quality control? Any inter-annotator agreements for example?\n\n- Possibly the reviewer failed to notice the hyperlinks to detailed annotation guidelines as well as the videos explaining the annotation procedure. Possibly the reviewer printed out the paper? We had an internal discussion about footnotes vs hyperlinks and thought hyperlinks would be more useful in a paper linking many external resources + people cannot follow or click on footnotes printed on paper anyways\n- General remark. During the review process we added a lot of additional material especially to section 3. Looking now at the section we realized we should divide it into smaller and more digestible parts, which we did by introducing subsections “3.1 Article and Annotator Selection”, “3.2 QA Annotation Process” and “3.3 Dataset Analysis”\n- Number of annotators: Very good point. We added it to section \"3.1 Article and Annotator Selection\"\n- Was there quality control: Quality control is stated in the paper that each annotation was verified by the annotation team lead. We made this more explicit in the newest revision in section \"3.2 QA Annotation Process\", first passage.\n- Inter annotator agreement (also criticized by other reviewers): as stated in the discussions part the dataset does not contain multiple annotations per question. So inter annotator agreement is not possible to compute. We state the reasons for this in the discussion section as well.\n\n> Section 3. What is the relevance of BioBERT and SciBERT\n\nTrue, Bio- and SciBert are Language Models for creating representations (of domain text), these domain representations can be used for doing domain specific QA. All these domain efforts have in common to improve the performance in that domain. This was our hope when creating COVID-QA, improving performance of search systems in this domain.\n\n> Section 3, it says annotators are told they are developing a \"method\". How is this a method? \n\nWe agree, this is a confusing choice of words and change “method” to “datapoints”.\n\n> Method section is quite weak but maybe enough for a dataset work. \n\nThanks, we take this as a compliment.\n\n> Please check your references\n\nAgreed, referencing preprints instead of peer reviewed publications is not good practice. We went through the references and changed them where applicable.\n\n\n\n\n\n\n\n",
"role": "user"
},
{
"content": "Title: Answer to the authors\nThanks for the clarifications.\n\nI just want to add that my concerns were to help improve the writing of the paper. When you have a dataset paper, referring your reader to hyperlinks is a bit of a stretch and some of the annotation guidelines and important annotation related information must stay in the paper. I had looked in those links but I'm not reviewing those but what you have written here. \n\nFor the current draft:\n- The abstract still needs improvement. Here's what the flow breaks: \"...This presents, to\ndate, the largest manually created Question Answering dataset on Covid-19 related material. We began with a RoBERTa base model that was initially\" Abruptly RoBERTa is mentioned. Again this is not a method paper so you could simply say we used benchmarked our dataset using X methods. \n\n\n- Tools: Again, I'd think there are tools that could be more QA and less IR? Or are there none? No objection to keep IR tools but there should be something on QA as it's most relevant to you. ",
"role": "assistant"
},
{
"content": "Title: Response\nThanks for the prompt answer. We really appreciate these more specific suggestions because they outline potential for improvement.\n\n\n> annotation guidelines and important annotation related information must stay in the paper\n\nWe agree that some of the annotation guidelines need to be in the paper. Though we largely followed \"the SQuAD style label process\" which is well known and we reference in the paper. We included where we diverged from the SQuAD dataset creation process: annotating on longer documents, missing n-way annotations and possibly less controlled quality checks. We believe omitting details that referenced papers explain is a reasonable thing to do but are open for discussion.\n\n> The abstract still needs improvement. Here's what the flow breaks [...]\n\nThanks for specification. We acknowledge that it is a break of flow, but would argue that this break is common practice: creating a dataset, then evaluating this dataset. For more clarity we could add in another revision \"To evaluate the dataset we began with ...\"\nConcerning the use of RoBERTa: We feel that BERT or RoBERTa are nowadays well known for people to understand the relation between a QA dataset and a RoBERTa model. Do you have a proposal to state the evaluation more clearly while being concise and to the point?\n\n\n> Tools: Again, I'd think there are tools that could be more QA and less IR? \n\nIt is difficult to apply pure BERT QA models (joint computation of question and document in one model) in real time to incoming user questions on a whole database of documents. So realistic user facing tools should use IR related components. Nevertheless the tools stated do not just retrieve documents but highlight potential answers, which we believe is more QA than IR. ",
"role": "user"
},
{
"content": "I am grateful for the opportunity to discuss these points with the author, which undoubtedly enhances the strength of this paper.",
"role": "assistant"
}
]
|
JENSKEEzsoU | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```",
"role": "user"
},
{
"content": "Title: QA dataset on CORD-19\nReview Content: Summary of Work: This work provides a new resource, a QA dataset built on top of CORD-19 annotated for span prediction task in form of SQuaD dataset. The dataset contains 2K question from 147 articles (I am not clear on this point, the authors say they extracted 445 documents). They also built a model on top of this dataset by finetuning roberta base and find improvement over original roberta base.\n\nWhile the dataset may be useful, we don't really know what criteria the annotators used to annotate the questions (How do they select what to annotate as answer? , what is considered important? - Do note that the authors also provide online videos of instructions that I have note reviewed which may contain the answer to this.)\n\nAlso the authors mention the questions can be reviewed/corrected by other annotators but again I don't know if it was done for every question and by how many other annotators. The annotators are said to be volunteers (I assume that to mean they were not paid) but I am not sure how they were recruited and how their qualification were evaluated (the paper says the authors require them to have a masters in biomedical sciences, although not sure how that is checked).\n\nIn general, the paper provides us some info about the new dataset, but the details are left out. As such, I am not confident in its utility for useful (and trustable) research .\n\nThe modelling framework is straightforward although their may be obvious improvements that can be made by training on biomed corpora. The top-n-accuracy metric needs more explanation (does the overlap mean overlap in text or position -- the first one is useless since we overlap on stopword easily, the second one might be more useful, but more analysis is needed).\n\nThis work needs a spell check (caronavirus -> coronavirus, fee -> see, and more).\n\nI would note that a lot of my criticism maybe resolved by providing more details. But without them, I cannot truly provide a good review of the dataset itself. Therefore, currently I am giving a marginally below threshold score (and not a clear rejection since the dataset *may* be useful)\n\nUPDATE: The authors answered questions presented above. The concern still remains about the quality of annotations, as well as the utility of model released. On the basis of this, I have increased my score.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Responses to all points in order of occurrence\nThank you for this timely and detailed analysis of our paper. We are keen on adding more details to the paper. We left out some information because we did not want to overburden the reader in the first place but are happy to elaborate more.\nFollowing is a list of responses - we also mention where we applied the relevant changes to the paper.\n--------------\n\n> Clarification on number of articles annotated\n\nGood point, it is a bit unclear. We uploaded 450 articles to the annotation tool, 147 of those contain actual labels. We clarified this in the newer revision by stating: \"Of these 450 scientific articles, 147 were annotated over the 8 week period [...]\" under \"3.1 The Annotation Process\".\n\n> Annotation criteria\n\nYou are right, the guidelines can be found inside the videos mentioned in the paper. For convenience (not everyone likes videos) we also uploaded the annotation guidelines as pdf and will linked them in \"3.1 The Annotation Process\". You can also find them here: https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/Handbook%20-%20Labelling%20Tool.pdf\n\n> Correction of Annotations\n\nThis is an important point, also made by the reviewer no. 1. Unfortunately we cannot give a fully satisfying answer. We manually checked some of the labels and found them to be of good quality as well as created a slack workspace where people could discuss specific annotations. We added information about manual checks under “3.1 The Annotation Process”.\nWe hope to create a more thorough and especially documented review of quality checks in a future version of this dataset.\n\n> Recruiting of Annotators\n\nWe have added how volunteering annotators were recruited to the paper under “3.1 The Annotation Process”.\n\n> Modelling Framework\n\nAgreed, the modelling is not the best you can get with current methods and we talk about improvements to the modelling part in \"5.2 Modelling improvements\" where we also state your suggestion of training on other/biomed data.\n\n> More Explanations for top-n-accuracy metric\n\nThe overlap is on position as stated in the paper under \"4.2 Metric\": \"[...] we choose to take multiple\nmodel predictions and any positional overlap between prediction and label as metric and refer to it\nas top-n-accuracy. Top-n-accuracy compares the\ngold label against n model predictions and looks for\nany overlap between prediction and answer positions. [...]\"\nWe made this explanation more prominent and more explicit in the new revision.\n\n> Spell Check\n\nFixed, thanks for pointing out!",
"role": "user"
},
{
"content": "Thanks to the authors for their clarification. It covers most of my concerns.",
"role": "assistant"
}
]
|
JENSKEEzsoU | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```",
"role": "user"
},
{
"content": "Title: Development of a QA dataset for COVID-19; a work in progress that requires revision\nReview Content: Authors have developed a QA dataset based on scientific articles about coronaviruses released as part of the COVID-19 Open Research Dataset (CORD-19). A small set of 450 articles were selected based on keyword matching, random sampling, and expert search, and around 2K Q-A pairs were developed against these articles, where question were generated by annotators and answers were sentences in the articles. Annotators were required to have a degree in the biomedical sciences. Also, the annotated dataset and a fine-tuned BERT-based model have been released. \n\nAuthors investigate an important problem, and the paper is well organized and well written. In addition, such a dataset could be a valuable resource as it enables computer scientists and clinicians to address their information need about COVID-19 based on reliable sources (scientific literature).\n\nHowever, there are few shortcomings that prevents me from recommending the paper for acceptance at this time: \n\n1. Data Quality: Each QA pair has been annotated by only one annotator. I suggest having the data annotated by at least two annotators followed by a third one (or a discussion between the first two annotators) to adjudicate the disagreements. It's important to consider that a good model should learn the actual task as apposed to optimally mimicking the behavior of a single annotator. In addition, I wonder if annotators labeled \"all\" sentences of articles that answer the questions. This is important for correct evaluation (models should not be penalized for correct predictions that are not labeled). In addition, I found many questions in the dataset that:\n\na): are not necessarily relevant COVID-19:\n- \"What is the main cause of HIV-1 infection in children?\"\n- \"What plays the crucial role in the Mother to Child Transmission of HIV-1 and what increases the risk\"\n- \"What is the molecular structure of bovine coronavirus?\"\n- \"What causes the outbreak of SARS and MERS?\"\n\n\nb) are general with potentially multiple correct answers:\n- \"Who plays a role in regulating the immune response to a vital infection?\"\n- \"Which species are more prevalent but less severe?\"\n- \"What is a significant cause of Influenze like illness among healthy adolescents and adults presenting for medical evaluation?\"\n\n\nc) combined several answers together:\n- Q: \"Which species are more prevalent but less severe?\"\n A: \"HCoV-HKU1, HCoV-OC43, HCoV-NL63, and HCoV-229E\"\n\n\nIn addition, \"no answer\" questions are probably the most interesting and prevalent questions about COVID-19. Such questions have not been developed in the context of this work.\n\n\n2. Modeling: Articles were splitted into multiple chunks of text and only two chucks (a positive chunk containing the answer of a given question and a randomly sampled chunk as negative example) were used to test the model. This design choice can oversimplify the task. In addition, given that the \"no answer\" output of the model was disabled, I'm not sure how the model works on the negative chunks. What should the model predict in cases of negative chunks?\n\n\n3. Evaluation: Top-n-accuracy measure is defined as follows: \"it compares the gold label against n model predictions and looks for any overlap between prediction and answer positions.\" It is not clear how accuracy is derived from overlaps. But, if \"any\" overlap between prediction and answer positions is treated as a hit, then Top-n-accuracy is not a good/reliable metric because a small overlap (e.g. a stopword) is just enough to treat a wrong prediction as a correct answer, and an untrustworthy algorithm as a trustworthy one. This is specially problematic for questions with long answers.\n\nI recommend authors to consider and address the above comments and re-submit their papers.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Responses to all points mentioned - an ordered list based on our perceived importance\nWe highly appreciate the timely as well as accurate review of our efforts and are motivated to improve upon our work with the help of this review. The mentioned points are all relevant and show a high degree of insights into the domain and a thorough analysis of the paper and especially the accompanying annotated dataset. We hope to address each issue in a satisfactory manner. We will especially address changes to the paper - significant changes to the dataset are, by nature of a volunteer led expert annotation project, out of scope before submission deadline. Thanks also for acknowledging strong points of our paper, that an important problem is investigated and the paper is well organized as well as well written.\n\nWe reordered your points based on our perception of how important they are to address.\n\n>1. Data Quality: Each QA pair has been annotated by only one annotator.\n\nAgreed. Multiple annotations per QA pair are very important. The current version of the dataset misses these n-way annotations, a limitation which we state in \"5.3 Dataset Improvements\". \nWe manually checked QA pairs by experts that were familiar with both the label tool and the domain. These manual checks are unfortunately left undocumented. \nWe also set up a slack workspace for volunteers to discuss certain annotations. We might be able to address this in a future version of the dataset.\nWe added details of the manual checking under “3.1 The Annotation Process” \n\n> 2. Modeling: unfair test scenario\n\nThere might be a slight misunderstanding. Only training data is downsampled, tests are done on the whole article.\nThis downsampling made training much faster since there are on average ~36 negative passages per each positive passage.\nTesting is done on the whole article, so we believe the test scenario is not unfair.\nWe made the downsampling procedure more explicit under “4.3 Evaluation” and also emphasize the size of the documents (6118.5 mean tokens per doc COVID-QA vs 153.2 tokens for SQuAD) as being a challenging test scenario under “5.1 Interpretation of Results”.\n\n> 3. Evaluation: Top-n-accuracy\n\nYou are correct, \"any\" overlap creates a binary hit or miss scenario so accuracy can be computed. It is important to note that overlap in position is used, not overlap of strings.\nWe hope the positional overlap on long documents reduces the reliability concern. To be more certain we analyzed a subset of predictions, available at https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/predictions.tsv, and cannot find meaningless predictions (e.g. stopwords).\nWe made the paper more explicit about how accuracy is calculated under “4.2 Metric”.\n\n> 1. Data Quality: a): are not necessarily relevant COVID-19\n\nThe articles were from the CORD-19 dataset which originally contained a fairly generic set of scientific articles. The number of COVID-19-specific ones is understandably low given the newness of the outbreak. We attempted to narrow the CORD-19 articles even further by extracting more relevant subsets. Nevertheless, SARS and MERS have similar domain-specific relevance to COVID-19 since they define similar viral pandemics. HIV-1 infection mechanisms (and indeed all viral infection mechanisms) are highlight relevant to COVID-19 researchers as well. In order to keep the COVID-QA model generalizable, we felt that including these non-COVID (but viral outbreak related) articles was highly relevant. We added a more detailed explanation of chosen articles under \"3.2 Annotation Process\" (first paragraph)\n\n> 1. Data Quality: \"no answer\" questions are probably the most interesting\n\nAgreed, \"no answer\" annotations would be very helpful in creating good QA systems and have not been included in this dataset as mentioned in the paper under \"5.3 Dataset Improvements\".\nUnfortunately we see true \"no answer\" annotations as out of scope for the current version for the following reasons: \n1. True \"no answers\" as present in Natural Questions are hard to get, because we would first need \nnaturally occuring questions - not questions constructed by looking at some text but questions by actual users using a search engine.\nSince we do not operate such a search engine we only might be able to address \"no answers\" in another version of the dataset, if we get search queries by one of the CORD-19 search engines. \n2. One could artificially construct \"no answers\" as in SQuAD v2.0. The usefulness of those constructed \"no answers\" is debatable.\n\n> 1. Data Quality: b) are general with potentially multiple correct answers\n\nWe added the internal discussions about multiple annotations per question to the paper under “3.1 The Annotation Process”.\n\n> 1. Data Quality: c) combined several answers together\n\nThe combination of several answers only happens when the answer is consecutive text. If it were not consecutive text, this problem boils down to 1b above. Variance in answer length is a reason we stated for choosing top-n-accuracy in \"4.2 Metric\".\n\n",
"role": "user"
},
{
"content": "I value the authors' response. It covers most of my concerns, and I'm open to increasing the score.",
"role": "assistant"
}
]
|
JQCYcdHfXyJ | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```",
"role": "user"
},
{
"content": "Title: Valuable resource\nReview Content: The paper describes a novel dataset containing answers to a survey that asked participants to describe in short and long text their emotional states and rate their levels of anger, anxiety, desire, disgust, fear, happiness, relaxation, and sadness. The dataset includes responses from 2500 UK residents collected during a critical period of the cover-19 pandemic.\n\nThe paper also investigated the predictive performance of lexicon-based and supervised classifiers. Preliminary analyses of the data showed significant correlations between the respondents emotional ratings and the scores inferred by LIWC for the same emotions. However, these correlations are much higher for longer texts, which is not surprising since LIWC was not created for microblog style content and longer text has more words that can match with the lexicon. Similar trends were observed with a supervised model trained to predict the emotional ratings given the text answers. Finally, topic analysis of the data showed that the main topics expressed in shorter text are different than in longer text. \n\nThe paper is well written, well scoped and well executed. The dataset is very interesting and it will be useful for the NLP community not only from the standpoint of understanding how people respond to global crises but also to better understand how the characteristics of social media might influence what kinds of information people decide to share. The cautionary tale about using Twitter to study the public's reactions to these kinds of events is worth a closer look. I am not sure how much we can extrapolate from this, given that in real life people can post more than one tweet about a subject, and they do so spontaneously which is different from being asked to post about something specific on a survey. Some \"experimenter effect” might be at play here. \nRating: 8 accept, good paper\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to reviewer\nThanks for the positive and helpful review. \n\nWe appreciate your comments and will incorporate the points raised. We fully agree that the Tweet-style texts elicited in the study do not fully mimic a ‘natural’ setting in which Twitter users may write about their emotions and COVID-19. Indeed, on Twitter people may write multiple Tweets about a subject or reply to other users. Therefore, we have noted this as a limitation in our revised paper.\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I stand by my favorable recommendation for this paper.",
"role": "assistant"
}
]
|
JQCYcdHfXyJ | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```",
"role": "user"
},
{
"content": "Title: Good first steps toward understanding the UK public's emotional response toward covid19\nReview Content: The focus of this manuscript is to 1) describe a corpus of long and short lengthened emotional responses to the covid19 pandemic by the public, 2) identify meaningful associations between linguistic measures and emotional responses, and 3) develop prediction models for automatically classifying unseen texts according to their emotional response categories.\n\nQuality: The usage of LIWC lexicon to identify topical/linguistic information is a good start. I'm interested in how differently tools like empath (https://github.com/Ejhfast/empath-client) would perform in identifying pre-configured and new topics on this corpus and what additional insights could be drawn from it. \n\nThe correlation values appear low, but I'm wondering if that's due to out of vocabulary terms. Perhaps, a quick manual review of LIWC term coverage and a lexicon enrichment might help identify a stronger signal. \n\nSome emotions seem a little hard to tease out or have close relationships e.g., worry, anxiety, fear, etc. It would be interesting to understand commonalities and distinguishing characteristics in terms of linguistic measures of related/close categories. \n\nClarity: A few points should be clarified or explained: 1) did you collect any additional meta-data about the participants e.g., sex, gender, age, race, professions (essential vs. non-essential workers), etc. that could be useful to contextualize or identify particular worries among groups? 2) did these \"texts\" also include emoticons that could be used to convey emotional response and topical information? \n\nIt would also be interesting to sample more than 2 days. I wonder how the topics will shift as the pandemic unfolds. \n\nAlso, I would recommend revisiting the corpus title as its very UK-centric and covers a broad range of emotions. What about \"UK COVID19 Public Emotional Response dataset\"?\n\nOriginality: The coupling of the open survey with traditional linguistic and topic modeling approaches for examining the global threat has some originality. The predictive model serves as an initial baseline. It would be interesting to evaluate other traditional machine learning classifiers to establish reasonable baseline performance. \n\nSignificance: The topic is certainly significant and a nice first attempt at obtaining self-reported emotional concerns of the public. This is also a great tool that if deployed more broadly could capture insights across regions, countries, and continents.\n\nThank you for making this invaluable resource publicly available to researchers!\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to reviewer\n\nThanks for the review. The points raised will help us improve the paper for the revision and are much appreciated.\n\nPlease find our response re. your original points below:\n\nRe. Empath: great suggestions. We will look into this and add findings to the revision. Thanks a lot for pointing us to this fantastic tool.\nRe. LIWC dictionary coverage: we had that idea too initially but the coverage was relatively high (92.36% and 90.11% for short and long) which we saw as sufficient evidence that OOV terms are not the issue. The additional Empath analysis we will conduct can hopefully assist us in further understanding this.\nRe. emotion overlap: agreed. While these were taken from validated emotion questionnaires (with our addition of “worry”), we think correlational analyses will shed light on potential overlap. We add these to the revision.\nRe. meta data: yes, we have these data and will add them to the demographic statistics under section 2.3 Descriptive statistics\nRe. emoticons: good point - we will check how many there were (if any) and look at what they can tell us. We add information on this to the revision.\nRe. multiple days of sampling: agreed. We were debating this and think it’s something for future work. A complication is that so many events might affect the public’s response that ideally, we’d need high-frequency sampling. That is currently out of the scope of our funding unfortunately.\nRe. dataset name: agreed - we will amend the name to focus more on emotions. The revision will clarify this.\nRe. prediction models: we will explore more models and will add these to the revision (see also Reviewer 1’s point #8).\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The response addresses most of my concerns. I keep my recommendation for this paper to be accepted.",
"role": "assistant"
}
]
|
JQCYcdHfXyJ | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```",
"role": "user"
},
{
"content": "Title: Useful dataset\nReview Content: Summary: The paper introduces a self-reported dataset of 2500 short and long (each) texts about emotions of people during the COVID-19 pandemic. The paper then analyses the dataset in terms of LIWC properties, output of a topic model (most probable topics) and a linear regression model to predict emotion based on TFIDF and POS-based features. The dataset will certainly be useful. The fact that it is self-reported and has both short and long texts are noteworthy.\n\nSuggestions:\n1) Short and long texts by the same participant do not necessarily have to be 'parallel'/'analogous', it seems. If this is indeed the case, I would suggest mentioning so.\n\n2) It would be good to know the reason behind picking worry as the key emotion. (In other words, the choice of calling this a 'worry dataset' and not an 'emotion dataset' is not clear). The question asked to the participants (when they draft their text) does not mention 'worry' explicitly. They are asked to rate how worried they feel. However, in addition, the participants are also asked to record other emotions.\n\n3a) Section 2.2: The description accompanying worry clubs it with 'anxiety'. However, table 1 shows that only 55% participants reported anxiety.\n\n3b) Please elaborate \"The participants’ self-reported ability to express their feelings in the long text\". Was it a part of the form?\n\n4) It is not clear if the number of tokens is the same as vocabulary. It would be useful to know the vocabulary sizes of the datasets.\n\n5) The github repo also includes LIWC statistics of the texts. This could also potentially be useful.\n\n6) There is a low correlation between the 'concerns' (work, money, death?) and worry. In contrast, the top topics from the model include job and family. Is it surprising?\n\n7) It would be useful to add statistics on how frequently the participants reported using Twitter. This would be helpful to understand the quality of the short text.\n\n8) Observation: The classifier is not state-of-the-art. It would be useful to add citations to papers which use linear regression for emotion analysis.\n\n9) Was the linear regression model also trained using spaCy?\n\n10) Is the low MAE for worry related to the fact that worry was central to the annotation? Could there have been a bias in the annotations? The stdev for worry is also the lowest (as shown in Table 1).\n\nThe dataset would certainly be useful for future work. The analysis of the dataset (LIWC, topic models) is very interesting. The paper is easy to follow as well.\n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to reviewer\nThanks for the encouraging and helpful review.\n\nPlease find our response re. your original points in order below:\n\n1. The participants received the same instructions (“to write a text that expresses their feelings about COVID-19”) for both types of text. If they simply shortened the longer text for the Tweet-size version, we would have expected to see the same kind of topics. We will add a clarification about this in the revision.\n2. We agree that the focus on “worry” does not fully align with the instructions the participants received. While worries emerged as a general issue, the focus of calling the dataset “emotion dataset” is more adequate indeed. We will take this into account for the revision (as this was also raised by reviewer 2) and also change the name of the dataset to more adequately reflect the focus on emotions.\n3a) We will adjust the focus to “emotions” more generally. We included worry to be judged as participants might have a broader idea of worries compared to anxiety. However, these two are likely highly correlated (we will add this analysis). The revision will clarify this.\n3b) We add the data of self-reported ability to section 2.3 Descriptive statistics\n4. Agreed - we will add voc. size, tokens, and TTR to the corpus descriptives. \n5. -\n6. We should have clarified this better. The “concerns” from the LIWC do not necessarily relate to negatively connotated concerns but more broadly “areas that concern people” rather than something they worry about. This could be why the correlation is low. That these areas are also emerging from the topics is interesting but needs clarification. We include this in the revision.\n7. Yes, fully agreed. We add these statistics to section 2.3 Descriptive statistics.\n8. Agreed. We will add references here and explore better models. The revision will include these.\n9. We used sklearn and will clarify.\n10. We will clarify (see also point 2 above) in the revision better that the core focus was on emotions. Another explanation could be that it naturally was so ingrained in the texts (i.e. people were predominantly worried), that the model was biased towards predicting high worry values. We will explore this further and add insights to the revision.\n",
"role": "user"
},
{
"content": "I value the authors' response. Most of my concerns have been addressed. I continue to recommend this work for acceptance.",
"role": "assistant"
}
]
|
M4wgkxaPcyj | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<M4wgkxaPcyj>>```",
"role": "user"
},
{
"content": "Title: Simple but effective method to detect COVID-19-related videos containing misinformation\nReview Content: This work proposes a simple but effective method to detect COVID-19 misinformation YouTube videos by using the percentage of conspiracy comments. While the authors did not examine video contents at all, the classification accuracy reached 82.2 by considering video title and the percentage of conspiracy comments only.\n\npros:\n- The approach is simple but works well.\n- The complete YouTube comments dataset and the Colab notebook file are publicly available.\n- Great efforts were made to construct COVID-19-related misinformation video datasets. \n\ncons:\n- While I have some minor concerns/suggestions, none of them are serious flaws. \n\ncomments:\n- Details in the labeling process should be added: For video labeling, did two authors independently label videos, and were all the labels agreed? For comments labeling, more explanations on the actual labeling process are required. For example, how many labels are collected per comment? \n- Comparison with a bag-of-words model: As the authors already have all the text of comments, building a bag-of-words model based on the first 100 comments (videos with less than 100 comments were easy to predict) is not super hard but will make the author’s approach more concrete by adding another baseline. As there are several attempts to infer the political leaning of news articles based on their comments, bag-of-words models employing comments could be a solid baseline in this case as well. \n- Agree & Conspiracy labels in Table 1: drawing 2x2 table might be helpful to understand the characteristics of Agree and Conspiracy as the authors mentioned ‘their different linguistic properties.’ \n- Crowdtangle’s historical data of public Facebook posts (Section 3.1): citations should be added.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Great suggestions! Now implemented\n**Thank you very much for the great suggestions!** They allowed us to achieve better results and a more concrete approach. Here are our changes:\n\n> Details in the labeling process should be added\n\nWe added the necessary information about the labeling process. We had previously omitted it for spacing reasons. I agree that it is important to be transparent about labeling.\n\n> Comparison with a bag-of-words model\n\nThank you for the great impulse! This changed the implications of our study. Using the first 100 comments has a good performance to classify COVID19 misinformation videos (83.9%). Adding the conspiracy percentage the new accuracy is 89.4%. We updated the discussion, abstract, and contributions to take into consideration these new insights. \nThe results suggest that early detection of YouTube misinformation videos is possible, and could allow that videos like \"plandemic\" do not go viral and unnoticed for a long time. \n(Note: we used tf-idf instead of a bag of words model to tokenize the data. We tried both and tf-idf gave slightly better results. )\n\n> make the author’s approach more concrete by adding another baseline.\n\nThis made us reflect on the conspiracy comment classifier. We did not provide baselines here either. We decided to change this and included two baseline models, one with LWIC and another one with Multinomial Naive Bayes. The difference between baseline and transformer models is better appreciated on a new precision-recall plot that we introduced for this classifier. \n\n> As there are several attempts to infer the political leaning of news articles based on their comments\n\nThis is true! I added new citations on studies using user comments for prediction. Interestingly, Jiang and Wilson (2018) also used comments to detect misinformation. However, they only use linguistic signals and find no strong prediction value from user comments. (They did not separate between agree and misinformation labels for example)\n\n> Agree & Conspiracy labels in Table 1\n\nFirst, we eliminated Table 1 as the information was already in the text. We did not proceed to create the 2x2 table, but we performed $\\chi^2$ tests on the bag of words of the agree and conspiracy comments. Less than 1% of the tokens appear in both datasets in a significantly similar manner. We hope this discussion helps to understand why the two types of comments are different linguistically.\nOur motivation for dividing the labels is that previous studies have used simply misinformation or believe in misinformation as a label and reported results on classifying such post. The problem with not dividing between comments such as \"I agree with what you say\" and \"#pizzagate Justin Bieber is also part of the ring\" is that the classifier will learn more to differentiate between agreement and real misinformation content.\n\n> Crowdtangle’s historical data of public Facebook posts\n\nWe added the citation\n\nAdditionally the code has been updated on GitHub.\n\n*With the above modifications, the paper is now 5 pages long... I know that for ACL short papers it is allowed to have 5 pages to address the reviewers' comments...I hope this is not a problem, otherwise, I would remove some plots to keep the 4 pages limit*\n\n\n",
"role": "user"
},
{
"content": "Title: Thanks for your R&R\nThanks for your quick r&r to address my concerns.\n\nI'm glad that the authors achieved better performance, believing that the study becomes more solid.\n\nI increased my rating. Thanks!\n\n\nRegarding the paper length, please directly contact the workshop chairs.\n\nI'm not sure whether the resubmission's length violation is important or not in making a decision.\n\n\n",
"role": "assistant"
}
]
|
PlUA_mgGaPq | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```",
"role": "user"
},
{
"content": "Title: Useful resources collected and introduced\nReview Content: This work is an introduction to a suite of resources and their corresponding ideas revolving around the COVID data provided by the Allen institute. Authors in particular provide Neural Covidex which allows exploration of information retrieval and NLP techniques over the COVID dataset.\n\nThe positives of this manuscript is as it also says gathering all the information and resources one would need to start exploring this dataset and further the techniques without having to spend weeks to put them all together. It is significant effort to get this far and of course authors are relying on their previous work over many years. There are some good pointers for some of the recent work in biomedical IR that is useful to know. I personally learnt from the content and enjoyed reading it.\n\nThis work however is written in haste (as it was needed due to the time limit of the current situation) and it is lacking the formal language of a scientific paper and of course evaluation. Obviously evaluation on this particular dataset is not straightforward given for example TREC is only starting on that effort. However though some initial analysis of the dataset could strengthen it. I was half expecting to see the discussion on evaluation metrics be a bit more mature and be given a separate section at least. This work as it stands, while very useful for the IR/NLP community it needs some reworking of the content to get to the state of publication in an ACL workshop. \n\n\n\n\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Yup, we'll update with TREC-COVID results when they become available! \nDear Reviewer, thanks for your feedback.\n\nYour comments are very much in line with those of reviewers #1 and #2. We'll update the paper with TREC results as soon as they are available.\n\nBest,\nJimmy (on behalf of all the co-authors)\n",
"role": "user"
},
{
"content": "I value the authors' response. The response covers nearly all of the issues I raised. I uphold my recommendation to accept this paper.",
"role": "assistant"
}
]
|
PlUA_mgGaPq | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```",
"role": "user"
},
{
"content": "Title: Laudable effort, but currently more preprint material\nReview Content: This paper describes the rapid deployment of an IR system for the CORD-19 dataset. The system borrows state-of-the-art pieces, such as T5 and BioBERT, with training on MS MARCO. A search interface is shown that includes potential answer highlighting. Absent hard evaluation data, the authors provide more of a narrative of the system development, from initial conceptualization, early builds, deployment, and social media advertisement.\n\nFirst and foremost, the authors are to be congratulated for this work. The effort is most laudable.\n\nThe style and tenor of the paper should be described as somewhere between a Medium post and a work-in-progress preprint. While normally this would be out of place, it is less out of place in a workshop such as this one. The casual language is fine and almost cute, while the frankness is quite welcome, but the other preprint-like or web post-like aspects of the paper are more problematic.\n\nLet's start with the lack of evaluation, which the authors acknowledge that \"It is, of course, expected that papers today have an evaluation section that attempts to empirically quantify the effectiveness of their proposed techniques\". Of course, this empirical validation is what makes it science, as opposed to engineering/marketing. I certainly understand that the resource commitment to a TREC-style evaluation is beyond one group's ability under a pandemic situation. But the authors present the solution within their own paper: from footnote 14 it appears that the TREC-style evaluation is underway, and that should have preliminary results available well before this workshop's deadline. So why not simply issue this as a preprint to flagplant their admittedly laudable effort, then wait until some empirical results are available and submit *that* paper to the workshop? If the authors plan to submit a work with results as a separate publication, this should be made more clear and a more compelling argument needs to be made for the scientific usefulness of the \"behind the scenes\" story of this search engine.\n\nThe authors make the claim that instead of the importance of search result ranking (despite Section 3 being about how great the system is at ranking on other tasks), that instead what is really important are the interface improvements that increase the usability of the system. However, every real-world IR evaluation has come to exactly this conclusion: experts always request various bells and whistles. First, note that users can't see the relevant articles that were missed, so they have little choice but to comment on the interface. But more importantly, good ranking != good usability, this is no novel claim. But usability evaluation is a well-honed science, and the authors do not perform any kind of usability evaluation either. These are less resource-intensive as well, typically requiring just an IRB protocol and some experts willing to provide feedback.\n\nMy main stylistic issue is that in many places this paper comes across as a brag-fest for the accomplishments of the group. These accomplishments are beyond doubt, of course, which is all the more reason why the authors can avoid tangents to cite unnecessary prior work (e.g., most of the BERT stuff in the first paragraph of Section 3.3, as this is not even used in the system). The implications in Section 5 that the authors' group's \"culture\" and use of good software practices, unlike other poor-coding researchers, is also unnecessary (why is a \"faculty advisor\" reference even there?). One can extol the virtues of good culture and software engineering without coming across as bragging, so I would recommend a re-write of those parts of this paper to take a more scientific tone.\n\nMinor:\n- In Section 3.1 it is unclear what the \"1\" in \"n+1\" comes from. Do the authors also include a document that is just the title/abstract without the paragraphs?\n- \"who are engage in\" -> \"who are engaged in\"\n\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Yup, we'll update with TREC-COVID results when they become available!\nDear Reviewer, thanks for your feedback.\n\nAs we've responded to reviewer 1 already, we are participating in TREC-COVID and will update the paper to incorporate results from round 1 when they become available.\n\nBest,\nJimmy (on behalf of all the co-authors)\n",
"role": "user"
},
{
"content": "I value the authors' response. It answers most of my concerns, and I will likely adjust the score higher.",
"role": "assistant"
}
]
|
PlUA_mgGaPq | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```",
"role": "user"
},
{
"content": "Title: Evaluation of a rapidly deployed search system for the CORD-19 data set\nReview Content: The authors have quickly stood up a conventional and neural search system based on the CORD-19 dataset, which is a commendable task given the global crisis caused by the Covid-19 pandemic. However, this paper is just a description of their system and design decisions, with some additional discussion of why a conventional TREC system-oriented retrieval evaluation would be limited. I agree with the authors on that point.\n\nHowever, it would be helpful to have some sort of evaluation of their system to gauge whether their approach offers any novelty beyond the multiple other systems that been stood up and linked to from the Allen Institute site where the data is housed.\n\nWhile I agree that a system-oriented evaluation approach would be limited, it would also be helpful to, say, compare the baseline and neural approaches. The first round of the TREC-COVID challenge evaluation is being conducted as I write this, and the first set of results will be available by late April or early May. I believe a better approach would be to include these results and then discuss their limitations.\n\nIf the authors do not believe the system-oriented results are important, they can explain why. They could provide some usage statistics for their system and describe other real-world use. They can also propose better evaluation studies, including those that involve users as they allude to in their paper.\n\nOverall this is good work, but it could be much better, and we will hopefully learn more as the system is used, the test collection grows, and more complex tasks beyond ad hoc retrieval are evaluated with it.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Yup, we'll update with TREC-COVID results when they become available!\nDear Reviewer, thanks for your feedback.\n\nIt is not the case that we believe that system-oriented results are not important, rather that, as you note and agree with, \"a system-oriented evaluation approach would be limited\" - for the reasons we have articulated in the current paper.\n\nNevertheless, we are participating in TREC-COVID. We shall update the paper to incorporate results from round 1 when they become available, which is a bit less than two weeks from now.\n\nBest,\nJimmy (on behalf of all the co-authors)\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It satisfactorily addresses the major issues I had. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
SPxaJuM4Hbz | ACL 2020 Workshop NLP-COVID | AnonReviewer4 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer4, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```",
"role": "user"
},
{
"content": "Title: Study of automatic classification of LitCovid categories.\nReview Content: This article presents a comparison of methods for the classification of articles from the LitCovid collection. From all the evaluated methods, one based on BioBert performs better compared to other evaluated methods. The contribution of the paper is a comprehensive study of a broad range of methods to the multi-class classification of LitCovid. The classifiers are evaluated on a small set of papers from the CORD-19 collection. The article is clearly written and plenty of details. The authors provide the data and code, which would support reusability and reproducibility.\n\nThe performance on CORD-19 is significantly lower compared to the reported numbers for LitCovid. This might be due to several reasons, including the limited size of the CORD-19 articles used or the way manual annotation done by the authors might differ from the one performed in LitCovid.\n\nThe LitCovid collection already provides the classification, thus the classifier might have limited used unless a specific use case is specified in which this work would be used.\n\nThe title might be willing to include the specific collection on which the methods have been tested on, since the collections and the labels are provided by the LitCovid data set.\n\nThe article has been recently uploaded and currently has almost 28k PubMed articles. It might be relevant to mention when the documents were obtained. The authors mention 8k, thus probably it was obtained sometime in April.\n\nIn table 1, 2861 —> 2,861\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 3\nWe are grateful for the reviewer taking the time to read and review our paper. We address some concerns below:\n\n> The article has been recently uploaded and currently has almost 28k PubMed articles. It might be relevant to mention when the documents were obtained. The authors mention 8k, thus probably it was obtained sometime in April.\n\nThe version of the LitCovid dataset we evaluated on was obtained on May 20th and is available on our Github. It contained around 14,000 articles at the time out of which we used 8,000 that were either full articles or abstracts. The rest consisted of only article titles since their abstracts were not available on Pubmed. We are currently running experiments on the most up to date version of the dataset and include scripts to download the most up to date version of the LitCovid collection to make future evaluation straight forward.",
"role": "user"
},
{
"content": "I value the authors' response. Most of my concerns have been addressed. I continue to recommend this work for acceptance.",
"role": "assistant"
}
]
|
SPxaJuM4Hbz | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```",
"role": "user"
},
{
"content": "Title: Overall a good exploration of previous document classification algorithms applied to Covid-19.\nReview Content: The paper presents an exploration of the main document multi-classification algorithms, training and developing on the LitCovid dataset and testing on a small manually annotated portion of the much larger CORD-19 dataset. The LitCovid dataset includes labels for each document to 8 categories with about 1/3 of documents including multi-labels. The significance lies in the usage of these multi-classification algorithms for researchers, clinicians, epidemiologists, and policy makers to find the most relevant articles in a timely manner given the urgency of the pandemic. \n\nThe paper is clear and well-written. The application of document classification algorithms to Covid-19 is novel, but the algorithms themselves are not.\n\n\nPros:\n- An in-depth exploration of many different algorithms along with tuning for best performance.\n- Contains both intrinsic and extrinsic evaluations.\n- The discussion on data efficiency is quite interesting and can have a large impact on how much data is needed to annotate for future creation of gold standard datasets.\n- An error analysis that suggest future work and updates.\n- Important to be applying all research we have so far on these new datasets to help with Covid-19.\n\n\n\nCons:\n- The manual annotation of the subset of CORD-19 seems to not be reliable as 34% of errors were annotation related.\n- Guidelines and annotation process not provided for the CORD-19 set.\n- The CORD-19 set is small and the performance does drop, suggesting that the models are not generalizable.\n- No discussion of providing these models to LitCovid to help with the annotation labeling, even if it is a starting point.\n- The accuracy is quite low for most algorithms and there is no discussion about this.\n- Very little discussion on the very different distributions of the categories between LitCovid and CORD-19. \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 2 \nWe are grateful for the reviewer taking the time to read and review our paper. We are delighted that you found our work to be significant in addressing the current crisis and interesting. We address some concerns below:\n \n> The manual annotation of the subset of CORD-19 seems to not be reliable as 34% of errors were annotation related.\n \nThe error analysis discussed in the paper was on LitCovid errors and not CORD-19 errors. Even though we did not explore the amount of annotation related errors in the CORD-19 test set we are confident that our annotation process was reliable due to the annotation process we discuss below.\n \n> Guidelines and annotation process not provided for the CORD-19 set.\n \nSince the categories are generally intuitive for biomedical experts annotation guidelines were not explicitly provided. The experts (all with years of biomedical research experience) were given a random sample of LitCovid articles for each category to familiarize themselves with the annotations.\nTo come up with our annotations, as discussed in Section 2.2, we asked biomedical experts to label a small set of 100 articles found only in CORD-19. Each article was labelled independently by two annotators. For articles which received two different annotations (around 15%), a third, more senior annotator broke ties.\"\n \n> No discussion of providing these models to LitCovid to help with the annotation labeling, even if it is a starting point.\n \nWe have reached out to the LitCovid team and would love it if these models would help make their annotation process easier, faster and more accurate.\n \n> The accuracy is quite low for most algorithms and there is no discussion about this.\n \nWe agree that the accuracy scores are quite low for these models and believe that they can be partially explained by the same spurious label correlations we discussed in Section 4.3.\n \n> Very little discussion on the very different distributions of the categories between LitCovid and CORD-19.\n \nWe appreciate your feedback, the discrepancy in label distributions between LitCovid and CORD-19 should be mentioned as a possible explanation for the generalization failure that should be further explored. To elaborate a bit more on where the distribution discrepancy originates. Many articles in CORD-19 were published before the COVID-19 pandemic and focus on general coronaviruses and previous epidemics. Therefore, a large portion of the CORD-19 articles study the disease mechanisms of other coronaviruses. However, since LitCovid is exclusively about the ongoing COVID-19 pandemic, there is a large number of articles discussing the transmission, prevention, and case reports, hence the distribution discrepancy. ",
"role": "user"
},
{
"content": "I acknowledge the authors' response. The reply answers almost all of my questions. I keep my recommendation for this paper to be accepted.",
"role": "assistant"
}
]
|
SPxaJuM4Hbz | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```",
"role": "user"
},
{
"content": "Title: Review of Document Classification for COVID-19 Literature\nReview Content: This paper provides an empirical evaluation of several standard models' performances on a covid-19 related document classification task. Specifically, Covid-19 research papers are classified into one or more of eight subcategories based on their topic. Datasets for this task are constructed from LitCovid, and the Covid-19 Open Research Dataset. These datasets are shared online. Finetuned pre-trained language models are found to outperform more traditional models, such as SVM, CNN, or LSTM. In particular, BioBERT and Longformer are found to be most effective.\n\nThe paper introduces a multilabel document classification dataset which categorizes Covid19 related articles into one or more of eight subcategories. This dataset is distinct in that it represents a large body of work in the medical domain, united by a common subject. The data is drawn largely from LitCovid, with a supplemental test set drawn from the COVID-19 Open Research Dataset (CORD-19). The paper notes that models trained on the LitCovid data generalize poorly to the CORD-19 data, and theorizes that this is due to the models failing to ignore the overarching COVID-19 topic. It is unclear if this is the case, however, as, while CORD-19 includes papers related to other coronaviruses, COVID-19 is the overarching topic of both datasets.\n\nThis paper provides a thorough analysis of many different models' performance with thorough documentation. However, the efficacy of fine-tuned pretrained language models on document classification is well established in a large body of research. While this particular classification task is somewhat novel, demonstrating that finetuned pretrained language models are most effective upon it is a relatively limited contribution.\n\nPros\n - introduces a multilabel document classification task related to COVID-19\n - provides a dataset to support this classification task\n - provides a clear and thorough empirical evaluation of many different models' performances on this task\n\nCons\n - the empirical performance evaluation largely confirms trends shown by previous research\n - models trained on the provided dataset are shown to generalize poorly. The reasons for this are underexplored.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 1\nWe are grateful for the reviewer taking the time to review our work. We are glad that you found the proposed classification task to be novel and our empirical evaluation to be clear and complete. To address some of your concerns:\n \n> The paper notes that models trained on the LitCovid data generalize poorly to the CORD-19 data, and theorizes that this is due to the models failing to ignore the overarching COVID-19 topic. It is unclear if this is the case, however, as, while CORD-19 includes papers related to other coronaviruses, COVID-19 is the overarching topic of both datasets.\n \nEven though CORD-19 is seen as primarily a COVID-19 dataset, around half the articles in its first version were about older coronaviruses and other epidemics. Additionally, the 100 articles in our test set were drawn from the ones in CORD-19 but not in LitCovid, so their overarching theme is not COVID-19 but the general coronavirus family. In fact, this test set contains the term \"COVID-19\" in only one of the articles and is mostly unrelated to COVID-19. This setting aims to test whether the models have learned general signals discriminative for the categories or superficial signals specific to COVID-19. Nonetheless, we agree with the reviewer that it is worth further investigation to ascertain that the models' inability to ignore overarching COVID-19 information is the main reason why they fail to generalize.\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I continue to recommend this work for acceptance.",
"role": "assistant"
}
]
|
XOkm8xdns5R | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<XOkm8xdns5R>>```",
"role": "user"
},
{
"content": "Title: Fairly sound approach, but several serious concerns\nReview Content: This manuscript discusses the use of crowdsourcing to label sub-sentence segments with labels that look a lot like the ones used for structured abstracts.\n\nWith few exceptions, I found the manuscript to be clear. The task is a reasonably interesting one (though it would be a good idea to motivate it a bit more by showing what it could be used for) and the methods are reasonable and I am reasonably convinced that the approach described would produce the results shown. This paper is a pretty good example of the sort of work that can be done with crowdsourcing. I am not aware of other work approaching similar classification tasks with crowdsourcing, but I did not go looking. One very important point that seems to be buried is the fact that the annotation took 10 days. Every annotation project I have ever been involved with that used experts took months; this is a strong argument in favor of this approach.\n\nThere are quite a few things that concern me, however.\n\nFirst, the methods are all straightforward applications of fairly well known work. For the machine learning that is ok, but there are few insights in the crowdsourcing methods or the results that would not be strongly expected. In other words the authors need to make much more clear any claims to originality or significance. \n\nThe paper neglects to mention that many of the abstracts in CORD-19 are structured. For example, the abstract for document e9cgnhbq (PMID 20153051) has the following sections: Objective, Methods, Results, Conclusion. While the correlation may not be 1-to-1 to the annotation scheme, it seems that the mapping might be pretty high. How much does this help the crowd workers? Would it help the classifiers?\n\nThe claim that data sparsity is a challenge in general is of course true - there will never be enough annotated data to train a system to do every task we could want at near-human performance levels - but somewhat irrelevant. If there are no existing datasets that can be used to perform this task, then it is a challenge and the comments about the difficulty of annotating for other tasks is irrelevant. If there are existing datasets for the task needed, then the challenge is not lack of data but whether the dataset really does allow the task to be addressed. More likely, there are datasets that solve a similar task, and these should be evaluated and tested.\n\nWhich version of CORD-19 was used? A new version is released every week.\n\nWhile it is appropriate to provide compensation that roughly approximates 'fair', it is not mentioned how long each abstract took on average and therefore what the actual pay rate was. It would be useful to hear how long it took the experts as well. When I do annotation - admittedly for a more complex task - I almost always have to read the text multiple times.\n\nThis is essentially a text classification task, a task that multinomial Naive Bayes performs quite well on (probably not as well as BERT, but still). See:\nMcCallum, Andrew, and Kamal Nigam. \"A comparison of event models for naive bayes text classification.\" AAAI-98 workshop on learning for text categorization. Vol. 752. No. 1. 1998. \nTackling the poor assumptions of naive bayes text classifiers.\" Proceedings of the 20th international conference on machine learning (ICML-03). 2003.). \n\nThere is a strong but imperfect ordering of the sections: \"Background\" will almost always come before \"Method\". Note that at least one of the cited papers mentions this: Huang 2017. So why was that information not used in the classification models? The way to use it would be different for most of the models, but for naive bayes, the way to use it would be to switch to using a hidden markov model, which handles sequential information. \n\nStructured abstracts provide a great deal of information that is not being exploited here. In addition to being used as a feature directly, they could be used to train a second classifier that would serve as a feature: the structured abstracts will provide section labels and your existing TF-IDF vectorizer provides samples of vectors from segments. If these are run over a large corpus - not even necessarily covid-related - then a multinomial naive bayes model (or your favorite high dimensional instance classifier) could be created that predicts the section label from the TFIDF vector (importantly, this would work even if the abstract being predicted was not structured). There would be some section labels that are rare, these can simply be removed from the model. Then any TF-IDF vector for a segment could have section labels predicted, with that prediction fed as a feature for the classification of your annotation scheme.\n\nThere is no analysis provided to show whether 9x annotation is better than, say 5x annotation for this task. There is no analysis provided of how large the corpus needs to be, ie an ablation study with the classifiers. If the classifiers have the same performance on 11k abstracts that they would have on 5.5k abstracts, then maybe we don't need 11k. If 5x annotation of 5.5k abstracts provides close enough performance, then a sufficient corpus could have been created for 1/4 the cost.\n\nIt is not made clear why the annotation scheme will help fight COVID-19: what use cases do research aspects enable?\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Revision Submitted\nWe just submitted a new version of the paper, in which we addressed most of your comments. Please let us know if you have more questions. Thanks!",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It answers most of my concerns, and I will likely raise the rating.",
"role": "assistant"
}
]
|
XOkm8xdns5R | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<XOkm8xdns5R>>```",
"role": "user"
},
{
"content": "Title: Potentially important topic, but a number of questionable claims and an overall lack of contact with relevant literature detract from the paper quite a bit.\nReview Content: OVERALL\n\nThank you for the opportunity to review this interesting paper about annotation of abstracts from the CORD-19 corpus. \n\nOVERALL STRENGTHS:\n\n1.The paper is written in the official language of the conference.\n\n2. The paper is within the page limits of the conference.\n\n3. The abstract accurately reflects the contents of the paper.\n\n4. The discussion of worker wages and total cost is very nice. The paper could do a better job of showing the reader why it is important by putting it into the context of Fort et al.’s paper on the ethics of using AMT for NLP data (see the References section below), as well as Cohen et al.’s paper on the relationship between Turker pay rates and actual Turker earnings (from that work, there appears not to be one). Additionally, the paper mentions a target hourly wage, but does not report the ACTUAL hourly wage. This seems like an omission.\n\n5. The data quality assessment is very nice.\n\nOVERALL AREAS FOR IMPROVEMENT:\n\nFor visibility, consider adding “COVID-19” to the title of the paper.\n\nThe literature review is missing quite a bit of relevant work. This is an important gap in the paper because adequate coverage of related work would help in making the significance and innovation arguments.\n\nThere are a couple of claims in the Introduction that are either not relevant or not supportable, and coming in the Introduction, they reduce a reader’s confidence in the paper as a whole. Specifics:\n\nThe statement that producing annotations for thousands of papers can be a prolonged process if only expert annotators are used and that their availability is limited is not connected with the rest of the paper. There is also no evidence that it is true. In fact due to the virus and attendant shutdown, expert annotators might have actually been MORE available at the time this work was done than at any time since the beginning of corpus annotation. I don’t have data on that, but the paper doesn’t present any data for the opposing hypothesis, either, and since there probably is none available, I would advise simply dropping any critique of manual annotation by experts. If the authors feel that it must be included, then be balanced--point out the benefits of expert annotation (see, for example, Amber Stubbs’s dissertation on the topic--sorry I couldn’t find the reference for this one; I suggest contacting her directly).\n\nThe claim that data sparsity is a major challenge for biomedical text mining is not supported, and probably not supportable. There is actually a LOT of data available for BioNLP research--that is why most people got into the field in the first place (see Cohen and Demner-Fushman’s book “Biomedical natural language processing)”. Tons of projects have produced quite a bit of data--e.g., Cohen et al.’s paper on multi-model annotation in the CRAFT corpus lists 25 (twenty-five) biomedical corpus constructions just in the five years before the publishing of that paper.\n\nThe scientific premises of the paper include the idea that the annotation task would typically be thought of as requiring expert knowledge; that is why the paper uses it to study the use of non-expert annotators. The approach to such a research question is reasonable, but the data and the task are not suitable to answering the question. The annotation scheme (Background/Purpose/etc.) does not seem like one that is necessarily difficult for non-experts, for example in the case where they are marked explicitly in abstracts. So, either demonstrate that they’re not marked, or improve the argumentation with a reasoned claim that the task would EVER be thought of as requiring expert annotators.\n\nThe paper’s description of the dataset is inadequate. It samples the CORD-19 corpus, but the CORD-19 corpus has a number of versions (commercial, non-commercial, for example), and the paper does not tell us from which part the CODA corpus was sampled.\n\nIn Table 2, it is not clear to me that F1 is the right measure of agreement here. Consider supporting an argument that it is.\n\nThe paper reports kappa, but does not say how P(e)n was calculated. That makes it difficult to interpret the kappa score at all.\n\nThe result given at the end of Section 5 regarding which model gave the “best” classification performance is not relevant. The point of reporting classification scores in a paper on a corpus is as a measure of annotation quality and quantity--the argument would be that if a classifier can build a good model, then the annotations are probably consistent and the quantity of data is probably adequate (see the Pustejovsky and Stubbs reference). Which model performs “the best” is not actually well-evaluated in the paper, and it is not relevant to the paper, so I would strongly suggest removing that small paragraph altogether.\n\nAs academic courtesy if nothing else, I strongly recommend that you mention the classic work on crowdsourcing for production of NLP data.\n\nREFERENCES (sorry for all of the Cohen references--he has done a lot of work in this area)\n\nFort, Karën, Gilles Adda, and K. Bretonnel Cohen. \"Amazon mechanical turk: Gold mine or coal mine?.\" Computational Linguistics 37.2 (2011): 413-420.\n\nCohen, K. Bretonnel, et al. \"Ethical issues in corpus linguistics and annotation: Pay per hit does not affect effective hourly rate for linguistic resource development on amazon mechanical turk.\" LREC... \n\nInternational Conference on Language Resources & Evaluation:[proceedings]. International Conference on Language Resources and Evaluation. Vol. 2016. No. W40. NIH Public Access, 2016.\n\nCohen, Kevin Bretonnel, and Dina Demner-Fushman. Biomedical natural language processing. Vol. 11. John Benjamins Publishing Company, 2014.\n\nCohen, K. Bretonnel, Karin Verspoor, Karën Fort, Christopher Funk, Michael Bada, Martha Palmer, and Lawrence E. Hunter. \"The Colorado Richly Annotated Full Text (CRAFT) corpus: Multi-model annotation in the biomedical domain.\" In Handbook of Linguistic Annotation, pp. 1379-1394. Springer, Dordrecht, 2017.\n\nPustejovsky, James, and Amber Stubbs. Natural Language Annotation for Machine Learning: A guide to corpus-building for applications. \" O'Reilly Media, Inc.\", 2012.\n\n\n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Revision Submitted \nWe just submitted a new version of the paper, in which we addressed most of your comments. Please let us know if you have more questions. Thanks!",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It satisfactorily addresses most of my concerns, and I will consider raising the rating score.",
"role": "assistant"
}
]
|
ZQ_HvBxcdCv | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<ZQ_HvBxcdCv>>```",
"role": "user"
},
{
"content": "Title: straightforward but interesting case study in building NLP system from scratch for covid-19\nReview Content: This paper describes the creation of a system from scratch for processing clinical text at the VA to detect positive Covid-19 cases. While lab tests done at the VA will be in structured data, patients who may get some care elsewhere would have positive Covid results from other care recorded only in the free text, and the authors want to be able to capture those cases.\n\nThe challenges are that there was no existing data to build training sets for machine learning, there are many mentions of Covid and related terms in contexts that do not represent positive cases (negative test results, family members, describing the reason for a televisit).\n\nThe authors of this work created an NLP pipeline built on Spacy, including Context for negation/uncertainty/experiencer, sectionizer, etc. In the end, the decision of positive/negative was made based on finding a target term that was not uncertain, negated, or experienced by someone besides the patient, and that did have a positive modifier, occur in a diagnosis section, or was used to modify another condition (\"Covid-19 pneumonia\").\n\nSince this is an IR-like problem it is difficult to evaluate recall but straightforward to evaluate precision. For recall what they do is evaluate at the patient level on the subset of patients for whom they do have lab results they can use as a gold standard.\n\nWhile the NLP described here is fairly straightforward rule-based methods, this is an interesting use case of bootstrapping an NLP system from nothing for a new problem where there is no retrospective data available to build ML systems. The authors tease at the end that now that they've done this bootstrapping work they have essentially a labeled dataset that could be used for an ML system. A little more focus on this aspect would've been interesting to this reviewer, even if there are no ML results to report yet.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer #3\nWe thank the reviewer for reading our paper and for providing comments. We agree that machine learning offers a potential to improve system performance, but we identify some challenges to implementing such a model which need to be addressed first. A revised manuscript will include text such as the paragraph below which offers more information on our plans for machine learning and the challenges in training a classifier for our task: \n\n“In future work, we plan to evaluate machine learning methods to improve identification of positive cases. A machine learning classifier could potentially improve our current system by improving document classification accuracy or identifying high-probability cases for review. This was not feasible in early stages of the response since there were very few known cases and no existing reference set. We have now identified thousands of possible cases which could be leveraged as the basis for a training set for a supervised classifier. However, as stated previously, our clinical review did not equate to creating a reference set and there are some challenges to using our dataset as a training corpus. Specifically, while many positive patients have been identified through chart review, clinical reviewers did not label negative cases, so negative labels may not be consistent. Additionally, labels are assigned at the patient level, rather than the document level, and confirmed positive patients may have a mix of both positive and negative documents. However, we believe that with additional some validation and review, a machine learning classifier has the potential to augment our system’s performance.” ",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.",
"role": "assistant"
}
]
|
ZQ_HvBxcdCv | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<ZQ_HvBxcdCv>>```",
"role": "user"
},
{
"content": "Title: A Natural Language Processing System for National COVID-19 Surveillance in the US Department of Veterans Affairs\nReview Content: The paper proposes a natural language processing pipeline for identifying positively diagnosed COVID19 patients, which has been deployed as part of the VA national response to COVID19. The objective of the NLP system is to simply detect if a clinical document contains a positive COVID19 case. \n\nThe paper is very USA-specific. One of the motivations of the study is the fact that community-level testing and drive-through testing are often not counted in numbers derived from healthcare systems. Thus, the authors propose an approach that detects these cases from clinical texts within the Department of VA. The motivation could be made stronger by including numbers of people whose COVID19 status are encoded in free text only. What percentage of the patients may potentially be missed by laboratory-based surveillance methods?\nIt is unclear if the problem of not counting certain positive cases also applies to the VA. Do VA health networks not report laboratory test based COVID19?\n\nPreprocessor: ‘remove problematic template text’ and ‘normalize lexical variants’ are very vague descriptions of what is actually being done. Normalizing lexical variants can be as simple as simply stemming and as complex as mappings multi-word variants to standard IDs (which of course is a very hard problem).\nTarget matcher: Is exact matching used? What is the rate of misspellings within the clinical notes (i.e., how many misspellings per number of tokens?)\nDocument Classifier: what are the positive modifiers? How many of them are there? How often can it capture actual expressions of the diagnoses. \nIn Document Processing, how many keywords are actually used? How many times are these not spelled correctly?\n\nWhat are the actual sections (incl. number of sections)?\n\nThe number of patients without laboratory evidence but with evidence only in clinical notes is quite large—and it justifies the development of such a pipeline. \n\nThe causes for the false negatives are quite straightforward and can be easily addressed. It would be good to ascertain how precision/recall would change if these simple issues (e..g, new lines) were fixed during the preprocessing phase. \n\nIt appears as though the authors did not consider inexact machine and/or semantic matching (e.g., via word2vec-type vectors). Recent research has shown that lexical similarity and semantic similarity based measures can generate data-centric variations/misspellings of noisy texts (e.g., from EHRs and social media), and such approaches have the potential to significantly improving performance—and they will not require machine learning (which would require large amounts of training data). \n\nSince the test results for a very large number of patients already available, it would be pretty straightforward to just use those in a supervised learning framework. \nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer #1\nWe thank the reviewer for reading our paper and for their thoughtful comments. Please see below for our responses to the points made in the review. We will include changes in an updated revision following any additional feedback or direction from reviewers and organizers.\n\n---\n**The motivation could be made stronger by including numbers of people whose COVID19 status are encoded in free text only. What percentage of the patients may potentially be missed by laboratory-based surveillance methods?**\n\n**Response:** While the true percentage is not known, we report in the Results 6,360 patients who were found to be COVID-19 positive but did not have laboratory evidence and were identified using free text. This accounts for 36.1% of known positive cases in VA. We will clarify the significance of this percentage in the abstract to make it clear that these are patients without laboratory evidence. \n\n---\n**It is unclear if the problem of not counting certain positive cases also applies to the VA. Do VA health networks not report laboratory test based COVID19?**\n\n**Response:** As described in the Introduction, laboratory test results can be captured and used for reporting when testing is performance within the reporting healthcare system. This holds in the VA as well, and per our findings ~67% of known positive cases were reported using lab results, the remainder being identified using NLP + clinical review.\n\n---\n**Preprocessor: ‘remove problematic template text’ and ‘normalize lexical variants’ are very vague descriptions of what is actually being done.**\n\n**Response:** We will revise this text in the manuscript to offer more explanation and clarity: \n“Preprocessor: Modifies the underlying text before text processing. This step removes semi-structured templated texts and questionnaires which can cause false positives and replaces certain abbreviations or misspellings to simplify later processing steps.” \n\n ---\n**Target matcher: Is exact matching used? What is the rate of misspellings within the clinical notes (i.e., how many misspellings per number of tokens?)**\n\n**Document Classifier: what are the positive modifiers? How many of them are there? How often can it capture actual expressions of the diagnoses.**\n\n**In Document Processing, how many keywords are actually used? How many times are these not spelled correctly?**\n\n**What are the actual sections**\n\n**Response:** Due to the operational nature of this project, we do not collect corpus statistics such as misspelling counts or phrase frequency but agree that these would be interesting research questions for future work. \nAs far as the specifics of the target matching and modifying rules, we present a high-level summary of our system here due to space constraints but have made [our code available publicly](https://github.com/abchapman93/VA_COVID-19_NLP_BSV ). We hope that having access to the code can provide additional details about implementation to interested readers. \n\n--- \n\n**It would be good to ascertain how precision/recall would change if these simple issues (e..g, new lines)**\n\n**Response:** Some formatting issues, such as new lines, are difficult to fix because it is challenging to distinguish between new lines which are inserted as part of EHR formatting vs. actual new lines used to mark new sentences or sections. Preprocessing handles some cases, but applying this step too aggressively can lead to additional errors. \n\nWe also include in revisions some additional examples of false negatives which illustrate more causes of errors. For example, positive modifiers which are not included in the lexicon, leading a mention to incorrectly not be marked as positive. \n\n --- \n\n**It appears as though the authors did not consider inexact machine and/or semantic matching (e.g., via word2vec-type vectors)**\n\n**Response:** We agree that this would be a valuable step for future work in expanding the lexicon. While this was not feasible in early phases of the pandemic, where there were relatively few examples, we now have a much larger corpus which contains a great deal of variation and would benefit from this type of approach. \n\n--- \n\n**Since the test results for a very large number of patients already available, it would be pretty straightforward to just use those in a supervised learning framework.**\n\n**Response:** We agree that machine learning offers a potential to improve system performance, but we identify some challenges to implementing such a model which need to be addressed first. A revised manuscript will include text such which offers more information on our plans for machine learning and the challenges in training a classifier for our task. Due to space constraints in the OpenReview platform, please find our full response to this question in our Reply to Reviewer #3\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. It resolves the majority of my concerns. I remain supportive of my positive recommendation for this paper.",
"role": "assistant"
}
]
|
mlmwkAdIeK | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<mlmwkAdIeK>>```",
"role": "user"
},
{
"content": "Title: nice application to new data set to be made available\nReview Content: This paper explores gender differences in linguistic productions between two groups of Redditors who self-identify as either \"male\" or \"female\". It examines a corpus of covid-19 pandemic threads in relation to two areas: emotion analysis (employing a VAD lexicon and word embedding representations); and topic analysis (employing the tool MALLET).\n\nThe paper's novelty is in the application of an established method to a new corpus that the authors have developed pertaining to covid-19 threads. As expected, the language usage for covid-19 posts had a lower Valence when compared to language used in a baseline corpus. There is also a general trend for the language used in the female sub-corpus scoring slightly higher in the Valence scale than male sub-corpus. The trends are reversed when Arousal and Dominance were examined: overall higher for men, and when comparing baseline to the covid-19 posts, the baselines score slightly lower for both male and female data.\n\nTo compare and contrast the different topics covered between the male and female authored posts, topic modelling was applied to each sub-corpus and the topics with the highest coherence scores were presented. However, applying topic modelling to the corpus as a whole and analysing the topic allocation of the male and female posts would give a better indication of similarities and differences of the topics covered in the sub-corpora. However, two different topic models for each sub-corpus were developed and the most cohesive topics were presented.\n\nIn general the VAD study is interesting, although unsurprsing. The goal of discovering if different or similar topics were covered in the two sub-corpora may be best approached by discovering the topics covered by the corpus as a whole and analysing the topic allocation of the sub-corpora.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer\nThank you for taking the time to read and review our paper, we appreciate the feedback!\n\nPerforming topic modelling on the corpus as a whole indeed seems like a strong alternative to our methodology. If time permits, we can conduct this analysis and include it in the camera-ready version of the paper.",
"role": "user"
},
{
"content": "Thank you for the authors' feedback. The authors have answered almost all of my concerns. I maintain my positive recommendation for this work.",
"role": "assistant"
}
]
|
mlmwkAdIeK | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<mlmwkAdIeK>>```",
"role": "user"
},
{
"content": "Title: Overall the paper is okay but fails to provide the significance of the work.\nReview Content: This paper aims to understand the difference between male and female discourse in social media looking at a manually annotated set of Reddit threads related to Covid-19 compared to a baseline set. They confirm existing results about male and female discourse on the VAD scale.\n\nThe paper is clear and well-written and seems to be an interesting analysis, but fails to provide the significance of the work. Further the only novelty of the work is the application to Covid-19, otherwise all methods are utilizing previous work. This is not to say the authors should re-invent the wheel. \n\n\nPros:\n- An interesting exploration of gender differences that confirms previous results.\n- A good use of previous work on a new corpus. \n\nCons:\n- Missing the overall significance for researchers, clinicians, epidemiologists, etc.\n- It is unclear why Reddit specifically is used. They mention it is the 19th most visited site world. What about the other ones that are more visited? Is Reddit truly representative of the population at large? A description of the basic characteristics of Reddit users and posts would be helpful. \n- There is also a large imbalance between male and female posts (2:1 ratio). \n- This is very heteronormative.\n- The dataset is pulled from 15 weeks starting from Feb 1 to June 1, which was a rapidly changing time. The paper would benefit from a discussion of the different topics discussed over that time in comparison to the topics pulled out by the models. Currently we are in a new \"normal\" and I think that would be reflected during the different weeks.\n- The baseline is pulled from the same time period of Covid-19. An explanation of why the baseline should be the same time frame would be helpful to understand why the baseline is not from before Covid-19 when males and females were posting \"normal\" stuff. \n- The overall results in table 1 are confusing in what is being compared and what is statistically significant. The difference between males and females for the VAD criteria may be statistically significant but it is a minor increase (< 0.2). It is unclear how important this is and what implications it has.\n- A more in depth discussion on the relevance of the most coherent topics for males and females would be helpful. \nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer\nDear reviewer,\n\nThank you for your useful feedback! Please find below our response addressing your points.\n\n- We see the significance of our findings in demonstrating that emotional and topical differences in gender-linked language (shown thus far on small amounts of data and/or number of authors) are borne out and amplified in an emotionally-intensive setting. The novelty of this work lies in applying a novel combination of existing techniques (VAD ratings of words + regression models applied on word and sentence embeddings) for inferring emotional ratings of multi-sentence submissions. We will emphasize these contributions in the camera-ready version, if the paper is accepted.\n\n- The Reddit discussion platform is one of the most popular social media outlets for NLP research. Its monthly traffic closely follows that of twitter and facebook, with the number of active users comparable to that of twitter (330M as of 2020): it represents at large the population active on social media. Offering unlimited length of posts, and a convenient way for providing meta-data about users' traits (such as demographics), this platform provides an improved ability to detect emotion and topics in natural text (compared to character-limited twitter messages). Additionally, our work relies on accurate, self-reported gender of tens of thousands of authors -- information not easily obtainable from other social media outlets.\n\n- Indeed, male authors are more active on reddit (or, at least, they more commonly self-report their gender). As we collected utterances of over 16,000 authors comprising over 100,000 posts, we believe that this uneven distribution does not introduce any statistical bias into our analysis.\n\n- We agree that work on gender divided only into male and female language varieties is not ideal, but unfortunately we are limited by two factors: (1) most work in linguistics and psycholinguistics has focused on these dominant genders, and (2) the vast majority of users on reddit who self-report their gender do so choosing either male or female. An advantage of working on reddit (as opposed to pursuing traditional experimental or case-study analyses) is that, as the amount of text grows and the number of users grows who self-report genders other than male and female, eventually there should be sufficient samples of language that would enable going beyond the traditional male/female categories.\n\n- We conducted topic modeling analysis on the set of posts discussing the pandemic. A diachronic analysis in the dynamics of topics (unrelated to the virus) being discussed on social media and reflecting the new \"normal\" would definitely be an interesting direction for future work.\n\n- We selected our baseline data from the same time period in order to control for other time-dependent factors that could influence the language samples, so that the only difference was whether the posts mentioned the COVID-related keywords or not. We thought this was the most conservative approach (ie, findings could not be linked to other changes between, say, a year ago and the past few months). With over 50,000 daily submissions across a wide range of topical threads (subreddits), we believe that our baseline data (collected from a parallel timeline) reliably reflect general, pandemic-unrelated (\"normal\") linguistic utterances across the two genders. To dispel any doubt, we can perform a similar comparison to baseline data collected from a period preceding the virus outbreak for the camera-ready version.\n\n- All differences in Table 1 are statistically significant (as mentioned in the caption). The low effect sizes reflect the (expectedly) small, yet non-trivial and reliably detected, differences between male and female speakers. We are unaware of prior work reporting cross-gender differences in VAD dimensions (for meaningful comparison), but low effect sizes are not an uncommon finding in the psycholinguistic literature. More importantly, we show that these divergences are amplified in an emotionally-charged setup, which is one of the main contributions of this study.\n\n- We will extend the discussion on topic modeling results for the camera-ready version.\n\n",
"role": "user"
},
{
"content": "I value the authors' response. They have addressed most of the issues I raised, and I'm likely to revise the rating.",
"role": "assistant"
}
]
|
mlmwkAdIeK | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<mlmwkAdIeK>>```",
"role": "user"
},
{
"content": "Title: Overall the paper is well written, contains re-usable data, and describes clear results.\nReview Content: Quality:\nOverall the paper is well written, contains re-usable data, and describes clear results.\n\nClarity:\nAuthors aims of analysis were clearly stated, as were the methods employed. Results are elucidated clearly. The paper is well written, concise, and easy to follow logically.\n\nOriginality:\nGiven the findings corroborate already established patterns of F / M speech, the exact findings that those patterns persist in covid-related speech is not particularly original. However, within the context of studying phenomena amidst a completely novel world event, covid, the findings regarding how people talk about said event are original. Combination of methodologies to perform analysis are somewhat original.\n\nSignificance:\nMohammed's VAD showed low inter-annotator agreement for A & D types. This may reduce the impact of any findings, distinctions, or variances (even if statistically significant) between the genders in these categories. Even if statistically significant, the cohen-d effect sizes between F & M are still very small (< .2 in all categories). What is the _human significance_ (not mathematical significance) of the analyzed differences? \n\nEditing suggestions: \n Clarify Fig1 caption by including \"re covid\" or something to that effect\n Typo in sentence, missing \"of\" : \"COVID-related data trends (Figure 2) show comparatively low scores for valence and high scores\nfor arousal in the early weeks [OF] our analysis (February to mid-March)\". \n This sentence comes off as sexist: \"women tend to use more positive language, while men score higher on arousal and dominance.\" Use similar terms to describe characteristics for both genders instead of saying what women do and what men score, e.g, \"Women score higher in use of positive language, while men score ...\"\n\npros\n Straightforward, solid results that established F / M speech patterns persist in novel corpus.\n Probably a decent baseline paper to use in further research on gender differences re covid speech or other domains.\n \ncons\n Statistical significance does not explain human importance of findings. \n \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer\nThank you for taking the time to read and review our paper, we appreciate the feedback. To address your points:\n\n1. We appreciate your three editing suggestions, and will implement\nthem for the camera-ready version.\n\n2. Regarding human importance: We agree that further work could\nimprove our understanding of the differences in the arousal and dominance\ndimensions. In particular, a potential route might be to collect annotations on\na sample of the posts to see whether the differences we detect with our\napproach, while small, are also perceptible by human raters. A more elaborate method\nmight be to see whether comments to posts with automatically-detectable differences\nin emotional intensity are influenced by these differences; such an approach\nwould require careful establishment of controls. In any case, given the short\ntimelines, these considerations were not addressable in the current paper.\n\nThank you again for your\nfeedback.",
"role": "user"
},
{
"content": "Thanks to the authors for their clarification. It adequately addresses the points I raised. I keep my recommendation for this paper to be accepted.",
"role": "assistant"
}
]
|
qd51R0JNLl | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<qd51R0JNLl>>```",
"role": "user"
},
{
"content": "Title: timely contribution, could be better positioned with regard to previous work\nReview Content: This paper presents a dataset of 1700 questions related to covid which\nare hand labeled and divided into 15 general categories and 207\nclusters of semantic equivalence. The value is potentially useful for\ngeneral question classification, for semantic similarity, with\nparticular application to reducing load on question answerers by\nremoving redundancy, The data set is on the small side and some\nimportant details about how the data was collection are omitted. The\nauthors make a number of factual errors. All of these could be easily\ncorrected and the data set is a useful resource.\n\n\n\"we scraped questions about covid\". how is a 'question about covid'\ndetermined? Are keywords used? If so what keywords? Additionally how\nwere questions that had location/time-specific versions vs. questions\nwith only one version determined? There are number of ways this could\nbe done, some noisier than others, some more scalable than others.\n\nWhat are all the 'synonymous ways of saying COVID?'\n\nsame answer -> same question cluster. \"In which country is SARS-CoV-2\nbelieved to have originated\" and \"Which country manufactures the most\nface masks\" have the same answer but are not the same question nor are\neven related. Plenty of useful questions do not have an answer (yet);\nhow are these to be clustered?\n\n\"there are fewer ways to ask how long COVID will last than ways to\nwrite a positive movie review\" -- I would argue both are countably\ninfinite. \n\nThe cluster classification task was oddly formed. This is ultimately a\nsentence similarity task or a coreference/nil clustering task. One\ncould use the data to pose the binary question \"are these two\nsentences asking the same question?\" One could also posit the far more\nuseful \"is this a question that has already been asked [and if so\nwhich one] or is this a novel question?\" task of coref/nil\nclustering. Static assignment to clusters seems wrong for that kind of\ndata. By specifically excluding clusters with a small number of\nquestions you specifically skirt the issue you would have to deal with\nin real application of this data.\n\n\n\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to review comments and clarification on question clustering model\n> This paper presents a dataset of 1700 questions related to covid which are hand labeled and divided into 15 general categories and 207 clusters of semantic equivalence. The value is potentially useful for general question classification, for semantic similarity, with particular application to reducing load on question answerers by removing redundancy, The data set is on the small side and some important details about how the data was collection are omitted. The authors make a number of factual errors. All of these could be easily corrected and the data set is a useful resource.\n\nThank you for the insightful review. We plan to make an update to the manuscript soon (in the next couple of days), and hope that it will be a stronger paper after these revisions.\n\nOur dataset is indeed small, and we did everything we could given the circumstances at the time (our paper was submitted more than one month ago)---we scraped from all the official FAQ sources we could find and collected hundreds of questions from search engines. The call for papers for this workshop specifically asks for “active and late-breaking research”. With more time, we can obviously grow the dataset, what we present here is part of an ongoing project to collect a highly quality dataset of questions related to COVID-19. We have made the dataset available since the beginning to help with research in this area.\n\n> \"we scraped questions about covid\". how is a 'question about covid' determined? Are keywords used? If so what keywords? Additionally how were questions that had location/time-specific versions vs. questions with only one version determined? There are a number of ways this could be done, some noisier than others, some more scalable than others.\n\nFor sources such as the CDC and FDA, we used all the questions on their COVID FAQ website. For sources such as Google, Bing, Yahoo, and Quora, we used the keywords “COVID” and “coronavirus” to find COVID-related questions. We will specify this point in the revision of our paper.\n\nAll location/time-specific questions were manually determined by the first and last author as such. For even more clarity, involved readers can refer to the original sources that we scraped---these were uploaded to our Github at the time of submission.\n\n> What are all the 'synonymous ways of saying COVID?'\n\nIn this dataset, we defined [“SARS-COV-2”, “2019-nCOV”, “coronavirus”, “COVID-19”, “COVID19”] as synonymous ways of saying COVID. Based on manual examination, these covered most of the dataset. \n\n> same answer -> same question cluster. \"In which country is SARS-CoV-2 believed to have originated\" and \"Which country manufactures the most face masks\" have the same answer but are not the same question nor are even related. Plenty of useful questions do not have an answer (yet); how are these to be clustered?\n\nThank you for making this clarifying comment. In practice, most of the questions were not one-word answers---i.e., answers were typically like “Early on, many of the patients at the epicenter of the COVID-19 outbreak in China had some link to a large seafood and live animal market, suggesting animal-to-person spread”, not simply “China.”\nRather, what we meant was that, for example, the answer “COVID-19 originated in Wuhan, China.” responds to both “Where is SARS-CoV-2 believed to have originated” and “Where did COVID-19 start”, and we would therefore put these two questions in the same cluster. \n\nWhether two questions belonged to the same cluster was decided in most cases by the annotator, without referring to the answers of the questions. We realize that there is some subjectivity here, as is the case with ground-truth labels of many datasets. We think many annotators would agree with most of our dataset’s annotations, as we showed with our quality control checks with independent annotators.\n\n> \"there are fewer ways to ask how long COVID will last than ways to write a positive movie review\" -- I would argue both are countably infinite. \n\nIn this sentence, we hoped to provide some intuition about why data augmentation helped, without intending to give a formal proof. We see your point; we have rephrased it in the revised version.\n\n> The cluster classification task was oddly formed. This is ultimately a sentence similarity task or a coreference/nil clustering task. One could use the data to pose the binary question \"are these two sentences asking the same question?\" One could also posit the far more useful \"is this a question that has already been asked [and if so which one] or is this a novel question?\" task of coref/nil clustering. Static assignment to clusters seems wrong for that kind of data. By specifically excluding clusters with a small number of questions you specifically skirt the issue you would have to deal with in real application of this data.\n\nThanks for this insightful comment. We aimed to provide a straightforward evaluation for matching questions that asked the same thing, but your perspective and framing of this problem is probably more comprehensive.\n\nIn the coming days, we will reframe this task to use all data (including single-question clusters), and ask our model to identify whether a given question has already been asked in the database. Specifically, we will split the data by class into 70% train and 30% test, and then for each testing question, our model will predict whether that question was similar enough to an existing question in the database (and if so which one) or if it was a novel question. We will then report performance using a single number metric that captures whether this prediction was made correctly. \n\nAs this is a substantive change that requires re-running our models and evaluation, we are already working and hope to get this revision back within the next few days.\n",
"role": "user"
},
{
"content": "I value the authors' response. They have addressed most of the issues I raised, and I will consider raising the rating score.",
"role": "assistant"
}
]
|
qd51R0JNLl | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<qd51R0JNLl>>```",
"role": "user"
},
{
"content": "Title: A good dataset paper - some clarification and statement of limitation\nReview Content: The paper presents a dataset of questions related to COVID-19; included are annotations mapping questions to specific categories and classes.\n\nIt's always good to see papers that contribute datasets to the research community, particularly if they require significant efforts in annotation as this one does.\n\nI think that some changes are needed in terms of positioning of this paper to make clear the contributions and benefits. The other reviewer, I believe, also struggled a little with this. I suggest an explicit \"who would use this data\" and \"how would this dataset be used\" section. I think limiting it to just people who want to train an QA IR system is a mistake. First, not all question have answers (Table 1); second, it's not obvious how the classes and categories focus, which are a large part of the paper, is core to training a QA IR system. Instead, I think the classes and categories can be used a lot more in the query intent / query understanding area of research. Pitching the paper more broadly - and being explicit on areas - would help to clarify and strengthen the contributions.\n\nOn the methodology of annotation:\n- The paper rests on the technical soundness of the manual annotation process so the authors should really focus on ensuring the methodology / rigour is clear - especially as all the annotation work was done by the authors themselves. \n- Categories were made up by the author. How subjective and context specific was this? What are the limitations?\n- Seems like one question is mapped to one class and one category - what about overlapping questions? Surely there are questions that fall in multiple - and if there are not in this dataset then there will certainly be out in the wild. The paper, at leasts, needs to address this.\n- Categories may remain static (i.e., new questions about covid-19 would likely map to existing categories). But classes will be ever evolving. For example, if a vaccine is developed then a whole host of new question classes will arise. Having these static list of classes, at this snapshot in time, risks them becoming out-of-date very quickly in a rapidly changing environment.\n- Figure 2 - questions/class highly skewed. So is it actually worth having the classes? How valuable are they?\n- Having an explicit limitation section at the end of the paper will really help with the above points.\n\n\nSome changes in presentation could really elevate the paper:\n- The definition and difference between categories and classes was not clear at the beginning of the paper. In the middle section 2, the definition finally becomes clear. A definition of both is really needed earlier in the intro. \n- I found “classes” not really the best term. What about question “topics” or even question “clusters” since they are essential groups of duplicate questions. When classes are introduced, there needs to be a better justification for their purpose. It comes clearer much later but I think is needed earlier.\n- Matched vs. unmatched is confusing. It’s not that they were not matched, it’s just that they were classes with only one question, right? Can a better term found? Also Table 1 presents matched/unmatched without any reference / definition - only much later in the paper do you find out.\n- At some point, the paper suddenly starts talking about “labels” - it’s not clear whether this is classes or categories or both - why not just use the actual terms.\n\nOn the classifier / experiments\n- The contribution of the paper is the dataset and not the ML methods. As stated, in the response to the review, the performance is merely an indication of how well current methods do on this task. The fact that the performance is good or bad does not really change the papers worth. However, insights from the experiments are part of the contribution; and these could be improved: e.g., were certain categories much harder than others? where two categories often confused for each other? \n\nOverall, this paper provided a good dataset for the community. But it needs to be much more explicit and clear about how people might use the dataset rather than a generic reference to training QA IR system. Addressing the presentation issues and directly addressing a number of limitations will greatly improve the paper.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Review Response\nIt's always good to see papers that contribute datasets to the research community, particularly if they require significant efforts in annotation as this one does.\n\n- We are very grateful for the thoughtful review. See our revision in the updated PDF. (We moved some non-essential content to the supplementary materials to stay within the 4-page limit.) We see papers as the start of a conversation (rather than the end of one), and so if any new feedback arises or there are further clarifications we can make, we are happy to consider them. \n\nI think that some changes are needed in terms of positioning of this paper to make clear the contributions and benefits. The other reviewer, I believe, also struggled a little with this. I suggest an explicit \"who would use this data\" and \"how would this dataset be used\" section. I think limiting it to just people who want to train an QA IR system is a mistake. First, not all questions have answers (Table 1); second, it's not obvious how the classes and categories focus, which are a large part of the paper, is core to training a QA IR system. Instead, I think the classes and categories can be used a lot more in the query intent / query understanding area of research. Pitching the paper more broadly - and being explicit on areas - would help to clarify and strengthen the contributions.\n\n- We included a “Use cases” section in our discussion to qualify how we think our dataset might be helpful for the community. To make our mention of retrieval QA more specific, we reference an external and already deployed QA system that relates strongly to our paper. We mention query understanding and how our dataset annotations apply. And as mentioned in the last version of the paper, we emphasize that our dataset could be used for evaluation of a pre-trained COVID model, (e.g., something like Sci-BERT for COVID). Finally, we mention other related areas. If there are other use cases to add or further clarification that we can make, please let us know and we will consider them in the next revision.\n\nThe paper rests on the technical soundness of the manual annotation process so the authors should really focus on ensuring the methodology / rigour is clear - especially as all the annotation work was done by the authors themselves. \n\n- Our goal was to make the collection and annotation process as clear as possible. We don’t see any specific criticism in this comment, but if we can clarify or detail the process in any way, let us know.\n\nSeems like one question is mapped to one class and one category - what about overlapping questions? Surely there are questions that fall in multiple - and if there are not in this dataset then there will certainly be out in the wild. The paper, at leasts, needs to address this.\n\n- Our first author made a new pass through the dataset and found that 40 of the 207 question clusters could arguably map to two or more categories, most frequently overlapping between the transmission and prevention categories. To make this annotation process more transparent, we came up with and posted with our dataset formal definitions of each question category so that the distinctions that we made to decide why clusters were assigned to a certain category are more transparent. We mention this in the “annotation quality” part of our paper.\n\nCategories were made up by the author. How subjective and context specific was this? What are the limitations? Categories may remain static (i.e., new questions about covid-19 would likely map to existing categories). But classes will be ever evolving. For example, if a vaccine is developed then a whole host of new question classes will arise. Having these static list of classes, at this snapshot in time, risks them becoming out-of-date very quickly in a rapidly changing environment. Figure 2 - questions/class highly skewed. So is it actually worth having the classes? How valuable are they? Having an explicit limitation section at the end of the paper will really help with the above points.\n\n- We included a limitations section that considers these three points.\n\n(continued, 1 of 2)\n(2 of 2)\n\nSome changes in presentation could really elevate the paper:\nThe definition and difference between categories and classes was not clear at the beginning of the paper. In the middle section 2, the definition finally becomes clear. A definition of both is really needed earlier in the intro. \n\n- We defined question categories and question clusters more clearly at their first mention in the introduction. \n\nI found “classes” not really the best term. What about question “topics” or even question “clusters” since they are essential groups of duplicate questions. When classes are introduced, there needs to be a better justification for their purpose. It comes clearer much later but I think is needed earlier.\n\n- We changed our “classes” terminology to “clusters,” as per your suggestion. We made it more clear in the introduction that cluster annotations indicate questions with the same intent and could therefore be helpful in evaluating a retrieval-QA system.\n\nMatched vs. unmatched is confusing. It’s not that they were not matched, it’s just that they were classes with only one question, right? Can a better term found? Also Table 1 presents matched/unmatched without any reference / definition - only much later in the paper do you find out.\n\n- We changed this matched/unmatched terminology to “multi-question clusters” and “single-question clusters”. For clarity, we explicitly define these terms in the caption of Table 1, and again clarify again in the Single-Question Clusters paragraph that single-question clusters means questions that did not ask the same thing as any other question in the dataset. We removed all instances of “matched/unmatched questions”. \n\nAt some point, the paper suddenly starts talking about “labels” - it’s not clear whether this is classes or categories or both - why not just use the actual terms.\n\n- We clarified this terminology in the revision.\n\nOn the classifier / experiments\nThe contribution of the paper is the dataset and not the ML methods. As stated, in the response to the review, the performance is merely an indication of how well current methods do on this task. The fact that the performance is good or bad does not really change the papers worth. However, insights from the experiments are part of the contribution; and these could be improved: e.g., were certain categories much harder than others? were two categories are often confused for each other? \n\n- We added a confusion matrix/heatmap for the question-category classification task using the BERT+SVM. As we describe in the supplementary materials, the model did indeed confuse some categories. The biggest mistakes were between Transmission and Having COVID (cumulative 34% error rate), and between Having COVID and Symptoms (33% error rate). \n\nOverall, this paper provided a good dataset for the community. But it needs to be much more explicit and clear about how people might use the dataset rather than a generic reference to training QA IR system. Addressing the presentation issues and directly addressing a number of limitations will greatly improve the paper.\n\n- Thank you for your insightful feedback, and we hope our revised version improves on the weaknesses in the first submitted version.\n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had. I stand by my favorable recommendation for this paper.",
"role": "assistant"
}
]
|
qd51R0JNLl | ACL 2020 Workshop NLP-COVID | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<qd51R0JNLl>>```",
"role": "user"
},
{
"content": "Title: Preliminary work, limited significance\nReview Content: This paper presents a corpus of 1690 documents about COVID-19 manually annotated with questions classes, along with basic question classification algorithms.\n\nPros\n- in scope for the conference\n- dataset is publicly available\n- technical details are kept to a minimum\n- accessible to a large audience\n\nCons\n- no clear statement of objectives\n- no clear statement of specific contribution\n- no explicit annotation guidelines (making it difficult to reproduce this effort)\n- ad hoc categories (not modeled after general types of questions)\n- no justification for the selection of documents\n- mediocre performance of the basic question classification algorithms\n- no discussion (the so-called discussion section is merely a summary)\n- the claim that this dataset can help train QA systems is unwarranted at this stage\n\nOverall\nThe significance of this preliminary work is extremely limited\n\nOther comments\n- the presentation of figure 2 is unnecessarily confusing\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to review\nThanks for the review.\n\n\n- No clear statement of objectives: \n\nThe objective is to release an curated dataset for developing QA/information retrieval algorithms for COVID-19.\n\n- No clear statement of specific contribution: \n\nThe contribution of the paper is the carefully curated dataset.\n\n- No explicit annotation guidelines (making it difficult to reproduce this effort): \n\nQuestions were annotated by matching questions that ask the same thing to a common class, and in the paper we define two questions to be asking the same thing if they can be answered with the same answer.\n\n- Ad hoc categories (not modeled after general types of questions):\n\nCategories are decided by annotators. We believe a natural breakdown of questions by topics is better than some general question types from linguistics.\n\n- No justification for the selection of documents: \n\nPublicly available documents. We selected our sources by finding trustworthy organizations that had an FAQ page about COVID-19. We exhaustively scraped questions from thirteen sources and also generated questions to have as many questions for our dataset as possible.\n\n- Mediocre performance of the basic question classification algorithms: \n\nThe goal of the paper is to present the data. We benchmarked our data using a well-known off-the-shelf classifier. The goal of the paper is not to provide a state-of-the-art QA system.\n\n- No discussion (the so-called discussion section is merely a summary): \n\nSure. The discussion section should have been named conclusions.\n\n- The claim that this dataset can help train QA systems is unwarranted at this stage: \n\nWe respectfully disagree. We believe this carefully curated dataset will be of use for training/evaluating QA systems. \n\n- Overall, the significance of this preliminary work is extremely limited:\n\nWe think there is value in a dataset that could potentially be used to evaluate COVID-specific NLP models. For example, in the submission immediately following ours (“An Analysis of BERT FAQ Models for COVID-19 Infobot”), the authors note that \"[they] did not have COVID-19 related dataset to help [them] evaluate the performance of QA systems.\" ",
"role": "user"
},
{
"content": "I value the authors' response. It satisfactorily addresses most of my concerns.",
"role": "assistant"
}
]
|
ub9_2iAo3D | ACL 2020 Workshop NLP-COVID | AnonReviewer5 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer5, and the following are the paper you need to review:\n```<<ub9_2iAo3D>>```",
"role": "user"
},
{
"content": "Title: Interesting correlation, but not actionable and not clear\nReview Content: This paper analyzes the correlation between Covid19 cases (as reported by JHU) and relevant tweets, on a per-country based granularity.\nThe results are remarkable, showing a very strong correlation (Table 2) for a model trained on Italian data. Moreover, using modern multi-lingual embeddings, the trained regression carries over to other languages; although the correlation there is very variable (and sometimes negative).\nThe results are intriguing and show once more how expressions on social media reflect physical reality. \n\nI have however two major concerns:\n\n1/ Why is this useful?\nNot in the sense of what could public health officials do with such models (although I have no clue what they could do). It seems to me (and please correct me if I am wrong) that you are using the tweets of day D to predict number of cases of day D. If that is the case, then the causality is the other way round (cases => tweets). It would be much more interesting to try to predict number of cases at day D+1, or even better at day D+n.\n \n2/ Reproducibility \nThere are lots of things that are not clear. Those are not fundamental issues, but I cannot recommend acceptance in the current state, as I could not understand exactly what the authors did, much less would I be able to reproduce it\nFiltering: \n - why do you remove tweets with hyperlinks?\n - \"we remove [...] tweets discussing countries other than Italy\". What does this mean?\n - \"We further filter Italy’s tweets for a balanced representation of tweet embeddings.\" What does this mean?\nModel: what exactly is a model? Is it the embedding + the total frequency of the selected words? Could you enumerate all the \"models\" (feature set) you used and - maybe in the appendix - the values for all of them?\nEvaluation: Why do you not report mean error? The correlation itself is not very actionable, the predicted value would be\n\nAlso, there is a confusion between language and country, which is conflated. The given argument is that the chosen languages have a majority country, but for rigourousity I would recommend replacing country everywhere by language as it is misleading.\n\nFig 1 seems to be done with the number of tweets before the filtering of Sect 2.5. Why?\n\nFinally, I am not sure if I understand how the transfer is done. Is it correct to say that you use the same regression and provide as feature the Japanese/Indonesian/etc tweets after putting them through the multi-lingual embedding?\n\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer 5\nThank you for the review. We hope our comments sufficiently address your concerns.\n\n>Not in the sense of what could public health officials do with such models (although I have no clue what they could do). It seems to me (and please correct me if I am wrong) that you are using the tweets of day D to predict number of cases of day D. If that is the case, then the causality is the other way round (cases => tweets). It would be much more interesting to try to predict number of cases at day D+1, or even better at day D+n.\n\nThe model in the paper describes real-time alignment of tweets to confirmed cases in each country. Our model prediction utilizes only the tweet features and does not rely on past case counts/time periods for alignment. We study the ability to correlate social media tweets across languages and countries in a pandemic scenario. Based on this demonstration, researchers can study various cross-cultural reactions to the pandemic on social media. The use of cross-lingual transfer learning can allow us to determine how actions taken to contain the outbreak, such as lockdowns, can transfer across countries who implement the same measures through social media sensing. Studying a country with an early outbreak can help us to detect similar COVID-19 trends in other countries. \n\n>why do you remove tweets with hyperlinks?\n\nThis is done to remove tweets spreading news information and focus more on personal tweets.\n\n>\"we remove [...] tweets discussing countries other than Italy\". What does this mean?\n\nFor the Italy model, we are evaluating against reported cases in Italy. Therefore, we do not want to utilize tweets that focus on other countries as this will not have effect on Italy’s COVID-19 case status and our model focuses on the personal narratives of those living in Italy. Similarly, for other countries, we remove tweets that discuss countries other than the one being tested on.\n\n>\"We further filter Italy’s tweets for a balanced representation of tweet embeddings.\" What does this mean?\n\nThere may be duplicate tweets present in our initial data. We collapse these to a single tweet.\n\n>Model: what exactly is a model? Is it the embedding + the total frequency of the selected words? Could you enumerate all the \"models\" (feature set) you used and - maybe in the appendix - the values for all of them?\n\nThe model represents the trained SVM model for Italy. The input features are the embedding + the total frequency of the selected words. We have two regression models in this case: one for the number of new daily cases and another for the total case count.\n\n>Evaluation: Why do you not report mean error? The correlation itself is not very actionable, the predicted value would be\n\nDue to different testing resources, criteria, and procedures, there are some offsets in each countries' official numbers. That is why we are using Spearman correlation instead of mean absolute error for the evaluation.\n\n>Fig 1 seems to be done with the number of tweets before the filtering of Sect 2.5. Why?\n\nThis is done as an initial analysis to determine whether even a basic feature such as the original tweet frequency can give us insight into the pandemic case trends.\n\n>Finally, I am not sure if I understand how the transfer is done. Is it correct to say that you use the same regression and provide as feature the Japanese/Indonesian/etc tweets after putting them through the multi-lingual embedding?\n\nWe are utilizing the trained Italy regression model and using Japanese/Indonesian/etc tweets (along with keyword counts) as input to the model for testing. mBERT and LASER are used to extract multi-lingual tweet embeddings.",
"role": "user"
},
{
"content": "Title: Clarification questions\nThank you for this very prompt answer. It clarifies most of my doubts. For the others however, I realized it was not clear in my review the information I would have liked you to provide. I will try to be more explicit\n\n> Our model prediction utilizes only the tweet features and does not rely on past case counts/time periods for alignment\nDo you confirm that for day D you use tweets from that day D to predict counts of day D?\n\n> For the Italy model, we are evaluating against reported cases in Italy. Therefore, we do not want to utilize tweets that focus on other countries as this will not have effect on Italy’s COVID-19 case status and our model focuses on the personal narratives of those living in Italy. Similarly, for other countries, we remove tweets that discuss countries other than the one being tested on.\nI understand what you want to achieve. I do not understand *how* you achieve this. Could you be more clear on how you \"remove countries other than the one being tested on\"?\n\n> There may be duplicate tweets present in our initial data. We collapse these to a single tweet.\nSo, do you confirm that \"filter Italy’s tweets for a balanced representation of tweet embeddings\" means removing duplicates?\n\n> Due to different testing resources, criteria, and procedures, there are some offsets in each countries' official numbers. Even inside the same country? Wouldn't that affect the correlation as well? I am not convinced of this argument, and still believe that the real value lies in the predicted value themselves. Could you report those in the appendix for instance?\n\n\n",
"role": "assistant"
},
{
"content": "Title: Response to Clarification Questions\n>Our model prediction utilizes only the tweet features and does not rely on past case counts/time periods for alignment Do you confirm that for day D you use tweets from that day D to predict counts of day D?\n\nYes. In this paper, we first attempt to answer whether we can build a model that can correlate the day's tweets with the number of cases in a given country. We then attempt to see if we can use this trained model and apply it to tweets and cases in a new country with a different language and culture.\n\n>For the Italy model, we are evaluating against reported cases in Italy. Therefore, we do not want to utilize tweets that focus on other countries as this will not have effect on Italy’s COVID-19 case status and our model focuses on the personal narratives of those living in Italy. Similarly, for other countries, we remove tweets that discuss countries other than the one being tested on. I understand what you want to achieve. I do not understand how you achieve this. Could you be more clear on how you \"remove countries other than the one being tested on\"?\n\nWhen we do this, we simply remove tweets that include other countries’ names when translated into the current country’s language.\n\n>There may be duplicate tweets present in our initial data. We collapse these to a single tweet. So, do you confirm that \"filter Italy’s tweets for a balanced representation of tweet embeddings\" means removing duplicates?\n\nYes, filtering for a balanced tweet representation is the process of removing duplicate tweets.\n\n>Due to different testing resources, criteria, and procedures, there are some offsets in each countries' official numbers. Even inside the same country? Wouldn't that affect the correlation as well? I am not convinced of this argument, and still believe that the real value lies in the predicted value themselves. Could you report those in the appendix for instance?\n\nYes, there could be the case of different testing resources within a country and this may introduce some noise into the data.",
"role": "user"
},
{
"content": "Title: Thanks & Congrats\nThanks a lot for getting back, even after acceptance (I just watched your talk).\nIn case you continue this work, I can only encourage you to look into predictions for days D+n, as well as reporting the numbers of predicted cases in addition to correlations.\n",
"role": "assistant"
}
]
|
ub9_2iAo3D | ACL 2020 Workshop NLP-COVID | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<ub9_2iAo3D>>```",
"role": "user"
},
{
"content": "Title: Good idea but some methodological concerns\nReview Content: The following paper hypothesizes that tweets can be used to model COVID-19 outbreaks in across countries. The paper does this by using cross-lingual sentence embeddings from mBERT and LASER to predict case-counts.\n\nAlthough this is a good idea and Figure 1 is particularly compelling, I find that the paper is at best a work in progress and at worst intentionally misleading. I'd recommend substantially more work before it is ready for publication. \n\nMy biggest concern is that the way the tweets are filtered and the time-chunks are specified, I have no reason to believe that this model is using tweets to actually predict an outbreak, as the authors imply (NB the authors are careful to say they are \"aligning\", not \"predicting\", although I think this intentionality is rather muddy). It seems to me that they are simply capturing the response to an outbreak already occurring -- I'm unclear what the value of this is to social scientists and policymakers.\n\nThe authors explicitly filter for words such as \"lockdown\", \"quarantine\", \"social distancing\", \"epidemic\", and \"outbreak\", to perform their alignment, which to me seem to describe tweet-responses to policy rather than tweet-responses to sickness. I think the authors' hypothesis would be better served by choosing words, topics or other indicators that are more personal and health-related -- words like \"fever\", \"cough\" or \"lack of smell\" -- to avoid such confounding.\n\nFurther, the authors utilized fixed time-periods to explore their regression, which is a little confusing to me. I think the authors should use date-ranges in each country relative to the kth case in that country. Since their date-ranges are so wide, I worry that the date-ranges are simply capturing the full policy response of a government that has already predicted an outbreak.\n\nWhy is spearman's correlation the only metric used? If the authors are conducting a regression experiment, there are other more compelling and interpretable metrics. Why aren't significance-values included for the correlation?\n\nWhy do the authors say that they observe \"right-skewed Gaussians\"? This is clearly more of a point process (i.e. a Hawkes process.)\n\nAdditionally, I would urge a more serious consideration of other confounders as well. I'm not sure what the best ones to use would be, but I think some policy analysis is necessary -- when did each country actually institute a lockdown? Do lockdowns effect overall non-COVID tweet-volume as well? I'm sure there are confounders in the literature.\n\nIf this is not possible, then I urge a reframing of the paper. The authors need to be clearer about what they are actually purporting to do, and not hide behind words like \"align\".\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer 2\nThank you for the review. We provide a response to these comments below and hope this response and our revision sufficiently address your concerns.\n\n>My biggest concern is that the way the tweets are filtered and the time-chunks are specified, I have no reason to believe that this model is using tweets to actually predict an outbreak, as the authors imply (NB the authors are careful to say they are \"aligning\", not \"predicting\", although I think this intentionality is rather muddy). It seems to me that they are simply capturing the response to an outbreak already occurring -- I'm unclear what the value of this is to social scientists and policymakers.\n\nThe model in the paper describes real-time alignment of tweets to confirmed cases in each country. The use of cross-lingual transfer learning can allow us to determine how actions taken to contain the outbreak, such as lockdowns, can transfer across countries who implement the same measures through social media sensing. Studying a country with an early outbreak can help us to detect similar COVID-19 trends in other countries. Additionally, it allows us to study the various cultural reactions to the pandemic on social media. \n\n>The authors explicitly filter for words such as \"lockdown\", \"quarantine\", \"social distancing\", \"epidemic\", and \"outbreak\", to perform their alignment, which to me seem to describe tweet-responses to policy rather than tweet-responses to sickness. I think the authors' hypothesis would be better served by choosing words, topics or other indicators that are more personal and health-related -- words like \"fever\", \"cough\" or \"lack of smell\" -- to avoid such confounding.\n\nWhile we have taken the approach of analyzing policy actions in tweets for this model, our model can also be changed and utilized for the approach of symptom detection.\n\n>Further, the authors utilized fixed time-periods to explore their regression, which is a little confusing to me. I think the authors should use date-ranges in each country relative to the kth case in that country. Since their date-ranges are so wide, I worry that the date-ranges are simply capturing the full policy response of a government that has already predicted an outbreak.\n\nOur model prediction utilizes only the tweet features and does not rely on past case counts/time periods for alignment. Therefore, we can freely use any date-range instead of partitioning based on the kth case count which could potentially lead to overfitting.\n\n>Why is spearman's correlation the only metric used? If the authors are conducting a regression experiment, there are other more compelling and interpretable metrics. Why aren't significance-values included for the correlation?\n\nWe emphasize that the goal of this current study is to determine if there is an alignment between tweets and confirmed cases. Due to different testing resources, criteria, and procedures, there are some offsets in each countries' official numbers. That is why we are using Spearman correlation instead of mean absolute error for the evaluation.\n\n>Why do the authors say that they observe \"right-skewed Gaussians\"? This is clearly more of a point process (i.e. a Hawkes process.)\n\nWe use this to describe the shape of confirmed cases. However, a Hawkes process may be used to describe the diffusion of cases as seen in the figure.\n\n>Additionally, I would urge more serious consideration of other confounders as well. I'm not sure what the best ones to use would be, but I think some policy analysis is necessary -- when did each country actually institute a lockdown? Do lockdowns effect overall non-COVID tweet-volume as well? I'm sure there are confounders in the literature.\n\nWhile we focus only on textual features in this model, future work can include other features such as restriction measures.\n",
"role": "user"
},
{
"content": "I value the authors' response. It answers most of my concerns, and I may consider adjusting the rating upwards.",
"role": "assistant"
}
]
|
ub9_2iAo3D | ACL 2020 Workshop NLP-COVID | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<ub9_2iAo3D>>```",
"role": "user"
},
{
"content": "Title: Review of: Cross-lingual Transfer Learning for COVID-19 Outbreak Alignment\nReview Content: # [REVIEW] Cross-lingual Transfer Learning for COVID-19 Outbreak Alignment\n\n10th June 2020\n\n## SUMMARY\nThis short paper describes work on cross-lingual transfer learning to track COVID-19 cases across Italy, Thailand, Japan, Turkey, and Indonesia (i.e. languages that are largely spoken in a single country). Using case statistics derived from Johns Hopkins COVID dashboard, the researchers aim to determine (a) if the number of COVID-related tweets is associated with case numbers (good correlations were achieved for this), and (b) can this training be applied to other countries that may be at a slightly different point in their COVID trajectory i.e. cross-lingual transfer learning (a range of correlations with JHU data were achieved ranging from -.316 to 0.859, indicating that the approach has some utility — but see caveats below). \n\nIt is difficulty for me to assess the quality of this paper relative to other submissions, but I suspect that this use of cross-lingual transfer learning is sufficiently interesting to justify a short paper.\n\n## MAJOR COMMENTS\n1. There is a major methodological issue not related to the NLP and not something you can be reasonably expected to do anything about — but I think it’s important to point it out as a limitation: The international ground truth data for COVID is not very reliable regarding underlying prevalence/incidence of COVID-19. Even in the US, there have been stark differences in approaches to testing, and the criteria for testing. International comparisons are even more difficult. This undermines the evaluation somewhat.\n\n## MINOR COMMENTS [NITPICKING]\n1. [p1c1] “With globalization, it is intuitive that countries have followed earlier affected regions in patterns of outbreaks and measures to contain to them (Cuffe and Jeavans, 2020).” Consider rephrasing this to emphasise that it is the increased travel associated with globalization that is important, rather than globalisation per se.\n2. [p3c1] “Bert as service” probably requires a few words of explanation (at least for this reviewer - I had to google it)\n3. [p3c2] “When measuring against new daily cases, the correlations are not as significant in time II” suggest avoiding using the word significance here, unless you are referring to statistical significance.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Reply to Reviewer 1\nThank you for reviewing our paper. We hope our comments below address your concerns.\n\n>There is a major methodological issue not related to the NLP and not something you can be reasonably expected to do anything about — but I think it’s important to point it out as a limitation: The international ground truth data for COVID is not very reliable regarding underlying prevalence/incidence of COVID-19. Even in the US, there have been stark differences in approaches to testing, and the criteria for testing. International comparisons are even more difficult. This undermines the evaluation somewhat.\n\nWe understand that due to different testing resources, criteria, and procedures, there are some offsets in each countries' official numbers. That is why we are using Spearman correlation instead of mean absolute error for the evaluation. We have tested the cross-lingual model on four countries to show that we can transfer knowledge even in the case of variable testing practices.\n\n>1. [p1c1] “With globalization, it is intuitive that countries have followed earlier affected regions in patterns of outbreaks and measures to contain to them (Cuffe and Jeavans, 2020).” Consider rephrasing this to emphasise that it is the increased travel associated with globalization that is important, rather than globalisation per see.\n2. [p3c1] “Bert as service” probably requires a few words of explanation (at least for this reviewer - I had to google it)\n3. [p3c2] “When measuring against new daily cases, the correlations are not as significant in time II” suggest avoiding using the word significance here, unless you are referring to statistical significance.\n\nWe have rewritten these sentences in our revision. \n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. The authors have answered almost all of my concerns. I reaffirm my positive endorsement for this paper.",
"role": "assistant"
}
]
|
BJerQWcp6Q | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<BJerQWcp6Q>>```",
"role": "user"
},
{
"content": "Title: Very interesting paper that describes a positive contribution to the state of the art in BioNLP\nReview Content: This paper proposes a deep-learning-based method to solve the known BioNLP task of disease normalization on the NCBI disease benchmark (where disease named entities are normalized/disambiguated against the MeSH and OMIM disease controlled vocabularies and taxonomies). The best known methods (DNorm, TaggerOne) are based on a pipeline combination of sequence models (conditional random fields) for disease recognition, and (re)ranking models for linking/normalization.\n\nThe current paper proposes instead an end-to-end entity recognition and normalization system relying on word-embeddings, siamese architecture and recursive neural networks to improve significantly (4%, 84 vs 80% F1-score, T. 3). A key feature is the use of a GRU autoencoder to encode or represent the \"context\" (related entities of a given disease within the span of a sentence), as a way of approximating or simulating collective normalization (in graph-based entity linking methods), which they term \"coherence model\". This model is combined (weighted linear combination) with a model of the entity itself.\nFinally, the complete model is trained to maximize similarity between MeSH/OMIM and this combined representation.\nThe model is enriched further with additional techniques (e.g., distant supervision). \n\nThe paper is well written, generally speaking. The evaluation is exhaustive. In addition to the NCBI corpus, the BioCreative5 CDR (chemical-disease relationship) corpus is used. Ablation tests are carried out to test for the contribution of each module to global performance. Examples are discussed.\n\nThere are a few minor issues that it would help to clarify:\n\n(1) Why GRU cells instead of LSTM cells?\n(2) Could you please explain/recall why (as implied) traditional models are NP-hard? I didn't get it. Do you refer to the theoretical complexity of Markov random fields/probabilistic graphical models? Maybe you should speak of combinatorial explosion instead and give some combinatorial figure (and link this to traditional methods). My guess is that this is important, as the gain in runtime performance (e.g., training time - F. 4) might be linked to this.\n(3) A link should be added to the GitHub repository archiving the model/code, to ensure reproducibility of results.\n(4) Could you please check for *statistical significance* for T. 3, 5, 6 and 7? At least for the full system (before ablations). You could use cross-validation. \nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank you for the feedback\nThank you for the review and insightful questions. We address them below:\n\n(1) In general, based on anecdotal evidence it seems that the relative predictive performance of GRU and LSTM cells depends on the particular task at hand. In our case, we selected GRU cells based on experiments we performed with both LSTM and GRU cells during the model design, which showed that GRU cells led to better results. Another potential benefit of this choice is increased training performance, as GRU cells are less complex and less computation-intensive than LSTM cells. We revised Section 4.3 to explain the reasoning behind our choice of GRU cells. \n\n(2) We are referring to the complexity of modeling and performing inference from the joint probability of the entire set of tags, which is an NP-hard problem. To avoid the exponential blowup, existing techniques employ different types of approximation algorithms (e.g., Ganea and Hoffman (2017) present an N^2 approximation algorithm using a fully-connected pairwise conditional random field, which requires loopy belief propagation to train). Our proposal is to model the problem as a tag sequence using a recurrent net to avoid combinatorial explosion, though other solutions could also be proposed to reduce the complexity (i.e. model it as a tag sequence and use a conditional random field or a hidden Markov model). We cleaned up the language surrounding this point both in the abstract and in Section 4.3. \n\n(3) We intend to make the code of the best models for each dataset available upon acceptance and will be providing a link to it in the paper. \n\n(4) We attempted to obtain significance results during the author feedback period by performing 10-fold cross-validation on the NCBI disease corpus both for the best NormCo model and the best baseline model (which is TaggerOne). While we were able to obtain the evaluation metrics for NormCo, we ran into several issues while retraining TaggerOne on new splits, including (a) TaggerOne’s code breaking (i.e., throwing null pointer exceptions) and (b) TaggerOne’s internal F-score evaluation failing for concepts that have multiple labels (such as “inherited neuromuscular disease”, which is mapped to both MESH:D009468 “Neuromuscular disease” and MESH:D030342 “Genetic diseases, inborn”), which were not present in the original test set. Ultimately these issues, coupled with TaggerOne’s long training times documented in Section 6.4.5, did not allow us to obtain significance results for TaggerOne. However, we were able to perform cross-validation of the best NormCo model (i.e., MC-synthetic), which resulted in an average accuracy of 0.853 with a low standard deviation of 0.013. ",
"role": "user"
},
{
"content": "I value the authors' response. The response addresses most of my concerns.",
"role": "assistant"
}
]
|
BJerQWcp6Q | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<BJerQWcp6Q>>```",
"role": "user"
},
{
"content": "Title: Simple, fast method with decent results on disease normalization (linking)\nReview Content: Summary:\nThe authors address the problem of disease normalization (i.e. linking). They propose a neural model with submodules for mention similarity and for entity coherence. They also propose methods for generating additional training data. Overall the paper is nicely written with nice results from simple, efficient methods.\n\nPros:\n- Paper is nicely written with good coverage of related work\n- LCA analysis is a useful metric for severity of errors\n- strong results on the NCBI corpus\n- methods are significantly faster and require far fewer parameters than TaggerOne while yielding comparable results\n\nCons:\n- Results on BC5 are mixed. Why?\n- Data augmentation not applied to baselines\n- Methods are not very novel\n\nQuestions:\n- Were the AwA results applied only at test time or were the models (including baselines) re-trained using un-resolved abbreviation training data?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Thank you for the feedback\nThank you for the review! We address some of the issues raised below:\n\n(1) The reason that results on BC5 are mixed is that our model is more conservative, favoring high precision over recall (see Table 3). Since the BC5CDR dataset has a greater diversity of concepts that the NCBI dataset (1082 concepts in BC5CDR compared to 753 in NCBI), the lower recall becomes more important, leading to a slightly lower accuracy than the baseline models. However, note that even in this case the NormCo model still outperforms the baselines on the average LCA distance performance metric, which, as explained in the paper, takes into account not only the overall accuracy but also the severity of the errors. We added an explanation of this to Section 6.4.1. \n\n(2) The AwA models were applied only at test time and the models were not re-trained. We have added language to make this clearer in Section 6.3.1. The purpose of this experiment was to observe how abbreviation resolution affects the performance of the trained models. ",
"role": "user"
},
{
"content": "I value the authors' response. They've addressed nearly all of my concerns.",
"role": "assistant"
}
]
|
BJerQWcp6Q | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<BJerQWcp6Q>>```",
"role": "user"
},
{
"content": "Title: very competent work on an important problem\nReview Content: The paper presents a method for named entity disambiguation\ntailored to the important case of medical entities,\nspecifically diseases with MeSH and OMIM\nas the canonicalized entity repository.\nThe method, coined NormCo, is based on a cleverly designed\nneural network with distant supervision from MeSH tags of\nPubMed abstracts and an additional heuristic for estimating\nco-occurrence frequencies for long-tail entities.\n\nThis is very competent work on an important and challenging\nproblem. The method is presented clearly, so it would be easy\nto reproduce its findings and adopt the method for further\nresearch in this area.\nOverall a very good paper.\n\nSome minor comments:\n\n1) The paper's statement that coherence models have\nbeen introduced only recently is exaggerated. \nFor general-purpose named entity disambiguation, coherence\nhas been prominently used already by the works of\nRatinov et al. (ACL 2011), Hoffart et al. (EMNLP 2011)\nand Ferragina et al. (CIKM 2010); and the classical\nworks of Cucerzan (EMNLP 2007) and Milne/Witten (CIKM 2008)\nimplicitly included considerations on coherence as well.\nThis does not reduce the merits of the current paper,\nbut should be properly stated when discussing prior works.\n\n2) The experiments report only micro-F1. Why is macro-F1 \n(averaged over all documents) not considered at all?\nWouldn't this better reflect particularly difficult cases\nwith texts that mention only long-tail entities,\nor with unusual combinations of entities?\n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank you for the feedback\nThank you for the review and constructive feedback. We address the raised concerns below:\n\n(1) Thank you for the comment. In the updated submission we revised the related work section to better represent existing work on coherence, including the list of works mentioned above. \n\n(2) We agree that macro-F1 could give valuable insights into the performance of the normalization algorithms on rarely seen entities. The reason we reported micro-F1 numbers was mainly to keep in line with the recent publications in the area on the particular datasets that focus on micro-F1 performance. However, we still evaluated the macro-F1 performance of our model on NCBI and found out that it outperformed DNorm with 0.856/0.823/0.833 P/R/F1 compared to 0.828/0.809/0.819 P/R/F1 for DNorm (TaggerOne does not report macro-F1). We added a brief discussion of this to Section 6.3.1. ",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had.",
"role": "assistant"
}
]
|
BJgrxbqp67 | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<BJgrxbqp67>>```",
"role": "user"
},
{
"content": "Title: Review of Improving Relation Extraction by Pre-trained Language Representations\nReview Content: The paper presents TRE, a Transformer based architecture for relation extraction, evaluating on two datasets - TACRED, and a commonly used Semeval dataset.\n\nOverall the paper seems to have made reasonable choices and figured out some important details on how to get this to work in practice. While this is a fairly straightforward idea and the paper doesn't make a huge number of innovations on the methodological side, however (it is mostly just adapting existing methods to the task of relation extraction).\n\nOne point that I think is really important to address: the paper really needs to add numbers from the Position-Aware Attention model of Zhang et. al. (e.g. the model used in the original TACRED paper). It appears that the performance of the proposed model is not significantly better than that model. I think that is probably fine, since this is a new-ish approach for relation extraction, getting results that are on-par with the state-of-the-art may be sufficient as a first step, but the paper really needs to be more clear about where it stands with respect to the SOTA.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 2\nWe would like to thank Reviewer 2 for their review and constructive suggestions.\n\nWe report the best single-model performance (PA-LSTM) from the original TACRED paper in Table 4. Zhang et al. report slightly different results, 65.4 (2017) vs. 65.1 (2018), and we report the latter. The overall best performing model reported in the original TACRED paper is an ensemble combining 5 independently trained models.\n\nWe will submit a revised version clearly indicating that we compare single-model performance.",
"role": "user"
},
{
"content": "Thank you for the authors' feedback. They've addressed nearly all of my concerns.",
"role": "assistant"
}
]
|
BJgrxbqp67 | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<BJgrxbqp67>>```",
"role": "user"
},
{
"content": "Title: Incremental but solid contribution\nReview Content: This paper presents a transformer-based relation extraction model that leverages pre-training on unlabeled text with a language modeling objective.\n\nThe proposed approach is essentially an application of the OpenAI GPT to relation extraction. Although this work is rather incremental, the experiments and analysis are thorough, making it a solid contribution.\n\nGiven that the authors have already set up the entire TRE framework, it should be rather easy to adapt the same approach to BERT, and potentially raise the state of the art even further.\n\nIn terms of writing, I think the authors should reframe the paper as a direct adaptation of OpenAI GPT. In its current form, the paper implies much more novelty than it actually has, especially in the abstract and intro; I think the whole story about latent embeddings replacing manually-engineered features is quite obvious in 2019. I think the adaptation story will make the paper shorter and significantly clearer.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 1\nWe would like to thank Reviewer 1 for their review and constructive suggestions.\n\nWe agree that adapting our approach to BERT is rather easy, in fact, we have already done so for our ongoing experiments.\n\nRegarding the last comment: We agree that our paper is a slightly adapted application of the OpenAI GPT for relation extraction. However, in the introduction, our goal was to motivate this application, because most state-of-the-art approaches (e.g. all competing approaches listed in table 4 and 5) still rely on dependency parse information and other manually-engineered features.\n\nWe will rephrase our paper to clearly indicate the adaptation of the OpenAI GPT.",
"role": "user"
},
{
"content": "Thanks for the authors' clarification. Their response has resolved the majority of my concerns.",
"role": "assistant"
}
]
|
BJgrxbqp67 | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<BJgrxbqp67>>```",
"role": "user"
},
{
"content": "Title: Application of existing method for relation extraction\nReview Content: This article describes a novel application of Transformer networks for relation extraction.\n\nCONS:\n- Method is heavily supervised. It requires plain text sentences as input, but with clearly marked relation arguments. This information might not always be available, and might be too costly to produce manually. \nDoes this mean that special care has to be taken for sentences in the passive and active voice, as the position of the arguments will be interchanged?\n\n- The method assumes the existence of a labelled dataset. However, this may not always be available. \n\n- There are several other methods, which produce state of the art results on relation extraction, which are minimally-supervised. These methods, in my opinion, alleviate the need for huge volumes of annotated data. The added-value of the proposed method vs. minimally-supervised methods is not clear. \n\nPROS:\n- Extensive evaluation\n- Article well-written\n- Contributions clearly articulated\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 3\nWe would like to thank Reviewer 3 for their review and constructive suggestions.\n\nWe acknowledge that distantly- and semi-supervised methods are a vital part of (large-scale) relation extraction. In this work we explicitly focus on the supervised scenario for the following reasons:\nOur main goal is to show that pre-trained language representations, in combination with a self-attentive architecture, are able to perform comparable or better than methods relying on explicit syntactic and semantic features, which are common in current state-of-the-art relation extraction methods. Due to the automated annotation process, distantly- and semi-supervised methods introduce a considerable amount of noise. This requires extending the approach to explicitly account for the noisiness of the data, which makes it more difficult to assess the efficacy of our approach in isolation. In ongoing work we address the distantly-supervised scenario, extending our approach to account for the noise introduced during the automated annotation process.\n\nNo special care has to be taken for passive and active voice. Our approach implicitly assumes the head entity to be provided first, followed by the tail entity. I.e. if the entities are provided as <Person A> <Organisation A> (assuming that person A is an employee of Organisation A), the system predicts \"per:employee_of\", whereas <Organisation A> <Person A> results in the inverse relation \"org:top_members/employees\".",
"role": "user"
},
{
"content": "Thanks to the authors for their clarification. It satisfactorily addresses the major issues I had.",
"role": "assistant"
}
]
|
BJlh0x9ppQ | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<BJlh0x9ppQ>>```",
"role": "user"
},
{
"content": "Title: Solid work, convincing experiments and results\nReview Content: The paper presents innovative work towards learning numerical attributes in a KB, which the authors claim to be the first of its kind. The approach leverages KB embeddings to learn feature representations for predicting missing numerical attribute values. The assumption is that data points that are close to each other in a vector (embeddings) space have the same numerical attribute value. Evaluation of the approach is on a set of highest ranked 10 numerical attributes in a QA setting with questions that require a numerical answer.\n\nThe paper is well written and the approach is explained in full detail. \n\nMy only concern is the application of the method across numerical attributes of (very) different granularity and context. The authors briefly mention the granularity aspect in section 3 and point to a normalization step. However, this discussion is very brief and leaves perhaps some further questions open on this.\nRating: 8 accept, good paper\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n",
"role": "assistant"
},
{
"content": "Title: Rebuttal Review#1\nThanks for the positive and encouraging comments.\n\nGranularity: You have raised a very valid concern regarding the granularity of different numerical attributes. We agree on that the normalization step is really important to this problem, as one has to jointly train a number of regression models (with shared input vector representations) where output variables have different scales. We also tried min-max scaling, however this gave worse performance in comparison to standard scaling. We will add this comment to the paper.",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. The response addresses most of my concerns.",
"role": "assistant"
}
]
|
BJlh0x9ppQ | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<BJlh0x9ppQ>>```",
"role": "user"
},
{
"content": "Title: Nice paper on prediction of numeric attributes in KBs.\nReview Content: The paper reports on the prediction of numerical attributes in knowledge bases, a problem that has indeed\nreceived too little attention. It lays out the problem rather clearly and defines datasets, a number of baselines\nas well as a range of embedding-based models that I like because they include both simple pipeline models \n(learn embeddings first, predict numerical attributes later) and models that include the numerical attributes\ninto embedding learning. I also appreciate that the paper makes an attempt to draw together ideas from different\nresearch directions.\n\nOverall, I like the paper. It's very solidly done and can serve as excellent base for further studies. Of course, I also have comments/criticisms.\n\nFirst, the authors state that one of the contributions of the paper is to \"introduce the problem of predicting the value \nof entities’ numerical attributes in KB\". This is unfortunately not true. There is relatively old work by Davidov and\nRappoport (ACL 2010) on learning numeric attributes from the web (albeit without a specific reference to KBs), and \na more recent study specifically aimed at attributes from KBs (Gupta et al. EMNLP 2015) which proposed and modelled exactly the same task, including defining freely available FreeBase-derived datasets.\nMore generally speaking, the authors seem to be very up to date regarding approaches that learn embeddings directly\nfrom the KB, but not regarding approaches that use text-based embeddings. This is unfortunate, since the\nmodel that we defined is closely related to the LR model defined in the current paper, but no direct comparison is \npossible due to the differences in embeddings and dataset.\n\nSecond, I feel that not enough motivation is given for some of the models and their design decisions. For example,\nthe choice of linear regression seems rather questionable to me, since the assumptions of linear regression (normal distribution/homoscedasticity) are clearly violated by many KB attributes. If you predict, say, country populations, the \nerror for China and India is orders of magnitude higher then the error for other countries, and the predictions are dominated by the fit to these outliers. This not only concerns the models but also the evaluation, because\nMAE/RMSE also only make sense when you assume that the attributes scale linearly -- Gupta et al. 2015 use this\nas motivation for using logistic regression and a rank-based evaluation. \nI realize that you comment on non-linear regression on p.12 and give a normalized evaluation on p.13: \nI appreciate that, even though I think that it only addresses the problem to an extent.\n\nSimilarly, I like the label propagation idea (p. 7) but I lack the intuition why LP should work on (all) numeric attributes.\nIf, say, two countries border each other, I would expect their lat/long to be similar, but why should their (absolute) GDP be similar? What is lacking here is a somewhat more substantial discussion of the assumptions that this (and the other) \nmodels make about the structure of the knowledge graph and the semantics of the attributes.\n\nSmaller comment:\n* Top of p.5, formalization: This looks like f is a general function, but I assume that one f is supposed to be learned\n for each attribute? Either it should be f_a, or f should have the signature E x A -> R.\n (p4: Why is N used as a separate type for numeric attributes if the function f is then supposed to map into reals anyway?)\n\n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Rebuttal Review#2\nThanks a lot for your feedback. \n\nLet us address individually each of your concerns:\n\n1. Previous work: Thank you for bringing the work by Davidov and Rappoport (ACL 2010) and Gupta et al. (EMNLP 2015) to our attention, we were not aware of this work. We will elaborate on these papers in the related work section. Regarding Gupta et al. (EMNLP 2015), the paper uses word2vec embeddings of named entities as inputs to a number of regression models. Similar to us, they aim to predict numerical attributes of knowledge base entities. Different to us, they leverage text information to do so. This is important, as in our problem we do not assume the existence of information other than the own graph structure. This relates to knowledge bases where entities’ names are unknown/anonymized (e.g. medical knowledge bases). Nevertheless, this is very interesting and nicely complements our approach of learning from KG relationships as opposed to free text. Similarly, Davidov and Rappoport leverages text and/or class label information.\n\n2. Linear Regression: As you correctly observed, different output variables have different distributions. Ideally, for attributes that do not permit a normal distribution assumption one would need to validate (i) the power transformation to be applied to the data, and (ii) the regression function. This comes at an exponential cost, as this validation should be jointly done for a number of regression models. To circumvent this issue we used a common normalization procedure for all attributes. We also trained with L1-loss (Robust regression), which is more robust (w.r.t. outliers) than OLS, for all attributes but the results were similar.\n\n3. Label Propagation: Our assumption is that if two entities relate to the rest of the world in a similar manner, then they have similar numerical attributes. Nevertheless, your observation is an excellent point.\n\nLabel Propagation uses Euclidean distance between vectors to build the k-nearest neighbor graph. We speculate that subsets of entities’ latent factors could be encoding different relational and numerical information. For instance, it is possible that a few dimensions of the entity embeddings encode location information, while others encode population information and so on. For this reason, we learned Mahalanobis metrics for capturing different entity similarities. We did this while learning knowledge base embeddings by using an additional nearest neighbor loss. It did slightly improve the performance for few attributes, but overall it did not improve the results. We will include this discussion in the paper. Nevertheless, we suggest that future work should address this research direction in greater depth.\n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It adequately addresses the points I raised.",
"role": "assistant"
}
]
|
BJlh0x9ppQ | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<BJlh0x9ppQ>>```",
"role": "user"
},
{
"content": "Title: interesting paper with decent contribution, but doubts about practical viability\nReview Content: The paper presents a method to predict the values\nof numerical properties for entities where these properties\nare missing in the KB (e.g., population of cities or height of athletes).\nTo this end, the paper develops a suite of methods, using\nregression and learned embeddings. Some of the techniques\nresemble those used in state-of-the-art knowledge graph completion,\nbut there is novelty in adapting these techniques\nto the case of numerical properties.\nThe paper includes a comprehensive experimental evaluation\nof a variety of methods.\n\nThis is interesting work on an underexplored problem,\nand it is carried out very neatly.\n\nHowever, I am skeptical that it is practically viable.\nThe prediction errors are such that the predicted values\ncan hardly be entered into a high-quality knowledge base.\nFor example, for city population, the RMSE is in the order\nof millions, and for person height the RMSE is above 0.05\n(i.e., 5cm). So even on average, the predicted values are\nway off. Moreover, average errors are not really the decisive\npoint for KB completion and curation.\nEven if the RMSE were small, say 10K for cities, for some\ncities the predictions could still be embarrassingly off.\nSo a knowledge engineer could not trust them and would \nhardly consider them for completing the missing values.\n\nSpecific comment:\nThe embeddings of two entities may be close for different\nreasons, potentially losing too much information.\nFor example, two cities may have close embeddings because\nthey are in the same geo-region (but could have very different\npopulations) or because they have similar characteristics\n(e.g., both being huge metropolitan areas). Likewise, two\nathletes could be close because of similar origin and\nsimilar success in Olympic Games, or because they played\nin the same teams. \nIt is not clear how the proposed methods can cope with\nthese confusing signals through embeddings or other techniques.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Rebuttal Review#3\nThanks for your helpful review.\n\nLet us address individually each of your concerns:\n\n1. Prediction errors: We agree that, for the data sets discussed in the paper, the performance is not up to industrial standard and thus this technique may not be useful “as-is” in the industry. We believe that this paper is the first step which will hopefully spur more interest in this problem, and would eventually lead to industry-ready algorithms. However, this (current) limitation is not specific to this new problem, but common to many research problems (e.g. link prediction methods, for the moment, are far from meeting industrial requirements).\n\n2. Embeddings: This is again an excellent point. Please also see rebuttal for Reviewer #2. Subsets of latent factors capture signals related to different relational and numerical information. A metric learning approach that “selects” a group of latent factors relevant to a certain numerical attribute would make sense. However, during our experiments, we learned Mahalanobis metrics by incorporating an additional loss during the learning of knowledge graph embeddings, and it did not improve the overall performance. We will discuss this in the paper. Nevertheless, we suggest that future work should address this research direction in greater depth.",
"role": "user"
},
{
"content": "I appreciate the authors' reply. They have adequately addressed the majority of my concerns.",
"role": "assistant"
}
]
|
BylEpe9ppX | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<BylEpe9ppX>>```",
"role": "user"
},
{
"content": "Title: Interesting approach to a new task + new data set\nReview Content: This work aims to address the problem of answering visual-relational queries in knowledge graphs where the entities are associated with web-extracted images.\n\nThe paper introduces a newly constructed large scale visual-relational knowledge graph built by scraping the web. Going beyond previous data sets like VisualGenome having annotations within the image, the ImageGraph data set that this work proposes allows for queries over relations between multiple images and will be useful to the community for future work. Some additional details about the dataset would have been useful such as the criteria used to decide \"low quality images\" that were omitted from the web crawl as well as the reason for omitting 15 relations and 81 entities from FB15k. \n\nWhile existing relational-learning models on knowledge graphs employ an embedding matrix to learn a representation for the entity, this paper proposes to use deep neural networks to extract a representation for the images. By employing deep representations of images associated with previously unseen entities, their method is also able to answer questions by generalizing to novel visual concepts, providing the ability to zero-shot answer questions about these unseen entities.\n\nThe baselines reported by the paper are weak especially the VGG+DistMult baseline with very low classifier score leading to its uncompetitive result. It would be worth at this point to try and build a better classifier that allows for more reasonable comparison with the proposed method. (Accuracy 0.082 is really below par) As for the probabilistic baseline, it only serves to provide insights into the prior biases of the data and is also not a strong enough baseline to make the results convincing.\n\nWell written paper covering relevant background work, but would be much stronger with better baselines.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to comments \nThank you for your helpful review.\n\nFB15k contain entities like “ISO_3166-1:SO”, or “C16:1(n-7)”, for which the crawler returned few images, or even no image at all. After removing those entities from the graph, some relations were not present in the data set.\n\nWe could have replaced our feature extractor VGG-16 with more sophisticated/up-to-date CNNs such as ResNet or DenseNet. These models obtain moderate gains in accuracy on ImageNet, but they come at the cost of (largely) slower running times. In our opinion, the low performance relates to the difficulty of the problem. Our data set has 15 times more of categories/entities than ImageNet, hence the low performance in the image retrieval task. However, for the relation prediction task, where the output space is comparable to that of ImageNet, performances are much more competitive.\n\nWe agree that the experiments provide insights into the bias of the data set. Most importantly, the experiments show that the introduced problem is much harder than the traditional link prediction problem. We wanted to evaluate the scoring functions of state-of-the-art link prediction methods on the problems. At least for the visual entity prediction problem, these scoring functions did not work better than a concatenation. We hope this serves as a first step to this new problem.",
"role": "user"
},
{
"content": "Thank you for the authors' feedback. Their response has resolved the majority of my concerns.",
"role": "assistant"
}
]
|
BylEpe9ppX | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<BylEpe9ppX>>```",
"role": "user"
},
{
"content": "Title: Compelling new task and dataset\nReview Content: The paper introduces several novel tasks for visual reasoning resembling knowledge base completion tasks but involving images linked to entities: finding relations between entities represented by images and finding images given an image and a relation. The task is accompanied with a new dataset, which links images crawled from the web to FreeBase entities. The authors propose and evaluate the first approach on this dataset.\n\nThe paper is well written and clearly positions the novelty of the contributions with respect to the related work.\n\nQuestions:\n* What are the types of errors of the proposed approach? The error analysis is missing. A brief summary or a table based on the sample from the test set can provide insights of the limitations and future directions.\n* Is this task feasible? In some cases information contained in the image can be insufficient to answer the query. Error analysis and human baseline would help to determine the expected upper-bound for this task.\n* Which queries involving images and KG are not addressed in this work? The list of questions in 4.1. can be better structured, e.g. in a table/matrix: Target (relation/entity/image), Data (relation/entity/image)\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to comments\nWe thank the reviewer for the encouraging review.\n\nWe have observed that the proposed models learn to relate the two entity types that are involved in a certain relationship/predicate. This is illustrated in Figure 6. However, it sometimes struggles to find an image of the entity that completes the query. This is related to your concern regarding the feasibility of the task. This highly depends on the entities involved in the query. Queries involving entities with canonical images (“Statue of Liberty”) are easier to answer than those involving entities that can be represented with heterogeneous images (“United States of America”). This is a nice suggestion, so we will include this discussion in the paper.\n\nOur queries solely include visual information. This means that we do not address queries where the underlying entity for a given image is known.\n",
"role": "user"
},
{
"content": "Thank you for the authors' reply. The authors have answered almost all of my concerns.",
"role": "assistant"
}
]
|
BylEpe9ppX | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<BylEpe9ppX>>```",
"role": "user"
},
{
"content": "Title: REVIEW\nReview Content: The paper proposes to extend knowledge base completion benchmarks with visual data to explore novel query types that allow searching and completing knowledge bases by images. Experiments were conducted on standard KB completion tasks using images as entity representations instead of one-hot vectors, as well as a zero shot tasks using unseen images of unknown entities.\n\nOverall I think that enriching KBs with visual data is appealing and important. Using images to query knowledge bases can be a practical tool for several applications. However, the overall experimental setup suffers from several problems. The results are overall very low. In the non-zero shot experiments I would like to see a comparison to using entity embeddings, and maybe even using a combination of both, as this is the more interesting setup. For instance, I would like to see whether using images as additional information can help building better entity representations. The explored link prediction models are all known, so apart from using images instead of entities there is very limited novelty. The authors find that concatenation followed by dot-product with relation-vector works best. This is very unfortunate because it means that there is no interaction between h and t at all, i.e.: s(h, r, t) = [h;t] * r = h * r_1 + t * r_2. This means that finding t given h,r only depends on r and not on h at all. Finally, this shows that the proposed image embeddings derived from the pretrained VGG16 model are not very useful for establishing relations.\n\nGiven the mentioned problems I can unfortunately not recommend this paper for acceptence.\n\nOther comments: \n- I wouldn't consider a combination of pretrained image embeddings bsaed on CNNs with KB embeddings a \"novel machine learning approach\", but rather a standard technique \n- redefine operators when describing LP models: \\odot is typically used for element-wise multiplication, for concatenation use [h; t] for instance\n- (head, relation, tail) is quite unusual --> better: (subject, predicate, object)\n- baselines are super weak. Concatenation should be the baseline as it connects h and t with r indepedent of each other. What is the probabilistic baseline?\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to comments\nThank you for the helpful review.\n\nThe novelty of our submission is in the visual KB that we have created, the novel query types we propose, and the combination of standard CNNs with KB embedding scoring functions. We do not claim that the scoring functions are novel. What is novel is the combination of CNNs on image data with these scoring functions for the novel query types we propose. Our work shows that some existing scoring functions are not better than a concatenation, at least for the image retrieval task. We have explored the zero-shot learning setting in the link prediction problem for the first time in the literature. We think that this also makes the work novel.\n\nExperiments show that the difficulty and performance are different from that of the traditional link prediction problem. We have (adapted and) benchmarked some of the most standard scoring functions from the link prediction literature in this problem and show that: i) they work, to some extent, in the relation prediction task, and ii) they perform poorly in the image retrieval task, where the (naive) concatenation approach performs the best. We remark again the complexity of the latter problem (see rebuttal for reviewer 1). We think that this set of experiments constitutes a solid answer to the question regarding the ability of the traditional scoring function to answer these new types of visual queries.\n\nPlease also note the setup descriptions in Section 4.1. In the image retrieval and link prediction scenarios, (experiments (1) and (2)) we have images for which we do not know the underlying KG entities. This is the reason why we have not considered approaches that leverage entity embeddings, as this information is not available in our setting.\n\nThank you for your suggestion regarding the notation used to define certain operators. We completely agree. However, we think the notation (head, relation, tail) is not that uncommon, as it has been used in previous works (e.g. (Bordes et al. 2013), (Garcia-Duran&Niepert, 2017)).\n\nFinally, the probabilistic baseline is explained in Section 5.2 at the end of the second paragraph: “The second baseline (probabilistic baseline) computes the\nprobability of each relation type using the set of training and validation triples. The baseline\nranks relation types based on these prior probabilities.”\n",
"role": "user"
},
{
"content": "Title: RE\nWell, the biggest issues still remain though. Considering everything this work added images from a web search to an existing KG and showed that traditional approaches do not work well when using **a pretrained VGG16 model**, in fact they do not work at all. Don't get me wrong, the idea of exploring visual information for AKBC and using it for zero-shot learning is very interesting but the experiments are just too limited. End-to-end training should be considered instead of taking a pretrained feature extractor that doesn't work for this task, or at the very least other models for visual feature extraction should have been tried. The problem is that in its current form the experiments tell us only that the pretrained VGG16 model is a bad feature extractor but nothing conclusive about existing models for link prediction. Because of that unfortunately also the zero shot experiments add little value.",
"role": "assistant"
},
{
"content": "Title: Response to comments\nWe respectfully disagree with the statement “the experiments tell us only that the pretrained VGG16 model is a bad feature extractor,” and we are not sure how you come to that conclusion. The results for the image retrieval tasks are mixed not because the VGG16 network is a bad feature extractor but because the entities and corresponding images are heterogeneous. For instance, there are entities such as “Albert Einstein” for which the feature extraction works very well, and other such as “USA” for which it is challenging because there is no canonical image representing said entity. We envision in the future models that pay attention to particular more canonical images in this case. We do not anticipate significant improvement by simply using a different feature extractor. Replacing the VGG16 network with a deeper or more advanced CNN, for example, should lead to minor improvements only. \nMoreover, we would like to clarify that the model is pre-trained on ImageNet, but the entire model is end-to-end fine tuned, with each of the scoring functions.\n\nThe results for relation prediction and zero-shot learning are actually promising. Since the input for the system is an image, there is no information about the entity that it represents. Hence, the problem and the performance cannot be compared with previous methods as we present the problem (and possible solutions) for the first time. \n",
"role": "user"
},
{
"content": "Title: RE\nI am sorry I misunderstood that the model was finetuned, but from the paper it was not obvious to me. Anyway my main issue remains, the concatenation model doesn't compute entity interaction which leaves me with the conclusion that VGG is not a good feature extractor. So if the concatenation model works the way I understand it, it means that results cannot be promising because you are computing a score where entity representations do not interact at all which should be essential for any reasonable link prediction model.",
"role": "assistant"
}
]
|
HJxYZ-5paX | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HJxYZ-5paX>>```",
"role": "user"
},
{
"content": "Title: Good paper, some clarifications needed\nReview Content: Summary\n\nThis paper addresses the ARC dataset by reformulating the question using embeddings from ConceptNet. Their model selects a few terms from the question using the embeddings from ConceptNet, rewrites the query based on the selected terms, retrieves the documents and solves the query. The empirical result shows that embeddings from ConceptNet is beneficial, and the overall result is comparable to recent performance on ARC dataset.\n\nQuality\npros\n1) This paper contains a thorough study of recent QA models and datasets.\n2) This paper describes the model architecture, conducts ablation studies of different Essential terms classification, and includes thorough comparisons with recent models on ARC challenges.\n\ncons\n- Although the paper includes recent works on QA models/datasets, it doesn’t contain much studies on query reformulations. For example, \"Ask the Right Questions: Active Question Reformulation with Reinforcement Learning” (Buck et al., ICLR 2018) is one of the related works that the paper didn’t cite.\n- The paper does not have any example of reformulated queries or error analysis.\n\nClarity\n\npros\n1) The paper describes the framework and model architecture carefully.\n\ncons\n1) It is hard to understand how exactly they reformulate the query based on selected terms. (I think examples would help) For example, in Fig 2, after “activities”, “used”, “conserve” and “water” were selected, how does rewriter write the query? The examples will help.\n2) Similar to the above, it would be helpful to see the examples of decision rules in Section 5.2.\n3) It is hard to understand how exactly each component of the model was trained. First of all, is rewrite module only trained on Essential Terms dataset (as mentioned in Section 3.1.3) and never fine-tuned on ARC dataset? Same question for entailment modules: is it only trained on SciTail, not fine-tuned on ARC dataset? How did decision rules trained? Are all the modules trained separately, and haven’t been trained jointly? What modules were trained on ARC dataset? All of these are a bit confusing since there’re many components and many datasets were used.\n\nOriginality & significance\n\npros\n* Query reformulation methods have been used on several QA tasks (like Buck et al 2018 above), and incorporating background knowledge has been used before too (as described in the paper), but I think it’s fairly original to do both in the same time.\n\ncons\n* It is a bit disappointing that the only part using background knowledge is selecting essential terms using ConceptNet embedding. I think the term “using background knowledge” is too general term for this specific idea.\n\nIn general, I think the paper has enough contribution to be accepted, if some descriptions are better clarified.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Added discussion of AQA, example, and clarifications on training -- many good ideas for future work!\nThank you for the very considerate review!\n\nWe will include a discussion of our paper in the context of the AQA work, which tackles the very interesting problem of open-vocabulary questions reformulation. Their paper illustrates the difficulty of the task, noting that “99.8% of [AQA-QR rewrites] start with the prefix What is [name]...”. They speculate that this “might be related to the fact that virtually all answers involve names, of named entities (Micronesia) or generic concepts (pizza).” We hope to expand our work to include question and topic expansion in the near future, as we agree that making progress on open-vocabulary query reformulation in the context of non-factoid question answering is an exciting effort.\n\nWe have space in the paper and have included examples of the rewritten queries in our revision. This also goes along with your Clarity Con point 2 -- the rewritten query is a concatenation of the selected terms.\n\nWe will try to work in an example of the decision rule in the appendix. Since the decision rule uses many query results it takes up significant space. Informally, the decision rule looks at the average or max entailment for each answer for the various query results. This is tuned on the ARC-Challenge-Train dataset.\n\nIndeed, as you say the rewriter is trained only on the Essential Terms dataset and the entailment is only trained on the SciTail dataset. We are in the process of transforming some of the ARC-Challenge questions to fine tune the entailment model but this work is not completed yet. The Decision Rules are trained on 1/5th of the ARC-Training set which we can use as we have not used the ARC-Train for any other portion of the model. \n\nWe agree that leveraging background knowledge is an important line of research, and that the work is this paper represents very early steps towards fully utilizing it in an open-domain QA pipeline. Future work indeed involves jointly training the modules together -- we are in the process of integrating background knowledge into the entailment model, both by a similar approach to https://arxiv.org/pdf/1809.05724.pdf and by using more sophisticated graph embeddings.\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It answers most of my concerns.",
"role": "assistant"
}
]
|
HJxYZ-5paX | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HJxYZ-5paX>>```",
"role": "user"
},
{
"content": "Title: good qa paper\nReview Content: This paper focuses on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. The paper presents a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. The rewriter is able to incorporate background knowledge from ConceptNet. A textual entailment system trained on SciTail that identifies support in the retrieved results. Experiments show that the proposed system is able to outperform several baselines on ARC.\n\n* (Sec 2.2) \"[acl-2013] Paraphrase-driven learning for open question answering\" and \"[emnlp-2017] Learning to Paraphrase for Question Answering\" can be added in the related work section.\n* (Sec 3.1) Seq2seq predicts 0 and 1 to indicate whether the word is salient. A more straightforward method is using a pointer network for the decoder, which directly selects words from the input. This method should be more effective than seq2seq used in Sec 3.1.1.\n* (Sec 3.1) How about the performance of removing the top crf layer? The LSTM layer and the classifier should play the most important role.\n* How to better utilize external resources is an interesting topic and is potentially helpful to improve the results of answering science exam questions. For example, the entailment module described in Sec 5.1 can be trained on other larger data, which in turn helps the problem with smaller data. I would like to see more details about this.\n* Are the improvements significant compared to the baseline methods? Significance test is necessary because the dataset is quite small.\n* Experiments on large-scale datasets are encouraged.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Added example, significance tests, additional references & discussion\nWe very much appreciate the thoughtful suggestions of your review, and have added the citations for the paraphrase-based approaches to query reformulation in open QA to the related work section in our revision.\n\nThe comments about pointer networks vs. our seq2seq formulation as well as measuring the effect of the CRF layer are well taken; we will attempt to include those results in the paper and update our revision, time permitting.\n\nExperiments comparing the performance of the entailment module on the somewhat larger SciTail dataset are conducted in https://arxiv.org/pdf/1809.05724.pdf . We will update the discussion to reflect this result in the science domain, and try to provide references for its performance on other large-scale NLI corpora.\n\nIn the original ARC paper (https://arxiv.org/pdf/1803.05457.pdf), they evaluate their baselines (DGEM, BiDAF, and DecompAttn) using a significance threshold of ±2.5% compare to random chance:\n“The most striking observation is that none of the algorithms score significantly higher than the random baseline on the Challenge set, where the 95% confidence interval is ±2.5%. In contrast, their performance on the Easy set is generally between 55% and 65%. This highlights the different nature and difficulty of the Challenge set.”\nBy both that measure and by using a one-sample t-test, we found that our models outperformed the reported accuracy of those three baselines at the 99% confidence interval on the test set; however, our methods are not significantly better than KG2 at the 95% level (p=0.652 for NCRF++). Comparing to the reported accuracy of the BiLSTM Max-Out model of Mihaylov et al., our regular NCRF++ models (as well as the ones augmented with TransH and PPMI embeddings) are not significantly worse at 95% confidence.\n\nOn the other hand, paired t-tests comparing the AI2 rule to the Top-2 rule (without splitting) favored the Top-2 rule with at least 99.9% confidence on the test set. While we observe that splitting is harmful for the test set, the results are only significant at the 95% level for PPMI and Original Question.",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It adequately addresses the points I raised.",
"role": "assistant"
}
]
|
HJxYZ-5paX | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HJxYZ-5paX>>```",
"role": "user"
},
{
"content": "Title: The system is a combination of many existing techniques, and is outperformed by several works.\nReview Content: This paper introduces an end-to-end system to answer science exam questions for the ARC challenge. The system is a combination of several existing techniques, including (i) query rewriting based on seq2seq or NCRF++, (ii) answer retriever, (iii) entailment model based on match-LSTM, and (iv) knowledge graph embeddings. The description of the system is clear, and there is abundant ablation study. However, I have following concerns about this paper:\n\n1. There seems to be no new techniques proposed in the system. Hence, the novelty of this work is questioned.\n2. I do not understand why the authors use TransH, which is a KG embedding model that differentiates one entity into different relation-specific representations.\n3. The system is significantly outperformed by Sun et al. 2018 and Ni et al. 2018.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Very incisive review\nThank you for the careful review of our paper.\n\nFor (1) we must somewhat disagree -- since submission indeed there have been several new papers posted to arXiv which include rewriter modules for the ARC dataset inspired by the work of Boratko et al. (https://arxiv.org/pdf/1806.00358.pdf) but as of submission time the notion of combining a rewriter with the entailment module for ARC is novel. We also point to Reviewer2’s comment, “Query reformulation methods have been used on several QA tasks (like Buck et al 2018 above), and incorporating background knowledge has been used before too (as described in the paper), but I think it’s fairly original to do both in the same time.”\n\nWith respect to (2) -- we used TransH to have a common and easily trainable baseline for comparison. We do not have a priori beliefs about which aspects of an entity’s relations are useful for answering a particular science question, but given that many science questions pertain to different functional properties of matter it does not seem unreasonable to use the vector embedding provided by TransH to do so.\n\nWith respect to (3) -- we have not yet incorporated transformer networks/BERT as done by Sun 2018 and Ni 2018 and some other groups have done (https://leaderboard.allenai.org/arc/submissions/public). However, these papers have been posted between the time of submission to AKBC and the response period and we are working to integrate these aspects into future work. There's no question that massively pre-trained models like BERT represent a substantial shift in the state-of-the-art of NLP, and we expect that all future work in the near future should use them to represent text. However, we feel that the contributions of our paper are mainly focused around the end-to-end pipeline and providing techniques for leveraging a corpus of background knowledge that improve on the assumptions of prior work from e.g. AI2.\n",
"role": "user"
},
{
"content": "I appreciate the authors' feedback. Most of my concerns have been addressed.",
"role": "assistant"
}
]
|
HJxeGb5pTm | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HJxeGb5pTm>>```",
"role": "user"
},
{
"content": "Title: OPIEC: An Open Information Extraction Corpus\nReview Content: In this paper, the authors build a new corpus for information extraction which is larger comparing to the prior public corpora and contains information not existing in current corpora. The dataset can be useful in other applications. The paper is well written and easy to follow. It also provides details about the corpus. However, there are some questions for the authors: 1) It uses the NLP pipeline and the MinIE-SpaTe system. When you get the results, do you evaluate to what extent that the results are correct? 2) In Section 3.4, the author mentioned the correctness is around 65%, what do you do for those incorrect tuples? 3) Have you tried any task-based evaluation on your dataset?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Clarifications\n\"1) It uses the NLP pipeline and the MinIE-SpaTe system. When you get the results, do you evaluate to what extent that the results are correct?\"\n\"2) In Section 3.4, the author mentioned the correctness is around 65%, what do you do for those incorrect tuples?\"\n\n-> We report on precision and confidence scores in the new subsection 4.6. \n\n\n\"3) Have you tried any task-based evaluation on your dataset?\"\n\nThe OPIEC corpus is being used in other research projects. The goal of this paper is to introduce the dataset to other researchers and provide insight into its properties.\n\n\n Global/general Response:\n```\nThank you very much for your helpful and insightful comments! All of you asked for more information about precision; we added Section 4.6 “Precision and Confidence Score” to provide more details and statistics about precision as well as the confidence scores provided with the dataset.\n```\n",
"role": "user"
},
{
"content": "Thank you for the authors' detailed response. It answers most of my concerns.",
"role": "assistant"
}
]
|
HJxeGb5pTm | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HJxeGb5pTm>>```",
"role": "user"
},
{
"content": "Title: Interesting analysis, but may not be enough content\nReview Content: The paper describes the creation of OPIEC -- an Open IE corpus over English Wikipedia.\nThe corpus is created in a completely automatic manner, by running an off-the-shelf OIE system (MinIE), which yields 341M SVO tuples. Following, this resource is automatically filtered to identify triples over named entities (using an automatic NER system, yielding a corpus of 104M tuples), and only entities which match entries in Wikipedia (5.8M tuples).\n\nOn the positive side, I think that resources for Open IE are useful, and can help spur more research and analyses.\n\nOn the other hand, however, I worry that OPIEC may be too skewed towards the predictions of a specific OIE system, and that the work presented here consists mainly of running off-the-shelf can be extended to contain more novel substance, such as a new Open IE system and its evaluation against this corpus, or more dedicated manual analysis. For example, I believe that most similar resources (e.g., ReVerb tuples) were created as a part of a larger research effort.\n\nThe crux of the matter here I think, is the accuracy of the dataset, reported tersely in Section 5.3, in which a manual analysis (who annotated? what were their guidelines? what was their agreement?) finds that the dataset is estimated to have 60% correct tuples. Can this be improved? Somehow automatically verified?\n\nDetailed comments:\n\n- I think that the paper should make it clear in the title or at least in the abstract that the corpus is created automatically by running an OIE system on a large scale. From current title and abstract I was wrongfully expecting a gold human-annotated dataset.\n\n- Following on previous points, I think the paper misses a discussion on gold vs. predicted datasets for OIE, and their different uses. Some missing gold OIE references:\nWu and Weld (2010), Akbik and Loser (2012), Stanovsky and Dagan (2016).\n\n- Following this line, I don't think I agree with the claim in Section 4.3 that \"it is the largest corpus with golden annotations to date\". As far as I understand, the presented corpus is created in a completely automated manner and bound to contain prediction errors.\n\n- I think that some of the implementation decisions seem sometimes a little arbitrary. For instance, for the post-processing example which modifies \n(Peter Brooke; was a member of; Parliament) to (Peter Brooke; was ; a member of Parliament), I think I would've preferred the original relation, imagining a scenario where you look for all members of parliament (X; was a member of; Parliament), or all of the things Peter Brooke was a member of (Peter Brooke; was a member of; Y) seems more convenient to me.\n\nMinor comments & typos:\n\n- I assume that in Table 1, unique relations and arguments are also in millions? I think this could be clearer, if that's the indeed the case.\n\n- I think it'd be nice to add dataset sizes to each of the OPIEC variants in Fig 1.\n\n- End of Section 3.1 \"To avoid loosing this relationship\" -> \"losing this relationship\"\n\n- Top of P. 6: \"what follows, we [describe] briefly discuss these\"\n\n- Section 4.5 (bottom of p. 9) \"NET type\" -> \"NER type\"?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Clarifications\n\"I worry that OPIEC may be too skewed towards the predictions of a specific OIE system\"\n\n-> Yes, OPIEC is based on a modified variant of MinIE, which allows us to provide syntactic annotations, semantic annotations, and confidence scores. The pipeline used to create OPIEC can be applied to other datasets and (perhaps with minor modifications) with other OIE systems as well. We plan to publish the source code of this pipeline along with the corpus and data access tools along with OPIEC.\n\n\n\"the work presented here consists mainly of running off-the-shelf can be extended to contain more novel substance, such as a new Open IE system and its evaluation against this corpus\". \n\n-> We use an improved version of MinIE, which adds space-time awareness + confidence score. MinIE, and in particular the OPIEC corpus, is indeed being used in other research projects already. The goal of this paper is to introduce the dataset to other researchers and provide insight into its properties.\n\n\n\"The crux of the matter here I think, is the accuracy of the dataset, reported tersely in Section 5.3, in which a manual analysis (who annotated? what were their guidelines? what was their agreement?) finds that the dataset is estimated to have 60% correct tuples. Can this be improved? Somehow automatically verified?\"\n\n-> We added subsection 4.6 to provide more information about the confidence scores that OPIEC provides. These scores allow filtering the corpus for only high-confidence triples, for example. \n\n\n\"I think that the paper should make it clear in the title or at least in the abstract that the corpus is created automatically by running an OIE system on a large scale. From current title and abstract I was wrongfully expecting a gold human-annotated dataset.\"\n\"I don't think I agree with the claim in Section 4.3 that \"it is the largest corpus with golden annotations to date\". As far as I understand, the presented corpus is created in a completely automated manner and bound to contain prediction errors.\"\n\n-> We carefully revisited the paper to be more precise. For example, we now always refer to “golden annotations for arguments”.\n\n\n\"Following on previous points, I think the paper misses a discussion on gold vs. predicted datasets for OIE, and their different uses. Some missing gold OIE references: Wu and Weld (2010), Akbik and Loser (2012), Stanovsky and Dagan (2016).\"\n\n-> To the best of our knowledge, no data was released from Wu and Weld (2010) and Akbik and Loser (2012). We now mention the benchmark dataset of Stanovsky and Dagan (2016) in the related work section. The focus of this work is on large OIE resources, however. \n\n\n- \"I think that some of the implementation decisions seem sometimes a little arbitrary. For instance, for the post-processing example which modifies (Peter Brooke; was a member of; Parliament) to (Peter Brooke; was ; a member of Parliament), I think I would've preferred the original relation, imagining a scenario where you look for all members of parliament (X; was a member of; Parliament), or all of the things Peter Brooke was a member of (Peter Brooke; was a member of; Y) seems more convenient to me.\"\n\n-> We agree with the reviewer in this particular example. However, “Member of Parliament” is a concept in Wikipedia. OPIEC avoids to split concepts or named entities across arguments/relations; such an approach is often erroneous.\n\n\n“I assume that in Table 1, unique relations and arguments are also in millions? I think this could be clearer, if that's the indeed the case.”\n\n-> Fixed\n\n- I think it'd be nice to add dataset sizes to each of the OPIEC variants in Fig 1.\n-> Fixed\n\n- Typos -> fixed\n\n Global/general Response:\n```\nThank you very much for your helpful and insightful comments! All of you asked for more information about precision; we added Section 4.6 “Precision and Confidence Score” to provide more details and statistics about precision as well as the confidence scores provided with the dataset.\n```\n",
"role": "user"
},
{
"content": "Thanks for the authors' response. The reply answers almost all of my questions.",
"role": "assistant"
}
]
|
HkyI-5667 | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HkyI-5667>>```",
"role": "user"
},
{
"content": "Title: Sound approach for rule learning but heavy dependence on black-box algorithm to propose candidate rules\nReview Content: The paper proposes a model for probabilistic rule learning to automate the completion of probabilistic databases. The proposed method uses lifted inference which helps in computational efficiency given that non-lifted inference in rules containing ungrounded variables can be extremely computationally expensive.\nThe databases used contain binary relations and the probabilistic rules that are learned, are also learned for discovering new binary relations. The use of lifted inference restricts the proposed model to only discover rules that are a union of conjunctive queries.\nThe proposed approach uses AMIE+, a method to generate deterministic rules, to generate a set of candidate rules for which probabilistic weights are then learned. The model initializes the rule probabilities as the confidence scores estimated from the conditional probability of the head being true given that the body is true, and then uses a maximum likelihood estimate of the training data to learn the rule probabilities. \n\nThe paper presents empirical comparison to deterministic rules and ProbFOIL+ on the NELL knowledge base. The proposed approach marginally performs better than deterministic rule learning.\n\nThe approach proposed is straightforward and depends heavily on the candidate rules produced by the AMIE+ algorithm. The paper does not provide insights into the drawbacks of using AMIE+, the kind of rules that will be hard for AMIE+ to propose, how can the proposed method be improved to learn rules beyond the candidate rules. \n\n‘End-to-end differentiable proving’ from NeurIPS (NIPS) 2017 also tackles the same problem and it would be nice to see comparison to that work. \n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Comparisons to Neural Theorem Provers and Details on AMIE+ explained\n5) ‘End-to-end differentiable proving’ from NeurIPS (NIPS) 2017 also tackles the same problem and it would be nice to see a comparison to that work. \n\nWe have qualitatively drawn parallels between our Statistical Relational Learning (SRL) based approach and Knowledge Graph Embedding (KGE) based approaches for the problem of KB completion in Appendix E that we added to the revision. The SRL based approach is much more interpretable and explainable as compared to the black-box KGE based approaches but also, in our opinion, as compared to Neural Theorem Provers (NTPs). KGE based approaches, including NTPs, also need the test data to get the embedding which is not required by any SRL based approaches. SRL based approaches can reason with unseen constants in the data as they learn first-order rules. KGE based approaches would require re-training in order to incorporate a lot of new constants which is not the case with our SRL based approach. Please refer to Appendix E for further details.\n\nTo compare the scalability of SafeLearner with NTPs, ‘End-to-end differentiable proving’ uses 4 deterministic KBs that are not at a large scale (Countries KB has 1158 facts, Kinship KB has 10686 facts, Nations KB has 2565 facts and UMLS KB has 6529 facts). Recently, they submitted a follow-up paper, ’Towards Neural Theorem Proving at Scale’, where they claim to have made their technique more scalable. The follow-up paper uses the following 3 deterministic KBs: 1) WordNet18 KB with 151,442 facts, 2) WordNet18RR with 93,003 facts, and 3) Freebase FB15k-237 KB with 14,951 facts. On the other hand, SafeLearner has demonstrated that it scales to YAGO 2.4 KB of 948,000 probabilistic tuples. Although previous SRL techniques were not as scalable, SafeLearner is as scalable as the latest version of NTPs because it is the first SRL technique to use lifted inference.\n\nMoreover, in our opinion, NTPs do not actually 'learn' rules. They enumerate all possible rules up to a defined length and learn how to activate them. Essentially, NTPs optimize theorem-proving procedure given the rules. In their follow-up paper, they focus on finding just one proof efficiently (instead of all proofs, as in the initial version) and this brings them scalability.\nWe would like to thank the reviewer for the comments.\n\n1) The proposed approach marginally performs better than deterministic rule learning.\n\nThis is true only for PR curves but not for cross entropy. In the section of Parameter Learning, we explain how maximizing expected log likelihood is equivalent to minimizing cross entropy (Equation 3). As our proposed approach optimizes on cross entropy, it induces an average reduction in the cross entropy of 82% and 85% as compared to ProbFOIL+ and AMIE+ respectively. The insignificant differences in precision-recall curves only suggest that obtaining a ranking of tuples based on predicted probabilities (but not the actual probabilities) can be quite reliably done already by models that are not very precise when it comes to predicting the actual probabilities.\n\n\n2) The approach is straightforward and depends heavily on the candidate rules produced by AMIE+.\n\nOur approach is straightforward on purpose as we want to see how much the proper treatment of probabilities in the KB completion task using a rule-based approach helps. At first glance, SafeLearner does seem to be heavily dependent on the rules generated by AMIE+. But AMIE+ is not a black-box as we exactly know the kind of rules we require as candidates. Had AMIE+ not been developed, we could have coded the function ourselves to generate candidate rules. Since AMIE+ is a multi-threaded package in Java, it does the job well by scaling well to large KBs. Furthermore, as SafeLearner is not specific to any particular candidate generation method, it can be used with any other relational rule learner instead of AMIE+.\n\n\n3) The paper does not provide insights into the drawbacks of using AMIE+, the kind of rules that will be hard for AMIE+ to produce.\n\nAs mentioned in the cited paper ‘Fast rule mining in ontological knowledge bases with AMIE+’ (Galarraga et al.,2015), AMIE+ uses 3 types of language biases to restrict the size of the search space:\n\t1) The rules learned by AMIE+ omit reflexive atoms of the form ‘x(A, A)’.\n\t2) The rules are connected, i.e., every atom shares at least one variable transitively to every other atom of the rule. This omits vague rules of the form ‘x(A, B) :- y(C, D)’.\n\t3) The rules are closed, i.e., all the variables in a rule appear at least twice within itself. This omits open rules of the form ‘x(A, B) :- b(A, C)’ which would hold for any substitution of B and C.\n\nMoreover, AMIE+ only works with binary relations in a KB and does not have negations in its rules. For instance, the types of non-recursive rules of length <= 3, that can be generated by AMIE+ within SafeLearner are:\n\t 1) x(A, B) :- y(A, B).\n \t 2) x(A, B) :- y(B, A).\n\n \t 3) x(A, B) :- y(A, C), y(C, B).\n \t 4) x(A, B) :- y(A, C), y(B, C).\n \t 5) x(A, B) :- y(C, A), y(C, B).\n \t 6) x(A, B) :- y(C, A), y(B, C).\n\n \t 7) x(A, B) :- y(A, C), z(C, B).\n \t 8) x(A, B) :- y(A, C), z(B, C).\n \t 9) x(A, B) :- y(C, A), z(C, B).\n \t10) x(A, B) :- y(C, A), z(B, C).\nIn the context of doing Probabilistic Rule Learning for KB Completion, these are precisely all the forms of rules which we require as we can not practically predict missing tuples using reflexive, disconnected or open rules. SafeLearner is capable of learning rules of any length by specifying the maximum rule length as an input parameter. \n\n\n4) How can the proposed method be improved to learn rules beyond the candidate rules?\n\nThe method can be used with any other method that learns deterministic rules to make them probabilistic. We do not claim that using AIME+ is the best. Our main interest is in seeing if/how the proper treatment of probabilities helps in the KB completion tasks using a rule-based approach.",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It covers most of my concerns.",
"role": "assistant"
}
]
|
HkyI-5667 | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HkyI-5667>>```",
"role": "user"
},
{
"content": "Title: Good but not surprising\nReview Content: In general, the paper presents a routing practice, that is, apply lifted probabilistic inference to rule learning over probabilistic KBs, such that the scalability of the system is enhanced but being applicable to a limited scope of rules only. I would not vote for reject if other reviewers agree to acceptance.\n\nSpecifically, the proposed algorithm SafeLearner extends ProbFOIL+ by using lifted probabilistic inference (instead of using grounding), which first applies AMIE+ to find candidate deterministic rules, and then jointly learns probabilities of the rules using lifted inference.\n\nThe paper is structured well, and most part of the paper is easy to follow.\n\nI have two major concerns with the motivation. It reads that there are two challenges associated with rule learning from probabilistic KBs, i.e., sparse and probabilistic nature. \n1) While two challenges are identified by the authors, but the paper deals with the latter issue only? How does sparsity affect the algorithm design?\n\n2) The paper can be better motivated, although there is one piece of existing work for learning probabilistic rules from KBs (De Raedt et al. [2015]). Somehow, I am not convinced by the potential application of the methods; that is, after generating the probabilistic rules, how can I apply the probabilistic rules? It will be appreciated if the authors can present some examples of the use of probabilistic rules. Moreover, if it is mainly to complete probabilistic KBs, how does this probabilistic logics based approach compare against embedding based approach?\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer's evaluation is an educated guess\n",
"role": "assistant"
},
{
"content": "Title: Kindly refer to the Appendix\nWe would like to thank the reviewer for the comments.\n\n1) The paper identifies sparsity of Knowledge bases as a challenge but does not deal with it. How does sparsity affect the algorithm design?\n\nSince SafeLearner is a Statistical Relational Learning (SRL) approach that would learn first-order rules to predict tuples, it can even reason about new constants being included in the KB without re-training. On the other hand, since knowledge graph embedding (KGE) based approaches implicitly consider all tuples, these may discard the existence of a tuple with a new constant as the embedding for the new constant does not exist. Thus, it is easier for SRL based approaches to handle highly sparse KBs as they can handle the high number of constants since they only learn first-order rules. \n\n\n2) I am not convinced by the potential application of the methods. After generating the probabilistic rules, how can I apply them? It will be appreciated if the authors can present some examples of the use of the probabilistic rules. If it is mainly to complete probabilistic KBs, how does this probabilistic logics based approach compare against embedding based approach?\n\nWe have answered this question in Appendix E. Please have a look. ",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It addresses most of my concerns.",
"role": "assistant"
}
]
|
HkyI-5667 | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HkyI-5667>>```",
"role": "user"
},
{
"content": "Title: review\nReview Content: This paper was interesting and rather clearly written, as someone who didn't have much background in rule learning.\n\nSection 5.1.1.1 : \"(ii) tuples contained in the answer of Q' where Q' is the same as Q but without the rule with empty body, but not in the training set\" is unclear.\n\nIn the algorithm, line 22, aren't rules removed until H leads to a \"safe\" UCQ ?\n\nSection 6.1 : \"In line 7 we formulate a UCQ Q from all the candidate rules in H (explained in 5.2 with an example)\". I was unable to find the example in section 5.2 \n\nIt would be interesting to have an idea of the maximum scale that ProbFoil+ can handle, since it seems to be the only competitor to the suggested method. \n\nIn section 7.2 does the \"learning time\" include the call to AMIE+ ? if not, it would be interesting to break down the time into its deterministic and learning components, since the former is only necessary to retrieve the correct probabilities.\n\nBeing new to this subject, I found the paper to be somewhat clear. However, I found that it was sometimes hard to understand what was a part of the proposed system, and what was done in Amie+ or Slimshot.\n\nFor example, \"But, before calling Slimshot\nfor lifted inference on the whole query, we first break down the query to independent subqueries such that no variable is common in more than one sub-query. Then, we perform\ninference separately over it and later unify the sub-queries to get the desired result.\"\n\nThis is described as important to the speedup over ProbFoil+ in the conclusion, yet doesn't appear in Algorithm 1. \n\nSimilarly, \"it caches the structure of queries before doing inference \" is mentioned in the conclusion but I couldn't map it to anything in Algorithm 1 or in the paper.\n\n\nI lean toward an accept because the work seems solid, but I feel like I don't have the background required to judge on the contributions of this paper, which seems to me like a good use of Amie+/Slimshot with a reasonable addition of SGD to learn rule weights. Some of the components which are sold as important for the speed-up in the conclusion aren't clear enough in the main text. Some numbers to experimentally back-up how important these additions are to the algorithms would be welcome. \n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer's evaluation is an educated guess\n",
"role": "assistant"
},
{
"content": "Title: Clarifications addressed\nWe would like to thank the reviewer for the comments and would like to clarify further.\n\n1) Section 5.1.1.1 : \"(ii) tuples contained in the answer of Q' where Q' is the same as Q but without the rule with an empty body, but not in the training set\" is unclear.\n\nWe have elaborated more on the 3 categories of target tuples in the revised paper. We hope it is clearer now.\n\n\n2) In the algorithm, line 22, aren't rules removed until H leads to a \"safe\" UCQ?\n\nWe have elaborated further on our algorithm in the paper. Checking for a safe UCQ is performed in Line 8 of QueryConstructor function. The only way to check for a safe query is by trying to construct a query plan, which is exactly what happens inside SlimShot. So if SlimShot fails to construct a query plan, the UCQ is considered to be unsafe.\n\n\n3) Section 6.1: \"In line 7 we formulate a UCQ Q from all the candidate rules in H (explained in 5.2 with an example)\". I was unable to find the example in section 5.2.\n\nThank you for pointing it out. It has now been rectified in the revision.\n\n\n4) It would be interesting to have an idea of the maximum scale that ProbFoil+ can handle since it seems to be the only competitor to the suggested method.\n\nWe have conducted a small experiment to demonstrate that ProbFOIL+ struggles to scale upto large KBs with a number of target tuples > 5000. On the other hand, such large KBs are handled reasonably faster by SafeLearner. For instance, for a simple probabilistic KB with 20000 target tuples and 20000 non-target tuples, ProbFOIL+ took 15 hours and 54 minutes and SafeLearner took just 30 minutes. The detailed procedure and results of the experiment could be found in Appendix C.\n\n\n5) In section 7.2 does the \"learning time\" include the call to AMIE+?\n\nYes, the learning time includes both structure learning (including AMIE+) and parameter learning components.\n\n\n6) I found that it was sometimes hard to understand what was a part of the proposed system, and what was done in AMIE+ or Slimshot.\n\nOnly line 3 in Algorithm 1 uses AMIE+. On the other hand, SlimShot is used in lines 7 (QueryConstructor function) and 10 (ProbabilityPredictor function). Every other line of Algorithm 1 is part of SafeLearner.\n\n\n7) Explain the breaking down of queries into independent sub-queries in the algorithm.\n\nIf a query can be written as a union of independent sub-queries then its probability can be expressed as a unification of the probability of its sub-queries. We have explained this further in Appendix D.\n\n\n8) Caching is not mentioned anywhere in the algorithm.\n\nWe use caching/memoization before calling SlimShot in SafeLearner. This is primarily done to speed up SafeLearner by storing the results of the expensive function call to SlimShot and returning the cached result when the structure of the query occurs again in the input. Since SlimShot produces a query plan, we exploit the fact that isomorphic queries naturally have the same query plans. We have explained this further in Appendix D.\n\n\n9) Some of the components which are sold as important for the speed-up in the conclusion aren't clear enough in the main text. Some numbers to experimentally back-up how important these additions are to the algorithms would be welcome.\n\nWe performed an experiment where we compared SafeLearner with and without the 2 speed-up techniques, memoization, and query disintegration. Our results on NELL (850th iteration) and YAGO show that memoization and query disintegration give an average speed-up of 50% and 7% respectively. It is important to understand that query disintegration would only give a speed-up when SafeLearner learns a high number of rules with a lot of rules being independent to one another. The detailed procedure and results of the experiment could be found in Appendix D.\n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It covers most of my concerns.",
"role": "assistant"
}
]
|
HylHE-9p6m | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HylHE-9p6m>>```",
"role": "user"
},
{
"content": "Title: Super-exciting stuff, great analysis and overall a well written and executed paper\nReview Content: This paper studies the problem of fine grained entity recognition (\\fger) and typing . They initially present an excellent analysis of how fine grained entity type sets are not a direct extension of the usual coarse grained types (person, organization, etc) and hence training \\fger systems on data annotated on coarse grained typed datasets would lead to low recall systems. Secondly, they show that automatically created fine grained typed datasets are also not sufficient since it leads to low recall because of noisy distant supervision. This analysis was both an interesting read and also sets the stage for the main contribution of this work.\n\nThe main contribution of this work is to create a high recall large training corpora to facilitate \\fger and typing. The authors propose a staged pipeline which takes in input (a) a text corpus linked to a knowledge base (using noisy distant supervision), (b) a knowledge base (and also a list of aliases for entities) and (c) a typed hierarchy. All of these are readily available. The output is a high-recall linked corpus with linked entities to the KB and the type hierarchy. To show the usability of the training corpus the paper performs excellent intrinsic and extrinsic analysis. In the intrinsic analysis, they show that the new corpus indeed has a lot more mentions annotated. In the extrinsic analysis, they show that state art of the models trained on the new corpus has very impressive gains in recall when compared to the original existing datasets and also wiki dataset with distant supervision. There is a loss in precision though, but it is fairly small when compared to the massive gains in recall. This experiments warrants the usability of the generated training corpus and also the framework they stipulate which I think everyone should follow.\n\nGreat work!\n\nQuestions:\n\n1. In table 3, can you also report the difference in entities (and not entity mentions). I would be interested to see if you were able to find new entities. \n2. Are you planning to release the two training datasets (and if possible the code ?)\n\nSuggestion: The heuristics definitely work great but I think we can still do better if parts of stage 2 and 3 were replaced by a simple machine learned system. For example, in stage 2, just relying on heuristics to match to the prefix trees would result in always choosing the shortest alias and would be problematic if aliases of different entities share the same prefix. Restricting to entities in the document would definitely help here but still there might be unrecoverable errors. A simple model conditioned on the context would definitely perform better. Similarly in sentence 3, the POS based heuristics would just be more robust if a classifier is learned. \nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thanks for an encouraging review! Here is our response to your questions:\n\nQuestion 1: \nIn Table 3, can you also report the differences in entities?\n\nResponse to question 1:\nWe have updated the Table 3 and section 5.1 to report differences in entities. The summary of the table is that our corpus has around 1.5 times more unique entities when compared with the NDS approach to annotate the same sentences. Also, there are slight changes due to a mistake in row 1 and 3 of WikiFbT numbers. The entity mentions annotated are now even more! (42 million -> 45 million and 24 million -> 27 million).\n\nQuestion 2:\nAre you planning to release the two training datasets (and if possible the code ?)\n\nResponse to question 2:\nAll experiments reported in this work will be easily reproducible. A note for the same is included in the paper.\n\nWe are planning to publicly release the following things:\n1. The preprocessed Wikipedia text along with the preprocessed Freebase to facilitate the dataset construction research.\n2. The code used to generate the two training datasets (WikiFbF and WikiFbT) along with the generated datasets.\n3. The code used to train learning models and the evaluation corpus.\n\nResponse to suggestion:\nThanks for the great suggestion. Indeed there is a possibility of further improvement by replacing some of the heuristics with learning models. We consider this as a future work.\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The response addresses most of my concerns.",
"role": "assistant"
}
]
|
HylHE-9p6m | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HylHE-9p6m>>```",
"role": "user"
},
{
"content": "Title: New distant supervision method focusing on mention coverage, but incomplete evaluation \nReview Content: The paper proposes a new way to get distant supervision for fine grained NER using wikipedia, freebase and a type system. The paper explores using links in wikipedia as entities, matching them to freebase, and then expanding the using some string matching, and then finally pruning sentences using heuristics. Methods are compared on FIGER. The paper also introduces a dataset, but it is not fully described. \n\nOne interesting aspect about this paper is, as far as I can tell, one of the few works actually doing mention detection, and exploring the role mention detection. \n\nThat being said, its unclear what is new about the proposed source of supervision. The first two stages seem similar to standard methods and the last method (generally speaking pruning noisy supervision), has also been explored (e.g. in the context of distant supervision for IE, http://www.aclweb.org/anthology/P18-2015, and see Section 2.2) . Its also not clear to me what specifically targets good mention detection in this method. The experiments do somewhat argue that mention detection is improving, but not really on FIGER, but instead on the new dataset (this inconsistency causes me some pause). \n\nAll that being said, I don't think it would matter much if the supervision were incorporated into an existing system (e.g. https://github.com/uwnlp/open_type or https://github.com/MurtyShikhar/Hierarchical-Typing) and demonstrated competitive results (I understand that these systems use gt mention, but any stand in would be sufficient). Table 5 on the other hand has some results that show baselines that are significantly worse than the original FIGER paper (without gt mentions) and the proposed supervision beating it (on the positive side, beating the original FIGER results too, but not included it in the table, see Section 3 of http://xiaoling.github.io/pubs/ling-aaai12.pdf ). So in the end, I'm not convinced on the experimental side.\n\nThis paper could be significantly improved on the evaluation. Incomplete reference to previous work on FIGER and insufficient description of their new datasets are areas of concern. Cursory reference, instead of evaluation, on new fine grained datasets (like open-type) also seem like missing pieces of a broader story introducing a new form of supervision.\n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thanks for the critical review. We were intrigued by your comments. Our response is as follows:\n\nComment 1:\nIncomplete evaluation and mention detection not improving on FIGER as the results reported for the baseline used in our work is different for the results reported in the original FIGER paper (Table 2 of http://xiaoling.github.io/pubs/ling-aaai12.pdf)\n\nResponse to comment 1:\nThis is indeed a very interesting observation. We would like to clarify here that the results reported in the FIGER paper and in the baseline of the proposed work are not comparable due to the following reasons:\n1. The datasets used are different as the FIGER paper does not use the FIGER dataset for a part of their experiments. This might seems surprising and can be easily missed as it is mentioned in the footnote 4 of the FIGER paper, decoupled with the result table.\n2. The learning models are different.\n\nWe followed a simple and consistent evaluation strategy for Table 5, which we summarize below:\n1. Entity Recognition can be decomposed into two sub-problems: Entity Detection and Entity Classification\n2. Use current state-of-the-art learning model for both the sub-problems in a pipelined manner and report the result for the entity recognition task. \n3. The dataset used for the sub-problems is the same, as mentioned in Table 5. There is other interpretation of results as that of the results reported in the FIGER paper.\n\nWe apologize for not bringing the correct interpretation of the results in the FIGER paper beforehand, which could have avoided this confusion.\n\nWe would like to add here that mention detection is improving on both the datasets: FIGER and 1k-WFB-g. Please refer to Table 4. The resulting analysis of entity detection is available in Section 5.2.4. Also, there is an analysis of entity detection in Section 3. The improvement for mention detection is 0.08% F1 on FIGER dataset and 15.21% F1 on 1k-WFB-g. Note that although the improvement is small on FIGER, this improvement is over a model trained on a manually annotated (complete noise-free) dataset which matches well with FIGER dataset as both have predominant mentions of person, location and organization entity types. When we compare with a distantly supervised dataset, we have an improvement of 37.11% on FIGER dataset.\n\nWe hope that this clarification will resolve the concerns about incomplete evaluation. We have updated Section 3 (last paragraph) and Section 5.2.2 (last paragraph) to clearly state that the results in FIGER paper are not directly comparable. \n\nWe will be glad to answer any other questions related to the evaluation. \n\n\nComment 2:\nResemblance to http://www.aclweb.org/anthology/P18-2015\n\nResponse to comment 2:\nWe agree that at an abstract level, the pruning of noisy dataset idea is the same. However, there are fundamental differences between these works:\n1. The tasks are different, the referred paper by the reviewer is removing the false positives for relation extraction task and our work is removing false positives and false negatives for the entity detection task.\n2. The fundamental approach for these two different tasks is also different, where the referred work is a modeling approach and our being a heuristic approach.\n\nIt is possible to add a learning model to make certain decisions instead of a pure heuristic approach used in our framework. We consider this as a future work. This is also suggested by Reviewer 1.\n\nWe would like to mention here that the main contributions of this work are:\n1. An analysis that the existing coarsely annotated datasets such as CoNLL are not suitable to detect entity mentions in the context of FIGER and TypeNET type hierarchies. (Section 3)\n2. A framework to automatically construct quality datasets suitable for fine entity detection and typing. (Section 4)\n3. Establish the state-of-the-art baselines on FIGER and new manually annotated corpus which is much richer than FIGER evaluation corpus. (Section 5)\n\nAdditionally, in the final version, we will update the related work section incorporating a reference to the work suggested by the reviewer along with other works which are also similar at an abstract level. \n\n\nComment 3:\nDescription of the new datasets is missing\n\nResponse to comment 3:\nWe would like to specify that the generated datasets description is mentioned in Section 5, overview paragraph. We also report various statistics of the datasets in Table 2. We have updated the Section 5.2.2 to include a description of evaluation datasets.\n\nWe hope that our response will clarify your concerns. \n\nComment 4:\nSupervision being used in existing entity typing systems: (TypeNet and Open-type)\n\nResponse to comment 4:\nWe would like to mention here that we proposed an approach to generate a training dataset for Fine-grained Entity Recognition (FgER) task. FgER task can be decomposed into two subproblems: Fine entity detection and fine entity typing. TypeNET and Open-type fall into the latter category. Our work has shown that as the number of types increase, entity detection task becomes the bottleneck for FgER task. Existing works such as FIGER, TypeNet and open-type do not address this issue. We established this point in Section 3 and proposed a framework for generating datasets to remove the bottleneck for FIGER and TypeNET hierarchy. Our benchmarking experiments were focused on providing quality estimates for the generated datasets. Thus we kept the learning model fixed and varied the dataset to demonstrate that the generated dataset help learning model to generalized better.\n\nAdditionally, in the case of open-type, it is an elusive problem of detecting 10k+ types of entities as many of these types go beyond knowledge bases.\n\nWe hope that our response will clarify your concerns. \n",
"role": "user"
},
{
"content": "Thanks to the authors for their clarification. Most of my concerns have been addressed.",
"role": "assistant"
}
]
|
HylHE-9p6m | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HylHE-9p6m>>```",
"role": "user"
},
{
"content": "Title: Good paper with two caveats\nReview Content: This paper presents a methodology for automatically annotating texts from Wikipedia with entity spans and types. The intended use is the generation of training sets for NER systems with a type hierarchy going beyond the common four types per, loc, org, misc.\n\nI think this paper is a good fit for AKBC, given that entity recognition and linking systems are an essential part of knowledge-base population. This should be good for interesting discussions.\n\nA big caveat of this paper is that, while the approach discussed by the authors is generally sound, it is very tailored towards Wikipedia texts, and it is unclear how it can be generalized beyond. Since this approach is meant for downstream training of NER systems, these systems will be limited to Wikipedia-style texts. A statement from the authors regarding this limitation would have been nice. Or maybe it's not a limitation at all? If the authors could show that NER systems trained on this Wikipedia-style texts do perform well on, say, scientific texts, that would be already good.\n\nA second issue I see is that Stage III of the algorithm filters out sentences that might have missing entity labels. This makes sense, provided that the discarded sentences are not fundamentally different from the sentences retained in the NER tagger training set. For example, they could be longer on average, or could have more subordinate clauses, or just more entities on average, etc. This is something the authors should look into and should report in their paper. If they find that there are differences, an experiment would be nice that applies an NER tagger trained on the produced data to these sentences and verifies its performance there.\n\nA minor point: The authors claim their approach scales to thousands of entity types, which I find a bit of an overstatement, given that the dataset produced by the authors has <1.5k different types (which is already quite a lot).\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thanks for the review. Indeed the issues highlighted are important and we hope that our response will address these issues. \n\nComment 1:\nThe approach is tailored towards Wikipedia text.\n\nResponse to comment 1: \nWe agree that all our experiments use Wikipedia as a text source. In fact, to the best of our knowledge, only Wikipedia text exists in public domain that is highly interlinked to knowledge bases such as Freebase, DBpedia and WikiData. Our approach fundamentally requires a linked text corpus, and the current options are limited to Wikipedia only.\n\nAlso, we would like to mention here is that FIGER evaluation dataset consists of sentences sampled from news and specialized magazines (photography and veterinary domain). Thus, it has a different writing style as compared with Wikipedia text. In Table 4, we can observe that the entity detection model trained on our dataset outperforms models trained on other existing datasets. Hence, the learning model is able to generalize beyond the Wikipedia style text.\n\nFurther, for several other domains, such as scientific articles, conversation text, where directly utilizing the generated datasets might not yield good performance, we consider our datasets as a source dataset for transfer learning. The generated dataset is richer in entity types (1k+ from several domains) and is very large, which makes it a good candidate to use as a source dataset for transfer learning setting when the task is to do NER for a specific domain text.\n\nWe have updated the discussion section (last paragraph) and 5.2.4 section (3rd paragraph, last sentence) to reflect this view.\n\n\nComment/Question 2:\nAre discarded and retained sentences fundamentally same or different? \n\nResponse to comment/question 2:\nThis is an interesting question. To answer the question we analyzed the discarded and retained sentences on the following parameters:\n1. Are the discarded sentences longer on average?\n2. Does the entity mention in discarded sentences longer on average?\n3. Is there a fundamental change in the token and entity mention distribution of discarded and retained sentences?\n\nThis analysis is added as Appendix B to the paper. The summary of the analysis is that, other than the sentence length distribution, there is not a significant change in other distributions. The mean sentence length in the retained corpus is around 22, whereas in the discarded corpus is around 27. \n\nNote that, this result has a subtle interpretation. We observe that in the discarded sentences, there are more than 100k sentences with the length greater than 100 tokens. In fact, a corpus constituting of only these longer sentences is larger than several news-domain NER datasets. We observe that the majority of these sentences are caused due to incorrect sentence segmentations or they follow a list like patterns such as:\n1. PER, PER, PER, PER, PER …\n2. PER - PER - PER - PER - PER …\n3. Director (Movie), Director (Movie), Director (Movie), ....\n4. Project (year), project (year), project (year), …\n5. NUMBER NUMBER NUMBER NUMBER … \n6. Movie (year), movie (year), movie (year), …\n7. PER | PER | PER | PER | PER ...\n\nThe largest sentence length in discarded sentences is 6564 tokens!! \n\nOur dataset also captures these long sentences but the number is far less: 7664 sentences with the length greater than 100. The largest sentence length in the retained sentences is 624 tokens. \n\nAlthough, being a basic analysis, the analysis conveys that these longer sentences might not be suitable for applications where NER systems are used. To support this claim, we plotted the sentence length distribution of five NER datasets from different domains in Figure 7 and 8. The result conveys that, sentences longer than 100 words occur rarely in these domains and the sentence length distribution in the retained sentences is closer to the sentence length distribution in these domains when compared with the discarded sentences.\n\n\nComment 3: \nscales to thousands of types is a bit overstatement\n\nResponse to comment 3:\nWe agree with the reviewer that the current experiments demonstrate the efficacy of the framework for up to 1.1k types. We would like to add on this that the Freebase (which has 1.1k entity types) can be replaced with WikiData (which as 15k entity types) in the HAnDS framework to generate a training dataset constituting of 15k entity types. However, since we don’t have any empirical results on this, we will modify the scalability claim in the paper.\n\n\nWe hope that our response will provide a better insight into the proposed work and will clarify your concerns.\n",
"role": "user"
},
{
"content": "I appreciate the authors' feedback. The reply answers almost all of my questions.",
"role": "assistant"
}
]
|
HylaEWcTT7 | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HylaEWcTT7>>```",
"role": "user"
},
{
"content": "Title: Nice review of the development and recent advances in semantic parsing \nReview Content: This work provided a comprehensive review of important works in semantic parsing. It starts with the rule-based systems in the early days. Then it described the introduction of statistical methods to learn from natural language and logical form pairs. Finally, it summarized the recent advances in weakly supervised semantic parsing or learning semantic parsing from denotations and the rise of seq2seq models. It also briefly compared different learning strategies (MML, RL, Max-Margin). \n\nThe paper is well written and easy to follow. The survey covers most of the important works in the field. It provides a good summary of the development of the field as well as the most recent advances. I support the acceptance of this paper. \n\nSome minor comments: \n\n\"the ATIS domain is : What states border Texas :λx.state(x)borders(x, texas).\" \nIs this example from ATIS? It seems more like a GeoQuery example. \n\nRegarding Reinforcement Learning (section 7.2), there is a recent work (Liang et al, 2018) that is quite relevant. It introduced a principled RL method for semantic parsing, and compared it with other objectives like MML. It also introduced a systematic exploration strategy to address the exploration problem mentioned in this section. Might be worth discussing here. \n\nMemory Augmented Policy Optimization for Program Synthesis and Semantic Parsing, Liang, Chen and Norouzi, Mohammad and Berant, Jonathan and Le, Quoc V and Lao, Ni, Advances in Neural Information Processing Systems, 2018\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Addressed minor change + addition of relevant papers\nWe thank you for your positive comments. We have updated the paper with your suggestions.\n\n1) \"the ATIS domain is : What states border Texas :λx.state(x)borders(x, texas).\" Is this example from ATIS? It seems more like a GeoQuery example. \n--> It is indeed an example from the GeoQuery dataset. We have updated it accordingly in the revision. Thanks for pointing it out!\n2) Regarding Reinforcement Learning (section 7.2), there is a recent work (Liang et al, 2018) that is quite relevant. \n--> We absolutely agree with you that Liang et al. 2018 is relevant and we have included it in the survey. Thanks for the suggestion. \n\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. It adequately addresses the points I raised.",
"role": "assistant"
}
]
|
HylaEWcTT7 | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HylaEWcTT7>>```",
"role": "user"
},
{
"content": "Title: Good survey paper, but no originality\nReview Content: Summary:\nThis paper conducts a thorough survey on semantic parsing, as the title suggested. This paper introduces the formal definition of the semantic parsing, categorized them, describes the development of the system from 1970s to very recent in (fairly) chronological order.\n\nQuality & clarity:\nThe survey is very thorough and self-contained, and the descriptions are all very clear and well-written.\n\nOriginality & significance:\nSince it is the survey paper, it’s hard to say it has originality. \n\nGeneral comment:\nAlthough the survey is very thorough, the paper does not have an original contribution, which conference paper should have.\n\n\n* Update\nI had a misunderstanding about the policy regarding survey paper. I agree with other reviewers that the paper is a well-written survey. Therefore I vote for the acceptance.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Motivation for writing this survey and submitting to AKBC\nWe thank you for your review and positive comments! The main contributions of this survey are to give a new researcher in the field a comprehensive overview of the various components of a semantic parsing system and to discuss prominent past and current work.\nWe believe that the field has evolved so drastically that it is important to have a comprehensive survey that gives an overview of the trends through the years and keeps track of the seminal papers at various stages. We also think that a better understanding of a few open problems will help new researchers work towards concrete research directions. This was the main motivation behind writing this comprehensive survey. \nAlso, while we understand that the score was guided by reservations regarding having a survey paper at a conference, we would like to point out that the call for papers mentioned surveys as one of the types of submissions that were being accepted. ",
"role": "user"
},
{
"content": "I acknowledge the authors' response. Most of my concerns have been addressed.",
"role": "assistant"
}
]
|
HylaEWcTT7 | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HylaEWcTT7>>```",
"role": "user"
},
{
"content": "Title: easy to follow and includes sufficient references\nReview Content: The survey paper examines the various components of semantic parsing and discusses previous work. The semantic parsing models are categorized into different types according to the supervision forms and the modeling techniques. Overall, the survey is easy to follow and includes sufficient references. The following points can be improved:\n\n* Training a semantic parser involves NL, MR, context, data, model, and learning algorithms. A summarization and examples of popular datasets are helpful.\n* Sec 3.1 Rule based systems can be expanded. The current section is too brief.\n* Sec 2.1 Language for Meaning Representation and Sec 2.2 Grammar should be merged. \n* P9: \"Machine translation techniques\" is not a method instead of `` 5. Alternate forms of supervision\"\n* Sec 8 is too brief now. More discussions on future work are welcome.\n\nminor:\n* Combinatory categorial Grammar -> Combinatory Categorial Grammar\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Addressing suggestions about including datasets and future work \nWe thank you for your review and positive comments! \n1) Yes, we agree that a table summarizing datasets would indeed be helpful. Due to space constraints, we would not have been able to do justice to the large number of datasets available for the task of semantic parsing and consequently planned to have a companion webpage for the survey having a comprehensive table covering datasets. We also plan to include examples from each of the datasets along with links to relevant resources.\n\n2) Regarding Sec 3.1, we had to omit work due to space constraints especially since we felt it would be more valuable to devote a larger portion of the space to current directions. However, we will try to include some more work in the web page.\n\n3) \"Sec 2.1- Language for Meaning Representation and Sec 2.2 Grammar should be merged.\"\nWe think both the grammar and the language for meaning representations are important and having different subsections highlights their importance. We understand that current neural approaches marginalize out the grammar and can directly generate the language (such as SQL), but there are example of recent works which show significant gain by explicitly using the grammar (e.g. Yin and Neubig, 2017 [1])\n\n4) * P9: \"Machine translation techniques\" is not a method instead of `` 5. Alternate forms of supervision\" \nWe do believe that this setting of using methods inspired by machine translation techniques is unique enough to get its own subsection. The papers described here provide supervision in a manner distinct from the others and we believe that it is worth highlighting. \n\n5) Re Sec 8: Yes we agree and have added several future directions which we think are important\n\n6) * Combinatory categorial Grammar -> Combinatory Categorial Grammar\nThanks for pointing that out! We have corrected it in the revision. \n\nReferences\n[1] A Syntactic Neural Model for General-Purpose Code Generation. Pengcheng Yin and Graham Neubig, ACL 2017\n",
"role": "user"
},
{
"content": "I appreciate the authors' feedback. It satisfactorily addresses the major issues I had.",
"role": "assistant"
}
]
|
S1xf-W5paX | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<S1xf-W5paX>>```",
"role": "user"
},
{
"content": "Title: Recent relevant work not adequately discussed or compared. \nReview Content: Paper summary: This paper presents a method of learning word embeddings for the purpose of representing hypernym relations. The learning objective is the sum of (a) a measure of the “distributional inclusion” difference vector magnitude and (b) the GloVE objective. Experiments on four benchmark datasets are mostly (but not entirely positive) versus some other methods.\n\nThe introduction emphasizes the need for a representation that \"able to encode not only the direct hypernymy relations between the hypernym and hyponym words, but also the indirect and the full hierarchical hypernym path.” There has been significant interest in recent work on representations aiming for exactly this goal, including Poincare Embeddings [Nikel and Kiela], Order Embeddings [Vendrov et al], Probabilistic Order Embeddings [Lai and Hockenmaier], Box embeddings [Vilnis et al]. It seems that there should be empirical comparisons to these methods.\n\nI found the order of presentation awkward, and sometimes hard to follow. For example, I would have liked to see a clear explanation of test-time inference before the learning objective was presented, and I’m still left wondering why there is not a closer correspondence between the multiple inference methods described (in Table 3) and the learning objective.\n\nI would also have liked to see a clear motivation for why the GloVE embedding is compatible with and beneficial for the hypernym task. “Relatedness” is different than “hypernymy.”\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: More recent relevant work have been added\n-->Q1: There has been significant interest in recent work on representations aiming for exactly this goal, including:\nPoincare Embeddings [Nikel and Kiela], \nOrder Embeddings [Vendrov et al], \nProbabilistic Order Embeddings [Lai and Hockenmaier], \nBox embeddings [Vilnis et al]. \nIt seems that there should be empirical comparisons to these methods. \n\nAns: Thank you for the suggestion. Poincare seems to be an excellent fit to be empirically compared against the proposed method, as they both share a similar spirit to explicitly learn hierarchical word embeddings rather than hypernymy-specific embeddings\nWe have now updated the paper with two more related works, Poincare[1] and LEAR[2] and empirically compare the proposed method against them.\nWe have also updated the paper with a new evaluation task (section 4.4) to test the proposed method on a graded lexical entailment as suggested by reviewer3.\nThe proposed method reports an improvement over most of the prior works, including Poincare, in the three tasks (hypernym detection, graded lexical entailment, and the hierarchical path completion), except for the LEAR in two datasets.\nMore interestingly, Poincare seems to perform well in the proposed hierarchical path completion task in contrast to the other methods apart from HWE. The fact that Poincare embeddings, a hierarchical word embeddings learn method, reports good performance on this hierarchical path completion task, suggests that this is an appropriate task for evaluating hierarchies and embeddings. \n\nPlease note that Probabilistic and Box Embeddings are relatively less related to the proposed method as they work on phrase embeddings and use pre-trained word embeddings to feed an LSTM for learning phrase embeddings.\n\n[1] Poincare Embeddings [Nikel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n\n-->Q2: I found the order of presentation awkward, and sometimes hard to follow. For example, I would have liked to see a clear explanation of test-time inference before the learning objective was presented, and I’m still left wondering why there is not a closer correspondence between the multiple inference methods described (in Table 3) and the learning objective.\n\nAns: We use the hierarchical word embeddings produced by the proposed method (HWE) in three tasks: hypernym detection (section 4.3), graded lexical entailment (section 4.4) and the hierarchical path completion (section 4.5).\nEach task has different inference methods, and that is the reason why we describe the inference methods under each section separately and not in the method for learning hierarchical word embeddings section.\nFor example, for the first task (section 4.3) we used the concatenation approach as stated in the section.\nSimilarly, for the graded lexical entailment task, we used the inference method described in section 4.4 (Eq. (6)).\nThe inference methods described in Table 3 are specific to the hierarchical path completion task.\nAmong the different inference methods compared in Table 3, ADD corresponds closely to the training objective used by the HWE learning method we propose (see Eq. (2)). This might explain why ADD turns out to be the best inference methods in Table 3.\n\n\n-->Q3: I would also have liked to see a clear motivation for why the GloVE embedding is compatible with and beneficial for the hypernym task. “Relatedness” is different than “hypernymy.”\n\nAns: All the datasets in the hypernym identification task are pairwise relation data, and it could be the case that it easier for such distributional methods to pick the hypernymy, where hypernymy tend to occur in similar context.\n Global/general Response:\n```\nWe thank all the reviewers for their valuable comments and constructive suggestions. The main concerns highlighted from the reviewers are about the evaluation part, mainly about missing an empirical comparison with some recent relevant work. We have now updated the original paper with:\n(1) More related work and discussion about their relevance to the proposed method\n(2) An empirical comparison with the suggested relevant work in all the evaluation tasks\n(3) A new evaluation task (section 4.4) on the graded lexical entailment.\n```\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It adequately addresses the points I raised.",
"role": "assistant"
}
]
|
S1xf-W5paX | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<S1xf-W5paX>>```",
"role": "user"
},
{
"content": "Title: Well motivated approach but some concerns\nReview Content: This paper proposed a joint learning method of hypernym from both raw text and supervised taxonomy data. The innovation is that the model is not only modeling hypernym pairs but also the whole taxonomy. The experiments demonstrate better or similar performance on hypernym pair detection task and much better performance on a new task called \"hierarchical path completion\". The method is good motivated and intuitive. Lots of analysis on the results are done which I liked a lot. But I have some questions for the authors.\n\n1) One major question I have is for the taxonomy evaluation part, I think there are works trying to do taxonomy evaluation by using node-level and edge-level evaluation. 'A Short Survey on Taxonomy Learning from Text Corpora:\nIssues, Resources, and Recent Advances' from NAACL 2017 did a nice summarization for this. Is there any reason why this evaluation is not applicable here?\n\n2) At the end of section 4.2, the author mentioned Retrofit, JointReps and HyperVec are using the original author prepared wordnet data. Then the supervised training data is different for different methods? Is there a more controlled experiment where all experiments are using the same training data?\n\n3) In section 4.4, there are three prediction methods are introduced including ADD, SUM, and DH. The score is calculated using cosine similarity. But the loss function used in the model is by minimizing the L2 distance between word embeddings? Is there any reason why not use L2 but cosine similarity in this setting? Also, I'm assuming SUM and DH are using cosine similarity as well? It might be useful to add that bit of information.\n\n4) The motivation for this paper is to using taxonomy instead of just hypernym pairs? Another line of research trying to encode the taxonomy structure into the geometry space such that the taxonomy will be automatically captured due to the self-organized geometry space. Some papers including but not restricted 'Order-Embeddings of Images and Language', \n'Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures'. Probably this line of work is not directly comparable, but it might be useful to add to the related work session.\n\nA few minor points: \n1) In equation four of section 3, t_max appears for the first time. This equation maybe part of the GLOVE objective, but a one-sentence explanation of t_max might be needed here.\n2) at the end of section 3, the calculation of gradients for different parameters are given, but the optimization is actually performed by AdaGrad. Maybe it would be good to move these equations to the appendix.\n3) In section 4.1 experiment set up, the wordnet training data is generated by performing transitive closure I assume? How does the wordnet synsets get mapped to its surface form in order to do further training and evaluation?\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Concerns clarification\n-->Q1: One major question I have is for the taxonomy evaluation part, I think there are works trying to do taxonomy evaluation by using node-level and edge-level evaluation. 'A Short Survey on Taxonomy Learning from Text Corpora:\nIssues, Resources, and Recent Advances' from NAACL 2017 did a nice summarization for this. Is there any reason why this evaluation is not applicable here?\n\nAns: The above-mentioned paper is mainly about the recent work on taxonomy construction from free texts, which is different from what we are proposing in this paper. Our goal is not to create taxonomies but to learn word embeddings that preserve taxonomic information as vector representations. As we do not create taxonomies, we cannot evaluate the word embeddings using taxonomy evaluation methods.\n\n-->Q2: At the end of section 4.2, the author mentioned Retrofit, JointReps and HyperVec are using the original author prepared wordnet data. Then the supervised training data is different for different methods? Is there a more controlled experiment where all experiments are using the same training data?\n\nAns: The models Retrofit, JointReps, and HyperVec (and the newly added two recent relevant work (Poincare and LEAR)) work with pairwise relation data. However, the proposed HWE works on a full hierarchical hypernym path. Therefore, the models require slightly different data.\n\n-->Q3: In section 4.4, there are three prediction methods are introduced including ADD, SUM, and DH. The score is calculated using cosine similarity. But the loss function used in the model is by minimizing the L2 distance between word embeddings? Is there any reason why not use L2 but cosine similarity in this setting?\n\nAns: We have empirically tested both the L2 and cosine and found that the cosine to work better in the given experiment.\n\n-->Q4: Also, I'm assuming SUM and DH are using cosine similarity as well? It might be useful to add that bit of information. \n\nAns: Yes. This has been added in the updated version of the paper.\n\n-->Q5: The motivation for this paper is to using taxonomy instead of just hypernym pairs?\n\nAns: Yes\n\n-->Q6: Another line of research trying to encode the taxonomy structure into the geometry space such that the taxonomy will be automatically captured due to the self-organized geometry space. Some papers including but not restricted 'Order-Embeddings of Images and Language', 'Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures'. Probably this line of work is not directly comparable, but it might be useful to add to the related work session.\n\nAns: Thank you for the suggestion. We have now updated the paper with two more related works, Poincare[1] and LEAR[2] and empirically compare the proposed method against them.\nPlease note that Probabilistic and Box Embeddings are relatively less related to the proposed method as they working on phrase embeddings and used pre-trained word embeddings to feed an LSTM for learning phrase embeddings.\n\n[1] Poincare Embeddings [Nikel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n-->Q7: In equation four of section 3, t_max appears for the first time. This equation maybe part of the GLOVE objective, but a one-sentence explanation of t_max might be needed here.\n\nAns: Yes, it is the weighting function of GloVe so that it becomes relatively small for words of large frequency and set to 100 as stated in section (4.1).\n\n-->Q8: at the end of section 3, the calculation of gradients for different parameters are given, but the optimization is actually performed by AdaGrad. Maybe it would be good to move these equations to the appendix.\n\nAns: The gradients equations have been moved to the appendix in the updated version of the paper.\n\n-->Q9: In section 4.1 experiment set up, the wordnet training data is generated by performing transitive closure I assume? How does the wordnet synsets get mapped to its surface form in order to do further training and evaluation?\n\nAns: We lemmatise the corpus and use the form given in the WordNet as the surface form.\n Global/general Response:\n```\nWe thank all the reviewers for their valuable comments and constructive suggestions. The main concerns highlighted from the reviewers are about the evaluation part, mainly about missing an empirical comparison with some recent relevant work. We have now updated the original paper with:\n(1) More related work and discussion about their relevance to the proposed method\n(2) An empirical comparison with the suggested relevant work in all the evaluation tasks\n(3) A new evaluation task (section 4.4) on the graded lexical entailment.\n```\n",
"role": "user"
},
{
"content": "I value the authors' response. They have adequately addressed the majority of my concerns.",
"role": "assistant"
}
]
|
S1xf-W5paX | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<S1xf-W5paX>>```",
"role": "user"
},
{
"content": "Title: Needs better evaluation \nReview Content: This paper presents a method to jointly learn word embeddings using co-occurrence statistics as well as by incorporating hierarchical information from semantic networks like WordNet. \n\nIn terms of novelty, this work only provides a simple extension to earlier papers [1,2] by changing the objective function to instead make the word embeddings of a hypernym pair similar but with a scaling factor that depends on the distance of the words in the hierarchy.\n\nWhile the method seems to learn some amount of semantic properties, most of the baselines reported seem either outdated or ill fitted to the task and do not serve well to evaluate the value of the proposed method for the given task. \nFor example the JointRep baseline is based on a semantic similarity task which primarily learns word embeddings based on synonym relations and seems to not be an appropriate baseline to compare the current approach to.\nFurther, there are two primary methods of incorporating semantic knowledge into word embeddings - by incorporating them during the training procedure or by post processing the vectors to include this knowledge. While I understand that this method falls into the first category, it is still important and essential to compare to both types of strategies of word vector specialization. In this regard [3] has been shown to beat HyperVec and other methods on hypernym detection and directionality benchmarks and should be included in the results. It would be also interesting to see how the current approach fares on graded hypernym benchmarks such as Hyperlex. \n\nMinor comments : Section 4.2 there is a word extending out of the column boundaries. \n\n\n[1] Alsuhaibani, Mohammed, et al. \"Jointly learning word embeddings using a corpus and a knowledge base.\" PloS one (2018)\n[2] Bollegala, Danushka, et al. \"Joint Word Representation Learning Using a Corpus and a Semantic Lexicon.\" AAAI. 2016.\n[3] Vulić, Ivan, and Nikola Mrkšić. \"Specialising Word Vectors for Lexical Entailment.\" NAACL-HLT 2018.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: More evaluation task has been added.\n\n-->Q1: While the method seems to learn some amount of semantic properties, most of the baselines reported seem either outdated or ill fitted to the task and do not serve well to evaluate the value of the proposed method for the given task.\n\nAns: We have now updated the paper with two more recent relative work, Poincare[1] and LEAR[2], and empirically compare the proposed method against them in all the three evaluation tasks.\n\n[1] Poincare Embeddings [Nikel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n-->Q2: For example the JointRep baseline is based on a semantic similarity task which primarily learns word embeddings based on synonym relations and seems to not be an appropriate baseline to compare the current approach to.\n\nAns: The JointReps method used different semantic relations (synonyms, hypernyms, hyponyms… etc), however, here we used the hypernym relation when we train the JointReps to compare the proposed method with it in all the evaluation tasks.\n\n-->Q3: Further, there are two primary methods of incorporating semantic knowledge into word embeddings - by incorporating them during the training procedure or by post processing the vectors to include this knowledge. While I understand that this method falls into the first category, it is still important and essential to compare to both types of strategies of word vector specialization.\n\nAns: The evaluation proposed in the paper does, in fact, compare against both type of categories. JointReps and HyperVec fall into the first category, whereas the Retrofit method and the newly added (in the updated version of the paper) LEAR method falls into the second. In all of these methods, we used the hypernym relations to incorporate the semantic knowledge into the learnt embeddings.\n\n-->Q4: In this regard [3] has been shown to beat HyperVec and other methods on hypernym detection and directionality benchmarks and should be included in the results. \n\nAns: Thank you for the suggestion. We have now added LEAR to the updated version of the paper and empirically compare the proposed method against in all the three evaluation tasks.\nThe proposed method reports better or comparable results to LEAR in two of the main evaluation tasks (hypernym detection and hierarchical completion).\n\n-->Q5: It would be also interesting to see how the current approach fares on graded hypernym benchmarks such as Hyperlex.\n\nAns: Thank you for the suggestion. We have added a new sub-section (section 4.4) in the updated version of the paper with a new evaluation task on the graded lexical entailment prediction using HyperLex.\n\n-->Q6: Section 4.2 there is a word extending out of the column boundaries. \n\nAns: Thank you. This has been modified in the updated version of the paper.\n Global/general Response:\n```\nWe thank all the reviewers for their valuable comments and constructive suggestions. The main concerns highlighted from the reviewers are about the evaluation part, mainly about missing an empirical comparison with some recent relevant work. We have now updated the original paper with:\n(1) More related work and discussion about their relevance to the proposed method\n(2) An empirical comparison with the suggested relevant work in all the evaluation tasks\n(3) A new evaluation task (section 4.4) on the graded lexical entailment.\n```\n",
"role": "user"
},
{
"content": "Thanks to the authors for their clarification. The response addresses most of my concerns.",
"role": "assistant"
}
]
|
Sye0lZqp6Q | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<Sye0lZqp6Q>>```",
"role": "user"
},
{
"content": "Title: Review of Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction\nReview Content: The paper presents an approach to cross-sentence relation extraction that combines LSTMs and convolutional neural network layers with word and position features. Overall the choices made seem reasonable, and the paper includes some interesting analysis / variations (e.g., showing that an LSTM layer followed by a CNN is a better choice than the other way around).\n\nEvaluation is performed on two datasets, Quirk and Poon (2016) and a chemical induced disease dataset. The paper compares a number of model variations, but there don't appear to be any comparisons to state-of-the-art results on these datasets. The paper could benefit from comparisons to SOTA on these or other datasets.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Comparison with state-of-the-art results\nWe thank the reviewer for the comments.\n\nThe paper, in addition to evaluation of number of model variations, provides a comprehensive evaluation of comparing the proposed model against state-of-the-art results on both datasets. The comparison of results for Quirk and Poon Dataset is provided in Table 4 with the explanation provided in Section 4.4.6. Similarly, the comparison of results for Chemcial-induced dataset is provided in Table 5 and the explanation is provided in Section 4.4.6. As explained in Section 4.4.6, the proposed model clearly outperforms the state-of-the-art results on both datasets.\n\nFurther, since this research specifically considers n-ary relation extraction, these two datasets were considered which involves binary and ternary relation instances. This is the reason we do not perform evaluation on standard intra-sentence relation extraction datasets such as Semeval 2010 Task 8 dataset. Further, this is inline with previous work (Peng et al., 2016) who do not consider evaluation on Semeval 2010 Task 8 relation extraction dataset.",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. They have adequately addressed the majority of my concerns.",
"role": "assistant"
}
]
|
Sye0lZqp6Q | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<Sye0lZqp6Q>>```",
"role": "user"
},
{
"content": "Title: Nice experimental study\nReview Content: The paper addresses cross-sentence n-ary relation extraction. The authors propose a model consisting of an LSTM layer followed by an CNN layer and show that it outperforms other model choices. The experiments are sound and complete and the presented results look convincing. The paper is well written and easy to follow. \nIn total, it presents a nice experimental study.\n\nSome unclear issues / questions for the authors:\n- \"The use of multiple filters facilitates selection of the most important feature for each feature map\": What do you mean with this sentence? Don't you get another feature map for each filter? Isn't the use of multiple filters rather to capture different semantics within the sentence?\n- \"The task of predicting n-ary relations is modeled both as a binary and multi-class classification problem\": How do you do that? Are there different softmax layers? And if yes, how do you decide which one to use?\n- Table 1/2: How can you draw conclusions about the performance on binary and ternary relations from these tables? I can only see the distinction of single sentence and cross sentence there.\n- Table 3: The numbers for short distance spans (mostly 20.0) look suspicious to me. What is the frequency of short/medium/long distance spans in the datasets? Are they big enough to be able to draw any conclusions from them?\n- You say that CNN_LSTM does not work because after applying the CNN all sequential information is lost. But how can you apply an LSTM afterwards then? Is there any recurrence at all? (The sequential information would not be lost after the CNN if you didn't apply pooling. Have you tried that?)\n- Your observation that more than two positional embeddings decrease the performance is interesting (and unexpected). Do you have any insights on this? Does the model pay attention at all to the second of three entities? What would happen if you simply deleted this entity or even some context around this entity (i.e., perform an adversarial attack on your model)?\n\nOther things that should be improved:\n- sometimes words are in the margin\n- there are some typos, e.g., \"mpdel\", \"the dimensions ... was set... and were initialised\", \"it interesting\"\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Clarification for some unclear issues\nWe thank the reviewer for the comments.\n\nQ1. - \"The use of multiple filters facilitates selection of the most important feature for each feature map\": What do you mean with this sentence? Don't you get another feature map for each filter? Isn't the use of multiple filters rather to capture different semantics within the sentence?\nAns: It is true that the use of multiple features facilitates capturing different semantics within the sentence. We have corrected this section on max-pooling in the paper.\n\nQ2. \"The task of predicting n-ary relations is modeled both as a binary and multi-class classification problem\": How do you do that? Are there different softmax layers? And if yes, how do you decide which one to use?\nAns: The task of predicting n-ary relations as a binary and multi-classification problem is specific to the datasets that are used in the paper. While using the Peng et al. (2016) dataset, we have a multi-class classification problem. However, when we are working with Chemcial-induced dataset, we have a binary classification problem, to predict whether their exists a binary relation between the entities both in single sentence and across sentences. Therefore, when working with Peng et al dataset, we use softmax function with categorical cross entropy loss to output probability over the five output classes (resistance; resistance or no-response; response; sensitivity; and none) and employ softmax layer with binary cross-entropy loss to predict probability over two classes.\n\nQ3: Table 1/2: How can you draw conclusions about the performance on binary and ternary relations from these tables? I can only see the distinction of single sentence and cross sentence there.\nAns: As indicated in the caption for Tables 1 and 2, while Table 1 specifically deals with ternary relations involving ternary relations between drug-gene-mutation, Table 2 deals with binary relations involving drug-gene entities. However, these binary and ternary relations exists both in single sentences and across sentences. Thus the distinction between single and across sentences is made. Given this aspect, the conclusions are drawn for the performance of binary and ternary relations in single sentences and across sentences.\n\nQ4: - Table 3: The numbers for short distance spans (mostly 20.0) look suspicious to me. What is the frequency of short/medium/long distance spans in the datasets? Are they big enough to be able to draw any conclusions from them?\nAns: The reviewer is right in noting that there are very few instances for short distance spans. This is also the reason why the performance of different models are lower for short distance spans, compared to medium and long distance spans.\n\nQ5: - You say that CNN_LSTM does not work because after applying the CNN all sequential information is lost. But how can you apply an LSTM afterwards then? Is there any recurrence at all? (The sequential information would not be lost after the CNN if you didn't apply pooling. Have you tried that?)\nAns: We had conducted experiments removing the max-pooling layer and passing the features to an LSTM layer. However, this did not help in improving the performance.\n\nQ6: Your observation that more than two positional embeddings decrease the performance is interesting (and unexpected). Do you have any insights on this? Does the model pay attention at all to the second of three entities? What would happen if you simply deleted this entity or even some context around this entity (i.e., perform an adversarial attack on your model)?\nAns: Given the experimental results, we observe that adding positional embeddings for more than two entities results in a decrease in the performance. We have not conducted further experiments such as removing the second entity or context around the second entity. Removing the second entity or the context around it would certainly result in a poor performance as we would be disturbing the sequential information. However, it is worthwhile to conduct such experiments and intend to do it in the future.\n\nQ7: Typos\nAns: Thanks for identifying the typos. The typos are corrected.",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. It satisfactorily addresses most of my concerns.",
"role": "assistant"
}
]
|
Sye0lZqp6Q | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<Sye0lZqp6Q>>```",
"role": "user"
},
{
"content": "Title: Good paper with small modeling improvements but thorough evaluations\nReview Content: The paper presents a method for n-ary cross sentence relation extraction.\nGiven a list of entities, and a list of sentences the task is to identify which relation (from a predefined list) is described between the entities in the given sentences.\nThe proposed model stacks CNN on LSTM to get long range dependencies in the text, and shows to be effective, either beating or equalling the state-of-the-art on two datasets for the task.\n\nOverall, I enjoyed reading the paper, and would like to see it appear in the conference.\nWhile the proposed model is not very novel, and was shown effective on other tasks such as text classification or sentiment analysis, this is the first time it was applied for this specific task.\nIn addition, I appreciate the additional evaluations, which ablate the different parts of the model, analyze its performance by length between entities, compare it with many variations as baselines and against state-of-the-art for the task.\n\nMy main comments are mostly in terms of presentation - see below.\n\n\nDetailed comments:\n\nIn general, I think that wording can be tighter, and some repetitive information can be omitted. For example, Section 4 could be condensed to highlight the main findings, instead of splitting them across subsections.\nI think that Section 3.1.2 (“Position Features”) would benefit from an example showing an input encoding.\nTable 5 shows up in the references. \n\nMinor comments and typos:\n\nText on P. 9 overflows the page margins.\nI think that Table 3 would be a little easier to read if the best performance in each column were highlighted in some manner.\nSection 2, p. 3: “mpdel” -> model.\nPerhaps using “Figure 1” instead of “Listing 1” is more consistent with *ACL-like papers?\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Answers to comments\nWe thank the reviewer for the comments. The following is our response.\n\nQ1: In general, I think that wording can be tighter, and some repetitive information can be omitted. For example, Section 4 could be condensed to highlight the main findings, instead of splitting them across subsections.\nAns: Since we wanted to discuss various aspects of the model, we have used different sub-sections. The title of the sub-section indicates the key aspect of the model, that we wanted to highlight and discuss. We feel that condensing it into one section, can result in difficulty in reading the paper.\n\nQ2: I think that Section 3.1.2 (“Position Features”) would benefit from an example showing an input encoding.\nAns: We have updated Section 3.1.2 to provide an example of position embedding.\n\nQ3: Table 5 shows up in the references. \nAns: Corrected. Table 5 is moved to the next page\n\nQ4: Typos and margins\nAns: The typos and margins are corrected.\n\n",
"role": "user"
},
{
"content": "Thanks to the authors for their thoughtful response. It covers most of my concerns.",
"role": "assistant"
}
]
|
SygLHbcapm | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<SygLHbcapm>>```",
"role": "user"
},
{
"content": "Title: lack of details, few novelty\nReview Content: This paper presents a hierarchical framework for integrating user feedback for KB construction under identity uncertainty.\n\n1. it is unclear about the algorithm implementation, such as what is the implementation of feedback mention.\n\n2. There is a definition about attribute map, but I can't find where the model uses it. Same thing for the precision of a node pair.\n\n3. How to calculate the function g(.) is unclear either. \n\n4. In section 5.2, the COMPLETE definition seems not correct, it is still the definition of PURE.\n\n5. The example for constructing positve/negative feedback is too vague.\n\n6. The experiment section needs more analysis including qualitative result.\nRating: 3 reject, not good enough\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Details are included in the paper\nThank you for your review. \nThe details you mention do appear in our paper, but perhaps they can be further clarified. We address each of your criticisms below:\n\n1. The implementation of feedback mentions is described in the third paragraph of section 2.3: “FMs are composed of two attribute maps” the first called “packaging” and the second called “payload”. Immediately after this description, we provide intuition for why both attribute maps are needed. In section 3.1.1, we describe precisely how both of these attribute maps are used during canonicalization.\n\n2. The model, g, computes the linkage score between two nodes using their attribute maps. This is described in the second paragraph in 3.1.1: “Since g scores the compatibility of two nodes based on their attribute maps...”. Perhaps we can further clarify this in the paper by including some text describing the linkage function g more precisely. \n\nThe precision of a node pair is used during training. In the first paragraph of section 4: “We train g to regress to the precision…” Thus, during learning we run agglomerative clustering and, at each merge, train g to predict the precision of the corresponding node pair.\n\n3. The function g is computed from the attribute maps of the corresponding nodes (which differ by dataset). It is true that we omit the specific features that we use, for example, whether two nodes have the same title with a corresponding positive count in their respective attribute maps, or whether one node contains an attribute A with a positive count and the second node contains the attribute A with a negative count. We are open to including these features in a supplement.\n\n4. We agree that the definition for complete could be revised to be clearer. We will modify the definition as follows:\nA node v is complete if for some $i$,\n\\lvs{e^\\star_i} \\subseteq lvs{v}.\nThus, if for some $i$, the leaves of $e^\\star_i$ are a subset of the leaves of $v$, then $v$ is complete. We note that the definition for complete is described in words immediately after the mathematical statement in the paper and is different from the definition of pure.\n\n5. What part of the detailed/concise feedback needs further explanation? As described in 5.2.1 and 5.2.2 and as detailed in the figure, the simulation procedure finds a “destination” and “target” for each edit. The simulated edit contains a packaging that includes all of the positive attributes at the destination and a payload containing the attributes at the target (either with positive or negative weight, as described in the paper). We’re happy to describe the feedback generation process in more detail.\n\n6. We believe that qualitative results would not be very illuminating, but we’re open to including some. Would you find informative an example of an edit added to a tree and the subsequent modification of inferred entities?\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. The response addresses most of my concerns.",
"role": "assistant"
}
]
|
SygLHbcapm | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<SygLHbcapm>>```",
"role": "user"
},
{
"content": "Title: A nice algorithmic contribution for integrating user feedback for KBC\nReview Content: This paper introduces a method to integrate user feedback into KBs in the presence of identity uncertainty, a problem that arises in the integration of new data in knowledge base construction. The proposed method represents user feedback as feedback mentions and uses an online algorithm for integrating these mentions into the KB.\n\nThe paper targets an important problem in knowledge base construction, i.e., integrating user feedback in the online setting. The proposed hierarchical model looks reasonable and effective. And overall, the work is well presented. \n\nThe paper makes an algorithmic contribution. The contribution is, however, limited from the perspective of human computation. The experiment uses simulated user feedback for evaluating the method. In real-world settings, user feedback can be skewed to certain types (e.g., negative feedback) or be noisy (so the feedback is not reliable). How would these affect the result?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Experiment with mixed detailed and concise feedback\nThank you for your positive review! \nWe would be happy to expand on our experimental results. While it is not possible for us to carry out a study with real human feedback (which requires IRB approval, etc.) we can perform experiments with a mixture of detailed and concise feedback and with different proportions of positive and negative feedback. Would this address your criticism (at least in part)?\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. Most of my concerns have been addressed.",
"role": "assistant"
}
]
|
SylxCx5pTQ | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<SylxCx5pTQ>>```",
"role": "user"
},
{
"content": "Title: MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts\nReview Content: The paper “MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts” details the construction of a manually annotated dataset covering biomedical concepts. The novelty of this resource is its size in terms of abstracts and linked mentions as well as the size of the ontology applied (UMLS). \nThe manuscript is clearly written and easy to follow. Although other resources of this type already exist, the authors create a larger dataset covered by a larger ontology. Thus, allowing for the recognition of multiple medical entities at a greater scale than previously created datasets (e.g. CRAFT).\nDespite the clarity, this manuscript can improve the following:\nSection 2.3 – How many annotators were used?\nSection 2.4, point 2 - The process used to determine biomedical relevance is not detailed. Section 4.1 - No reason is given for the choice of TaggerOne. In addition, other datasets could have been tested with TaggerOne for comparison with the MedMentions ST21pv results.\nMisspelling and errors in section 2.3: “Rreviewers”, “IN MEDMENTIONS”\nOverall, this paper fits the conference topics and provides a good contribution in the form of a large annotated biomedical resource.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Responses to AnonReviewer2\nMore detailed information on the annotation process and inter-annotator agreement is being gathered and will be published on the release site.\n\nSection 2.4 point 2 has been expanded to describe \"biomedical relevance\" as used to select semantic types.\n\nThe TaggerOne based model is now described in (expanded) section 4.2, and reason for its selection is also included. The TaggerOne paper referenced does include performance metrics on other biomedical datasets. Its demonstrated performance on recognizing biomedical entities from multiple types was our reason for using it as a baseline. Our goal at present is to simply offer baseline metrics for concept recognition models trained on MedMentions ST21pv.\n\nTypos fixed, thanks!\n Global/general Response:\n```\nWe would like to thank the reviewers for their detailed review. A revised version of the paper has been uploaded that should address most of the points raised. Various sections have been expanded in the revision, including: the introduction, to make the motivation more explicit; a clearer description of the metrics used for evaluating concept recognition models. See also responses to some specific questions below.\n```\n",
"role": "user"
},
{
"content": "I appreciate the authors' reply. It covers most of my concerns.",
"role": "assistant"
}
]
|
SylxCx5pTQ | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<SylxCx5pTQ>>```",
"role": "user"
},
{
"content": "Title: Solid biomedical entity extraction/linking dataset\nReview Content: In this paper the authors introduce MedMentions, a new dataset of biomedical abstracts (PubMed) labeled with biomedical concepts/entities. The concepts some from the broad-coverage UMLS ontology, which contains ~3 million concepts. They also annotate a subset of the data with a filtered version of UMLS more suitable for document retrieval. The authors present data splits and results using an out-of-the-box baseline model (semi-Markov model TaggerOne (Leaman and Lu, 2016)) for end-to-end biomedical entity/concept recognition and linking using MedMentions.\n\nThe paper describes the data and its curation in great detail. The explicit comparison to related corpora is great. This dataset is substantially larger (hundreds of thousands of annotated mentions vs. ones of thousands) and covers a broader range of concepts (previous works are each limited to a subset of biomedical concepts) than previous manually annotated data resources. MedMentions seems like a high-quality dataset that will accelerate important research in biomedical document retrieval and information extraction.\n\nSince one of the contributions is annotation that is supposed to help retrieval, it would be nice to include a baseline model that uses the data to do retrieval. Also, it looks like the baseline evaluation is only on the retrieval subset of the data. Why only evaluate on the subset and not the full dataset, if not doing retrieval?\n\nThis dataset appears to have been already been used in previous work (Murty et al., ACL 2018), but that work is not cited in this paper. That's fine -- I think the dataset deserves its own description paper, and the fact that the data have already been used in an ACL publication is a testament to the potential impact. But it seems like there should be some mention of that previous publication to resolve any confusion about whether it is indeed the same data.\n\nStyle/writing comments:\n- Would be helpful to include more details in the introduction, in particular about your proposed model/metrics. I'd like to know by the end of the introduction, at a high level, what type of model and metrics you're proposing.\n- replace \"~\" with a word (approximately, about, ...) in text\n- Section 2.3: capitalization typo \"IN MEDMENTIONS\"\n- Section 2.4, 3: \"Table\" should be capitalized in \"Table 6\"\n- Use \"and\" rather than \"/\" in text\n- Section 4: maybe just say \"training\" and \"development\" rather than \"Training\" and \"Dev\"\n- 4.1: Markov should be capitalized: semi-Markov\n- 4.1: reconsider use of scare quotes -- are they necessary? 'lexicons', 'Training', \"dev', 'holdout'\n- 4.1: replace \"aka\" with \"i.e.\" or something else more formal. In general this section could use cleanup.\n- 4.1: last paragraph (describing metrics, mention-level vs. document-level) is very confusing, please clarify, especially since you claim that a contribution of the paper is to propose these metrics. Is it the case that mention-level F1 is essentially entity recognition and document-level is entity linking? An example could possibly help here.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to AnonReviewer3\nThe Introduction has been expanded in the revised submission to make the motivation more explicit.\n\nIn this paper we wanted to describe the new resource we have created for training and evaluating biomedical concept recognition in scientific literature. The resource addresses 2 key needs:\n(i) Larger annotated resource, useful for training today's more complex ML models\n(ii) Broader coverage of biology and medicine concepts, by targeting UMLS.\n\nInformation Retrieval models is a research area on its own, and in this paper we just wanted to focus on concept recognition (CR). Metrics for CR models are quite standardized now, and the specific ones we use are described in a new section 4.1 added to the revised submission.\n\nCitation of the (Murty et al., ACL 2018) paper was excluded to anonymize the paper for review. It will be included in the final copy.\n\nThanks for the detailed proof-reading. These should now be fixed in the uploaded revision.\n Global/general Response:\n```\nWe would like to thank the reviewers for their detailed review. A revised version of the paper has been uploaded that should address most of the points raised. Various sections have been expanded in the revision, including: the introduction, to make the motivation more explicit; a clearer description of the metrics used for evaluating concept recognition models. See also responses to some specific questions below.\n```\n",
"role": "user"
},
{
"content": "Thanks for the response provided by the authors. It resolves the majority of my concerns.",
"role": "assistant"
}
]
|
SylxCx5pTQ | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<SylxCx5pTQ>>```",
"role": "user"
},
{
"content": "Title: Useful resource, but claims could be better supported, and the uniqueness of the resource better argued\nReview Content: The paper describes a new resource \"Med Mentions\" for entity linking of Pubmed abstracts, where entities are concepts in the UMLS type hierarchy -- for example \"Medical Device\".\n\nThe annotations were manually verified. (I assume the intention is to use this as a benchmark, but the paper does not say)\n\nThe paper is very rigorous in describing which concepts were considered, and which were pruned. Authors suggest to combine it with \"TaggerOne\" to obtain end-to-end entity recognition and linking system.\n\nIt is a little bit unclear what the main contribution of this paper is. Is it a benchmark for method development and evaluation (the paper mentions the train/dev/test split twice)? or do the authors propose a new system based on this benchmark?, or was the intent to test a range of baselines on this corpus (and what is the purpose?) -- I believe this lack of clarity could be easily addressed with a change in structure of headings. (Headings are currently not helping the reader, a more traditional paper outline would be helpful.)\n\nI appreciate that the paper lists a range of related benchmarks. However, I am missing a discussion of: where the advantage of MedMentions is in contrast to these benchmarks? What is MedMentions offering that none of the other benchmarks couldn't?\n\n\nIt is indisputable that a new resource provides value to the community, and therefore should be disseminated. However, the paper quality is more reminiscent of a technical report. A lot of space is dedicated to supplemental information (e.g. page 6) which would be better spent on a clear argumentation and motivation of the steps taken.\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to AnonReviewer1\nThe introduction has been updated to state that a new annotated gold standard resource ('benchmark') is being introduced.\n\nThe main contribution in this paper is the new 'benchmark' - a manually annotated resource, for training and evaluating biomedical concept recognition systems. This new resource addresses 2 key needs:\n(i) Larger annotated resource, useful for training today's more complex ML models\n(ii) Broader coverage of biology and medicine concepts, by targeting UMLS.\n\nCRAFT is the closest, in size and coverage of biology. MedMentions can be viewed as a supplement -- the sizes of the ontologies is so much greater than the available annotated corpora that for ML models more data will always be useful. MedMentions has the added benefit of better coverage of concepts from some biomedical disciplines, e.g. diseases and drugs. This comes from the use of UMLS as the target ontology.\n\nTaggerOne is an established model for biomedical concept recognition and is offered here as a baseline concept recognition model, that researchers developing new models may compare their results against.\n\nSection 3 on related annotated corpora has been expanded slightly. Main differences are the size of the MedMentions benchmark corpus, and through the use of UMLS as the target ontology, more comprehensive coverage of biomedical concepts.\n Global/general Response:\n```\nWe would like to thank the reviewers for their detailed review. A revised version of the paper has been uploaded that should address most of the points raised. Various sections have been expanded in the revision, including: the introduction, to make the motivation more explicit; a clearer description of the metrics used for evaluating concept recognition models. See also responses to some specific questions below.\n```\n",
"role": "user"
},
{
"content": "Thanks for the authors' clarification. It satisfactorily addresses most of my concerns.",
"role": "assistant"
}
]
|
r1e3WW5aTX | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<r1e3WW5aTX>>```",
"role": "user"
},
{
"content": "Title: Novel solution and comprehensive experimentation\nReview Content: This paper proposes a novel solution to the relation composition problem when you already have pre trained word/entity embeddings and are interested only in learning to compose the relation representations . The proposed supervised relational composition method learns a neural network to classify the relation between any given pair of words and as a by-product, the penultimate layer of this neural network, as observed by the authors, can be directly used for relation representation.\nThe experiments have been performed on the BATS dataset and the DiffVec Dataset.\nThe inferences that were made to advocate for the usefulness of proposed MnnPL are as follows-\n- Out_of_domain relation prediction experiment to test for generalisability showed that MnnPL outperformed other baselines at this task\n- The interesting analysis in table 3 highlights the difficulty in representing lexicographic relations as compared to encyclopedic. MnnPL outperforms others here too. The authors provide a reasonable explanation (Figure 2) as to why the PairDiff operator that was proposed to work well on the Google analogy dataset works worse in this scenario.\n- The experiment to measure the degree of relational similarity using Pearson correlation coefficient, showcases that the relational embeddings from MnnPL are better correlated with human notion of relational similarity between word pairs\n- They also showed that MnnPl is less biased to attributional similarity between words\nThe authors show that the proposed MnnPL had outperformed other baselines on several experiments.\n\nSome of the positive aspects about the paper\n- elaborately highlighted all the implementations details in a crisp manner\n- Extensive experimentation done and a very due-diligent evaluation protocol.\n- In the experiments the authors have compared their proposed supervised operator extensively with other unsupervised operators like PairDiff, CONCAT, e.t.c and some supervised operators like SUPER_CONCAT, SUPER_DIFF e.t.c. They also compared against the bilinear operator proposed by Hakami et al., 2018, which was published very recently.\nSome of the limitations of the work\n- Though extensive experiments have been done and elaborate evaluation protocols have been followed to advocate for MnnPL, I believe that it lacked slightly on novelty side.\n- Reference to Table 2 on page 12 should actually be a reference to figure 2\n\nQuestions for rebuttal:-\n- Some reasoning on why does the CONCAT baseline show a better Pearson correlation coefficient than PairDIff?\n- Interesting to see that CBOW performed better than others, especially GLOVE, on all experiments. Some analysis on this.\n- Break down of the performance for the two semantic relation types, could be shown on a few other datasets to strengthen claim.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 1\nWe want to thank the reviewer for the time and review. \nQuestions for rebuttal: \n1-- CONCAT is a general operator that maps a pair to a higher dimensional space that considers the features of the two words in the word-pairs. We empirically observe that CONCAT operator for relations is significantly affected by the pairwise similarities of the corresponding words a and c, b and d for (a, b) and (c, d) word-pairs [1]. For relation prediction task in our work, we alleviate the effect of getting high relational similarity scores due to attributional similarities by excluding the 25% nearest pairs for each stem using cross-pair attributional similarity scores given by Equation 6. However, in measuring the degree of relational similarities and compare them with the human scores we did not consider such pre-processing step because of the nature of this evaluation is different than the relation prediction. In Chen dataset, there are many word-pairs that are assigned higher relational scores (human scores) than others because of the similarities between the corresponding entities in pairs. For example, (animal, pig) and (insect, ant) got 6.33 compared to (animal, pig) and (song, opera) that scored 4.3. It was also consistent with a closely related task measuring degrees of prototypicality in a recent work by Jameel, et al (2018) wherein PairDiff and Concat report 0.173 and 0.167 Spearman correlations, respectively [2]. \n\n2--\tFirst we want to note that this result is consistent with a recent comparative study by Hakami and Bollegala (2017). In [3] it has been shown that CBOW embeddings with unsupervised PairDiff operator perform the best for relation representations compared to various embedding types among many relational datasets. Similarly, Levy et al. (2015) show that CBOW (and SG) are better than GloVe in most cases for Google and MSR relational dataset using 3CosAdd and 3CosMult pair-based operators for answering analogical questions [4]. \n-\tThe GloVe objective for learning word embeddings aims to reduce the difference between the dot product of the vectors of the two words and the log of their co-occurrence. Thus, GloVe model is trained on the word-context co-occurrence matrix instead of the text corpus. According to this, GloVe is close to traditional word representation methods such as LSA that factorizing a word-context co-occurrence matrix. On the other hand, the CBOW objective seeks to predict a target word from its surrounding context. This difference in the objective between GloVe and CBOW might make sense for the superiority of CBOW prediction word embedding to capture informative information to induce relations between words. \n-\tThis work was not aimed to compare the embedding types, instead, we wanted to show that the proposed MnnPL method for representing relations is performing well across various type of word embeddings. Despite this, we train all the models on the same corpus to make a fair comparison to some extent. It is also known that hyperparameters tuning and pre-processing (or post-processing) are affecting the performance of such word embedding models. We also carry out our experiments using the 300 dimensional pre-trained publicly available GloVe and CBOW embeddings and it turns out that again CBOW outperforms GloVe for our tasks. \n\n3--\tIn BATS dataset, the semantic relations are classified to encyclopaedic and lexicographic where we show the performance on each relation type. On the other hand, as shown in Appendix A (table 5), all the semantic relations in DiffVec are lexicographic. \n\nReferences:\n[1] Turney, Peter D. \"Domain and function: A dual-space model of semantic relations and compositions.\" Journal of Artificial Intelligence Research 44 (2012): 533-585.\n[2] Jameel, Shoaib, Zied Bouraoui, and Steven Schockaert. \"Unsupervised Learning of Distributional Relation Vectors.\" Proceedings of ACL, Melbourne, Australia (2018).\n[3] Hakami, Huda, and Danushka Bollegala. \"Compositional approaches for representing relations between words: A comparative study.\" Knowledge-Based Systems 136 (2017): 172-182.\n[4] Levy, Omer, Yoav Goldberg, and Ido Dagan. \"Improving distributional similarity with lessons learned from word embeddings.\" Transactions of the Association for Computational Linguistics 3 (2015): 211-225.\n\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. They have addressed most of the issues I raised.",
"role": "assistant"
}
]
|
r1e3WW5aTX | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<r1e3WW5aTX>>```",
"role": "user"
},
{
"content": "Title: Elegant Method, Neat Result, Manuscript Needs Major Reorganization \nReview Content: This paper presents a simple model for representing lexical relations as vectors given pre-trained word embeddings.\n\nAlthough the paper only evaluates on out-of-context lexical benchmarks, as do several other papers in lexical-semantics, the empirical results are very encouraging. The proposed method achieves a substantial gain over existing supervised and unsupervised methods of the same family, i.e. that rely only on pre-trained word embeddings as inputs.\n\nIn my view, the main innovation behind this method is in the novel loss function. Rather than using just a single word pair and training it to predict the relation label, the authors propose using *pairs of word pairs* as the instance, and predicting whether both pairs are of the same relation or not. This creates a quadratic amount of examples, and also decouples the model from any schema of pre-defined relations; the model is basically forced to learn a general notion of similarity between relation vectors. I think it is a shame that the loss function is described as a bit of an afterthought in Section 4.4. I urge the authors to lead with a clear and well-motivated description of the loss function in Section 3, and highlight it as the main modeling contribution.\n\nI think it would greatly strengthen the paper to go beyond the lexical benchmarks and show whether the learned relation vectors can help in downstream tasks, such as QA/NLI, as done in pair2vec (https://arxiv.org/abs/1810.08854). This comment is true for every paper in lexical semantics, not only this one in particular.\n\nIt would also be nice to have an empirical comparison to pattern-based methods, e.g. Vered Shwartz's line of work, the recent papers by Washio and Kato from NAACL 2018 and EMNLP 2018, or the recently-proposed pair2vec by Joshi et al (although this last one was probably published at the same time that this paper was submitted). The proposed method doesn't need to be necessarily better than pattern-based methods, as long as the fundamental differences between the methods are clearly explained. I think it would still be a really exciting result to show that you can get close to pattern-based performance without pattern information.\n\nMy main concern with the current form of the paper is that it is written in an extremely convoluted and verbose manner, whereas the underlying idea is actually really simple and elegant. For example, in Section 3, there's really no reason to use so many words to describe something as standard as an MLP. I think that if the authors try to rewrite the paper with the equivalent space as an ACL short (around 6-7 pages in AKBC format), it would make the paper much more readable and to-the-point. As mentioned earlier, I strongly advise placing more emphasis on the new loss function and presenting it as the core contribution.\n\nMinor comment: Section 2.2 describes in great detail the limitation of unsupervised approaches for analogies. While this explanation is good, it does not properly credit \"Linguistic Regularities in Sparse and Explicit Word Representations\" (Levy and Goldberg, 2014) for identifying the connection between vector differences and similarity differences. For example, the term 3CosAdd was actually coined in that paper, and not in the original (Mikolov, Yih, and Zweig; 2013) paper, in order to explain the connection between adding/subtracting vectors and adding/subtracting cosine similarities. The interpretation of PairDiff as a function of word similarities (as presented in 2.2) is very natural given Levy and Goldberg's observation.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response \nWe would like to thank the reviewer for the useful comments. We will take your suggesstions about organizing the paper in the final version. ",
"role": "user"
},
{
"content": "Thank you for the authors' reply. The response covers nearly all of the issues I raised.",
"role": "assistant"
}
]
|
r1e3WW5aTX | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<r1e3WW5aTX>>```",
"role": "user"
},
{
"content": "Title: OK work, but the framing makes it sound trivial\nReview Content: \n== Summary ==\nThe paper proposes a way to embed the relation between a given pair of words (e.g., (aquarium, fish)).\n- Assume a dataset of |R| relations (e.g., hypernym, meronym, cause-effect) and many word pairs for each relation.\n- In each of the 10 test scenarios, 5 relations are randomly chosen. The test procedure is: for each pair (a, b) with embedding r(a, b), rank (c, d) based on cosine(r(a, b), r(c, d)). Evaluate how well we can retrieve pairs from the same relation as (a, b) (top-1 accuracy and mean average precision).\n- The proposed method is to train a multiclass classifier on the |R| - 5 relations. The model is a feed-forward network, and the output of the last hidden layer is used as r(a, b).\n- The method is compared with unsupervised baselines (e.g., PairDiff: subtracting pre-trained embeddings of a and b) as well as similar supervised methods trained on a different objective (margin rank loss).\n\n== Pros ==\n- The evaluation is done on relations that are not in the training data. It is not trivial that a particular relation embedding would generalize to unseen and possibly unrelated relations. The proposed method generalizes relatively well. Compare this to the proposed margin rank loss objective, which performs well on the classification task (Table 7) but is worse than PairDiff on test data.\n\n== Cons ==\n- The paper is framed in such a way that the method is trivial. My initial thought from the abstract was: \"Of course, supervised training is better than unsupervised ones\", and the introduction does not help either. The fact that the method generalizes to unseen test relations, while a different supervised method does to a lesser extent, should be emphasized earlier.\n- The reason why the embedding method generalizes well might have something to do with the loss function used rather than how the word vectors are combined (difference, concatenation, bilinear, etc.). This could be investigated more. Maybe one loss is better at controlling the sizes of the vectors, which is an issue discussed in Section 2.2.\n- The result on the DiffVec dataset is pretty low, considering that a random baseline would get an accuracy of ~20% (1 in 5 classes). The results also seem to be only a bit better than the unsupervised baseline PairDiff on the BATS dataset.\n- The writing is a bit confusing at times. For instance, on page 12, for the \"lexical-overlap\" category, it should not be possible for any two test pairs to have exactly 1-word overlap, or maybe I am missing something.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 3\nWe appreciate your effort in reviewing our work. \n== Cons ==\n1-- Thanks for your comment. We will add couple of sentences in the abstract and the introduction to emphasize the generalization ability of the proposed model. \n\n2-- The supervised baselines with different inputs (PairDiff, Concat and BiLin) that are described in Section 4.4 are all trained using the same loss: ranking loss defined in equation 5. \n-\tWe perform a direct comparison between two ways to derive relation representations: the penultimate layer of a neural network trained on relation prediction task using the softmax cross-entropy loss among relations vs direct relation representation learning models using analogous and non-analogous pairs of word-pairs considering the ranking loss. In this case, the two models are exactly same including the number of hidden layers, non-linear activation etc, with only a change in the loss. Empirically the proposed MnnPL (softmax cross entropy) outperforms the direct relation representation trained on the pairs of word-pairs relational data. \n-\tIn fact, there are many objective functions that can be used to train a supervised model for relation representations which we can evaluate them extensively in our future work. \n\n3-- In Table 2, MnnPL outperforms the unsupervised baselines for BATS but the gap is smaller than those in DiffVec. In BATS, we got two semantic relation types as shown in Table 3. Breaking down the performance of encyclopaedic and lexicographic indicates that MnnPL performs significantly better the PairDiff for lexicographic relations and slightly better for encyclopaedic ones. Discussion about the two relation types with some justification are illustrated in Page 12 (Figure 2). \n-\tAs shown in Appendix A (table 5), all the semantic relations in DiffVec are lexicographic type of relations. Even though the comparison made for BATS dataset show the difficulty of capturing lexicographic relations compared to encyclopaedic one, it is shown that MnnPL reports 48.6% accuracy with CBOW embeddings (statistically significant that the random baseline). \n\n4-- If the test word-pair is (a,b), then if there is a train word-pair (a,c), (b,c), (c,a) or (c,b) then we say there is a lexical-overlap between the test and the train datasets. For example, (‘animal, cat) and (animal, dog) has lexical-overlap because animal is a common word in the two pairs. We will add this clarification for lexical-overlap in the revised version. \n\n\n\n ",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. It resolves the majority of my concerns.",
"role": "assistant"
}
]
|
r1xP1W56pQ | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<r1xP1W56pQ>>```",
"role": "user"
},
{
"content": "Title: Compelling initial result but would benefit from additional experiments, especially direct comparisons with prior work.\nReview Content: This work applies an existing system, DiscK (Chen and Van Durme, 2017) to medical concept linking. They show that DiscK improves the mean reciprocal rank of candidate generation over several simple baselines, and leads to slightly worse but significantly more efficient linking performance in one existing downstream system, DNorm (Leaman et al., 2013). \n\nPro’s:\n-\tClear description of the task and associated challenges. \n-\tThorough explanation of the proposed system, experiments, and results. In particular, the discussion of the performance trade-off between MRR and coverage, and why it is useful that DiscK maintains robust MRR even though it under-performs several baselines w.r.t. coverage, was clear and useful. \n-\tThe downstream concept linking result is compelling: for example, the DNorm+DiscK system with k=5000 and d=1000 only considers 4% of candidates but its coverage declines by just 2% compared to considering all 100% of candidates (as in DNorm alone). \n\nCon’s/Suggestions:\n-\tSection 3/DiscK Overview: Without having read the DiscK paper in full (admittedly), I found Section 3 very hard to follow. How are the feature weights chosen? How are the feature parameters trained? What’s the difference between a feature type and a feature value? Since the DiscK work is integral to the rest of the paper, the authors should spend more time giving a high-level overview of the approach (targeted at “naïve readers”) before delving into the details.\n\n-\tSection 4: The authors note at the end of Section 4 that “While we found that many mentions could be matched to concepts by lexical features, a significant portion required non-lexical features.” It would be helpful if the authors provided a concrete example of such a case. \n\nAlso, they note that “While features from additional mention properties, such as the surrounding sentence, were tested, none provided an improvement over features build from the mention span text alone.” This merits more detailed explication: What features did they try? Since examples requiring non-lexical features were a consistent source of error, why do the authors think that non-span-based features failed to influence model performance on these examples?\n \n-\tData: It is unclear to me whether expanding the ontology to include the additional concepts (e.g. Finding) would prevent comparison with other systems for this dataset, and what is gained from this decision. In addition, it would be helpful if the authors explained what a “preferred entry” is in the context of the SNOWMED-CT ontology. \n\n-\tEvaluation & Results: \n\nThe authors note that “The concept linking systems that were evaluated in the shared task may have included a candidate generation stage, but evaluations of these stages are not provided.” However, it is unclear to me why the systems that do include a candidate generation phase could not either be re-run or re-implemented to get such results, especially since Share/CLEF is a shared task with many entries and corresponding published system descriptions. Since direct comparisons with prior work were omitted, it is hard to gauge the strength of the baselines and proposed system. \n\nIn addition, it might be useful to test the value of higher MRR in the downstream concept linking task by comparing the DiscK-based candidate generator against one of the baseline approaches with higher coverage but lower MRR. Can DNorm capitalize on a more informative ranking, or are the results similar as long as a basic level of coverage is achieved for a given k?\n\n-\tRelated Work: The authors only briefly mention entity linking work outside of the biomedical domain. Did the authors evaluate any of these approaches, especially in the context of computationally-efficient linking architectures? Also, a more detailed description of DNorm would be useful. \n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to reviewer 1\nThanks for your comments on our work. First, we can clarify several points on DiscK. The goal of DiscK is to use a simple feature set to learn a weighted similarity between a query and a set of documents. In our task, the query is a mention and the set of documents is a set of concepts from the UMLS. It is parameterized in a way that allows for retrieval in sublinear time (as reviewer #2 points out, we incorrectly said “linear” - this will be corrected to “sublinear”).\n\nFor the training procedure, the weights θ are trained using a negative sampling procedure, with the goal of learning a set of weights that will correctly predict mention-concept pairs. For each training mention, it is paired with the correct ontology entry and 50 incorrect ontology entries. The resulting weights are used to project which ontology entry is most suited to the mention feature set. This is trained using a log-linear model. \n\nThe feature set we explored that included additional mention properties (e.g. the surrounding sentence and the definition of the concept, if available) used the same approach as the mention features, except that they included the text of the sentence or definition. For example, one proposed feature was a bag-of-words approach using the surrounding sentence and the concept definition.\n\nRegarding the data, we cannot directly compare to previously reported results on this dataset for several reasons. As pointed out, we do use a specific subset of the ontology, and it isn’t clear which specific subsets other papers used to evaluate. However, the more important point is that we do not have access to the test set (as noted in Section 5), and therefore we don’t have the necessary data for a direct comparison. The DNorm system was one of the best performing systems on the task (at the release of the shared task results, see Pradhan et al 2015), so that point of comparison is available.\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It adequately addresses the points I raised.",
"role": "assistant"
}
]
|
r1xP1W56pQ | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<r1xP1W56pQ>>```",
"role": "user"
},
{
"content": "Title: Some efficiency gains, but more rigorous evaluation and motivation required\nReview Content: This paper proposes using an existing method, DiscK, to pre-filter candidate concepts for a medical concept linking system. When combined with an existing state-of-the-art linker (DNorm), the joint system is able to consider much fewer candidate concepts, albeit with some drop in accuracy. These results are potentially significant, but assessing the actual impact requires additional measurements and better explanation of the motivation.\n\nPros:\n- DiscK + DNorm achieves concept linking accuracy close to the previous state-of-the-art, while requiring DNorm to process significantly fewer candidates. For example, the system gets up to .759 MRR when considering 2000 candidates, compared to .781 MRR for the full set of ~125K candidates (Table 5).\n\nCons:\n- Evaluation of candidate generation without downstream linking (Section 6) was not compelling. Without information about how the linker might use these candidate sets, it is hard to tell whether these numbers are good or not.\n- The claims of superior computational efficiency in Section 8 are not supported quantitatively. There should be a comparison between DNorm alone and DiscK + DNorm of how many mentions or documents can be processed per second. Otherwise, it is not clear how much overhead DiscK adds compared to the cost of running DNorm on more candidates. Hopefully this issue can be easily addressed.\n- Writing was at times unclear and misleading. Most importantly, the authors write, \"DiscK allows us to efficiently retrieve candidates over a large ontology in linear time.\" In fact the point of DiscK is that it enables sublinear queries in the size of the candidate space/ontology. But perhaps the authors are referring to being linear in something else? After all, pairwise scoring models are also linear in the size of the candidate space.\n- Straightforward application of existing approaches, with no new technical contribution\n\nAdditional comments:\n- As an outsider to this medical linking task, I wanted to understand more why speed is important. How expensive is it to run DNorm on a large corpus? Are there online settings in which low latency is desired, or is this only for offline processing?\n- I would like some discussion of two other seemingly more obvious ways to improve runtime:\n (1) Learn a fast pairwise comparison model and filter using that. The number of candidates is large but not unmanageably large (125K), so it seems possible that you could still get significant wall clock speedups by doing this (especially as this is very parallelizable). Such an approach would also not have to abide by the restrictions imposed by the DiscK framework.\n (2) Speed up DNorm directly. Based on the original paper (Leaman et al., 2013), it seems that DNorm is slow primarily because of the use of high-dimensional tf-idf vectors. Is this correct? If so, might a simple dimensionality reduction technique, or use of lower-dimensional word vectors (e.g. from word2vec) already make DNorm much more efficient, and thus obviate the need for incorporating DiscK? Relatedly, are other published linkers as slow as DNorm, or much faster?\n- Based on table 5, in practice you would probably choose a relatively high value of K (say ~2000), to maintain near state-of-the-art accuracy. We also know that at high values of K, character 4-gram is competitive with or even better than DiscK. So, what is the runtime profile of the character 4-gram approach? Footnote 8 mentions that it is asymptotically slower, but computing these character overlaps should be very fast, so perhaps speed is not actually a big issue (especially compared to the time it takes DNorm to process the remaining K candidates). This is is related to point (1) above.\n- The authors mention that DNorm is state-of-the-art, but they don't provide context for how well other approaches do on this linking task. It would be good to know whether combining DiscK + DNorm is competitive with say, the 2nd or 3rd-best approaches.\n- The organization of this paper was strange. In particular, Section 8 had the most important results, but was put after Discussion (Section 7) and was not integrated into the actual \"Evaluation and Results\" section (Section 6).\n- Measuring MRR in Section 6 is unmotivated if you claim you're only evaluating candidate generation, and plan to re-rank all the candidate sets with another system anyways. It is still good to report these MRR numbers, so that they can be compared with the numbers in Section 8, so perhaps all of the MRR's should be reported together.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 2\nThanks for the comments. While we do not report downstream results in Section 6, we do in Section 8. We did so in this setup so that we could first compare potential retrieval methods, and then test the DiscK method downstream. We agree that reporting timed results would be ideal. However, after a significant amount of engineering effort, we were unable to directly integrate the DiscK method into DNorm, as it is an older codebase. Therefore, we run the DiscK retrieval method, save the results, and finally loading the results into DNorm. This doesn’t allow for a fair comparison on timed efficiency, so we are left with the efficiency measurements we report in the paper. In addition, thanks for correctly pointing out that instead of linear time, we should state that DiscK is sublinear time - this will be corrected in the paper.\n\nIn response to comment (1), it is true that the size of this dataset (125k concepts) could possibly allow for a pairwise model, as you suggest. However, the mentions in this dataset are only linked to a subset of one ontology, SNOMED Disorders. In real clinical settings, the size of the ontology is often much larger, as the entire ontology (SNOMED has 300K concepts) or several ontologies may be used in combination, leading to a concept set of 1 million or more. Therefore, we wanted to approach this problem in a way that will scale to very large ontologies, and a pairwise approach, while potentially feasible for this dataset, would get computationally expensive quickly for larger ontology sets. The downside of the pairwise approach extends to DNorm, which essentially performs a weighted similarity calculation between the tf-idf vector of the mention and the concept. In larger settings, this will grow very slowly. While most of the time concept linking is performed offline, there are several scenarios where online concept linking would be useful, such as a clinician seeking to process a specific clinical note.\n\nFollowing off of the above point, this is part of the reason we don’t simply make DNorm more efficient (as suggested in (2)), as it is still a pairwise method. Furthermore, we wanted to propose a general method that would allow candidate generation for any system, and not simply speed up an existing one.\n\nAs noted in the response to reviewer #2, the DNorm system was one of the best performing systems on the task (at the release of the shared task results, see Pradhan et al 2015), so that point of comparison is available. However, those results are on the test set, and aren’t directly comparable. While we don’t currently have the code for any of the other systems reported in that paper, we do have access to a later system (“Sieve-Based Entity Linking for the Biomedical Domain”, D’Souza and Ng 2015), and we can use that as a point of comparison for the final paper.",
"role": "user"
},
{
"content": "Thank you for the authors' reply. They have adequately addressed the majority of my concerns.",
"role": "assistant"
}
]
|
r1xP1W56pQ | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<r1xP1W56pQ>>```",
"role": "user"
},
{
"content": "Title: throrough study and a good reminder to think about this important step in the entity linking pipeline\nReview Content: In the area of entity linking, this paper focuses on candidate generation. The author make a valid point, for the importance of this stage: While much work focuses on disambiguation of a candidate set of entities, this only matters if the correct choice is present in the candidate set. This might not be true for simple candidate generation methods. The paper suggests a weighted combination of different candidate generating features, such as exact string match, BM25, character 4-grams or abbreviation matches. These are contrasted with two variations of the DiscK approach.\n\nThe evaluation is based on the Share/Clef eHealth task. It is very exhaustive, evaluating different properties of the candidate set. From an efficiency standpoint, I would be interested in candidate methods that contain the true candidate in a small pool set i.e., 1000 might not be affordable in a real system. I only have one doubt: The lines do not cross for MRR (Figure 2), so how is it possible that they cross for coverage (Figure 1)? Maybe standard error bars would be helpful to understand which differences are due to quantization artifacts.\n\nIt would have been useful to study the effects of the candidate set on the final result when combined with a candidate-based entity linker.\n\nI am not sure I follow the terminology of (DiscK), weighted, combined, binary -- it would help to have an exhaustive list of methods with consistent names.\n\nIf the authors find it useful, here is a related paper that also discusses the effects of the candidate method during entity linking: Dalton, Jeffrey, and Laura Dietz. \"A neighborhood relevance model for entity linking.\" Proceedings of the 10th Conference on Open Research Areas in Information Retrieval. 2013.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 3\nThanks for your comments. As you point out, the lines do not cross for MRR but do cross for coverage. This is due to the fact that while the Char 4-gram has higher coverage at high K levels, the correct candidates are still ranked very low, and have a low effect on the MRR. DiscK overall does a much better job of ranking correct concepts highly, but omits some that Char 4-gram assigns a low ranking. \n\nWe agree that the size and contents of the candidate set would have significant implications for the final linking results. We are limited in the flexibility we have to experiment with this, since the candidate set is determined by the annotators, but it is an important point. Regarding the terminology, DiscK weighted uses the parameters learned by the training procedure (discussed in Response #1), while DiscK binary only considers the presence of features (i.e. all parameters are 1) and does not use the learned weights. The combined model uses the best DiscK weighted with a model that has higher coverage at K=1000 (Char 4-gram). For each candidate, we take the higher of the two normalized scores to achieve a combined ranking. \n",
"role": "user"
},
{
"content": "Thank you for the authors' reply. It resolves the majority of my concerns.",
"role": "assistant"
}
]
|
rJgHCgc6pX | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<rJgHCgc6pX>>```",
"role": "user"
},
{
"content": "Title: Although the paper has a lot of promise, it is not suitable for publication in an academic conference in its present form, in my view. I advise the authors to review all the comments, and prepare a significantly revised version for future publication.\nReview Content: \nThe authors present Alexandria, a system for unsupervised, high-precision knowledge base construction. Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. The authors evaluate Alexandria by constructing a high precision (typically 97%+) knowledge base for people from a single seed fact. \n\nStrengths:\n\n--although it is unusual to combine and intro and related work into a single section, I enjoyed the authors' succinct statement (which I agree with) about the 'holy grail of KB construction'. Overall, I think the writing of the paper started off on a strong note.\n\n--the paper has a lot of promise in considering a unified view of KB construction. Although I am not recommending an accept, I hope the authors will continue this work and submit a revised version for future consideration (whether in the next iteration of this conference, or another)\n\nWeaknesses:\n\n--It is not necessary to be putting 'actual programs' (Figure 1). It does not serve much of a purpose and is inappropriate; the authors should either use pseudocode, or just describe it in the text. \n\n--In Web-scale fact extraction, the domain-specific insight graphs (DIG) system should also be mentioned and cited, since it is an end-to-end system that has been used for KGC in unusual domains like human trafficking. \n\n--Starting from Section 2, the whole paper starts reading a bit like a technical manual/specification document. This kind of paper is not appropriate for an academic audience; the paper should have been written in a much more conceptual way, with actual implementations/programs/code details relegated to a github repo/wiki, and with a link to the same in the paper.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to review comments for AnonReviewer2\nThank you for your helpful review. Here are some responses to your specific questions:\n1. \"It is not necessary to be putting 'actual programs' (Figure 1). It does not serve much of a purpose and is inappropriate; the authors should either use pseudocode, or just describe it in the text.\": We are using a probabilistic programming approach to construct the knowledge base. The programs here are not source code for implementation but define the probabilistic models used - in others words the core assumptions of our method. It is common practice to share programs in a probabilistic programming paper, for example: Picture: A Probabilistic Programming Language for Scene Perception (Kohli et al., 2015) or Human-level concept learning through probabilistic program induction (Lake et al., 2015) both show example programs.\n2. \"Mention and cite DIG\": Thank you for bringing this to our attention. We will certainly cite this in an updated version. DIG has several aspects. (1) The training of domain-specific feature extractors from semi-structured pages using user-specified examples and automatically generated examples for crowd-sourcing, (2) The mapping of knowledge into the DIG ontology including the preservation of provenance , (3) Entity linking using a variety of similarity algorithms on text and images, and (4) Querying and Visualization. In Alexandria both (1) and (3) are solved by running inference in the same probabilistic model, and both unstructured data and semi-structured source are supported. But image data is not currently supported. (2) and (4) are not directly addressed by the Alexandria system.\n3. \"The whole paper starts reading a bit like a technical manual/specification document.\": As stated above, these programs are not implementations but probabilistic programs defining the models/assumptions used to learn the knowledge base. They are a key contribution of this paper which is why we have included them to allow academic readers to assess and critique assumptions encoded in the system.\n",
"role": "user"
},
{
"content": "Thank you for the authors' feedback. The response covers nearly all of the issues I raised. I still support my favorable recommendation for this manuscript.",
"role": "assistant"
}
]
|
rJgHCgc6pX | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<rJgHCgc6pX>>```",
"role": "user"
},
{
"content": "Title: Learning to extract facts with little prior except a structural prior from a probabilistic program\nReview Content: This paper uses a probabilistic program describing the process by which facts describing entities can be realised in text, and a large number of web pages, to learn to perform fact extraction about people using a single seed fact.\n\nDespite the prodigious computational cost (close to a half million hours of computation to acquire a KB only about people) I found the scale at which this paper applied probalistic programming exciting. It suggests that providing a structural prior in this form, and then learning the parameters of a model with that structure is a practical technique that could be applied widely.\n\nQuestions that arose whilst reading: the most direct comparison was with a system using Markov Logic Networks, in which the structural prior takes the form of a FOL theory. A more direct comparison would have been useful - in particular, an estimate of the difficulty of expressing and equivalently powerful model, and the computational cost of trainining that model, in MLN.\n\nQuite a lot of tuning was required to make training tractable (for outlying values of tractable) - this limits the likely applicability of the technique.\n\nThe paper suggests in future work an extension to open domain fact extraction, but it is not clear how complex or tractable the require prob program would be. The one in the paper is in some respects (types mainly) specific to the facts-about-people setting.\n\nIt is unclear why theTAC-KBP Slot Filling track was mentioned, given that performance on this track does not seem to have been evaluated. An informal evaluation suggesting beyond SoA performance is mentioned, but not useful details given. This significantly weakens what otherwise could be a stand-out paper\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to review comments for AnonReviewer3\nThank you for your helpful review. Here are some responses to your specific questions:\n1. Comparison with MLN: Probabilistic programs are substantially richer than MLNs - and indeed can express MLNs as a special case. It is unclear if an MLN can be used to express our model - we suspect not, since our model can, for example, reason about the cardinality of sets of values which cannot be achieved in a compact form with FOI. Expressing an equivalently powerful model would be a substantial research exercise in itself!\n2. \"Quite a lot of tuning was required\": Could you clarify what you mean by tuning? Are you referring to the approximate inference assumptions? If so, this is 'done'. There is no need to do this for each domain. If you are referring to computational performance, the computation is expensive due to running on the whole web, and dramatic speed-ups can be obtained by restricting to particular web domains.\n3. Extensibility to other domains: It is true that the hierarchies in the paper are people-specific, but we are developing the ability to learn these. Otherwise, the types in the paper have broad applicability, and they (along with a few others which have been added more recently) have since been used for 100's of entity types. \n4. \"It is unclear why the TAC-KBP Slot Filling track was mentioned\": We wanted to evaluate our system against the TAC-KBP Slot Filling track but were unable to do so since the track required manual evaluation. Whilst we could have trained comparable judges to judge our system there is significant subjective judgement needed and using different judges would bring any comparison into question. What is needed is automatic evaluation of such systems to allow ongoing fair comparison.\n",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The response covers nearly all of the issues I raised. I maintain my positive recommendation for this work.",
"role": "assistant"
}
]
|
rJgHCgc6pX | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<rJgHCgc6pX>>```",
"role": "user"
},
{
"content": "Title: Very interesting overview of an existing knowledge base, will inspire interesting discussions\nReview Content: \nThe paper is about a knowledge base that is constructed with a probabilistic model (Markov logic network, as implemented in Infer.NET). The system is expansive and covers many important aspects, such as data types and values, templates for extraction and generation, noise models. The model addresses three kinds of large scale inferences: (1) template learning, (2) schema learning, and (3) fact retrieval. The knowledge base construction approach is evaluated against other knowledge base approaches, YAGO2, NELL, Knowledge Vault, and DeepDive.\n\nThe paper is a very interesting overview over the knowledge base Alexandia, that would inspire interesting discussions at the AKBC conference. \n\nMy only pet peeve are a set of initial claims, intended to be distinguishing this approach from others, which are simply not true:\n- \"Alexandia's key differences are its unsupervised approach\" -- clearly Alexandria requires prior knowledge on types, and furthermore weakly supervision for template learning etc. Unsupervised means \"not require any clue about what is to be predicted\". \n- \"KnowledgeVault cannot discover new entities\" -- I'd be surprised.\n- \"DeepDive uses hand-constructed feature extractors\" -- It is a matter of preferences whether one seeds the extraction with patterns or data. This statement does not convince me.\nWhile the discussion of these differences is important, I suggest the authors use more professional language.\n\n\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to review comments for AnonReviewer1\nThank you for your helpful review. Here are some responses to specific questions:\n\nKnowledgeVault:\nThank you for bringing this to our attention. We will modify our statement in the updated paper.\n\nDeepDive. \"It is a matter of preferences whether one seeds the extraction with patterns or data.\":\nThis is true. The point we are trying to make here is that we are starting with a single well-known fact with no manual effort involved. \n\n\"Alexandria's key differences are its unsupervised approach\": \n\"Unsupervised learning is a branch of machine learning that learns from test data that has not been labelled, classified or categorized\" (Wikipedia). That is the case in the experiment reported in this paper except for the provision of a single data point. I guess the point you're making is that the knowledge inherent in automatically setting sensible priors on properties (for example heights of people) requires a set of values for the property. But in the paper experiment, such priors are learnt by type inference on extracted values as part of schema learning, and so, again, do not need labelled data. The types themselves are general and not domain or property or data specific. Hierarchies are hand-built in this experiment and therefore should be considered domain-specific prior information, but we have research addressing hierarchy learning which would, again, be fed by extracted values. ",
"role": "user"
},
{
"content": "I appreciate the authors' reply. Most of my concerns have been addressed. I remain supportive of my positive recommendation for this paper.",
"role": "assistant"
}
]
|
rygDeZqap7 | AKBC 2019 Conference | AnonReviewer3 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<rygDeZqap7>>```",
"role": "user"
},
{
"content": "Title: An interesting paper in methodology, but I'm not entirely convinced by the experiments\nReview Content: This paper proposes an interesting combination of semi-supervised learning and ensemble learning for information extraction, with experiments conducted on a biomedical relation extraction task. The proposed method is novel (to certain degree) and intuitively plausible, but I'm not entirely convinced of its effectiveness. \n\nPros:\n\n- Novel combination of semi-supervised learning and ensemble learning for information extraction\n- Good discussion of literature\n- Paper is well written and easy to follow\n\nCons:\n\n- Experimental design and results are not supportive enough of the effectiveness of the proposed method.\n\nSpecifically, I think the following problems undermine the claims:\n\n- The experiments are conducted on a single dataset. I understand it may be hard to get more biomedical RE datasets, but the proposed method is in principle not limited to biomedical RE (even though the authors limited their claim to biomedical RE). In general I think it needs multiple datasets to test a generic learning methodology.\n\n- Only one split of training data is used, and the unlabeled data to labeled data are close in scale. It would be really informative if there are experiments and performance curves with different labeled subsets of different sizes. Also, (after under-sampling) 4500 unlabeled vs. 2000 labeled seems less impressive and may be not very supportive of the usefulness of the proposed method. What will happen if there are only 200 labeled examples? 400?\n\n- No comparison with other semi-supervised learning methods for information extraction, so it's not clear how competent the proposed method is compared with other alternatives. \n\n- The fact that the mean base learner performance is often on par with the meta-learner, and simple majority vote of the weak labels can largely outperform the meta-learner may suggest a better meta-learner other than LSTM should have been used.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Review response & revision details\nThe authors would like to thank the reviewer for his comments. \nResponding to the reviewers request, we repeated the experiments using the very same process/methodology in a different dataset investigating a similar relationship, relevant to the biomedical domain. Experimental results demonstrate an identical behavior, and therefore all of our initial conclusions/remarks still hold. The paper has been revised, containing all of the additional results and details. \n\nThe reviewer also proposed to change the dataset proportions; decreasing the labelled set to increase the “virtually” unlabelled examples to conclude if the overall performance gain of the meta-learner would increase further. While this is an interesting direction, it moves towards few-shot learning, which is a different use case and out of the scope of this paper. This approach comes with additional challenges, mainly related to defining enough adequate base learners which can provide meaningful results in a handful of labelled examples. It is even further more challenging for our selected meta-learner (LSTM), which has shown a big deficiency when trained with only few training labels.\nThe performance of the LSTM indeed seems to be significantly lower compared to the rest of the models. It is already proven by the experiments that the LSTM itself performs lower than the average Base Learner (eg. a simple Logistic Regression). possibly due to the combination of model complexity and few training data. This is a common phenomenon when the model complexity is not on par with the amount of training set size. However, using a complex model (LSTM) compared to a simple model (Logistic Regression with bag-of words) is beneficial for the nature of our experiments, as we expect to see a much bigger performance gain when increasing the training set size. To put this into numbers, the LSTM had a performance gain of approximately 7-10 F1 points when doubling the training set size, whereas simpler algorithms gained only a few (in principle <5) F1 points. Therefore, a Logistic Regression, would not be able to make good use of larger dataset, hence the choice of LSTM. \n\nPlease also note that we are planning to release the full source code after the conference, to ensure reproducibility of the results and help the research community adapt the experiment to their own use cases.\n",
"role": "user"
},
{
"content": "Thank you for the authors' detailed response. It satisfactorily addresses most of my concerns.",
"role": "assistant"
}
]
|
rygDeZqap7 | AKBC 2019 Conference | AnonReviewer1 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<rygDeZqap7>>```",
"role": "user"
},
{
"content": "Title: The paper proposes a potentially interesting direction, though needs to improve in many aspects.\nReview Content: The authors propose to semi-supervised method for relation classification, which trains multiple base learners using a small labeled dataset, and applies an ensemble of them to annotate unlabeled examples for semi-supervised learning. They conducted experiments on a BioCreative shared task for detecting chemical-protein interactions from biomedical abstracts. The experimental results suggest that ensemble could help denoise the self supervised labels. Overall, this is an interesting direction to explore, but the paper can also improve significantly in several aspects.\n\nFirst, the discussion about related work is a bit lacking and at place slightly misguided. The authors seem to be unaware of a growing body of recent work in biomedical machine reading using indirect supervision, e.g., Deep Probabilistic Logic (EMNLP-18). These methods have demonstrated success in extracting biomedical relations across sentences and beyond abstracts, using zero labeled examples. Likewise, in discussing semi-supervised approaches and cross-view learning, the authors should discuss their obvious connections to recent progress such as Semi-Supervised Sequential Modeling with Cross-View Learning (EMNLP-18) and EZLearn (IJCAI-18).\n\nAdditionally, the authors seem to pitch the proposed method against standard weak supervision approaches. E.g., the paper stated \"For the creation of the weak labels, we use classifiers pretrained in a small labeled dataset, instead of large Knowledge Bases which might be unavailable.\" Effectively, the authors implied that distant supervision is not applicable here because it requires \"large knowledge bases\". In fact, the precise attraction of distant supervision is that knowledge bases are generally available for the relations of value, even though their coverage is sparse and not up-to-date. And past work has shown success with small KBs using distant supervision, such as recent work in precision oncology:\n\t- Distant Supervision for Relation Extraction beyond the Sentence Boundary (EACL-17)\n\t- Cross-Sentence N-ary Relation Extraction with Graph LSTMs (TACL-17)\n\nA labeled dataset contains strictly more information than the annotated relations, and arguably is harder to obtain than the corresponding knowledge base. Some prior work even simulated distant supervision scenarios from labeled datasets, e.g., DISTANT SUPERVISION FOR CANCER PATHWAY EXTRACTION FROM TEXT (PSB-15). These won't detract significance in exploring ensemble learning, but the proposed direction should be considered as complementary rather than competing with standard weak supervision approaches.\n\nThe paper should also discuss related datasets and resources other than BioCreative. E.g., BioNLP Shared Task on event extraction (Kim et al. 2009) is influential in biomedical relation extraction, followed by a number of shared tasks in the same space.\n\nAnother major area that can be improved is the technical and experimental details. E.g.:\n\nThe ensemble of base learners is key to the proposed method. But it's unclear from the paper how many base learners have been considered, what are their types, etc. At the high level, the paper should specify the total number of candidate base learners, distribution over major types (SVM, NN, ...), the types of chosen centroids, etc. For each type, the paper should include details about the variation, perhaps in supplement. E.g., SVM kernels, NN architectures. \n\nTable 3 shows the mean F1 score of base learners, which raises a number of questions. E.g., what're the top and bottom performers and their scores? If the mean is 53, that means some base learner learning from partial training set performs similar to LSTM learning from the whole training set, so it seems that out right some learner (supposedly not LSTM) is more suited for this domain? It's unclear whether LSTM is included in the base learner. If it was, then its low performance would mean that some top performers are even higher, so the ensemble gain could simply stem from their superiority. In any case, the lack of details makes it very hard to assess what's really going on in the experiment.\n\nA key point in the paper is that high-capacity methods such as LSTM suffers from small dataset. While in general this could be true, the LSTM in use seems to be severely handicapped. For one, the paper suggested that they undersampled in training LSTM, which means that LSTM was actually trained using less data compared to others. They did this for imbalance, but it is probably unnecessary. LSTMs are generally less sensitive to label imbalance. And if one really has to correct for that, reweighting instances is a better option given the concern about small training set to begin with.\n\nThe paper also didn't mention how the word embedding was initialized in LSTM. If they're randomly initialized, that's another obvious handicap the LSTM suffers from. There are publicly available PubMed word2vec embedding, not to more recent approaches like Elmo/Bert. \n\nMinor comments:\n\nIn Table 3, what does it mean by \"F1 of weak labels\"? Does it mean that at test time, one compute the ensemble of predictions from base learners?\n\nThe paper uses D_G in one place and D_B in another. Might be useful to be consistent.\n\n\"Text trimming\" sounds quite odd. What the paragraph describes is just standard feature engineering (i.e., taking n-grams from whole text vs. in between two entities).\n\nFig 3: it's unclear which line is which from the graph/caption. One has to guess from the analysis, \n\nFig 5: what was used in tSNE? I.e., how is each instance represented? It's also hard to make any conclusion from the graph, as the authors' tried to make the point about distributions.\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Review response & revision details\nThe authors would like to thank the reviewer for his thorough and many suggestions. \nResponding to the reviewers request, we repeated the experiments using the very same process/methodology in a different dataset investigating a similar relationship, relevant to the biomedical domain. Experimental results demonstrate an identical behavior, and therefore all of our initial conclusions/remarks still hold. The paper has been revised, containing all of the additional results and details.\nWe further describe our changes and respond to this review in our follow-up comment (see below).\n\nPlease also note that the authors are planning to release the full source code after the conference, to ensure reproducibility of the results and help the research community adapt the experiment to their own use cases.\nThe revision also addresses several other aspects of the review, such as: (a) augmenting the related literature with discussion of the relevant indicated articles, (b) providing thorough details regarding the architecture and performance of the selected base-learners (see Appendix) (c) Addressing specific details indicated by the reviewer.\nIn addition to those changes, we also have some follow-up remarks/responses on this review: \n\n(a) Distant supervision: \nIt is indeed true that we could complement our method using distant supervision. However, it is ambiguous what is an “important” relationship and what not, therefore the construction or augmentation of a relationship in a Knowledge base could be much more time-consuming compared to labelling some documents (eg. 400 abstracts) for this relation. In any case, our work is complementary to distant supervision work, which we have now highlight in the revised version.\n(b) Tackling class imbalance on the LSTM:\nThe reviewer concluded that the LSTM was trained with less examples compared to the other simpler models (Base Learners) and therefore had an inferior performance because of that. This is not true. In all cases, we performed undersampling to address class imbalance. Using the exact same undersampled ground-truth training dataset (D_B), the LSTM performed consistently worse than simpler models.\nThe only training set size mismatch occured when comparing the LSTM (as a meta-learner), using different weak labels (on D_U). This stems from the fact, that the denoiser produces different weak labels in each case. From this point onwards, we simulate a real-world scenario, where we don’t have the real labels of D_U and undersample the meta-learner using the weak (virtually unknown) labels. If we consider case A, where the denoiser produces equally balanced classes on the weak labels, and case B where we have a (predicted) imbalance of 55/45, case B will result in a 10% smaller training set. This does happen to some extent when we compare the denoising methods to each other, and our methodology, compared to the scenario where we would have all gold labels available (for D_B and D_U). However, this detail does not affect by any means the most important comparison; training the LSTM @ D_B with gold labels vs. training the LSTM with weak labels @ D_U. \nSolving this issue with re-weighting could possibly prove to be a better solution, but the solution is non-trivial and could introduce additional bias to the results. The reason is that this is not the typical classification scenario: the meta-learner is not trained using True/False labels, but with numbers in the range of [0,1] which correspond to confidences. Further, already observed that when gold-labels where used, the LSTM performed better with undersampling vs. changing the class weights in the loss function. Therefore, proceeding with such a design choice would be non-trivial and require further investigation.\n(c) LSTM suitability as meta-learner:\nIt is already proven by the experiments that the LSTM itself performs lower than the average Base Learner (eg. a simple Logistic Regression), possibly due to the combination of model complexity and few training data. \n\nThis is a common phenomenon when the model complexity is not on par with the amount of training set size. However, this is beneficial for the nature of our experiments, as we expect to see a much bigger performance gain using a complex model (LSTM) compared to a simple model (Logistic Regression with bag-of words) when increasing the training set size. To put this into numbers, the LSTM had a performance gain of approximately 7-10 F1 points when doubling the training set size, whereas simpler algorithms gained only a few (in principle <5) F1 points. Therefore, in case where a Logistic Regression was to be used as a meta-learner, our experiments would become weaker, as it would become harder to notice performance gains and draw conclusions. Further, the word embeddings were indeed randomly initialized; we now highlight this detail in the revised version.\n\n(d) F1 of weak labels: \nThe output of the denoiser is a probabilistic/marginal (or binary) weak label. It ranges from 0-1 and corresponds our confidence that the true label is “True” or False, ie. a label of 0.95 would mean 95% confidence of the true label being “True”. If we draw the classification boundary on 0.5, we can compute the F1 scores of the weak labels themselves (which are later used to train the meta-learner). Several adjustments were made to the paper to further clarify this point.\n\n",
"role": "user"
},
{
"content": "I appreciate the authors' feedback. It answers most of my concerns.",
"role": "assistant"
}
]
|
rygDeZqap7 | AKBC 2019 Conference | AnonReviewer2 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<rygDeZqap7>>```",
"role": "user"
},
{
"content": "Title: review \nReview Content: -summary\nThis paper addresses the problem of generating training data for biological relation extraction, looking specifically at the biocreative chemprot dataset. The authors weakly label data by using a set of weak classifiers, then use those predictions as additional training data for a meta learning algorithm.\n\n-pros\n- nicely written paper with good exposition on related works\n- good analysis experiments\n\n-cons\n- experiments all on a single dataset\n- methodology isn’t very novel\n\n-questions\n- In table 3, majority vote of weak classifiers outperforms the meta learning. Are these two numbers comparable and if so, does this mean an ensemble of weak classifiers is actually the best performing method in your experiments.\n\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Review response & revision details\nThe authors would like to thank the reviewer for his comments. \nResponding to the reviewers request, we repeated the experiments using the very same process/methodology in a different dataset investigating a similar relationship, relevant to the biomedical domain. Experimental results demonstrate an identical behavior, and therefore all of our initial conclusions/remarks still hold. The paper has been revised, containing all of the additional results and details.\n\nRegarding the question about the ensemble of weak classifiers vs meta-learner, it is indeed true that the ensemble outperforms our method for this problem and in a training set of such size. However, this is does not underestimate the importance of our method/results, as the ensemble can not improve using additional unlabelled data, in contrast to our methodology.\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. They have addressed most of the issues I raised.",
"role": "assistant"
}
]
|
0xDHnrK6S5B | AKBC 2021 Conference | Reviewer_xrNy | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_xrNy, and the following are the paper you need to review:\n```<<0xDHnrK6S5B>>```",
"role": "user"
},
{
"content": "Title: Challenging, real world problem with evaluations that could be better\nReview Content: Summary: Enterprise Alexandria automatically builds knowledge bases with typed entities from private enterprise datasets. To build a KBC and identify typed entities, EA uses a probabilistic program and does not use meta-data and human annotations, which are generally not available in enterprise datasets anyways.\n\nThe authors ran a user study and they show that users need a variety of entity types and users wish to engage with different types in different ways. The authors claim that the study helps in the design of EA's user model. \n\nEvaluations in EA focused on two aspects: (1) the precision and coverage of EA's identified entity types against the precision and coverage of outputs from 4 BERT-based models and (2) the scalability of EA's incremental clustering. The results show EA has gains in terms of precision. \n\n\nStrong Points: \n1. Explains the challenges, motivation, and novelty fairly well. \n2. Evaluations proved gains in precision with EA compared to BERT models, which are the state-of-the-art methods.\n3. Performance evaluation proved how well the incremental clustering scaled in terms of memory usage and runtime.\n4. The world cloud visualization shows how relevant the most frequent entity types in the dataset are to the dataset topics. Also, 40% of the user-expected types (from the user study) were retrieved by EA. \n\n\nWeak points: \n1. It's not clear how the user study contributes to the design of EA itself. \n2. Don't entity types differ from one dataset domain to another? The results show that 40% of the user-expected types were retrieved by EA, which is great, but the domains covered from the user study (entity types related to multinational corporation data and types related to data found in the tourism and travel industry) are different from the domain datasets from the evaluation (types expected of Enron and Anonymous Eyes-on). How were the differences in domain addressed when comparing the types from EA and the types identified by the users from the user study? \n3. While there are gains for precision across both datasets in the evaluations, it's not the same for relative coverage. It would be interesting to see how relative coverage fares on a third dataset (if it is even feasible to find such a third dataset). \n4. Word cloud visualizations and the datasets associated need a bit more context. A few sentences describing what datatypes are expected based on a given dataset would help. \n\nOverall, the paper coupled with the evaluation results are promising. However, there are certain aspects of the paper that needs to be addressed. \nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 1\nThanks for the detailed review and the important points raised. Regarding the contribution of the user study, as stated both in the introduction and in the incipit of Section 2, this study contributes to support the design and the evaluation of Enterprise Alexandria. Its findings are essential to justify the need for discovering types as part of the entity discovery task in the enterprise setting. Since type discovery is a non-trivial task that adds extra complexity to our model, it is important to justify why it is essential through a qualitative user study, which indeed supports this requirement. It also provides reliable data about real entity types for two specific organizations that are important for our evaluation. \n\nWe do expect that entity types will vary across different domains and Enterprise Alexandria is able to accommodate this requirement. To clarify, as stated in the last paragraph of Section 3.3, the iterative type discover process leads to discover all types that have a chain of entities that connects that type to a seed type. On our largest dataset, this process led to discover a large universe of types (precisely 171 types including “university”, “case”, “campaign”, etc.) that appear to represent a wide variety of domains. Then, the entity discovery process will surface only the subset of types that appear in the input dataset. In practice, this means that types like “university” or “case” will be prevalent in the data belonging to an academic institution or a law firm, respectively. We have now clarified this point in the revised version.\n\nThe above also explains the 40% overlap between the types discovered in *Enron* and *Anonymous Eyes-on* and the types provided by the user study. Since both *Enron* and *Anonymous* are technology companies (the latter will be apparent when the name will be revealed in the final version), we cannot expect full overlap with the types reported by the participants given they come from different domains. However, as stated in Section 6.4 when we focus the “Technology” and “Seed” types, there is a significant overlap with the types discovered by EA, while the non-retrieved types tend to be more specific categories that we do not necessarily expect to be retrieved in every dataset belonging to the technology industry. We have clarified this point and improved the description of the world cloud in the revised version.\n\nFinally, on the higher relative coverage but lower precision of the baselines w.r.t. EA, unfortunately we do not have access to a third dataset to further validate this result. Intuitively the results suggest that the BERT-based models discover a very large set of entities, but their precision is significantly lower compared to EA. In our application, however, given the role of the knowledge curator, who must editorially check the validity of each entity (see first paragraph of Section 1), we find that methods with high coverage and low precision are less practical because they tend to overload the curator with a very large and noisy set of entities, more than a human editor can cope with. Instead, high-precision methods like EA tend to provide a more manageable, yet substantial (i.e., >100 entities for Enron >1000 entities for Anonymous Eyes-On), set of entities that are much easier to deal with for the knowledge manager. In this light, errors in precision should be penalized more than providing additional coverage. We have clarified this point in the evaluation section.\n",
"role": "user"
},
{
"content": "I acknowledge the authors' response. The authors have answered almost all of my concerns. I will continue to recommend accepting this work.",
"role": "assistant"
}
]
|
0xDHnrK6S5B | AKBC 2021 Conference | Reviewer_Pg7f | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_Pg7f, and the following are the paper you need to review:\n```<<0xDHnrK6S5B>>```",
"role": "user"
},
{
"content": "Title: Interesting and useful System that needs more work and clarifications on the experimental side \nReview Content: This work extends the Alexandria system for KB construction [Winn et al.,19] , to the particular scenario of constructing enterprise-specific knowledge bases. The main system extensions described in the paper are:\n(1) Extending Alexandria’s templates mechanism to support (semi-)structured documents, that are common in enterprise scenarios such as emails & calendar events. This allows the system to leverage the availability of semi structured data when building the KB. \n (2) Improved entity disambiguation mechanism which considers the number and length of words, as well as variants stemming from reasonable enterprise scenarios \n(3) Type discovery mechanism - The original entity discovery process described in [Winn et al.,19] is extend to allow the system to partially discover types. This is very practical in the Enterprise scenario, where the entity types may vary from enterprise to enterprise, as oppose to the case of general KBs. However, the new types are learned for entities with already known types, so this does not extend to the general case of arbitrary type discovery.\nAlso, An additional incremental computation mechanism is introduced which allow to incorporate manual annotation/curation\n\n\nOverall this work showcase and interesting and useful extension of Alexandria, although the technical contributions seem incremental and without much technical depth. The main issue with this work is that the experimental evaluation section requires more clarifications and experiments. \n\nStrong points:\n\nS1 The paper extends Alexandria to particularly construct an enterprise-specific knowledge base, which is an interesting, practical use case \n\nS2 The extensions proposed in Section 3 takes the main differences between the general KB and the enterprise KB. \n\nS3 The Experiments section provides an interesting comparison between the dedicated approach proposed in the paper with with several baselines for general-purpose NER.\n\nWeak Points (refer to the Detailed Comments Section for more information)\n\nW1 The introduction and related work section are somewhat confusing and do not properly justify the particular work and the contribution made. \n\nW2 The qualitative study does not seem informative and useful in the rest of the paper\n\nW3 The extensions proposed to Alexandria do not demonstrate much novelty and technical depth\n\nW4 The experimental section lacks several important clarifications and additional experiments, as detailed in commends D4 and D5\n\n\nDetailed Comments:\n\nD1 Positioning: The contribution is not clearly stated in the introduction, as the authors first claims that EA is specifically designed to answer a privacy constraint that forbids manual annotation of enterprise data. However in the contribution section the authors do not address the privacy issue anymore. Also, the abstract should state that EA extends Alexandria. \n\nD2 The purpose of the qualitative study is not clear. The authors asked the participants to list typed entities relevant to their organization, and conclude that “a wide variety of entity types are important for work”. What exactly is wide? Are all these entities necessary? How is this a conclusion from the study? And in any case, 48 entity types obtained for two different organizations, sounds really small in comparison with general knowledge bases (e.g. dbpedia) which contains thousands of entities. \n\nD3 While the extensions of Alexandria seem appropriate to the enterprise scenario, they do not show great technical depth and novelty. \n\nD4 The authors compared their approach to several baselines based on pre-trained language models that were fine tuned for NER tasks with limited, predefined number of entity types. If we are interested in specific types, isn’t it better to perform the fine tuning over some more relevant types? This may greatly improve the precision of these baselines. \n\nD5 Several Aspects in the experiments are not clear: \n(i) Which particular entities were labeled in the datasets? Of which/ how many types? What are the negative entity examples? As one dataset is anonymous and the other is Enron + manual labels, this requires clarification \n(ii) What is the relative coverage measure and why was it used instead of standard measures such as recall?\n(iii) Why does the EA version with >100 evidence obtained lower precision scores? Intuitively it should be more “correct” w.r.t. the entities it identifies, on the account of the recall. \n(iv) Why wasn’t Alexandria a baseline in your experiments? It seems reasonable to show that the existing system does not properly identify the desired entities and types in an enterprise data scenario. \n(v) The evaluation of the the type retrieval mechanism, one of the major contributions of the paper, is lacking, to my opinion, as it only shows a world cloud to exemplify the discovered types. There should be an experiment that measure the quality/utility/correctness of the types discovered.\n(vi) The details regarding the performance of the system, which rely on the incremental/clusterting mechanism of Section 4, are lacking a comparison with the approach described in [Winn et al.,19] in order to demonstrate that the optimization proposed in the paper is indeed beneficial. \n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 2 and More Details on User Study\nMany thanks for the insightful review. On the comment regarding the positioning of the paper, as you correctly state, the introduction does mention 1) the privacy constraints and 2) the lack of manual annotations in building a knowledge base for an enterprise as two key differentiating factors. These requirements are respected throughout the design and the application of EA. In fact, the model training is unsupervised except for a small number of seed types injected in the type discovery process and the small sample of manually labelled entities (857 labels, 274 positives, 583 negatives) used to train the entity correctness model. Respecting these two requirements makes EA the first method that is truly applicable and competitive in Enterprise scenarios to the best of our knowledge, which makes it a relevant contribution to the state of the art. We have clarified this part in the introduction as well as added more details to the size of the training set of the entity correctness model. We also agreed about stating that EA is an extension of Alexandria in the abstract and we have modified the revised version accordingly.\n\nRegarding the contribution of the user study, please see the first paragraph of the reply to Reviewer 1. Regarding the \"wide variety\" of entity types, the variety was in terms of relative meaning and purpose for the user, rather than quantity in comparison to general knowledge bases (see continued response in a separate comment below).\n\nRegarding improving the NER baselines with fine-tuning on specific types, in principle we agree this can improve their precision but it is challenging to find labels for types to train on in an enterprise setting - in fact we have none. This feature further highlights the ability of EA on learning types without requiring such labels.\n\nTo clarify the concerns raised in D5:\n-\t(i) As reported in Section 6.2, “we manually labelled a random sample of 100 entity names for each method”. The labels were either positive or negative. Entity types were not manually labelled and their correctness was not evaluated. We have now clarified this point in the revised version (see Section 6.4).\n-\t(v) To further expand on the evaluation of types retrieval mechanism, an empirical evaluation of the correctness of the types requires additional ground truth, ideally to be sourced from employees of each organization. Unfortunately, we do not have access to any ground truth for types. However, since it is known that *Enron* and *Anonymous* are specialized in the energy and software industry, respectively, -- the latter will be apparent in the final version when the actual name for “Anonymous” will be disclosed -- it is possible to see that the top discovered types in each dataset, including “Site”, “Area” and “City” for *Enron* and “Team”, “App” and “Package” for *Anonymous* are very relevant to their business. We have now discussed the lack of ground truth in the experimental section.\n-\t(ii) As reported in Section 6.2, “the relative coverage measures the recall ratio between each method and EA”. It was used instead of recall because recall requires collecting additional ground truth for all entities, including those that were not discovered by any system. We have now clarified this point in the revised version.\n-(iii) As explained in Section 6.2, EA(size>100) is a version of EA that lacks the entity correctness classifier. Regarding the difference in precision between (size>100) and EA, the size filter can be regarded as a basic entity correctness model using only one feature (i.e., size). Therefore, it is expected to be outperformed by the more sophisticated entity correctness model used by EA that has access to more features “like the number of documents mined from, the kinds of documents and the template diversity” (see Section 4)\n-(iv) Alexandria cannot be a baseline for our task because, as presented in its original paper (Section 4), Alexandria deals with three different tasks: template learning (i.e., discover a set of templates from text given a set of known entities), schema learning (i.e., discover new properties from text given a set of entities and templates) and fact retrieval (i.e., new facts about entities are retrieved from text given the entity names). Also, Alexandria had access to a set of 3000 real entity names that we don’t have for our dataset. Making Alexandria suitable to jointly solve the entity discovery and type discovery tasks requires a set of non-trivial extensions that we described in the paper. In summary, EA has a capability of entity and type discovery that Alexandria didn't have.\n\nTo further expand on the “wide variety” of entity types, the variety was in terms of relative meaning and purpose rather than quantity in comparison to general knowledge bases. To demonstrate the variety of meaning, the categories described by participants varied in terms of:\n\n- **Physical referent**, from abstract (e.g., management buy-in, industry knowledge) to concrete (e.g., papers/reports, tasks);\n- Similarly the **audience** for the categories varied from knowledge intended for ‘insider’ team members (e.g., tasks, budget), enterprise wide (e.g., design library, documentation), and ‘outsiders’ who are external to the enterprise (e.g., media, research papers).\n- **Focus**: What something is (e.g., stakeholder, research template) versus what something does (e.g., communicate)\n- **Temporality**: Whether the category was largely fixed (e.g., headcount) or fluid - in terms of potential for additional items (e.g., research papers, tools) or changes to the item itself (e.g., problems, roadmap) – over time.\n\nIn turn, it becomes relevant to ask: \n\n1. Whether and how this variety of categories can be meaningfully extracted from enterprise data and represented as types?\n2. What are the opportunities for users to partner with Enterprise Alexandria to capture these different layers of meaning?\n\nDue to limitations of space, we will provide details about this deeper analysis in the extended version of the paper.",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I stand by my favorable recommendation for this paper.",
"role": "assistant"
}
]
|
0xDHnrK6S5B | AKBC 2021 Conference | Reviewer_2sAy | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_2sAy, and the following are the paper you need to review:\n```<<0xDHnrK6S5B>>```",
"role": "user"
},
{
"content": "Title: Well-motivated real-world problem, the main contributions are nicely explained.\nReview Content: The authors have presented a dedicated new system for automatically constructing a knowledge base from private enterprise data. This system is an extension of an existing system designed for Web-based information extraction. It employs a probabilistic program to generate both the document and its metadata from a set of latent entities and adds the ability to learn the set of types associated with each entity from a pre-defined type hierarchy. . This work also presents a real-world evaluation showing that the developed system scales well and significantly improves the precision of the extracted knowledge compared to four baselines on three real-world datasets. \n\n\nStrengths:\n\n - the problem this work handles is well-motivated, and the paper is well written and easy to follow. \n- the authors explain very clearly their contributions, challenges, and novel extensions to Alexandria. \n- the presented system supports online settings, where the documents are uploaded not at a single time point and the memory consumption exceeds the capacity of a single machine, which is a real-life scenario for enterprise data. \n- the authors compare their proposed system with state-of-the-art baselines, showing their advantages in real-life scenarios. \n\nWeaknesses:\n\n- the presented user study is very interesting and nicely demonstrates\nwhy a wide variety of entity types are important. However, this study includes a very small number of participants, and important details regarding the grouping of the categories are missing.\n-It would be of interest to evaluate each component's marginal contribution, e.g., using other clustering methods to better explain the reviewers' choices. \nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer 3\nMany thanks for your comments.\n\nPlease see our replies to reviewer 1 and reviewer 2 above about the importance of the user study. The small number of participants is consistent with qualitative research, which is designed to look for patterns of meaning rather than quantities. In terms of method, the categories that were defined by participants were grouped inductively (bottom-up) using an affinity diagram, and groupings were iterated on based on feedback from collaborators. The groupings themselves were based on the category names, descriptions and discussions provided by participants in the workshop. The categories were grouped primarily at the semantic level, which is in the explicit and surface meanings of the data; for example, participants used terms such as “various tools during projects”, “insightful information”, and “people working in org” in their descriptions and discussions. However, as outlined in our response to Reviewer 2, we also examined latent (underlying) patterns of meaning to demonstrate similarities and differences across the groupings that were generated through the affinity diagram exercise . We have added more details to the user study subsection about how categories were grouped, which we will expand in the extended version of the paper.\n\nFinally, due to space limitations, we were not able to add individual evaluations for each component to complement the end-to-end evaluation of EA reported in the paper. We will aim to add these evaluations in an extended version of the paper.",
"role": "user"
},
{
"content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I still support my favorable recommendation for this manuscript.",
"role": "assistant"
}
]
|
0xDHnrK6S5B | AKBC 2021 Conference | Reviewer_fWqx | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_fWqx, and the following are the paper you need to review:\n```<<0xDHnrK6S5B>>```",
"role": "user"
},
{
"content": "Title: lack of focus, unclear contributions and difficult to read and follow\nReview Content: The paper describes a system named “Enterprise Alexandria” for knowledge base construction with typed entities. The approach uses relevant methods.\nOverall, the paper is difficult to read and follow. It is very difficult to decode the main contributions of the paper. Some parts are very briefly described and it is difficult to connect and understand different parts of the paper. The evaluation is also questionable - 1) the focus is unclear - what has been evaluated?, and 2) the comparison/experiments with the related methods improperly designed.\nIn summary, there are some contributions but they are not well presented. The paper is a mix of presentation of a system/approach/workflow and description of few methods -> the paper lacks focus. E.g. it would be better if the paper describes the system only, or focus on a particular method, e.g. entity linking or entity typing. In the current form it is a mixture and confuses the reader.\nConsidering my comments above I do not recommend acceptance of the paper. \n More comments below.\n\n==\n\n= Abstract\n\n“ability to process both the textual information and the structured metadata available in each document in an online fashion“\n-> it is not clear what is meant by “online fashion”\n\n“The knowledge discovery process uses a probabilistic program defining the process of generating the data item from a set of unknown typed entities.”\n-> complex sentence\n\nSection 4. Incremental clustering\n-> it is unclear the purpose of the entity correctness task. It is applied, but it is unclear if any follow-up action is then. Also, there are missing details about how it actually works.\n\nSection 6. Experimental Evaluation\n-> It is unclear what exactly has been evaluated? I.e. what part of the pipeline? Typing, linking, matching?\n-> the setup of the experiments are questionable. The (BERT) models have been trained and perform typing with just a few types (3, 16 types), while the model proposed in the paper performs typing of over 152. The comparison is unbalanced. More proper setup would be to train/configure the BERT to perform typing with the same amount of types - that would be fair comparison.\n\nSection 6.4. Type Retrieval\n-> Instead of word cloud figures the authors could consider providing in-depth analysis on the performance of entity typing.\n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to reviewer 4\nThanks for the review.\n\nTo clarify some potential misunderstandings and answer some questions raised, “online” stands for online machine learning and it is a standard term used in ML (https://en.wikipedia.org/wiki/Online_machine_learning) referring to “a method of machine learning in which data become available in a sequential order”, just like in EA. On the entity correctness model, as stated in the last paragraph of Section 4, the entity correctness model is used to filter out entities that are not required for the knowledge base, e.g., dates, locations, document names and tiny entities.\n\nRegarding section 6, fine-tuning BERT on 152 types would require ground truth labelling of 152 types. No such ground truth is available, and EA doesn’t need it. In fact, EA discovered the 152 types without any ground truth.",
"role": "user"
},
{
"content": "Thanks for the authors' response. It covers most of my concerns, and I will likely adjust the score higher.",
"role": "assistant"
}
]
|
485AXJD1fQ5 | AKBC 2021 Conference | Reviewer_1ase | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_1ase, and the following are the paper you need to review:\n```<<485AXJD1fQ5>>```",
"role": "user"
},
{
"content": "Title: strong paper introducing new large-scale CDCR resource, strong results on existing gold datasets\nReview Content: The paper collects a new ‘silver’ dataset HyperCoref for cross document coreference resolution task, constructing by using hyperlinks in online news articles. The paper shows that the dataset is considerably larger () than existing human-annotated datasets and has broader coverage in terms of event types. Experiments on three existing human-annotated datasets (ECB+, FCC-T, GVC) show that models trained on the silver-dataset perform comparable to those trained on gold train data. \n\n\nStrengths:\n1. The dataset collected is significantly larger than existing resources and will be useful for future research in this area. \n2. The collection pipeline is well-described. The filtering and hand-written rules appear to be reasonable and can potentially be extended to other sources/languages. \n3. The analysis of the silver data, and its comparison to existing datasets, provides useful insights about the data distribution. \n4. Models trained on the collected ‘silver’ data show impressive results on gold test sets. \n\n\nWeaknesses:\n\n1. Direct comparison with WEC is missing. I am not fully convinced by the argument that the events in WEC are inherently limited. Have you tested how well models trained on HyperCoref perform on the WEC test set?\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response\nThank you for your comments.\n\nThe reason why WEC is discussed only briefly is that at the time of WEC's announcement (end of April), experimental work for this paper was nearly complete. We therefore did not test HyperCoref models on WEC, though we agree that it would be an interesting scenario for the future.\n\nRegarding our comments on the limitations of events in WEC, we give a deeper explanation here:\nEvents in WEC stem from Wikipedia articles on real-world events. The Wikipedia community has established a set of guidelines for judging whether a real-world event is notable enough to warrant an article [1]. These guidelines require an event to have lasting effects and sufficient geographical scope. Writing articles for \"routine events\" (specifically mentioned are \"most crimes, accidents, deaths, celebrity or political news\") is discouraged. Conversely, crimes, accidents, celebrity or political news are the predominant types of events present in the ECB+, FCC-T and GVC corpora.\nDue to this mismatch, we see only limited potential for (pre-)training CDCR systems on WEC if the goal is to apply those systems on ECB+/FCC-T/GVC. There may be other applications (possibly involving question answering) where the encyclopedic origins of WEC could be beneficial though.\n\n[1] https://en.wikipedia.org/wiki/Wikipedia:Notability_(events)",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. Most of my concerns have been addressed. I will continue to recommend accepting this work.",
"role": "assistant"
}
]
|
485AXJD1fQ5 | AKBC 2021 Conference | Reviewer_QASE | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_QASE, and the following are the paper you need to review:\n```<<485AXJD1fQ5>>```",
"role": "user"
},
{
"content": "Title: Review\nReview Content: \nThis paper collects and releases a substantial new resource for the challenging task of cross-document event coreference, of \"silver\" coreference clusters automatically constructed from hyperlinks in web news articles. It is substantially larger than previous human-annoated cross-doc event coref datasets, which is a task that is extremely challenging to annotate. The authors also demonstrate that training a model on this silver data yields a high-performing model as evaluated on a previous human-annotated dataset.\n\nI very strongly believe there are important event KB extraction applications which need more research attention, which is the main justification for event coref research, since it could be useful to help them - for example, the Gun Violence Corpus, drawn from efforts to document gun violence, which is used as part of the evaluation - and data availability for event coref is a major issue. It still seems like a long way to go to practical usability, but that's to be expected on a challenging task. One thing I'm less sure of is whether cross-doc event coref as mention clustering, in the way it's conceptuatlized from ECB through this work, has the potential to actually be useful for event KB extraction, as opposed to doing something more direct. Cluster-style event coref has all the many conceptual limitations spelled out in the Hovy et al. \"Events are not Simple\" paper (in the references). Still, it's not obviously a wrong approach to the problem, and certainly worth exploring. Note that this paper briefly presents different justifications for event coref - QA and fact-checking - but without many details on how event coref could be applied.\n\nThe paper also has a good description of the process used to create the resource, which could be helpful for other efforts to exploit web news and hyperlink structure for other semantic tasks; it seems the choices and heuristics used took a fair amount of development themselves, and a successful previous example could help future guide efforts both for this task, or other fine-grained cross-document tasks.\n\nThe performance story about how useful their silver data is, is somewhat complicated in my opinion - it is certainly decent by itself, but if you have access to gold data, it does not always uniformly enhance the model. Perhaps there is room for future modeling advances to better account for the noisiness in silver data input.\n\nIt would be helpful to discuss the motivation for only using ABC and BBC sources for news data. Was there something about their hyperlink writing style that made them more useful? Or was it due to size? Was there a need to customize preprocessing for each website?\n\nminor\n - pg 5: \"fine-granular\" => \"fine-grained\"\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response\nThank you for your comments.\n\n* Regarding fact checking, one possibility of applying CDCR we imagine would be to find additional evidence in a large corpus on the level of statements (not entire documents). QA systems could benefit from a network of interrelated statements (connected by the event coreference relation) to source additional material for answers.\nAs you probably had suspected, these would be retrieval-style applications rather than clustering applications.\nApart from the conceptual limitations of clustering you already mentioned, the established evaluation methodology for CDCR makes it difficult to distinguish cross-document from within-document performance (see Upadhyay et al. 2016 in the references). From a technical standpoint, existing CDCR systems could be adapted relatively easily to a retrieval setup, since they learn either pair similarity or vectorized mention representations from which a ranking could be derived.\nStill, since the focus of this particular work was the development of the dataset, we stuck with the established way of evaluating CDCR through clustering for comparability.\n\n* Regarding the performance and choice of news outlets:\nWe realize that we have only scratched the surface with respect to modeling experiments in this work. The purpose behind the experiments conducted mainly was to assess whether hyperlink data can be viable training data for CDCR at a fundamental level.\nWe picked ABC and BBC since they are reputable news sources, have different composition of domains (see Table 1b), and are mid-to-large in size (see Appendix Table 4), i.e. a representative sample of the whole dataset. We would have liked to train on more datasets, but already faced scalability problems with the SOTA system implementation on this small subset (see our comment on page 7 under \"Data Preparation\" and the second paragraph in Appendix A.3, page 25). We are considering future work on the modeling side to rectify this.\n\n* Regarding customized preprocessing per website:\nWhile developing the handwritten rule set (step 5d on page 4), we gave the extracted data from each news outlet a cursory look to make sure we did not miss major sources of noise. Apart from this all pipeline settings are website-agnostic.",
"role": "user"
},
{
"content": "Thanks for the authors' response. It resolves the majority of my concerns. I will continue to recommend accepting this work.",
"role": "assistant"
}
]
|
485AXJD1fQ5 | AKBC 2021 Conference | Reviewer_cWi5 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_cWi5, and the following are the paper you need to review:\n```<<485AXJD1fQ5>>```",
"role": "user"
},
{
"content": "Title: Solid paper on x-doc coref corpus from newswire w/ hyperlinks\nReview Content: \nThis isn't going to move the field forward by a huge amount, but its a bit of solid work and above the bar in my estimation.\n\nI was forced due to time constraints to give this a very quick review, a deeper one (due shortly) may answer some of these questions and concerns:\n\n1) There has a been a lot of work on using hyperlinks for entity detection, and event detaction in twitter, the claim to novelty is based on the scarcity of that in news corpora. This needs to be checked better (on me), I didn't see enough of that in the related work section.\n\n2) I would have liked to see more direct comparison to WEC. The papers says: \"These corpus differences have a direct impact on results, with our broad-coverage ABC Sss model outscoring the identical RoBERTa-based architecture trained by Eirew et al. [2021] on WEC by 5.9% CoNLL F1 when tested on ECB+ (75.04 vs. 69.10 F1).\"\n\nIs this direct comparison valid? You don't really say how you're using CoNLL F1, which in my understanding is based on OntoNotes, but that doesn't seem to agree with the text. Please add more explanation of how you used this metric. The moschitti CoNll paper says the official metric is \"a weighted average of three metrics: MUC, B-CUBED, and CEAF\" - is that what you did?\n\nAnyway, I suspect direct comparison of WEC and Hypercoref is apples-oranges unless you had the same test set, so I would love to see a system trained on HC perform on WEC, and vice-versa, it would be more conclusive....\n\n(confidence reflects high expertise but very quick read of the submission)\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response\nThank you for your comments.\n\n### Concern 1\nWe were not aware of additional related work in this area. If you find the time, we would highly appreciate if you could point us to the additional related work you mentioned.\n\n### Concern 2\nPlease excuse the dense sentence, we meant to save space there. We did the following:\n1. We trained the RoBERTA-based model of Arie Cattan et al. 2020 entirely on silver data from HyperCoref (specifically the ABC subcorpus).\n2. We use the Python-based coreference scorer implementation from Moosavi et al. (see Appendix A.4.1) which produces the same results as the official Perl-based scorer [1] but additionally reports the LEA coreference metric.\n3. Prior to running the coreference system, we simplify the task by pre-partitioning all documents according to Barhom et al. 2019 (see Appendix A.3 in the third-to-last paragraph).\n4. To evaluate cross-document coreference, we merge all documents into one meta-document and run the scorer on this document. This is the quasi-standard procedure since at least Barhom et al. 2019's paper.\n5. We test on the official ECB+ test split.\n\nEirew et al. 2021 did the following:\n1. They train a RoBERTA-based model on silver data from WEC. Based on their model description and their code on Github, the Eirew et al. 2021 is equivalent to the Cattan et al. 2020 model (also note that Arie Cattan is a co-author of the Eirew et al. 2021 paper).\n2. They use the official Perl-based scorer [1].\n3. and 4. and 5. are reported in their paper and match our setup.\n\nSo, to the best of our knowledge, the comparison is valid.\nPlease also refer to our response to \"Reviewer 1ase\" as to why WEC is discussed rather briefly in our work.\n\n[1] https://github.com/conll/reference-coreference-scorers",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.",
"role": "assistant"
}
]
|
485AXJD1fQ5 | AKBC 2021 Conference | Reviewer_bxBV | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_bxBV, and the following are the paper you need to review:\n```<<485AXJD1fQ5>>```",
"role": "user"
},
{
"content": "Title: An interesting semi-automatic approach to producing training data for CDCR that complements other SOTA developments well. \nReview Content: This paper proposes an information extraction approach to collecting evaluation and training data for CDCE, relying on the fact that authors of online news\noften use hyperlinks to reference significant events when writing online articles. The authors argue that hyperlinks in common for given mentions can be used to form CDCE mention clusters/topics. \nThe proposed approach tackles the difficulty and cost of producing large-scale CDCE resources, particularly in a multilingual setting. \nThe authors propose an extraction pipeline to build the HyperCoref corpus from the common crawl (40 sources, 2M mentions) and then proceed to evaluate the quality of the data by evaluating state-of-the-art CDCR systems as compared to performance with currently available gold data, both in-domain and out-of-domain.\n\nThe evaluation pipeline appears robust, with precautions taken to remove non event-related hyperlinks (same domain, most frequent prefixes only, no links to high-degree pages). \nThe authors preform a comprehensive evaluation that shows that system performance with their silver corpus is competitive with using manually curated datasets (at equal size). The qualitative analysis is welcome and offers adequate insights into the intrinsic quality of the data. \n\n\n\n\n*Strengths*:\n- Well written and easy to follow\n- Elegant yet simple approach\n- Robust pipeline that's easy to transpose to arbitrary source sites. \n\n*Shortcomings*: \n- Since multilinguality is one of the key arguments to generate silver-data, some limited evaluation and analysis would have been welcome (it's only evoked that experiments were performed and that the approach also works). \n\n\n*Other questions / comments*: \n\nCouldn't exceptionsbe added for links to trusted domains (for example to wikipedia pages, which are often referenced in articles, even tough they aren't always event-oriented). ? \n\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response\nThank you for your comments.\n\n* We agree that deeper analysis of the language independence of the proposed approach would have strengthened the submission. However, since proper multilinguality research would have required checking multiple language families, sourcing appropriate gold data for comparison (which is hard to come by) and researchers capable of these languages, we did not pursue this idea further. We will however release the implementation of the entire data pipeline, hence researchers willing to address multilinguality may do so in the future.\n\n* Regarding trusted domains: We had tried this during development, but found that it predominantly adds noise. From what we remember anecdotally, links to Wikipedia mostly referenced people (actors, politicians, etc.), countries/regions or other things people would consider looking up in an encyclopedia, but rarely eventive content.\nWe also tried permitting links between trusted news outlets (such as including hyperlinks to bbc.com in articles published by ABC News). This worked sometimes, but in the vast majority of cases it only led to citation-like hyperlinks such as \"as per the BBC\", \"as reported by the BBC\", or just \"[BBC]\". Our implementation still is capable of whitelisting arbitrary domains, so researchers with a good idea on how to filter these cases quickly and reliable may experiment freely.",
"role": "user"
},
{
"content": "I acknowledge the authors' response. It resolves the majority of my concerns. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
4Exq_UvWKY8 | AKBC 2021 Conference | Reviewer_pTkv | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_pTkv, and the following are the paper you need to review:\n```<<4Exq_UvWKY8>>```",
"role": "user"
},
{
"content": "Title: nice contribution\nReview Content: The paper uses domain-specific pretrained language models to biomedical knowledge graph completion.\n\n- I like the research problem of the paper. Pretrained language models are definitely a way to extract knowledge from unstructured texts. It would be great if we could find a way to extract the knowledge from LM to structured or semi-structured stuff. Using LMs for knowledge graph completion is a clever way, which is easy to evaluate.\n\n- Why is the domain-specific task evaluated in the paper? How about the open-domain setup?\n\n- It's good to see the proposed method is complementary to previous methods. There would be some opportunities to better integrate these methods together.\n\n- In terms of interpretation, there is a related paper \"Knowledge Neurons in Pretrained Transformers\", which also tries to align the neurons in LMs and knowledge.\n\n- Is this a good way to evaluate pretrained LMs? A metric could be defined for this.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer pTkv\nThank you for your detailed feedback. We evaluate the KG completion task in the biomedical domain specifically as applying LMs to biomedical KG completion is still underexplored despite the importance of this domain, and our empirical analysis demonstrates a variety of ways in which LMs can be effectively applied in this domain (although some of these techniques may also transfer to the general domain – see our response to reviewer fvNB). \n\nThe “Knowledge Neurons” approach could definitely be combined with the datasets we’ve explored in this work, to examine more specifically which network parameters encode what biomedical knowledge and modify that knowledge if desired. \n\nWith regard to evaluating pretrained LMs for factual knowledge, one dominant paradigm for this kind of evaluation is probing the LM using a cloze task such as with the LAMA dataset [1]. The KG completion task could similarly be used to evaluate what factual knowledge is stored in an LM’s parameters, and its ability to identify stored knowledge could be compared to the results of a more LAMA-style evaluation in future work.\n\n\n[1] Petroni, F., Rocktäschel, T., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.H., & Riedel, S. (2019). Language Models as Knowledge Bases? EMNLP.",
"role": "user"
},
{
"content": "I value the authors' response. Most of my concerns have been addressed. I maintain my positive recommendation for this work.",
"role": "assistant"
}
]
|
4Exq_UvWKY8 | AKBC 2021 Conference | Reviewer_qESR | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_qESR, and the following are the paper you need to review:\n```<<4Exq_UvWKY8>>```",
"role": "user"
},
{
"content": "Title: Strong empirical study for transductive setting, but weaker in the inductive setting\nReview Content: This paper proposes a method for link prediction in biomedical knowledge graphs that combines the predictions of knowledge graph embedding models with pre-trained language models. The main contribution of this paper is a routing method that acts as a switch between a set of KG embedding models and language models (that are fine-tuned for the link prediction task) and outputs the best model’s prediction for a given triple (h, r, t).\n\nThere are similar methods to this paper proposed in the general KG completion task (cited in this paper). This paper extends those methods to biomedical knowledge graphs. The key difference is that existing methods use a weighted average of different model’s predictions whereas the proposed model chooses one of the model’s predictions using a routing method. Empirical results show that this approach is effective for transductive settings. The main finding of the paper is that language models and knowledge graph embedding models are complementary.\n\n**Reasons to accept**\n1. The paper is well-written and performs a thorough empirical study by considering a wide range of methods and various combinations of them.\n2. Performed detailed analysis of the model’s predictive performance.\n\n**Reasons to reject**\n1. The proposed model performs much worse than the fine-tuned LM model in an inductive setting. Also, the reason to choose the nearest neighbor of the unseen entity is not well justified. Since the embedding of the unseen entity is obtained using a fine-tuned LM, why can’t we directly use that? \n2. There are other strong baselines for inductive settings such as R-GCN [1], CompGCN [2], and [3] that are not considered for comparisons. Also, integrating these models into the proposed method might further improve the performance.\n\n**Comments**\nPlease mention the number of relations in each dataset in Table 1.\n\n**Missing references**\n\n[1] Schlichtkrull, M.S., Kipf, T.N., Bloem, P., van den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: Proceedings of ESWC 2018. LNCS, vol. 10843, pp. 593–607. Springer (2018)\n\n[2] Vashishth, S., Sanyal, S., Nitin, V., Talukdar, P.: Composition-based multi-relational graph convolutional networks. In: International Conference on Learning Representations (2020)\n\n[3] R Bhowmik, G de Melo. Explainable Link Prediction for Emerging Entities in Knowledge Graphs. International Semantic Web Conference, 2020\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer qESR\nThank you for your constructive and detailed feedback. We address your concerns below:\n\n1. The purpose of the experiment in the inductive setting was not to propose a new model, but to study a simple KGE baseline to compare with the fine-tuned LM. Since standard KGE models can only be applied in the transductive setting, this baseline uses LM representations to find the most similar entity in the training set based on textual information, and uses that entity’s structural embedding learned by the KGE model for computing the ranking score. Since the entity embedding computed by the LM is not a KG embedding that has been trained with the ComplEx scoring function, it cannot be directly plugged in to replace the missing ComplEx embedding, although it can still be used to retrieve the embedding of the most semantically-related “nearest neighbor” entity. We don’t expect this simple baseline to perform better than the fine-tuned LM, and the fine-tuned LM definitely can directly be used in this setting. However, it is interesting that despite its simplicity the nearest-neighbor baseline can actually outperform a more sophisticated baseline model (DKRL) that relies solely on textual information. This is an indication that combining structural and textual information can provide benefits in the inductive setting as well, and can be explored further in the future.\n\n\n2. While graph neural network-based methods like the ones you’ve proposed can certainly be used in the inductive setting to construct an embedding for an unseen entity, they require the unseen entity to have _known connections_ to entities that were seen during training in order to propagate information needed to construct the new embedding. In our inductive experiments, we consider the more challenging setup where every test set triple has at least one entity with no known connections to entities seen during training, such that graph neural network-based methods cannot be applied. This is meant to model the phenomenon of rapidly emerging concepts in the biomedical domain, where a novel drug or protein may be newly studied and discussed in the scientific literature without having been integrated into existing knowledge bases. We will clarify this point in the section on inductive KG completion and cite the methods you’ve mentioned.\n\nAs to your comment, we will be sure to include the number of relations for each dataset in Table 1. We hope our responses addressed your concerns, and we welcome any additional clarifying questions.\n",
"role": "user"
},
{
"content": "Thank you for the authors' feedback. It resolves the majority of my concerns. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
4Exq_UvWKY8 | AKBC 2021 Conference | Reviewer_fvNB | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_fvNB, and the following are the paper you need to review:\n```<<4Exq_UvWKY8>>```",
"role": "user"
},
{
"content": "Title: A comprehensive emprical study\nReview Content: **Summary**:\n\nThis paper presented a comprehensive empirical study of applying various scientific language models to the biomedical knowledge graph completion task. The authors found that language models and knowledge graph embedding models have complementary strengths, and further proposed a routing method, which is able to integrate these two models. Domain-specific biomedical language models are carefully evaluated.\n\n**Reasons to accept**:\n\n1. The paper is well-organized and nicely presented. A wide range of methods including the very recent ones are surveyed. I am not an expert in this research direction, and I learned a lot from this paper.\n\n2. Extensive experiments are conducted to investigate the existing models under different settings and verify the effectiveness of the proposed routing method under the transductive settings. Details of the experimental settings such as datasets, hyper-parameters, baselines are included. \n\n**Reasons to reject**:\n\nNo strong reasons for rejection. However, as mentioned by the other reviewer, graph neural network-based models should also be considered for discussions and comparisons.\n\n**Questions**: \n\n1.Though investigating general-domain language models is out of the scope of the paper, I am wondering if the proposed method for domain-specific LM also works for general-domain language models? \n\n2.For biomedical knowledge base completion, do we have any domain-specific issues when applying LM to it? \n\n3.The last question is whether integrating LM-based models with KG embedding models is an under-explored research question for the general-domain settings.\n\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Response to Reviewer fvNB\nThank you for your feedback. We invite you to consider our response to reviewer qESR regarding comparison to graph neural network-based models. We address your other questions below:\n\n1. We focus on the biomedical domain for the purpose of this study as the use of LMs for KG completion has been underexplored in this important setting. However, we have demonstrated that some techniques – such as using a router rather than a weighted average for combining LMs and KGE models – can sometimes work better than approaches that have been used in the general domain. This certainly opens up the possibility that these or other extensions to existing techniques may improve performance on general domain KG completion benchmarks as well, which we leave as a direction for future studies.\n\n\n2. One main difference between biomedical and general domain KGs is the domain-specific terminology and jargon that exists in names and descriptions of biomedical entities. Much like in general natural language understanding tasks, we demonstrate that LMs pretrained on domain-specific texts are crucial for biomedical KG completion, with the biomedical LMs we explore outperforming a general domain LM (RoBERTa) in nearly all cases. In addition, we believe there are more structural differences that would be interesting to explore in more depth. For example, entities with uninformative technical names – such as protein names that are combinations of numbers and letters (e.g., P2RY14) – appear very often in scientific KGs, and are likely related to the benefit of adding descriptions (Table 6, appendix). The surface forms of entity mentions in the biomedical literature (on which the LMs were pre-trained) tend to be diverse with many aliases (e.g., here is a [list of aliases](https://www.genecards.org/cgi-bin/carddisp.pl?gene=ERBB2) for one gene), while entities such as cities (e.g., “Paris”) or people (e.g., “Albert Einstein”) in the general domain often show less variety in their surface forms used in practice. This could potentially be challenging when trying to tap into the latent knowledge LMs hold on specific entities as part of the KG completion task, and likely requires LMs to disambiguate these surface forms to perform the task well. General-domain LMs are also trained on corpora such as Wikipedia which has “centralized” pages with comprehensive information about entities, while in the scientific literature information on entities such as drugs or genes is scattered across the many papers that form the training corpora for the LMs.\n\n\n3. We definitely think this is an underexplored area where much more work can be done. There has been some work on general domain benchmarks that computes a weighted average of the ranking scores from each type of model [1-3]. We show in this work that adaptively routing each example to a single model can sometimes be preferable to using an average over all models. We also demonstrate that integrating more than two LM/KGE models can yield additional benefits, something which has not been explored previously and which emphasizes the complementary strengths of even different domain-specific LMs applied to the same dataset. Finally, we briefly explore which features are related to whether an LM or a KGE model will perform better on a particular example, although our findings are not always clear-cut and characterizing this further is an open question. Through these contributions, we extend the understanding of how LM and KGE models can be effectively integrated to improve KG completion.\n\n\n[1] Toutanova, K., Chen, D., Pantel, P., Poon, H., Choudhury, P., & Gamon, M. (2015). Representing Text for Joint Embedding of Text and Knowledge Bases. EMNLP.\n\n[2] Xie, R., Liu, Z., Jia, J., Luan, H., & Sun, M. (2016). Representation Learning of Knowledge Graphs with Entity Descriptions. AAAI.\n\n[3] Wang, B., Shen, T., Long, G., Zhou, T., & Chang, Y. (2021). Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion. WWW.",
"role": "user"
},
{
"content": "Title: Thanks for the reply\nThanks for the detailed reply. I would like to keep my score for the paper.",
"role": "assistant"
}
]
|
7aIsabcVqMH | AKBC 2021 Conference | Reviewer_XsjV | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_XsjV, and the following are the paper you need to review:\n```<<7aIsabcVqMH>>```",
"role": "user"
},
{
"content": "Title: Interesting exploration of phrase retrieval\nReview Content: This paper explores how dense phrase retrieval (i.e., treating a phrase as the unit of retrieval) compares to dense passage retrieval in the context of QA and specialized document retrieval tasks. To improve phrase retrieval, the authors propose the use of in-passage negatives (i.e., negative phrases taken from a relevant passage) and discuss how this relates to strategies for choosing negative exmaples with passage retrieval. In the evaluation, the proposed approach is more effective than DPR (passage retrieval) for QA and competitive on the other tasks. Additionally, the authors demonstrate that quantization can be used to substantially decrease the phrase index size, which mitigates this potential downside of phrase retrieval.\n\n\n# Strengths\n\nThis work demonstrates that a straightforward approach treating phrases as the unit of retrieval can be effective, which is an interesting perspective that can motivate future work. It additionally proposes a new strategy for identifying hard negatives in this setting and demonstrates techniques for mitigating the increase in index size that comes with this setting.\n\n# Weaknesses\n\n- The proposed approach does not always outperform RAG (which could be described in more detail) in the experiments.\n\n- The reader is left to follow prior work for many basic details about the approach, such as exactly how a phrase is defined (how are they chosen? how long are they? can phrases overlap?) and how multiple phrases are used to score a document. (Eq 3 indicates that only the max score is used, but this is at odds with later statements about multiple representations.)\n\n\n# Other comments/Questions\n\n- How dependent is this approach on the definition of a phrase? Were other definitions considered?\n\n- It's surprising that a phrase representation could include enough information to match it against a query with no lexical overlap. Do you see this approach as applicable to the general task of ad hoc document ranking? Could it be benefitting from the specifics of the datasets or tasks considered? Any insights or discussion along these lines would be very interesting.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank you for your review!\nThanks for your comment. We’ve revised the paper with a description of RAG in Section 5 - Results and more details on the definition phrase in Section 2.\n\nQ1. How dependent is this approach on the definition of a phrase? Were other definitions considered?\n- We think our definition of phrases is very general (Footnote 1), since we take a phrase to be any contiguous text segment up to L words. In future work, it would be interesting to constrain the definition to linguistic phrases.\n\n\nQ2. Do you see this approach as applicable to the general task of ad hoc document ranking?\n- Yes, we are currently planning to evaluate our model on ad-hoc document retrieval tasks like MS MARCO.\n Global/general Response:\n```\nWe thank all the reviewers for their thorough comments. All three reviewers asked for a more detailed description of the baseline models in our KILT results, especially RAG (Lewis et al., 2020). We’ve added this in our revision. Although their method uses a generative reader model unlike other models in the table, we only compare the performance of the retrieval components.\n```\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
7aIsabcVqMH | AKBC 2021 Conference | Reviewer_eDvt | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_eDvt, and the following are the paper you need to review:\n```<<7aIsabcVqMH>>```",
"role": "user"
},
{
"content": "Title: Well written paper with clear empirical results\nReview Content: The paper shows that dense phrase Representations can be better representations for retrieval than the document-level representations. Specifically they show that DensePhrases representations can be used to identify relevant documents _without any fine-tuning_ on the target QA dataset compared to DPR. Additionally they show why phrase representations might outperform passage-level representations (much harder negatives). On non-QA datasets, fine-tuning the query representations can lead to competitive performance too. Finally they also provide a potential solution to reduce the index size of DensePhrases to match that of DPR while maintaining a higher performance.\n\n\n# Strengths\n---\n - A simple retrieval approach that outperforms current document-based representation model - DPR on QA and non-QA tasks\n - Allows for a trade-off between accuracy and index size easily\n - A new perspective on document retrieval that may lead to research in better retrieval approaches via phrase-based representation learning\n\n\n# Weakness\n---\n- Some more analysis would be helpful\n\n\n# Questions/Comments\n---\n- Phrase representations are designed to answer these questions directly so maybe it is not as surprising that the answer phrases (which are likely to be correct) are from the gold documents. How often does the DensePhrases model get the wrong answer but still retrieve a relevant document? \n\n- One issue with such an approach would be the dependence on the definition of phrases. If the answer to a question is not extracted as a phrase, do you think it most likely would not be retrieved by such an approach? \n\n- I see that using DensePhrases reduces the need for feeding long documents to FiD. However, the training resource comparison doesn't seem fair since you reduced the batch size. Couldn't the same hardware be used by Izacard & Grave with the reduced batch size (for the same k)?\n\n- Table 3: Can you add a couple of words regarding RAG? Also it seems unclear why you mention that $L_doc$ performs comparably well with DPR-based models? There seems to be a huge gap between Multi-Task DPR and DensePhrases?\n\n- A recent work that also converts documents into multi-vector representations and also similarly trades off accuracy with efficiency - \"ReadOnce Transformers: Reusable Representations of Text for Transformers\". I think their approach is sufficiently different and not comparable, but would be worth contrasting against. \n\n- I like the analysis to show the impact on input document size and index size. What is the cost of using this representation on the inference time as compared to DPR? If each document has about k phrases, it would be k times slower than DPR (not accounting for the slowdown due to a larger index)? Also does the quantization to reduce the index size further slow down retrieval time?\n\n\n \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank you for your review!\nThanks for your comment. We’ve revised the paper with a description of RAG.\n\nQ1. How often does the DensePhrases model get the wrong answer but still retrieve a relevant document?\n- If the definition of relevancy is based on whether the answer is included or not, it is straightforward to compute the proportion of questions having relevant passages with incorrect phrase-level answers at top-k: (top-k passage retrieval accuracy) - (top-k end-to-end QA accuracy). For the top-1 results of DensePhrases-multi, this corresponds to 51.1% - 41.3% = 9.8%.\n\nQ2. If the answer to a question is not extracted as a phrase, do you think it most likely would not be retrieved by such an approach?\n- We think that high-quality contextualized phrase representations will contain many salient features about the text: For instance, for a question “Why is the sky blue?,” our method retrieves a passage containing the sentence “The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering” based on the phrase “Rayleigh scattering”. A generative reader model can then synthesize a longer answer, e.g.. “The sky is blue due to an optical effect known as Rayleigh scattering”. For non-factoid retrieval, we would point to our WoW dialogue-generation results.\n\nQ3. Couldn't the same hardware be used by Izacard & Grave with the reduced batch size\n- We used the same batch size (due to the resource constraint, we had to use gradient accumulation) so this is a fair comparison. We have revised Table 2 to make it clearer. \n\nQ4. What is the cost of using this representation on the inference time as compared to DPR? Also does the quantization to reduce the index size further slow down retrieval time?\n- As both DPR and DensePhrases can be defined as a single MIPS search, a theoretical drawback of DensePhrases is the larger index size. However, we combat this with our quantization methods. In practice, retrieval speed can be very implementation-dependent and we find that DensePhrases runs comparably fast as DPR for passage retrieval. The only additional operation in phrase-based passage retrieval is to merge the top-k’ phrases into top-k passages, but this has a very small cost. Product quantization can actually further speed up the retrieval time, see Jégou et al., 2011. \n\n[1] Jégou et al., 2011. Product quantization for nearest neighbor search. \n\n Global/general Response:\n```\nWe thank all the reviewers for their thorough comments. All three reviewers asked for a more detailed description of the baseline models in our KILT results, especially RAG (Lewis et al., 2020). We’ve added this in our revision. Although their method uses a generative reader model unlike other models in the table, we only compare the performance of the retrieval components.\n```\n",
"role": "user"
},
{
"content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had. I stand by my favorable recommendation for this paper.",
"role": "assistant"
}
]
|
7aIsabcVqMH | AKBC 2021 Conference | Reviewer_C1RY | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_C1RY, and the following are the paper you need to review:\n```<<7aIsabcVqMH>>```",
"role": "user"
},
{
"content": "Title: A comprehensive and thorough study of the utility of dense phrase retrieval for larger-context retrieval\nReview Content: ### Summary\n\nThe paper investigates the utility of dense phrase retrieval for retrieval of passages and documents, based on the intuition that every phrase is derived from a larger context. The authors show that (a) scoring a passage based on the maximal phrase score of all phrases in the passage improves retrieval performance over passage retrieval models (DPR), and (b) that this improvement translates to better QA accuracy on two open-domain QA datasets (Natural Questions and TriviaQA). Further analysis highlights the differences between models trained for passage retrieval and phrase-based passage retrieval, proposing that DensePhrases struggles with retrieval tasks where topic matters while it is better at fine-grained entailment. The authors then propose a variant of DensePhrases adapted for larger contexts, and measure its performance on entity-linking and dialogue tasks from the KILT benchmark, showing a significant boost in performance compared to using the original DensePhrases retrieval. Yet, it looks like phrase-based retrieval is far behind the RAG baseline in terms of R-precision, which (unless I missed it) is not mentioned in the paper. Last, the authors draw the connection between their method and recently published work about multi-vector passage encoding, and demonstrate the tradeoff between retrieval performance and the number of vectors per passage, through a simple filtering method. By employing this filtering and using quantization in fine-tuning, they substantially reduce the size of the index, which makes this method more efficient at almost no cost in retrieval performance. \n\n\n### Strengths\n\nThe paper is very well written and rich in experiments and interesting findings, which provide a clear and comprehensive view of the capabilities and limitations of using DensePhrases to retrieve larger contexts.\n\nThe paper is timely and draws meaningful connections to recent related work.\n \n\n\n### Weaknesses \n\nResults on the KILT tasks (Table 3) are not fully clear/convincing. The DensePhrases variant ($\\mathcal{L}_{\\text{doc}}$) gives a significant boost in performance compared to the original DensePhrases, but it doesn't seem to perform comparably well with RAG in terms of R-precision. Actually, the RAG baseline is not mentioned anywhere in the text and the corresponding paper is not cited.\n\nThe Topical vs. Hard Negatives analysis (section 4.2, figure 3) is missing a baseline. The choice of passages for $\\mathcal{L}_{\\text{hard}}$ makes sense (using the original hard negative BM-25 based examples of DPR doesn't make sense because the answer span is in them), but isn't this loss clearly in favor of DensePhrases? DPR estimates the similarity between the passage and the question, and the passage without the answer sentence is probably still relevant after removing the sentence with the answer. In contrast, phrase-based passage scores are based on the phrases in the passage, so removing the answer should help. A less biased alternative for hard negatives passages would be to use, for example, passages that get high scores by BM-25 but do not contain the answer.\n\n\n### Questions \n\n- Did you try using DensePhrases with $\\mathcal{L}_{\\text{doc}}$ for the QA tasks?\n\n- Figure 3 - What are possible explanations for the higher topical loss of DensePhrases in comparison to DPR? Is it because the passage is ranked based on the scores of the phrases in it that were computed without context (the entire passage)?\n\n\n\n\n### Suggestions\n\n- Table 3 - a RAG baseline appears in the table but is not mentioned anywhere in the text, and the corresponding paper (\"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks\", Lewis et al. NeurIPS, 2020) is not cited.\n\n- \"Passages and phrases are conceptually not so different\" - this is arguable.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thank you for your review!\nThanks for your comment. Following your suggestion, we updated the wording of the arguable sentence.\n\nQ1. Isn't this loss ($\\mathcal{L}_\\text{hard}$) clearly in favor of DensePhrases? A less biased alternative for hard negative passages would be to use passages that get high scores by BM-25 but do not contain the answer.\n- It’s a valid point that current $\\mathcal{L}_\\text{hard}$ might favor DenePhrases. We also measured $\\mathcal{L}_\\text{hard}$ with BM25 negatives, but our phrase-based passage retrieval still had a better $\\mathcal{L}_\\text{hard}$ (0.27) compared to DPR (0.34) although DPR was explicitly trained on BM25 negative passages.\n\n\nQ2. Did you try using DensePhrases with $\\mathcal{L}_\\text{doc}$ for the QA tasks?\n- For open-domain QA, we tried using $\\mathcal{L}_\\text{doc}$ during query-side fine-tuning. We have found out that it can improve the passage retrieval accuracy, but it slightly sacrifices the end-to-end QA accuracy.\n\n\nQ3. What are possible explanations for the higher topical loss of DensePhrases in comparison to DPR?\n- Relatively higher $\\mathcal{L}_\\text{topic}$ of DensePhrases can be attributed to the following two facts: First, DensePhrases uses a smaller batch size (84) for the in-batch negatives compared to the batch size of DPR (256). This could be a disadvantage to DensePhrases when learning topical relevance. Second, since DensePhrases’s positive instances are phrases (not passages), it can be trained in a way that it only considers the local entailment between questions and phrases even in the presence of the in-batch negatives. Note that most of the topically different phrases can also be discriminated by their local cues since they will also have different local cues. In this situation, DensePhrases might lose a chance to encode topical information in their phrase representations.\n\nThank you for your suggestions, which we’ve included in our revision.\n Global/general Response:\n```\nWe thank all the reviewers for their thorough comments. All three reviewers asked for a more detailed description of the baseline models in our KILT results, especially RAG (Lewis et al., 2020). We’ve added this in our revision. Although their method uses a generative reader model unlike other models in the table, we only compare the performance of the retrieval components.\n```\n",
"role": "user"
},
{
"content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I continue to recommend this work positively.",
"role": "assistant"
}
]
|
7pUJHzNNhvy | AKBC 2021 Conference | Reviewer_vazn | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_vazn, and the following are the paper you need to review:\n```<<7pUJHzNNhvy>>```",
"role": "user"
},
{
"content": "Title: Fun discussion and study of vagueness in NPs. Corpus limited in size and could be better defined.\nReview Content: This paper presents a discussion of vagueness of noun phrases along with a categorization of types of vagueness, and a corpus study that measures their prevalance in English Wikipedia. There ais a nice related work section that discusses efforts to identify and reason with vagueness in NLP, with a focus on formal reasoning systems. The authors suggest that their corpus could e a useful resource in creating detectors of vague NPs in a system that then has special reasoning capabiilties designed to deal with this vagueness.\n\nI enjoyed reading the paper and I think that discussions of vagueness, underspecification, and ambiguity are becoming increasingly important as the reasoning capabilities of our NLP systems are reaching the point where these phenomena are important. However, I also think that the current paper would be stronger with a more formal definition of the process by which the two annotators identified vague NPs, in particular with relation to the effect of context. I'd also like to see a discussion of the pragmatic effect of vague NPs on a listener --- i.e. does it actually affect the listener's interpretation or is it mostly just an issue for set-theoretic modeling approaches.\n\nThe authors do mention the importance of context, but do not tel us how much context the annotators consider. If the annotators are considering anything less than the full document context, I would like to know to what extent the different forms of vagueness are reducible through the consideration of more context. Similarly, it would be nice to know to what extent the vagueness can admit signicantly different interpretations to a reasonable reader.\n\nStrengths:\n- Interesting discussion of vagueness, which is distinct from other types of ambiguity often discussed in the field.\n- Nice overview of related work from adjacent fields.\n- Enjoyable to read.\n\nWeakensses:\n- Definition of annotation process could be more formal. In particular w.r.t. amount of context considered and importance of pragmatics.\n- Corpus is small and likely to be of limited utility.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thanks for your review and feedback! We have uploaded a revised version of the paper with your suggestions.\n- \"*Corpus is small and likely to be of limited utility.*\" We disagree here. Our corpus covers more than two thousand NPs and has proven to be sufficient to train a classifier to detect and classify vagueness (see Section 6). It thus serves its purpose. That said, future work can indeed extend the corpus to more annotations.\n Global/general Response:\n```\nReviewer vVo1 = R1\n\nReviewer ygk9 = R2\n\nReviewer vazn = R3\n\nWe thank the reviewers for their thoughtful remarks. We are encouraged they found our study about vagueness well-crafted (R1), fascinating (R2) and fun (R3). We are glad they found our categorization intuitive, the case study well designed and useful (R1), and the problem of vagueness important (R3). We will now reply to each question.\n\n**@R1,R2: What is our vision?** Our position is that vagueness is currently ignored when it comes to knowledge bases and their applications such as question answering systems, and that vagueness should be taken into account. We have made this clearer in the \"vision\" section. More specifically, we define a problem, categorize its instances, analyze its nature, provide a dataset, discuss the approaches that have done something similar, and call for the community to look into the problem. \n\n**@R1,R2: Why is beauty vague?** Even a single observer considers that beauty applies to a certain degree (\"Alice is more beautiful than Bob\"). Yet, there is no consensus on how to measure this degree (someone else may find Bob more handsome than Alice). This makes the concept subjectively vague. In general, we consider vague any noun phrase that cannot appear in a contract, and beauty qualifies here.\nLassiter and Goodman [2017] see just scalar vagueness and “everything else”, under which they also group nouns such as “bird” (presumably because of a lack of precision). Bennett [2005] defines a form of vagueness as the \"overlap of disjoint but overlapping concepts\". For example, Paris in France is both a city and a department, and both objects are the same w.r.t. geography, but are different from a legal perspective. Again, this definition doesn't apply to beauty. Hence the need to introduce a new category, which is what our work does. We find that this category is actually very frequent: It concerns a large number of noun phrases that carry an opinion (like \"sophisticated\" or \"fame\"), and/or that cannot be assessed objectively (like \"power\" or \"great ability\"). \n\n**@R2,R3: In which context do you analyse vagueness?** We treat each document top down, and judge the vagueness of a word at the point where it appears, given all previous text. We did not treat the case where a later explication treats the vagueness that was encountered previously, as the word would still appear vague to human readers when they first encounter it.\n\n**@R1,R3: What about multiple forms of vagueness in a noun phrase?** Annotators were allowed to tag the NPs with multiple types of vagueness. For example, \"complex societies\" is both subjectively vague and quantitatively vague. Our paper discusses this in Section 2, but we also updated the use case section to be clearer. There are 79 NPs with multiple types of vagueness (3.22% of all NPs, or 12.37% of vague NPs). All combinations of vagueness exists in our dataset.\n\nThe paper has been updated to take into account all of these points.\n```\n",
"role": "user"
},
{
"content": "Thanks for the authors' response. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.",
"role": "assistant"
}
]
|
7pUJHzNNhvy | AKBC 2021 Conference | Reviewer_ygk9 | [
{
"content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n",
"role": "system"
},
{
"content": "Now your reviewer ID is assigned as Reviewer_ygk9, and the following are the paper you need to review:\n```<<7pUJHzNNhvy>>```",
"role": "user"
},
{
"content": "Title: fascinating exploration of vagueness in noun phrases, with many open questions and some gaps in the conceptual framework\nReview Content: # General comments\nThis paper presents a fascinating attempt to define \"vagueness\" as it applies to noun phrases, and to annotate it and model it in natural language text. The authors attempt to arrive at a working definition of NP vagueness and then annotate noun phrases using the simple schema they develop. I have two main concerns: the first is that I found some of the definitional parts of the paper somewhat lacking in rigor, and the second is that I am not certain what the main position of the authors is, other than \"vagueness is rampant in natural language\" and \"researchers have not explored it enough\". I *suspect* the position is that we need to do a better job of modeling vagueness so that our knowledge representations and our ability to answer questions about knowledge expressed in natural language form can be improved, but these points were things I could only surmise but were not exactly made explicit: they were raised in the introduction but not really fleshed out in the \"Vision\" or conclusion sections.\n\n# Specific comments\n* It is a little too strong to say that the use of any vague noun phrases makes a sentence \"inaccessible to current knowledge extraction methods\". Perhaps the authors are thinking only about KB's/KG's that consist of named entities and facts related to those entities?\n* We can and should differentiate with NP's that are vague in their current sentence, but which could be made more precise, either by understanding the greater discourse or social context, and/or by further research. In the example sentence \"An anti-tobacco sentiment grew in many nations from the middle of the 19th century\", we can imagine that we could reasonably ask \"in which nations did anti-tobacco sentiment rise after 1850?\"\n* The definition of ambiguity seems overly rigid. A lot of work in lexical semantics and word sense disambiguation (yes, that was an intentionally vague phrase) has looked at resolving the ambiguity of even closely related definitions of words; see, e.g., the definitions of the verb \"cut\" in WordNet.\n* It is somewhat odd to group \"subjectivity\" into \"vagueness\". Subjectivity just means that the truth value is dependent upon the observer, but not that the concept is lacking in precision for whatever reason. It certainly does mean that any \"knowledge\" that we might want to extract should probably be attached to that observer, as the authors note near the end of Section 6.\n* Similar to the above, it is somewhat odd to conclude that \"beauty\" is \"clearly vague\". I think it is that beauty is both a subjective concept *and* it is complex and/or hard to define.\n* Some questions:\n * How much of the vagueness observed with noun phrases was resolvable when reading the entire document? Natural language derives much of its expressive power and compactness due to its pervasive use of ambiguity and vagueness, coupled with our ability to come up with helpful interpretations based on many different types of context to resolve ambiguities and vague expressions.\n * You mention how a phrase can exhibit multiple forms of vagueness. How often does that happen, and with what types?\n * In your \"Vision\", you mentioned that we should have a better, end-to-end treatment of vagueness. Have you considered extending a question-answering dataset so that it includes questions where the answers—or support for the answers—lie in sentences with vague noun phrases?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n",
"role": "assistant"
},
{
"content": "Title: Thanks for your review and feedback! We have uploaded a revised version of the paper with your suggestions.\n- \"*It is a little too strong to say that the use of any vague noun phrases makes a sentence \"inaccessible to current knowledge extraction methods\". Perhaps the authors are thinking only about KB's/KG's that consist of named entities and facts related to those entities?*\" Yes, we are referring mainly to knowledge base building. The paper has been updated.\n- \"*We can and should differentiate with NP's that are vague in their current sentence, but which could be made more precise*\" Indeed, but we believe that this is a goal in a second step. In this paper, which is a first step, we circumscribe the problem by reviewing the literature in terms of definitions, detection, modeling, and we conduct a quantitative study.\n- \"*The definition of ambiguity seems overly rigid.*\" You are right. We took the definition from [Zhang, 1998], but we have now opted for a loser definition.\n- \"*Have you considered extending a question-answering dataset so that it includes questions where the answers—or support for the answers—lie in sentences with vague noun phrases?*\" \nThis is certainly something we want to try!\n Global/general Response:\n```\nReviewer vVo1 = R1\n\nReviewer ygk9 = R2\n\nReviewer vazn = R3\n\nWe thank the reviewers for their thoughtful remarks. We are encouraged they found our study about vagueness well-crafted (R1), fascinating (R2) and fun (R3). We are glad they found our categorization intuitive, the case study well designed and useful (R1), and the problem of vagueness important (R3). We will now reply to each question.\n\n**@R1,R2: What is our vision?** Our position is that vagueness is currently ignored when it comes to knowledge bases and their applications such as question answering systems, and that vagueness should be taken into account. We have made this clearer in the \"vision\" section. More specifically, we define a problem, categorize its instances, analyze its nature, provide a dataset, discuss the approaches that have done something similar, and call for the community to look into the problem. \n\n**@R1,R2: Why is beauty vague?** Even a single observer considers that beauty applies to a certain degree (\"Alice is more beautiful than Bob\"). Yet, there is no consensus on how to measure this degree (someone else may find Bob more handsome than Alice). This makes the concept subjectively vague. In general, we consider vague any noun phrase that cannot appear in a contract, and beauty qualifies here.\nLassiter and Goodman [2017] see just scalar vagueness and “everything else”, under which they also group nouns such as “bird” (presumably because of a lack of precision). Bennett [2005] defines a form of vagueness as the \"overlap of disjoint but overlapping concepts\". For example, Paris in France is both a city and a department, and both objects are the same w.r.t. geography, but are different from a legal perspective. Again, this definition doesn't apply to beauty. Hence the need to introduce a new category, which is what our work does. We find that this category is actually very frequent: It concerns a large number of noun phrases that carry an opinion (like \"sophisticated\" or \"fame\"), and/or that cannot be assessed objectively (like \"power\" or \"great ability\"). \n\n**@R2,R3: In which context do you analyse vagueness?** We treat each document top down, and judge the vagueness of a word at the point where it appears, given all previous text. We did not treat the case where a later explication treats the vagueness that was encountered previously, as the word would still appear vague to human readers when they first encounter it.\n\n**@R1,R3: What about multiple forms of vagueness in a noun phrase?** Annotators were allowed to tag the NPs with multiple types of vagueness. For example, \"complex societies\" is both subjectively vague and quantitatively vague. Our paper discusses this in Section 2, but we also updated the use case section to be clearer. There are 79 NPs with multiple types of vagueness (3.22% of all NPs, or 12.37% of vague NPs). All combinations of vagueness exists in our dataset.\n\nThe paper has been updated to take into account all of these points.\n```\n",
"role": "user"
},
{
"content": "I value the authors' response. It satisfactorily addresses the major issues I had. I will continue to recommend accepting this work.",
"role": "assistant"
}
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.