Dataset Viewer
Auto-converted to Parquet

Columns:
  paper_id                string, 9-13 characters
  conference_year_track   string, 139 distinct values
  reviewer_id             string, 13 characters, nullable
  messages                list of 5-21 turns, nullable
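The rows previewed below can be loaded programmatically. Here is a minimal loading sketch using the Hugging Face `datasets` library; the repository id and split name are placeholders (assumptions), since neither appears on this page.

```python
# Minimal loading sketch. "username/openreview-review-rebuttal" and "train"
# are placeholder names, not taken from this page.
from datasets import load_dataset

ds = load_dataset("username/openreview-review-rebuttal", split="train")

row = ds[0]
print(row["paper_id"])               # e.g. "0gLzHrE_t3z"
print(row["conference_year_track"])  # e.g. "ACL 2020 Workshop NLP-COVID"
print(row["reviewer_id"])            # e.g. "AnonReviewer1" (may be null)
print(len(row["messages"]))          # number of turns in the review/rebuttal thread
```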
0gLzHrE_t3z
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```", "role": "user" }, { "content": "Title: Excellent description of a critical COVID-19 dataset, some questions remaining\nReview Content: This manuscript describes an exemplary effort to address COVID-19 by bringing together much of the relevant literature into one corpus, CORD-19, and increasing its accessibility by providing a harmonized and standardized format convenient for use by automated tools. CORD-19 has been - and is likely to continue being - a critical resource for the scientific community to address COVID-19, and this manuscript not only reflects that importance, but also gives insight into the approach used, the design decisions taken, challenges encountered, use cases, shared tasks, and various discussion points. The manuscript is well-organized and readable, and (overall) an excellent case study in corpus creation. This manuscript is not only important for understanding the CORD-19 corpus and its enabling effect on current COVID-19 efforts, but is possibly also a historically important example of joint scientific efforts to address COVID-19.\n\nDespite the critical importance of this dataset, there are several questions left unanswered by this manuscript, and it would be unfortunate to not address these before publication.\n\nIt would be useful to have a very clear statement of the purpose for CORD-19. The inclusion of SARS and MERS makes intuitive sense, but it is less clear why other coronaviruses that infect humans (e.g. HCoV-OC43) are not explicitly included - I am not a virologist, but neither will be most of the audience for this manuscript. While many of the articles that discuss these lesser known cornaviruses would be included anyway because they would also mention \"coronavirus\", this is not guaranteed. \n\nWhile it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query. The number of name variants in the literature for COVID-19 or SARS-CoV-2 is rather large, and not all of these documents will include other terms that will match, such as \"coronavirus\". 
For example, how would a document that mentions \"SARS CoV-2\" but none of the query terms listed be handled? This is not a theoretical case: the title and abstract for PMID 32584542 have this issue, and I was unable to locate this document in CORD-19. In addition to minor variations such as this, there are many examples of significant variations such as \"HCoV-19\", \"nCoV-19\" or even \"COIVD\". Are these cases worth considering? If not, can we quantify how much is lost? And if we can't quantify it, this is a limitation.\n\nHow is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the version in source B does not return it (e.g. the source only has title & abstract, and they do not match). From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version?\n\nThere are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193), are there others?\n\nThere are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al. \"PMC text mining subset in BioC: about three million full-text articles and growing.\" Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the \"Call to Action\".\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Thank you - new revision and responses\nThank you for the detailed review. We have uploaded a new revision which addresses most of the comments. More detailed explanations are provided below.\n\n> It would be useful to have a very clear statement of the purpose for CORD-19...\n\n> While it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query...\n\nThe reviewer brings up a good point. For any such text corpus, inclusion/exclusion is defined by the curators of the corpus, dramatically affecting how the corpus can be used. We adopted the initial query from Elsevier after consultation with other groups, including PMC. The query has evolved slightly since the beginning and may continue to evolve as needed. Although we don't explicitly include other coronaviruses in the search query, the current query should be catching most if not all of these papers (we match the search terms over title, abstract, *and* body text). 
PMC also performs some query expansion, which would increase coverage and also handle some of the lexical variations and misspellings (though not all) brought up by the reviewer.\n\nAs for the other example brought up by the reviewer (PubMed 32584542); this paper is included in CORD-19 as of the latest versions (I checked 2020-07-04). The paper was published on 6/25, and is indexed in Medline. Papers outside of PMC take a longer period of time to make it into CORD-19 because of differences in the ingestion pipeline. PMC papers are queried directly from PMC every day, while papers from other sources are ingested by Semantic Scholar and have to go through our entire paper processing pipeline (which sometimes takes days to a week) before they become available in CORD-19. I hope this addresses some of the reviewer's concerns.\n\n> How is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the version in source B does not return it (e.g. the source only has title & abstract, and they do not match). From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version?\n\nIn the case where source A and source B provide the same paper, and major metadata fields (DOIs and other identifiers) match, we cluster them together (preserving both A and B for provenance). The title and abstract are selected from the source that provides full text and the most unrestrictive license. If major metadata fields do not match, which sounds like the case described by the reviewer, we keep these as separate entries in CORD-19. We prefer underclustering to avoid the situation described by the reviewer (that of lost information).\n\n> There are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193), are there others?\n\nWe have added a citation and description of LitCovid. We place additional emphasis on other areas of missing COVID-19 literature in the discussion (such as foreign language literature, works in the social sciences and other fields of study). Though we are not currently aware of any released corpora in these other areas, we would really appreciate if any of these are brought to our attention.\n\n> There are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al. \"PMC text mining subset in BioC: about three million full-text articles and growing.\" Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the \"Call to Action\".\n\nBioC is very useful for text mining, and the full text representation in CORD-19 is very similar to BioC in some ways (character-level span annotations etc). 
We have added references and descriptions of BioC to both the \"Design decisions\" and the \"Call to Action\" sections of the paper.\n\nWe adopt the JSON schema we use in CORD-19 from S2ORC (https://arxiv.org/abs/1911.02782), which is essentially an extended version of CORD-19 that covers all of the Semantic Scholar literature corpus. Some differences between CORD-19/S2ORC and BioC is the focus on full paper representation, with fields for bibliography entries, figure and table references, and so on included in S2ORC. \n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
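Each `messages` cell, as in the row above, is a list of chat turns with `role` ("system", "user", "assistant") and `content` keys covering the system prompt, the paper assignment, the review, the author rebuttal, and the reviewer reply. A short sketch of walking one thread, continuing from the loading example above:

```python
# Print a compact view of one review/rebuttal thread. Assumes `ds` from the
# loading sketch above and the role/content structure visible in the row here.
def print_thread(messages):
    for i, turn in enumerate(messages):
        preview = " ".join(turn["content"].split())[:100]
        print(f"{i:2d} [{turn['role']:>9}] {preview}")

print_thread(ds[0]["messages"])
```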
0gLzHrE_t3z
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```", "role": "user" }, { "content": "Title: Overview of a highly important Covid-19 dataset\nReview Content: This is a paper that describes an important research dataset that has been produced during the Covid-19 epidemic. The CORD-19 collection is used for much research and some challenge evaluations. Even though this paper does not report any research results per se, and the paper is posted on the ArXiv preprint server, this version will give a citable description of the collection that will likely be widely referenced.\n\nThe authors describe well the process of dealing not only with the technical issues of processing heterogeneous scientific papers but also the non-technical issues, such as copyright and licensing.\n\nThe authors do not make any unreasonable claims, although I do question the value of this collection for non-computational researchers and clinicians. As the authors note, the collection is not complete, which is essential for clinical researchers and certainly for clinicians (who do not typically read primary research papers anyways, and tend to focus more on summations). But the dataset is of tremendous value to computational and informatics researchers, and that should be pronounced.\n\nI appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource.\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Thank you and response\nThank you for your review!\n\n> I appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. 
One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource.\n\nWe have automated the maintenance of the dataset, so will continue to update it for the foreseeable future. However, the answer to this question depends more on the value of the dataset to the community: whether the pandemic continues, whether we derive further value from mining these texts, whether the publishing community continues to support our efforts, and the costs / resources available for maintenance. We will try to balance all of these considerations.", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The response covers nearly all of the issues I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
0gLzHrE_t3z
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<0gLzHrE_t3z>>```", "role": "user" }, { "content": "Title: CORD-19 is an excellent resource with an impressive integration work for the research community to fight COVID-19.\nReview Content: The authors present the CORD-19 data set and describe how it has been developed and continues to be developed. The CORD-19 data set is a valuable resource that provides access to the latest literature about COVID-19 and coronaviruses and it is updated daily with over 200k downloads. The generation of the CORD-19 requires a coordinated integration and processing effort that is significant. The contribution of this corpus is of high significance and will have a strong impact on the biomedical domain and support the development, for instance, of COVID-19 vaccines. The manuscript is clearly written and it is easy to understand.\n\nThe effort in providing a version of the latest literature in formats that can be processed by text analytics methods is excellent, using the latest of the available technology to do so. In the paper, it is mentioned in the manuscript that there are some problems in turning tables into structured format and the authors provide examples of issues that they have found. Table processing is done by IBM, who has as well a method for table processing that seems to be resilient to the problems mentioned and would be relevant to consider it for table processing (https://arxiv.org/abs/1911.10683).\n\nThe authors give an example of conflict from which it can be inferred that the same DOI might be linked to two different PubMed identifiers, the reviewer is curious why this might be the case and if an example could be provided.\n\nWhen you mention β€œClassification of CORD-19 papers to Microsoft Academic Graph”, is this classification done by a method provided by the authors? 
is this classification provided as meta-data?\n\nDuring my review the only typo I could find is:\n* “other research activity.” —> “other research activities.”?\n* “by not allowing republication of **an** paper”, an —> a\n\nPlease consider the following guideline for NLM trademarks: https://www.nlm.nih.gov/about/trademarks.html\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Thank you and replies to comments\nThank you for your review! We've addressed most comments in a revision. Detailed responses below.\n\n> The authors give an example of conflict from which it can be inferred that the same DOI might be linked to two different PubMed identifiers, the reviewer is curious why this might be the case and if an example could be provided.\n\nThis happens for a couple hundred papers in the CORD-19 corpus. There can be two PubMed entries that correspond to the same DOI, sometimes due to duplicate entries in PubMed, but more often when a paper's abstract is available in two languages. In these cases, PubMed can have two entries for a paper with the same DOI. For example:\n\nhttps://pubmed.ncbi.nlm.nih.gov/32532586/\n\nhttps://pubmed.ncbi.nlm.nih.gov/32426303/\n\nBoth of these papers have DOI: 10.1016/j.remn.2020.04.004; the first has an English abstract, the second Spanish. We keep these as separate entries in CORD-19.\n\n> When you mention “Classification of CORD-19 papers to Microsoft Academic Graph”, is this classification done by a method provided by the authors? is this classification provided as meta-data?\n\nThis classification is performed in collaboration with the Microsoft Academic Graph research group. The methods used are described in Shen et al 2018. We have added the citation to the sentence to clarify. We provide a mapping to Microsoft Academic Graph identifiers as a supplementary file on the CORD-19 dataset landing page: https://www.semanticscholar.org/cord19/download.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
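The three rows above are separate reviewer threads for the same paper (0gLzHrE_t3z), one per reviewer. A sketch of regrouping the flat per-reviewer rows into per-paper bundles, again assuming the `ds` object from the loading example:

```python
# Collect every reviewer thread for each paper_id.
from collections import defaultdict

threads_by_paper = defaultdict(dict)
for row in ds:
    threads_by_paper[row["paper_id"]][row["reviewer_id"]] = row["messages"]

# For the paper above this yields AnonReviewer1, AnonReviewer2 and AnonReviewer3.
print(sorted(threads_by_paper["0gLzHrE_t3z"]))
```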
EC1vWkJXpjy
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```", "role": "user" }, { "content": "Title: Novel task with under-developed motivation\nReview Content: This paper presents a BERT-based model for solving the novel task of grounding sentences to 3D locations in a representation of a human male body. The authors induce training labels by automatically extracting a list of organs and their synonymous wordforms using the SciSpacy UmlsEntityLinker and extracting sentences containing at least one mention of an organ on their curated list. The ground-truth voxel coordinates are assigned using organ locations from a 3D atlas, and several BERT models are fine-tuned using a distance-based regression loss. The paper presents a novel task, is clearly written, and thoroughly explains their experimental choices and settings.\n\nIssues:\n\n1. One of the key premises of the paper is that physical proximity in the human body reflects functional and semantic relatedness to a significant degree. This statement needs more justification, especially since it is such an important part of the motivation for this work. Looking at a model of a human torso, it seems that the centroid of the left lung may be closer to the centroid of the stomach than to the centroid of the trachea. Similarly, the centroid of the stomach seems closer to that of the spleen and pancreas than of the large intestine.\n\n2. It's not clear what is gained by performing this as a regression task to directly predict 3D coordinates instead of as a classification task that can then be transformed to 3D coordinates using the atlas (the ClassCenter model). The motivation seems to be that in cases when the classification model gets the wrong organ, the predicted 3D location is significantly worse than in the regression model (i.e. NVD-O for ClassCenter is higher). However, the classification model identifies the correct organ with much significantly higher frequency (i.e. IOR is higher). The overall effect is that even in the masked setting, the average per-sentence voxel distance is much lower for ClassCenter (i.e. taking the linear combination of NVD and NVD-O w.r.t. IOR).\n\n3. The use case needs further justification. 
In what situations is specifying a position in a 3D atlas preferable to submitting a textual (or UMLS-keyword) query? Since the training data only uses the centroids of organs, is there a meaningful difference between querying for the centroid of the stomach vs. somewhere else within the organ? Are the relative positions of sentences embedded within the same organ semantically meaningful, or mostly a product of noise in the regression?\n\n4. Representing abstracts as a point cloud composed of individual sentences seems like it would cause issues for the information retrieval task presented in section 7.2. What happens to the majority class of sentences that don't contain a mention of an organ, either implicitly or explicitly? Does the frequency of sentences containing an organ mention accurately represent the relevance of the document?\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer1\nHello! Thank you for taking the time to read our paper and provide the feedback. Below, we respond to the issues you pointed out.\n\n1) We do not argue that the spatial proximity is always going imply a higher degree of relatedness between terms than the standard semantic similarity inferred from their distribution within the corpus (shared contexts ect.), but we nevertheless believe that it does reflect similarity between concepts to a significant degree. Additionally, in certain contexts, e.g. surgeries, spatial proximity in itself can be more relevant than semantic relatedness. And given the inherent ability to visualize texts within the human body that comes with the method, we believe it to be interesting to the research community.\n\n2) Our motivation for framing the task as regression is that a sentence concerning a certain organ should be mapped preferably to that organ, and if not, at least to the general vicinity of it. We argue that even though the classification model achieves a lower overall distance than the regression based model, the higher and intractable error that it makes when predicting the wrong organ makes it less suitable for general use, and especially for retrieval tasks that rely on representing larger texts as combinations of their sentence mappings.\n\n3) We proposed the use of a 3D point query as an alternative to textual queries for which many works already exist. The training objective takes into the account all voxels of a given organ, and not just organ centroids, and accommodates the functionality where clicking on the upper region of an organ retrieves sentences referring to it in the context of organs above it, while clicking onto its lower region retrieves sentences that also mention the organs below it.\n\n4) We agree that the retrieval task would benefit from a pre-processing step that would discard medically irrelevant sentences (eg. without any named entities from the medical domain) and from a refined distance measure between clusters, but we believe that this is not essential to the general principle of the method.", "role": "user" }, { "content": "I value the authors' response. They have adequately addressed the majority of my concerns, and I am considering revising the rating.", "role": "assistant" } ]
EC1vWkJXpjy
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```", "role": "user" }, { "content": "Title: interesting exploration with unclear justification\nReview Content: This paper describes methods for learning a mapping between sentence vectors (as produced by the [CLS] token in BERT) and 3D points in the human body. The idea is that sentences that are about some anatomical site can be mapped to that position in the body for visualization. It is never completely clear from the paper why this needs to be learned, since the training pre-processing already is finding anatomical site mentions and essentially normalizing them to a set of body parts with known coordinates. The only thing I can figure is that they are using their pipeline as a way to generate labeled training data, and they intend for this to be applied to sentences without explicit anatomical site mentions. This is maybe supported by the fact that their training does include 50% masking of anatomical sites. In that case, there is a missing element here, which is that the vast majority of sentences in a document are not \"about\" any organ at all, so mapping them into organ space is meaningless. So for this algorithm to be applicable, one would first need to pre-process sentences somehow for whether they should go onto the organ mapping stage at all. And in any case, their example application presumably is just mapping the sentences that did have anatomical site terms. Overall, I think the logic of the approach needs to be spelled out more clearly. As it is, it feels like an interesting trick that one can learn this mapping but I just don't see why it's necessary.\n\nOther issues:\n- This mapping will surely generate \"point clouds\" around organs (i.e. sentences mentioning lung will be spatially distributed near the lungs rather than all mapping directly to the centroid of the lung). 
This may give an expert in anatomy some sense that these minor differences are anatomically meaningful where I believe it would be largely noise, and if some finer-grained meaning were there it would be impossible to distinguish from noise.\n- Results tables are never referred to in the text\n- There should be some discussion of what you \"want\" if there are multiple organ mentions. I certainly don't want some \"average\" position, but also don't want it to just show up in 1 position, it should probably show up in all 3 positions. But that is not a function anymore so not really something you can ask a neural network to do.\n- When masking anatomical sites in the training data, do you ever use the same sentence in both the masked and unmasked form? I think it would be preferable with this dataset to randomly choose one form before training, rather than randomly choosing to mask each time an instance is loaded (which could result in different masking behavior for the same instance across epochs) because otherwise it could be memorized.\n- I don't know if self supervised is the right term here. It's more like a clever way of generating supervised instances.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer3\nHello! Thank you for taking the time to read our paper and provide the feedback.\n\nThe main reason for learning a contextual mapping between sentences and parts of the human body is to go beyond the look-up table feature. Therefore, stochastic masking is applied during training. By stochastically masking the keyword references to the organ locations, the model is forced to learn something that is more meaningful than a 1-to-1 correspondence between the keyword and the organ location. This is further motivated by the fact that a sentence in a document may make an implicit reference to an organ, albeit never explicitly mentioning the organ by its name.\nWhen it comes to sentences without any medical relevance, it would indeed be possible to improve the tools via a pre-processing step that would ensure that the model operates more closely within its domain, for example by performing named entity recognition on input sentences and discarding the ones that do not contain any entity types from the medical domain.\n\nOther issues:\n- Our method and its use of softmin in the training objective accommodates the learning of meaningful spatial mappings around the organ from co-occurrences of different organ terms (e.g. bladder and kidney), and, by the virtue of the masking, to the co-occurrence of one organ term and contextual information typical for some other organ (e.g. bladder and nephron).\n- For the moment, the discussion regarding the results is done within the caption of the tables, but we can certainly update the text so that the results in the table are discussed in the main text.\n- One of the main issues with having an approach that does not use any annotated data is indeed the ambiguity when there are multiple mentions of an organ. However, our goal was to quickly prototype a solution that will help in the exploration of Covid-19 related documents, and not go through the cumbersome process of annotating medical documents which is extremely time consuming. 
Therefore we create a setup where we rely on the model to learn the more appropriate target during training.\n- Stochastically masking the keywords is essential to our approach, and defining the masking upfront will make the problem trivial for every sentence left unmasked. Namely, any sentence that does not have a masked keyword has a direct 1-to-1 correspondence between the organ name and the location in the human atlas. On the other hand, by stochastically masking the keywords, the network has to balance between the 1-to-1 correspondence, which is definitely desirable when there is an organ mention, and the contextual correspondence, which is essential in a real-life scenario. \n- We think that self-supervised is an appropriate term in a sense that the annotation comes with the occurrences of organ terms which come with the text itself, similarly to the way language modeling task is often referred to as self supervised as it relies on automatically available information found in the structure of the language itself.\n", "role": "user" }, { "content": "Thanks for the response provided by the authors. Their response has resolved the majority of my concerns, and I am considering revising the rating.", "role": "assistant" } ]
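The author response in the thread above describes a regression objective that uses a softmin over all voxels of the target organ rather than only its centroid. The snippet below is an illustrative reconstruction of such a loss, not the authors' code; the tensor shapes, the Euclidean distance, and the temperature parameter are assumptions.

```python
# Illustrative softmin-weighted distance loss over an organ's voxels:
# the predicted 3D point is pulled toward the nearest voxels of the target
# organ rather than only toward its centroid. Shapes and temperature are assumed.
import torch

def softmin_distance_loss(pred_xyz, organ_voxels, temperature=1.0):
    # pred_xyz: (3,) predicted location; organ_voxels: (V, 3) voxel coordinates.
    dists = torch.norm(organ_voxels - pred_xyz, dim=1)    # (V,) distances
    weights = torch.softmax(-dists / temperature, dim=0)  # softmin weighting
    return (weights * dists).sum()
```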
EC1vWkJXpjy
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<EC1vWkJXpjy>>```", "role": "user" }, { "content": "Title: Interesting anatomical exploration of COVID articles using BERT-based grounding\nReview Content: The paper proposes a self-supervised approach to ground sentences in COVID-19 articles to a 3D human atlas. This is the first study to use self-supervision to map sentences in medical articles to anatomical locations in the 3D atlas space. Unlike the prior works that utilized the atlas space, they formulated the grounding as a regression problem, thereby allowing a better idea about the semantic closeness of the sentences to the target body locations. The study has also proposed the potential use of this grounding approach for data visualization and document retrieval use cases.\n\nStrengths:\n-- The study employs a fully self-supervised technique for grounding sentences to a 3D human atlas. This is neat and has a lot of potential applications beyond COVID-19.\n-- This work takes into account the prevalence of multiple organs in the same sentence while calculating the loss while training.\n\nMajor issues:\n1. As discussed as one of the limitations that user study needs to be done to validate the query tools, as it is difficult to verify the actual performance of the document retrieval tool from the current version. Moreover, since the distance between the centroids of the point clouds is used to retrieve the articles relevant to a query, it is difficult to interpret whether this approach is useful as this does not take into account which of the sentences in the abstract are more close to the query. Some sentences in the article might just be providing background information and the corresponding organs might not be of interest to the user. To address this, it may be considered to have a way to identify the sentences of interest.\n\n2. A brief discussion of interpreting the results mainly to indicate any potential relation between the impact of using the masked technique and the evaluation metrics used can be included.\n\n3. 
Although the proposed method definitely obviates the expensive annotation part, it might be good to further compare the evaluations with a supervised model to validate that self-supervision performs better than a fully supervised approach. At the least, it would be good to compare the performance of these models with supervised models used on other datasets in the previous works for grounding medical text to 3D space.\n\nMinor issues:\n1. As already pointed out in the discussion that the models have only been trained on single sentences, it needs to be investigated whether a similar loss contribution strategy would work well when multiple sentences are included in the training process as organs appearing in different sentences might be very different from each other.\n2. An evaluation of the methods to illustrate their performance on sentences having rare or infrequent organs may be reported.\n3. How is the 8000 sentences mark determined for tackling the imbalance of different organs?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer2\nHello! Thank you for taking the time to read our paper and provide the feedback. Below, we respond to each one of the issues you pointed out.\n\nMajor issues:\n1) The most immediate consideration for improving the approach would be using more sophisticated ways to perform retrieval by using entire point clouds as opposed to using just their centroids, starting with the simple extension of incorporating the point cloud spreads in a way where more compact point clouds (therefore representing more specific documents) are given an advantage. \nAdditionally, in order to mitigate the effect of organ-unrelated sentences to retrieval, we are currently working on projecting such sentences (sentences without organ occurrences) to locations outside of the human body. We are expanding the current training set with sentences that do not mention any organ, and train new models that are more robust compared to the ones currently in the paper. For such sentences, the target used in the loss function is a set of points evenly spread across the outside surface of the convex hull of the body, and train the model such that it learns that any sentence that does not contain mentions of an organ should be mapped to a location outside of the human body. The loss function, using these anchor points just outside of the body in the same way as organ points, would encourage the mappings of such sentences to move towards the nearest location outside of the body. During inference, sentences projected to the outside of the human body could then be removed from document point clouds, effectively allowing us to isolate sentences of interest.\n2) The use of the masking during training is essential for achieving high results on the sentences where the organ keywords do not occur i.e. where we rely on the context. During the training process, the use of masking certainly reduces the result on the non-masked evaluations (distances, ior’s etc.). However, without masking, the model would learn a 1-to-1 correspondence between the organ keywords and the locations in the atlas. Consequently, in the masked evaluations, where we check whether the model has learned to associate the context with its corresponding organs, the obtained result would be dramatically lower. 
In the updated version, we can include an experiment where the model is trained without masking and report the results for each of the metrics. \n3) To the best of our knowledge, there haven’t been works that ground text in 3D space corresponding to the human body. Therefore, we could not obtain any previously annotated data that would be adequate for our setup, and given the current situation, it was not feasible to create any kind of new annotation that would require medical expertise. Considering the global pandemic, our main goal was to find a cheap and scalable technique that could be deployed rapidly, so we did not consider comparisons with supervised methods.\n\nMinor issues:\n1) Using longer sentences/more than one sentence can definitely be beneficial since it includes more context for the model to learn. However, using longer sentences potentially increases the risk of having too many organs occurring in a sentence which is not ideal for our self-supervised setup. Namely, if we expand our work in that direction (which certainly is interesting) we would aim for using some form of label propagation, instead of just inferring the label according to an organ occurrence within the sentence.\n2) Computing a per-organ IOR/distance i.e. an IOR/distance macro-average for each of the organs is certainly feasible and we can include it in the updated version.\n3) The threshold of 8000 was chosen as roughly the number of occurrences of the second, third, fourth and the fifth most occurring organ. \n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
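Because every thread already follows the role/content chat convention, it can be rendered with a tokenizer's chat template for fine-tuning or evaluation. A sketch assuming the `transformers` library and a placeholder model id:

```python
# Render one thread as a single prompt string. "placeholder/chat-model" is an
# assumption; the chosen model must ship a chat template that accepts a system turn.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("placeholder/chat-model")
text = tok.apply_chat_template(ds[0]["messages"], tokenize=False)
print(text[:500])
```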
HxIZzQZy_0F
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```", "role": "user" }, { "content": "Title: Engineering white-paper, but regrettably lacking in science\nReview Content: Summary:\nthe article describes the process and end result of creating a chatbot aimed at providing vetted COVID-related information to the public.\n\nGeneral impression:\nJennifer clearly represents a considerable effort, undertaken by a talented group under great time pressure, using vast amounts of information that is highly uncertain and in flux. \n\nNow for the submission: to put it perhaps soberingly, it is 60% helicopter-view engineering white paper, 20% sales pitch, 10% incidental observations, 8% future work, and 2% science. It is the virtual absence of scientific substance that is my main concern. \nAs one illustration: the manuscript has 24 citations, 11 of those look peer-reviewed, 5 of these are provided in the future work (β€˜ongoing challenges’) section, therefore not supporting the past activities but rather motivating future ones. Several of the other ones are referred to outside of their scope (e.g. the Vargas reference). Related research on chatbox design, the spine of this work, is all but absent. (2 citations, from 30 and 15 years ago). \n\nNote that the submission has no β€˜Results’ section. No systematically collected observations or other evaluation materials are introduced. At the very least, end-user usage statistics would have been informative to assess the impact of this work.\nThe engineering decisions that are reported make sense, but are largely not substantiated with supporting literature or first-hand evaluation. \n\nWhile it is claimed, it is not clearly demonstrated that this project addresses misinformation (or disinformation). It is an access point (and repeater of) trustworthy information but not one that specifically points out misinformation, or hunts it down. It is not fully clear how accountability is ensured for the automated provision of information to the general public – can users easily vet the sources of such information, and is there a path of recourse when a user spots incorrect answers? Who is responsible for the answer in such a case, especially so if it led to harm? 
Are there mechanisms to prevent deliberate infiltration of the expert-crowd or other efforts to sabotage Jennifer (think of Tay AI)? \n\nMore detailed points:\nThe Abstract refers to COQB, but the main article never does.\n\nSection 2.4: \"Answers to questions should be verified against reliable sources in the language of question.\"\n--> That is a debatable statement: why would an English language source suddenly be necessarily inadequate when a person asks a question in French!?\n\nDiscussion: \"Jennifer has successfully demonstrated that, with the right combination of technology and human experts, information from reputable sources can be more quickly and effectively organized and shared at scale.\"\n--> This is a statement that is not supported by anything reported in the submission.\n\n\"Given the evolving tasks and the large number of volunteers with diverse background, putting the right process around tasks, workflow, and sequencing (Norheim-Hagtun and Meier, 2010) is key to ensuring efficient use of the volunteers’ time to the advantage of the project.\"\n--> Unfortunately, full documentation of tasks, workflow, and sequencing is not provided in this manuscript, and use of volunteer time is not assessed.\n\nConclusions: \n\"Jennifer leverages cutting-edge chatbot technology, as well as a diverse network of volunteers from around the globe, to combat misinformation related to the pandemic.\" \n--> Only one of these three claims is supported by the presented material.\n\nAt best, this submission could be included as a discussion article, certainly not a research article. Accepting it as a full peer-reviewed research contribution would lend it more credibility than it deserves.\n\nMinor points, stylistic points:\n\"to best prevention and management practices.\" -->?? \"to prevention and management best-practices.\"\n\nSection 2.3: \" following guidelines and recommendations in (National Academies of Sciences, Engineering, and Medicine, 2017)\". The citation appears to point to a research agenda document, rather than a set of guidelines and recommendations.\n\nThe sentence \"Much of the recent research focusing on automating the task of fact checking (e.g., Adair et al. (2017); Pathak and Srihari (2019)).\" is incomplete.\n\nSeveral citations are incomplete re. sources, most notably the Jurkowitz (Mark, not Marl) citation. \n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to the Concerns Raised. \nWe thank the reviewer for the helpful comments. \n\nAt the onset of COVID-19, our main goal was not so much creating novel scientific knowledge but quickly and effectively disseminating validated information about COVID-19 to help people to better protect themselves against this disease. To race against time, we chose to rely on existing technologies that may not be perfect and add scientific evaluation later during the field work. We believe that sharing our experience of building Jennifer help shred lights on research directions to help fight the current and future crisis. We also release a dataset that can be used to advance NLP research (e.g. QA and semantic similarity). \n\n>> Concerns about citations.\n\nReferences are provided to support information and statements incorporated in the article. We have added more citations to recent work on chatbots and converted some to footnotes. \n\n>> the submission has no ‘Results’ section.
\n \nThe usage statistics are provided in Sec. 1. We do intend to conduct formal user studies in the future. \n \n>> The engineering decisions that ... largely not substantiated with supporting literature or first-hand evaluation.\n\nJennifer is a proof-of-principle system. This paper describes the first steps in its design, implementation, and initial evaluation. The chat design is informed by existing literature (e.g. Walker and Whittaker, 1990 and Pan, et al. 2005). To the best of our knowledge, the currently available literature does not report examples of other chatbots that were designed with a similar scope to Jennifer (Miner et al. 2020, Larajano et al., 2018). We do intend to conduct formal user studies in the future.\n>> it is not clearly demonstrated that this project addresses misinformation\n \nSocial media platforms and several websites available to the general public are plagued with misinformation. The UN secretary-general has warned we’re living through “a pandemic of misinformation,” and the head of the WHO has said we are in an “infodemic.” By being easily accessible to the general public, this platform combats misinformation by redirecting users to trusted sources, and answering questions in a way that satisfies the criteria described in section 2.3. \n\nIn addition, the WHO has published a COVID-19 myth busters page with questions such as whether the virus would be a bioweapon. Jennifer provides answers for such questions as well. \n\n>> It is not fully clear how accountability is ensured\n\nThe answers are manually curated by volunteers and validated by trusted medicine experts. Answers always include a link to a trusted source, so that end users can \"learn more\". In general, every answer is evaluated by at least three volunteers: one curator, one tester, and one administrator. As information related to COVID-19 is evolving over time (e.g. the usage of masks), some answers were revised, tested, and validated more than once. We have a dedicated team of volunteers who revise QA pairs as information evolves. In addition, when resources are conflicting, we provide an answer based on the most authoritative resource (e.g. CDC) but also include the alternative answer as appropriate. We rely on our trusted volunteer medicine experts to make the judgement call. \n\n>> The Abstract refers to COQB, but the main article never does.\n \nWe've added a reference to the data to the beginning of Sec. 2.3 and more details to Appendix B.\n \n>> why would an English language source suddenly be necessarily inadequate when a person asks a question in French!?\n\nWhile we agree that the main scientific concepts in an answer do not change with language, there are language style and cultural differences that need to be considered when drafting an answer. We also want to make sure that relevant information is available in language-specific websites, and direct end-users to trusted sources. Furthermore, we do not believe that we could assume that our readers understand English. \n\n>> \"Jennifer has successfully demonstrated that, ....\"\n--> This is a statement that is not supported by anything reported in the submission.\n\nAs we discuss in the text, Jennifer facilitates access to information by providing easy to understand, empathetic, and curated answers to end-user questions, while redirecting them to reputable sources.
Rather than browsing independently and potentially accessing unreliable information, end users can rely on Jennifer to provide a trustworthy answer, that is backed up by reliable sources. \n\n>> full documentation of tasks, workflow, and sequencing is not provided in this manuscript, and use of volunteer time is not assessed.\n\nSec. 2.1 describes the workflow and sequencing of tasks. Some details are omitted due to space limit. \n\n>> Only one of these three claims (in conclusion) is supported by the presented material.\n\nWe have modified the text to emphasize areas where these claims are supported.\n\nWe have fixed all other minor issues related to citation and writing as pointed by the reviewer. ", "role": "user" }, { "content": "Thank you for the authors' reply. They have addressed most of the issues I raised, and I will likely adjust the score higher.", "role": "assistant" } ]
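The review turns in these rows all contain a line of the form "Rating: 9 strong accept, excellent work" or "Rating: 4 possibly reject, but has redeeming facets", so numeric ratings can be recovered with a simple pattern match:

```python
# Extract the numeric rating from the first turn that contains a "Rating:" line.
import re

def extract_rating(messages):
    for turn in messages:
        m = re.search(r"Rating:\s*(\d+)", turn["content"])
        if m:
            return int(m.group(1))
    return None

print(extract_rating(ds[0]["messages"]))  # 9 for the first row in this preview
```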
HxIZzQZy_0F
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```", "role": "user" }, { "content": "Title: Impressive system, lacking crucial details\nReview Content: The authors describe a chatbot and a dataset of frequently asked questions regarding the SARS-CoV-2 disease. They enumerate the challenges in building an effective health chatbot. They introduce a dataset of SARS-CoV2 and COVID19 questions.\n\nI would like to congratulate the authors on having an actual deployed system. I have many concerns about the paper and the datasets themselves that hopefully the authors can address.\n\nPros:\n1. The authors collect what might be a high quality dataset harvested from many QA sources\n2. They provide compelling examples of good bot behavior.\n3. They provide a reasonable roadmap forward.\n4. They are displaying great care in deployment of their system, particularly beyond their initial input domain.\n5. I am a huge fan of the ongoing curation of the answers in their dataset.\n\nConcerns:\n1. The authors should provide a brief overview of the capabilities from Juji: Does the question/QA grouping come from Juji itself, or do the authors have some other approach? (From a brief skim of the Juji paper it seems that the authors must have taken some alternative approach)\n2. There is no measure of question/QA grouping. Presumably this should be a very high precision grouping since incorrectly including a question could mean it has a wrong answer.\n3. Similarly, there is no measure of inter-rater agreement on the evaluation of the answers in the crowd sourced context. A measure such as Krippendorf's alpha (or many others) would be appropriate here and essential to knowing how well the system performs overall. A measure like this becomes doubly important when the authors are attempting to fight the spread of misinformation.\n4. How were the automatically generated question generated? If you used Juji templates, what are some examples? Given the constraints of available structured data (e.g. CDC data), what types of questions did you choose to answer, and which ones did you not? \n5. For the questions gathered from multiple sources, how are conflicts resolved? 
Presumably two sources do not always have consistent answers.\n6. How many people evaluate each question/answer pair?\n7. How is the translation performed? I can't tell if it's automatic with manual curation or entirely manual. If there was an automatic component, what system did this?\n8. I am concerned about the use of \"engagement\" as a metric of success. This becomes even more concerning to me when more time spent interacting with a system is seen as a measure of its success - while I agree that when answering questions one might need further clarification or to know about related questions and answers - encouraging people to spend more time interacting with a system could lead to a perverse incentive where it keeps users \"engaged\" without answering their questions. \n\nThe authors have built an impressive system, but I am reluctant to recommend it for acceptance without further details in data generation and evaluation. If these details were added (even for the system/questions/answers at a particular point in time), I would gladly recommend this paper for acceptance.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to the Concerns Raised\nThank you for your detailed comments and feedback. We have seemed to address all concerns in the revised version as detailed below. \n\n>> The authors should provide a brief overview of the capabilities from Juji\nThe main capabilities Juji provides is Dialog Management, including given a user question, matching the question to a known question based on a pre-trained machine learning model and return the corresponding answer for the question group where the question below to. The ML model also gradually fine-tuned to the specific workspace based on user feedback (e.g. selection on ambiguous questions). We have added a brief description about to the beginning of Sec. 2.1 (this is a 4-page short paper submission, hence the brevity). \n\n>> 2. There is no measure of question/QA grouping. Presumably this should be a very high precision grouping since incorrectly including a question could mean it has a wrong answer.\n\nThe grouping is done manually, so it is very precise. We've clarified this at the beginning of Sec. 2.3. \n\n>> 3. Similarly, there is no measure of inter-rater agreement on the evaluation of the answers in the crowd sourced context. \n\nThe measure of inter-annotator agreement doesn't apply, as we only add/update an answer when it passes both validation stages by trusted medical experts. Similarly an alternative question is added only when it passes validation by trusted admins. We've clarified this further towards the end of the last paragraph of Sec. 2.1. \n\n>> 4. How were the automatically generated question generated? If you used Juji templates, what are some examples? Given the constraints of available structured data (e.g. CDC data), what types of questions did you choose to answer, and which ones did you not? \n\nWe have provided more details in Appendix A. To sumamrized, we focus on answer questions related to case numbers and death rates in US states with data from CDC and different countries with data from WHO. The auto-generated questions are also included in the COQB-19 dataset. We also provided more details in Appendix B about the data. \n\n>> 5. For the questions gathered from multiple sources, how are conflicts resolved? 
Presumably two sources do not always have consistent answers.\n\n\nThe answers are manually curated by volunteers and validated by trusted medicine experts. When two resources are conflicting with each other, we will provide an answer based on the most authoritative resource (e.g. CDC) but also include the alternative answer as appropriate. We rely on our trusted volunteer medicine experts to make the judgement call. \n\n>> 6. How many people evaluate each question/answer pair?\nThe exact number varies. In general, every answer has been evaluated by at least three volunteers: one curator, one tester, and one admin. As information related to COVID-19 is evolving over time (e.g. recommendation on the usage of masks), some of the answers were revised, tested, and validated more than once. \n\n>> 7. How is the translation performed? I can't tell if it's automatic with manual curation or entirely manual. If there was an automatic component, what system did this?\n\nIt is done manually. As questions and answers are added to the Jennifer QA database, our volunteer translators incorporate the respective QA translations and links to trusted sources. We've clarified this point in the last paragraph of Sec. 2.4. \n\n>> 8. I am concerned about the use of \"engagement\" as a metric of success. This becomes even more concerning to me when more time spent interacting with a system is seen as a measure of its success - while I agree that when answering questions one might need further clarification or to know about related questions and answers - encouraging people to spend more time interacting with a system could lead to a perverse incentive where it keeps users \"engaged\" without answering their questions. \nWe agree that that \"engagement\" should not be the ultimate measure of success of our system. We do plan to conduct formal user studies in the future to evaluate Jennifer with different metrics such as NPS and user sentiment. \n", "role": "user" }, { "content": "Title: Going the right direction!\nOverall, I'm pretty pleased with the updates and will bump my score a little bit; remaining points below. Basically I want a plan for evaluation before I become enthusiastic about accepting this.\n\n> The measure of inter-annotator agreement doesn't apply, as we only add/update an answer when it passes both validation stages by trusted medical experts. Similarly an alternative question is added only when it passes validation by trusted admins. We've clarified this further towards the end of the last paragraph of Sec. 2.1\n\na) each of these stages is still probabilistic, you already understand this as you have multiple stages of QA\nb) if something can be rejected at any stage, you do in fact have something to measure agreement against (it's not a perfect measure, but it would be better than nothing)\nc) if you don't have some measure of agreement, you should at least have a plan for collecting one\n\nHaving something like this, in mind, becomes effectively non-negotiable for full conference publications, but as a work-in-progress I care more about a long-term plan.\n\n> How many people evaluate each question/answer pair? The exact number varies. In general, every answer has been evaluated by at least three volunteers: one curator, one tester, and one admin. As information related to COVID-19 is evolving over time (e.g. 
recommendation on the usage of masks), some of the answers were revised, tested, and validated more than once.\n\nA less important note - if you track sources and different versions of questions/answers, this could be a valuable resource for investigating the flow/change of information in a relatively fast-moving world.\n\n\n> We agree that that \"engagement\" should not be the ultimate measure of success of our system. We do plan to conduct formal user studies in the future to evaluate Jennifer with different metrics such as NPS and user sentiment. \n\nThis should probably be detailed, at least in an appendix\n", "role": "assistant" }, { "content": "Title: Preliminary Plan for Evaluation Added\nThank you for your positive feedback and insightful comments. \n\nWe have added Appendix C. Preliminary Plan for Evaluation with initial thoughts on how we plan to evaluate Jennifer from both the users' and the volunteers' perspectives, both quantitatively and qualitatively. Please do let us know if you have further comments or suggestions. \n\nWe also plan to seek collaborations with experts in human-computer interaction to carry out some of the studies. We welcome any researcher interested in such opportunities to contact us directly. \n\n", "role": "user" }, { "content": "I am very happy to have the above discussion with the author, which has positively impacted the strength of this work.", "role": "assistant" } ]
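The sticking point in the exchange above is a measurable notion of agreement between the people who validate answers. The sketch below is a hypothetical illustration of what such a report could look like once accept/reject decisions are logged: it computes raw agreement and Cohen's kappa over two invented decision lists, with Cohen's kappa standing in for the Krippendorff's alpha the reviewer names (alpha generalizes the same idea to more than two raters and missing ratings).

```python
# Hypothetical illustration (not from the paper): how agreement between two
# validation stages could be reported if accept/reject decisions were logged.
# Cohen's kappa is the two-rater special case of the agreement statistics the
# reviewer asks for.
from sklearn.metrics import cohen_kappa_score

# Invented accept (1) / reject (0) decisions on the same batch of candidate answers.
curator_decisions = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
expert_decisions  = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

raw_agreement = sum(c == e for c, e in zip(curator_decisions, expert_decisions)) / len(curator_decisions)
kappa = cohen_kappa_score(curator_decisions, expert_decisions)

print(f"raw agreement: {raw_agreement:.2f}")  # share of identical decisions
print(f"Cohen's kappa: {kappa:.2f}")          # the same agreement, corrected for chance
```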
HxIZzQZy_0F
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<HxIZzQZy_0F>>```", "role": "user" }, { "content": "Title: N/A\nReview Content: The paper provides a good description of the COVID-19 Q&A chatbot and the process of creating the crowd-sourced Q/A pairs that the bot uses for question answering. The authors also outline the opportunities and challenges of implementing COVID-19 chatbots in the real world.\nWhile the study authors list examples of lessons learned as well as open challenges of using chatbots in a pandemic setting in which information changes quickly, it would be very helpful to better understand the use of a Jennifer-like chatbot in a situation in which the evidence-base is more stable. It would also be very helpful to understand the authors' perspectives on how chatbot technologies and their use could be effectively evaluated, and how such an evaluation could be better integrated into the chatbot development and validation of the provided responses. In addition, it might be helpful to provide examples of other chatbot technologies that have been used successfully during and prior to the pandemic, as this could put better into perspective the issues and imitations that are being raised in the manuscript. \nAnother important aspect regarding the use of chatbot technologies is their acceptance both by public health authorities as well as the intended users. In using novel technologies, governments and public health organizations typically tend to take a more conservative approach, and accept them only following rigorous testing and evaluation. Equally, there may be also hesitancy from certain population groups with formed/strong opinions to engage with chatbots, particularly when provided answers originate from just one or several sources (i.e. when the whole body of evidence is not provided). It remains unclear who is (are) the intended target audience(s) of the chatbot and, ultimately, its objective. It would be very helpful to have that information stated and discussed up front. Lastly, it is important to recognize the significant limitations of the use of chatbot technologies for \"fact-checking\" in the absence of a robust knowledge-base. 
What has been presented in the paper may be more applicable to situation in which there is a need to help users identify and navigate through the relevant content, particularly when information is not well structured or ambiguous. It would also be great if the authors could reflect in the sustainability of initiatives that have a very high reliance on volunteers \nI would recommend that the manuscript be accepted, as it provides a good example on the potential usefulness of chatbot technologies in supporting the general public to efficiently find credible information on the web. \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank You and Some Clarifications\nThank you very much for your insightful comments. We completely agree that Jennifer is a great example to demonstrate the potential usefulness of chatbot technologies in supporting the general public to efficiently find credible information on the web. \n\n>> Better understand the use of a Jennifer-like chatbot in a situation in which the evidence-base is more stable. \n>> Examples of other chatbot technologies that have been used successfully during and prior to the pandemic\n\nTo the best of our knowledge, , the currently available literature does not report examples of other chatbots that were designed with a similar scope to Jennifer (Miner et al. 2020, Larajano et al., 2018). We do agree that there is a need to better understand the use of a Jennifer-like chatbot in a situation with more stable evidence-base. There are several studies conducted before that compare Juji against other chatbot technologies. (https://juji.io/publications; we cited the most recent one Xiao etc. 2020). We intend to conduct further study when information for COVID-19 is more stable and comapare different chatbots surfaced to help fight against COVID-19. We welcome researchers interesting in the problem reaching out to us for potential collaboration,. \n\nLaranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25(9):1248-1258.\nMiner, A.S., Laranjo, L. & Kocaballi, A.B. Chatbots in the fight against the COVID-19 pandemic. npj Digit. Med. 3, 65 (2020). \n\n>> How chatbot technologies and their use could be effectively evaluated, and how such an evaluation could be better integrated into the chatbot development and validation of the provided responses.\n\nEvaluation of chatbots technologies is an important topic. \n\nThe design of Jennifer includes three different perspectives, which can be evaluated separately:\n(1) the technical design of Jennifer\n(2) the interaction of human users with Jennifer\n(3) the COVID-19 related information provided by Jennifer\n\nOur core techical design considerations are 1) Rapid development; 2) Ease of access; 3) Ease of maintenance; 4) Quality assurance. Because of (1) and (3), we selected Juji as our platform and was able to build the first version of Jennifer in less than a day. Several studies were conducted before that compare Juji against other chatbot technologies. (https://juji.io/publications; we cited the most recent one Xiao etc. 2020). It would be possible(but take some time) to compare the design of the chatbots reviewed by Larajano et al. 2018 with that chosen for Jennifer. The scope of these chatbots was different, however, as they were designed as assistants for self-diagnosis. 
\n\nIt would also be of interest to evaluate the acceptance of Jennifer by human users in real-life situations. We hope to conduct formal user studies in the future. \n\nCurrently, our evaluation and updates to the answers provided by Jennifer are mostly manual, with help from our volunteers. It would be interesting to create more automated evaluation methods and to allow users of the chatbot to provide feedback more directly and have it incorporated into the chatbot. We plan to explore this further in the future.\n>> Acceptance of chatbot technologies.\n\nChatbot technologies have been adopted by such organizations to help fight COVID-19. For instance, both WHO and CDC have used chatbot technology for assisting people in self-diagnosis (Miner et al., 2020). We agree that it is important to provide evidence when giving answers. This is precisely why, when Jennifer provides an answer to a question, references/links to the corresponding reliable resources are provided to help foster trust. \n\n>> Intended target audience(s) of the chatbot and, ultimately, its objective\n\nThe intended audience is the general public, who have difficulty navigating information about COVID-19 distributed across different resources. \n\n>> it is important to recognize the significant limitations of the use of chatbot technologies for \"fact-checking\" in the absence of a robust knowledge-base. \n\nAs with other scientific disciplines, knowledge on COVID-19 is evolving and, hence, the answers given by Jennifer have to be regularly reviewed and updated. A chatbot like Jennifer would likely not be a good solution for navigating information that is not well structured or ambiguous, because it relies on a human team that performs a careful review of the questions that it can understand and the answers that it can give.\n\n>> Sustainability of initiatives that have a very high reliance on volunteers\n\nAs shared in Sec. 3, we found that we need to support the initiative with more dedicated resources and better infrastructure to manage the large number of volunteers. In general, people are eager to help, but we need to have better support to ensure that their time and efforts are used in the most effective way. ", "role": "user" }, { "content": "I appreciate the authors' reply. The authors have answered almost all of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
JENSKEEzsoU
ACL 2020 Workshop NLP-COVID
AnonReviewer4
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer4, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```", "role": "user" }, { "content": "Title: Potentially a good resource\nReview Content: This work presents a question-answering dataset created on a subset of CORD-19. Obviously, having a large QA dataset is valuable and if done well, can be a useful research both in the bioNLP field. The dataset is created by finding \"answers\" to potential questions that the text under examination would answer, surely, this effort has taken many hours to create. For what has been done, much less time and effort is put into writing about it. The manuscript is vert fragmented and hard to follow:\n- Abstract: Not clear what exactly is done and what contributions are\n- Introduction: You could reduce explanation on CORD-19 (removing the name of organisations for example) and refer to it in a citation. Instead it was expected to see what the NLP problem here was and what are the contributions of this work. None of that is there.\n- Related work is hardly a literature review. What is the common practice in the NLP community in creating a QA dataset. This is an old area. Surely there would be reputable (not arxiv) papers there to check and compare? Why tools are there and why are they relevant given they are two IR tools mentioned? \n- Section 3 needs a full rewrite. All the important details are missing. What were the annotation instructions and why? Ho w many annotators did you have? Was there any quality control? Any inter-annotator agreements for example?\n- Not sure if the paragraph before 3.1 makes any sense. BioBERT and SciBERT and so on are more representations. What is the relevance here?\n- In Section 3, it says annotators are told they are developing a \"method\". How is this a method? \n- Method section is quite weak but maybe enough for a dataset work. \n- Please check your references, even old ones are arxiv. Maybe they are published somewhere? or maybe you can find a peer-reviewed citation for some of those.\n\nOverall, even with the urgency of publishing COVID-19 related work, the paper should have acceptable quality which I believe this one need revision to get there. 
\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Mixed review giving some good suggestions for improvement but lacks foundation in other parts\n> Reviewers Intro\n\nThe reviewer might be lacking important knowledge of the domain because the dataset is not created by β€œfinding answers to potential questions” as stated by the reviewer but instead β€œThe researcher uses the mouse to highlight the answer (pink) and creates a question based on that answer (box on left).” (caption of Figure 1 in paper). This is also the annotation procedure of the prominent SQuAD dataset that the reviewer should be aware of.\n\n> Abstract: Not clear what exactly is done and what contributions are\n\nPlease be more specific, because we believe the abstract is rather clear (β€œOur dataset consists of 2,019 question/answer pairs annotated by volunteer biomedical experts”) as well as states our contributions (dataset creation + β€œWe found that the additional training on this domain-specific data leads to significant gains in performance”).\n\n> Introduction\n\nAgreed. The first part could be a bit more crisp while the latter part could be more elaborate. We left the first part untouched but added more details to the contribution as well as motivation.\n\n> Related work [usage of arxiv references and why are two IR tools mentioned?]\n\n- Thanks for the hint. We changed references for SQuAD and MRQA to their respective non-arxiv versions. Other references in the related work are, given the recent nature of the crisis, preprints. \n- The last sentence motivates the mention of IR tools: β€œWe view our work as a complementary effort to aid in the evaluation and/or improvement to these existing tools”. Maybe you are aware of the discussion around the submission β€œRapidly Deploying a Neural Search Engine for the COVID-19 Open Research Dataset” where a lot of criticism revolves around the lack of proper evaluation. With our QA dataset these existing tools could be potentially evaluated.\n\n> Section 3 needs a full rewrite. All the important details are missing. What were the annotation instructions and why? How many annotators did you have? Was there any quality control? Any inter-annotator agreements for example?\n\n- Possibly the reviewer failed to notice the hyperlinks to detailed annotation guidelines as well as the videos explaining the annotation procedure. Possibly the reviewer printed out the paper? We had an internal discussion about footnotes vs hyperlinks and thought hyperlinks would be more useful in a paper linking many external resources + people cannot follow or click on footnotes printed on paper anyways\n- General remark. During the review process we added a lot of additional material especially to section 3. Looking now at the section we realized we should divide it into smaller and more digestible parts, which we did by introducing subsections β€œ3.1 Article and Annotator Selection”, β€œ3.2 QA Annotation Process” and β€œ3.3 Dataset Analysis”\n- Number of annotators: Very good point. We added it to section \"3.1 Article and Annotator Selection\"\n- Was there quality control: Quality control is stated in the paper that each annotation was verified by the annotation team lead. 
We made this more explicit in the newest revision in section \"3.2 QA Annotation Process\", first passage.\n- Inter annotator agreement (also criticized by other reviewers): as stated in the discussions part the dataset does not contain multiple annotations per question. So inter annotator agreement is not possible to compute. We state the reasons for this in the discussion section as well.\n\n> Section 3. What is the relevance of BioBERT and SciBERT\n\nTrue, Bio- and SciBert are Language Models for creating representations (of domain text), these domain representations can be used for doing domain specific QA. All these domain efforts have in common to improve the performance in that domain. This was our hope when creating COVID-QA, improving performance of search systems in this domain.\n\n> Section 3, it says annotators are told they are developing a \"method\". How is this a method? \n\nWe agree, this is a confusing choice of words and change β€œmethod” to β€œdatapoints”.\n\n> Method section is quite weak but maybe enough for a dataset work. \n\nThanks, we take this as a compliment.\n\n> Please check your references\n\nAgreed, referencing preprints instead of peer reviewed publications is not good practice. We went through the references and changed them where applicable.\n\n\n\n\n\n\n\n", "role": "user" }, { "content": "Title: Answer to the authors\nThanks for the clarifications.\n\nI just want to add that my concerns were to help improve the writing of the paper. When you have a dataset paper, referring your reader to hyperlinks is a bit of a stretch and some of the annotation guidelines and important annotation related information must stay in the paper. I had looked in those links but I'm not reviewing those but what you have written here. \n\nFor the current draft:\n- The abstract still needs improvement. Here's what the flow breaks: \"...This presents, to\ndate, the largest manually created Question Answering dataset on Covid-19 related material. We began with a RoBERTa base model that was initially\" Abruptly RoBERTa is mentioned. Again this is not a method paper so you could simply say we used benchmarked our dataset using X methods. \n\n\n- Tools: Again, I'd think there are tools that could be more QA and less IR? Or are there none? No objection to keep IR tools but there should be something on QA as it's most relevant to you. ", "role": "assistant" }, { "content": "Title: Response\nThanks for the prompt answer. We really appreciate these more specific suggestions because they outline potential for improvement.\n\n\n> annotation guidelines and important annotation related information must stay in the paper\n\nWe agree that some of the annotation guidelines need to be in the paper. Though we largely followed \"the SQuAD style label process\" which is well known and we reference in the paper. We included where we diverged from the SQuAD dataset creation process: annotating on longer documents, missing n-way annotations and possibly less controlled quality checks. We believe omitting details that referenced papers explain is a reasonable thing to do but are open for discussion.\n\n> The abstract still needs improvement. Here's what the flow breaks [...]\n\nThanks for specification. We acknowledge that it is a break of flow, but would argue that this break is common practice: creating a dataset, then evaluating this dataset. 
For more clarity, we could add in another revision: \"To evaluate the dataset, we began with ...\"\nConcerning the use of RoBERTa: We feel that BERT or RoBERTa are nowadays well known enough for people to understand the relation between a QA dataset and a RoBERTa model. Do you have a proposal to state the evaluation more clearly while being concise and to the point?\n\n\n> Tools: Again, I'd think there are tools that could be more QA and less IR? \n\nIt is difficult to apply pure BERT QA models (joint computation of question and document in one model) in real time to incoming user questions over a whole database of documents. So realistic user-facing tools should use IR-related components. Nevertheless, the tools stated do not just retrieve documents but highlight potential answers, which we believe is more QA than IR. ", "role": "user" }, { "content": "I am grateful for the opportunity to discuss these points with the author, which undoubtedly enhances the strength of this paper.", "role": "assistant" } ]
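The closing point above, that a pure extractive QA model cannot be run jointly over every document for each incoming question and therefore gets paired with an IR step in deployed tools, is easy to make concrete. The sketch below is an assumed minimal retriever/reader split, not the authors' system: TF-IDF similarity selects the most relevant passage, and only that passage is handed to an extractive reader; `deepset/roberta-base-squad2` is used as one example of a publicly available SQuAD-style checkpoint.

```python
# Assumed minimal retriever/reader split (not the authors' system): a cheap
# TF-IDF step picks the most relevant passage, and only that passage is passed
# to an extractive QA model. Corpus, question, and checkpoint are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import pipeline

passages = [
    "The incubation period of the virus is estimated to range from 2 to 14 days.",
    "Handwashing with soap for at least 20 seconds reduces transmission risk.",
    "The study enrolled 1,200 participants across three hospitals.",
]
question = "How long is the incubation period?"

# 1) IR component: rank passages by TF-IDF similarity to the question.
vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)
question_vector = vectorizer.transform([question])
scores = (passage_vectors @ question_vector.T).toarray().ravel()
best_passage = passages[scores.argmax()]

# 2) Reader component: run the comparatively expensive QA model on the top passage only.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
prediction = reader(question=question, context=best_passage)
print(prediction["answer"], round(prediction["score"], 3))
```

Swapping in a stronger retriever or a domain-tuned checkpoint leaves this structure unchanged.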
JENSKEEzsoU
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```", "role": "user" }, { "content": "Title: QA dataset on CORD-19\nReview Content: Summary of Work: This work provides a new resource, a QA dataset built on top of CORD-19 annotated for span prediction task in form of SQuaD dataset. The dataset contains 2K question from 147 articles (I am not clear on this point, the authors say they extracted 445 documents). They also built a model on top of this dataset by finetuning roberta base and find improvement over original roberta base.\n\nWhile the dataset may be useful, we don't really know what criteria the annotators used to annotate the questions (How do they select what to annotate as answer? , what is considered important? - Do note that the authors also provide online videos of instructions that I have note reviewed which may contain the answer to this.)\n\nAlso the authors mention the questions can be reviewed/corrected by other annotators but again I don't know if it was done for every question and by how many other annotators. The annotators are said to be volunteers (I assume that to mean they were not paid) but I am not sure how they were recruited and how their qualification were evaluated (the paper says the authors require them to have a masters in biomedical sciences, although not sure how that is checked).\n\nIn general, the paper provides us some info about the new dataset, but the details are left out. As such, I am not confident in its utility for useful (and trustable) research .\n\nThe modelling framework is straightforward although their may be obvious improvements that can be made by training on biomed corpora. The top-n-accuracy metric needs more explanation (does the overlap mean overlap in text or position -- the first one is useless since we overlap on stopword easily, the second one might be more useful, but more analysis is needed).\n\nThis work needs a spell check (caronavirus -> coronavirus, fee -> see, and more).\n\nI would note that a lot of my criticism maybe resolved by providing more details. But without them, I cannot truly provide a good review of the dataset itself. 
Therefore, currently I am giving a marginally below threshold score (and not a clear rejection since the dataset *may* be useful)\n\nUPDATE: The authors answered questions presented above. The concern still remains about the quality of annotations, as well as the utility of model released. On the basis of this, I have increased my score.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Responses to all points in order of occurrence\nThank you for this timely and detailed analysis of our paper. We are keen on adding more details to the paper. We left out some information because we did not want to overburden the reader in the first place but are happy to elaborate more.\nFollowing is a list of responses - we also mention where we applied the relevant changes to the paper.\n--------------\n\n> Clarification on number of articles annotated\n\nGood point, it is a bit unclear. We uploaded 450 articles to the annotation tool, 147 of those contain actual labels. We clarified this in the newer revision by stating: \"Of these 450 scientific articles, 147 were annotated over the 8 week period [...]\" under \"3.1 The Annotation Process\".\n\n> Annotation criteria\n\nYou are right, the guidelines can be found inside the videos mentioned in the paper. For convenience (not everyone likes videos) we also uploaded the annotation guidelines as pdf and will linked them in \"3.1 The Annotation Process\". You can also find them here: https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/Handbook%20-%20Labelling%20Tool.pdf\n\n> Correction of Annotations\n\nThis is an important point, also made by the reviewer no. 1. Unfortunately we cannot give a fully satisfying answer. We manually checked some of the labels and found them to be of good quality as well as created a slack workspace where people could discuss specific annotations. We added information about manual checks under β€œ3.1 The Annotation Process”.\nWe hope to create a more thorough and especially documented review of quality checks in a future version of this dataset.\n\n> Recruiting of Annotators\n\nWe have added how volunteering annotators were recruited to the paper under β€œ3.1 The Annotation Process”.\n\n> Modelling Framework\n\nAgreed, the modelling is not the best you can get with current methods and we talk about improvements to the modelling part in \"5.2 Modelling improvements\" where we also state your suggestion of training on other/biomed data.\n\n> More Explanations for top-n-accuracy metric\n\nThe overlap is on position as stated in the paper under \"4.2 Metric\": \"[...] we choose to take multiple\nmodel predictions and any positional overlap between prediction and label as metric and refer to it\nas top-n-accuracy. Top-n-accuracy compares the\ngold label against n model predictions and looks for\nany overlap between prediction and answer positions. [...]\"\nWe made this explanation more prominent and more explicit in the new revision.\n\n> Spell Check\n\nFixed, thanks for pointing out!", "role": "user" }, { "content": "Thanks to the authors for their clarification. It covers most of my concerns.", "role": "assistant" } ]
JENSKEEzsoU
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<JENSKEEzsoU>>```", "role": "user" }, { "content": "Title: Development of a QA dataset for COVID-19; a work in progress that requires revision\nReview Content: Authors have developed a QA dataset based on scientific articles about coronaviruses released as part of the COVID-19 Open Research Dataset (CORD-19). A small set of 450 articles were selected based on keyword matching, random sampling, and expert search, and around 2K Q-A pairs were developed against these articles, where question were generated by annotators and answers were sentences in the articles. Annotators were required to have a degree in the biomedical sciences. Also, the annotated dataset and a fine-tuned BERT-based model have been released. \n\nAuthors investigate an important problem, and the paper is well organized and well written. In addition, such a dataset could be a valuable resource as it enables computer scientists and clinicians to address their information need about COVID-19 based on reliable sources (scientific literature).\n\nHowever, there are few shortcomings that prevents me from recommending the paper for acceptance at this time: \n\n1. Data Quality: Each QA pair has been annotated by only one annotator. I suggest having the data annotated by at least two annotators followed by a third one (or a discussion between the first two annotators) to adjudicate the disagreements. It's important to consider that a good model should learn the actual task as apposed to optimally mimicking the behavior of a single annotator. In addition, I wonder if annotators labeled \"all\" sentences of articles that answer the questions. This is important for correct evaluation (models should not be penalized for correct predictions that are not labeled). 
In addition, I found many questions in the dataset that:\n\na): are not necessarily relevant COVID-19:\n- \"What is the main cause of HIV-1 infection in children?\"\n- \"What plays the crucial role in the Mother to Child Transmission of HIV-1 and what increases the risk\"\n- \"What is the molecular structure of bovine coronavirus?\"\n- \"What causes the outbreak of SARS and MERS?\"\n\n\nb) are general with potentially multiple correct answers:\n- \"Who plays a role in regulating the immune response to a vital infection?\"\n- \"Which species are more prevalent but less severe?\"\n- \"What is a significant cause of Influenze like illness among healthy adolescents and adults presenting for medical evaluation?\"\n\n\nc) combined several answers together:\n- Q: \"Which species are more prevalent but less severe?\"\n A: \"HCoV-HKU1, HCoV-OC43, HCoV-NL63, and HCoV-229E\"\n\n\nIn addition, \"no answer\" questions are probably the most interesting and prevalent questions about COVID-19. Such questions have not been developed in the context of this work.\n\n\n2. Modeling: Articles were splitted into multiple chunks of text and only two chucks (a positive chunk containing the answer of a given question and a randomly sampled chunk as negative example) were used to test the model. This design choice can oversimplify the task. In addition, given that the \"no answer\" output of the model was disabled, I'm not sure how the model works on the negative chunks. What should the model predict in cases of negative chunks?\n\n\n3. Evaluation: Top-n-accuracy measure is defined as follows: \"it compares the gold label against n model predictions and looks for any overlap between prediction and answer positions.\" It is not clear how accuracy is derived from overlaps. But, if \"any\" overlap between prediction and answer positions is treated as a hit, then Top-n-accuracy is not a good/reliable metric because a small overlap (e.g. a stopword) is just enough to treat a wrong prediction as a correct answer, and an untrustworthy algorithm as a trustworthy one. This is specially problematic for questions with long answers.\n\nI recommend authors to consider and address the above comments and re-submit their papers.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Responses to all points mentioned - an ordered list based on our perceived importance\nWe highly appreciate the timely as well as accurate review of our efforts and are motivated to improve upon our work with the help of this review. The mentioned points are all relevant and show a high degree of insights into the domain and a thorough analysis of the paper and especially the accompanying annotated dataset. We hope to address each issue in a satisfactory manner. We will especially address changes to the paper - significant changes to the dataset are, by nature of a volunteer led expert annotation project, out of scope before submission deadline. Thanks also for acknowledging strong points of our paper, that an important problem is investigated and the paper is well organized as well as well written.\n\nWe reordered your points based on our perception of how important they are to address.\n\n>1. Data Quality: Each QA pair has been annotated by only one annotator.\n\nAgreed. Multiple annotations per QA pair are very important. 
The current version of the dataset misses these n-way annotations, a limitation which we state in \"5.3 Dataset Improvements\". \nWe manually checked QA pairs by experts that were familiar with both the label tool and the domain. These manual checks are unfortunately left undocumented. \nWe also set up a slack workspace for volunteers to discuss certain annotations. We might be able to address this in a future version of the dataset.\nWe added details of the manual checking under β€œ3.1 The Annotation Process” \n\n> 2. Modeling: unfair test scenario\n\nThere might be a slight misunderstanding. Only training data is downsampled, tests are done on the whole article.\nThis downsampling made training much faster since there are on average ~36 negative passages per each positive passage.\nTesting is done on the whole article, so we believe the test scenario is not unfair.\nWe made the downsampling procedure more explicit under β€œ4.3 Evaluation” and also emphasize the size of the documents (6118.5 mean tokens per doc COVID-QA vs 153.2 tokens for SQuAD) as being a challenging test scenario under β€œ5.1 Interpretation of Results”.\n\n> 3. Evaluation: Top-n-accuracy\n\nYou are correct, \"any\" overlap creates a binary hit or miss scenario so accuracy can be computed. It is important to note that overlap in position is used, not overlap of strings.\nWe hope the positional overlap on long documents reduces the reliability concern. To be more certain we analyzed a subset of predictions, available at https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/predictions.tsv, and cannot find meaningless predictions (e.g. stopwords).\nWe made the paper more explicit about how accuracy is calculated under β€œ4.2 Metric”.\n\n> 1. Data Quality: a): are not necessarily relevant COVID-19\n\nThe articles were from the CORD-19 dataset which originally contained a fairly generic set of scientific articles. The number of COVID-19-specific ones is understandably low given the newness of the outbreak. We attempted to narrow the CORD-19 articles even further by extracting more relevant subsets. Nevertheless, SARS and MERS have similar domain-specific relevance to COVID-19 since they define similar viral pandemics. HIV-1 infection mechanisms (and indeed all viral infection mechanisms) are highlight relevant to COVID-19 researchers as well. In order to keep the COVID-QA model generalizable, we felt that including these non-COVID (but viral outbreak related) articles was highly relevant. We added a more detailed explanation of chosen articles under \"3.2 Annotation Process\" (first paragraph)\n\n> 1. Data Quality: \"no answer\" questions are probably the most interesting\n\nAgreed, \"no answer\" annotations would be very helpful in creating good QA systems and have not been included in this dataset as mentioned in the paper under \"5.3 Dataset Improvements\".\nUnfortunately we see true \"no answer\" annotations as out of scope for the current version for the following reasons: \n1. True \"no answers\" as present in Natural Questions are hard to get, because we would first need \nnaturally occuring questions - not questions constructed by looking at some text but questions by actual users using a search engine.\nSince we do not operate such a search engine we only might be able to address \"no answers\" in another version of the dataset, if we get search queries by one of the CORD-19 search engines. \n2. One could artificially construct \"no answers\" as in SQuAD v2.0. 
The usefulness of those constructed \"no answers\" is debatable.\n\n> 1. Data Quality: b) are general with potentially multiple correct answers\n\nWe added the internal discussions about multiple annotations per question to the paper under β€œ3.1 The Annotation Process”.\n\n> 1. Data Quality: c) combined several answers together\n\nThe combination of several answers only happens when the answer is consecutive text. If it were not consecutive text, this problem boils down to 1b above. Variance in answer length is a reason we stated for choosing top-n-accuracy in \"4.2 Metric\".\n\n", "role": "user" }, { "content": "I value the authors' response. It covers most of my concerns, and I'm open to increasing the score.", "role": "assistant" } ]
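Because the top-n-accuracy definition debated above is purely positional, it can be restated in a few lines. The sketch below is a reader's reconstruction of the stated definition, not the authors' evaluation code: a question counts as a hit if any of the top n predicted character spans shares at least one position with the gold span; the span values are invented.

```python
# Re-implementation of the top-n-accuracy idea as described: a hit if ANY of
# the model's top n predicted (start, end) spans overlaps the gold span by
# position, not by string match. Span values below are toy examples.
def spans_overlap(pred_span, gold_span):
    """True if two (start, end) character spans share at least one position."""
    pred_start, pred_end = pred_span
    gold_start, gold_end = gold_span
    return pred_start < gold_end and gold_start < pred_end

def top_n_accuracy(predictions, gold_spans, n=5):
    """predictions: per question, a ranked list of (start, end) spans;
    gold_spans: per question, the gold (start, end) answer span."""
    hits = 0
    for ranked_spans, gold in zip(predictions, gold_spans):
        if any(spans_overlap(span, gold) for span in ranked_spans[:n]):
            hits += 1
    return hits / len(gold_spans)

# Toy example: first question is hit by the 2nd prediction, second question is missed.
preds = [[(10, 14), (120, 160)], [(0, 5), (40, 60)]]
golds = [(130, 155), (300, 320)]
print(top_n_accuracy(preds, golds, n=5))  # -> 0.5
```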
JQCYcdHfXyJ
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```", "role": "user" }, { "content": "Title: Valuable resource\nReview Content: The paper describes a novel dataset containing answers to a survey that asked participants to describe in short and long text their emotional states and rate their levels of anger, anxiety, desire, disgust, fear, happiness, relaxation, and sadness. The dataset includes responses from 2500 UK residents collected during a critical period of the cover-19 pandemic.\n\nThe paper also investigated the predictive performance of lexicon-based and supervised classifiers. Preliminary analyses of the data showed significant correlations between the respondents emotional ratings and the scores inferred by LIWC for the same emotions. However, these correlations are much higher for longer texts, which is not surprising since LIWC was not created for microblog style content and longer text has more words that can match with the lexicon. Similar trends were observed with a supervised model trained to predict the emotional ratings given the text answers. Finally, topic analysis of the data showed that the main topics expressed in shorter text are different than in longer text. \n\nThe paper is well written, well scoped and well executed. The dataset is very interesting and it will be useful for the NLP community not only from the standpoint of understanding how people respond to global crises but also to better understand how the characteristics of social media might influence what kinds of information people decide to share. The cautionary tale about using Twitter to study the public's reactions to these kinds of events is worth a closer look. I am not sure how much we can extrapolate from this, given that in real life people can post more than one tweet about a subject, and they do so spontaneously which is different from being asked to post about something specific on a survey. Some \"experimenter effect” might be at play here. 
\nRating: 8 accept, good paper\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to reviewer\nThanks for the positive and helpful review. \n\nWe appreciate your comments and will incorporate the points raised. We fully agree that the Tweet-style texts elicited in the study do not fully mimic a 'natural' setting in which Twitter users may write about their emotions and COVID-19. Indeed, on Twitter people may write multiple Tweets about a subject or reply to other users. Therefore, we have noted this as a limitation in our revised paper.\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
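The correlation analysis summarized in the review above (lexicon-derived emotion scores versus participants' self-reported ratings, for short and long texts) reduces to a Pearson correlation per emotion. A minimal sketch follows, with fabricated values standing in for the real per-text scores and survey ratings.

```python
# Fabricated example of the per-emotion correlation analysis: Pearson's r
# between a lexicon-derived score for each text and the participant's
# self-reported rating of the same emotion.
from scipy.stats import pearsonr

lexicon_anxiety_scores = [2.1, 0.0, 4.3, 1.2, 3.8, 0.5, 2.9, 1.7]  # e.g. % of anxiety words per text
self_reported_anxiety = [6, 2, 8, 4, 7, 3, 6, 5]                    # invented survey ratings

r, p_value = pearsonr(lexicon_anxiety_scores, self_reported_anxiety)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```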
JQCYcdHfXyJ
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```", "role": "user" }, { "content": "Title: Good first steps toward understanding the UK public's emotional response toward covid19\nReview Content: The focus of this manuscript is to 1) describe a corpus of long and short lengthened emotional responses to the covid19 pandemic by the public, 2) identify meaningful associations between linguistic measures and emotional responses, and 3) develop prediction models for automatically classifying unseen texts according to their emotional response categories.\n\nQuality: The usage of LIWC lexicon to identify topical/linguistic information is a good start. I'm interested in how differently tools like empath (https://github.com/Ejhfast/empath-client) would perform in identifying pre-configured and new topics on this corpus and what additional insights could be drawn from it. \n\nThe correlation values appear low, but I'm wondering if that's due to out of vocabulary terms. Perhaps, a quick manual review of LIWC term coverage and a lexicon enrichment might help identify a stronger signal. \n\nSome emotions seem a little hard to tease out or have close relationships e.g., worry, anxiety, fear, etc. It would be interesting to understand commonalities and distinguishing characteristics in terms of linguistic measures of related/close categories. \n\nClarity: A few points should be clarified or explained: 1) did you collect any additional meta-data about the participants e.g., sex, gender, age, race, professions (essential vs. non-essential workers), etc. that could be useful to contextualize or identify particular worries among groups? 2) did these \"texts\" also include emoticons that could be used to convey emotional response and topical information? \n\nIt would also be interesting to sample more than 2 days. I wonder how the topics will shift as the pandemic unfolds. \n\nAlso, I would recommend revisiting the corpus title as its very UK-centric and covers a broad range of emotions. 
What about \"UK COVID19 Public Emotional Response dataset\"?\n\nOriginality: The coupling of the open survey with traditional linguistic and topic modeling approaches for examining the global threat has some originality. The predictive model serves as an initial baseline. It would be interesting to evaluate other traditional machine learning classifiers to establish reasonable baseline performance. \n\nSignificance: The topic is certainly significant and a nice first attempt at obtaining self-reported emotional concerns of the public. This is also a great tool that if deployed more broadly could capture insights across regions, countries, and continents.\n\nThank you for making this invaluable resource publicly available to researchers!\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer\n\nThanks for the review. The points raised will help us improve the paper for the revision and are much appreciated.\n\nPlease find our response re. your original points below:\n\nRe. Empath: great suggestions. We will look into this and add findings to the revision. Thanks a lot for pointing us to this fantastic tool.\nRe. LIWC dictionary coverage: we had that idea too initially but the coverage was relatively high (92.36% and 90.11% for short and long) which we saw as sufficient evidence that OOV terms are not the issue. The additional Empath analysis we will conduct can hopefully assist us in further understanding this.\nRe. emotion overlap: agreed. While these were taken from validated emotion questionnaires (with our addition of β€œworry”), we think correlational analyses will shed light on potential overlap. We add these to the revision.\nRe. meta data: yes, we have these data and will add them to the demographic statistics under section 2.3 Descriptive statistics\nRe. emoticons: good point - we will check how many there were (if any) and look at what they can tell us. We add information on this to the revision.\nRe. multiple days of sampling: agreed. We were debating this and think it’s something for future work. A complication is that so many events might affect the public’s response that ideally, we’d need high-frequency sampling. That is currently out of the scope of our funding unfortunately.\nRe. dataset name: agreed - we will amend the name to focus more on emotions. The revision will clarify this.\nRe. prediction models: we will explore more models and will add these to the revision (see also Reviewer 1’s point #8).\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The response addresses most of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
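The dictionary-coverage figures cited in the response above (92.36% and 90.11% of tokens matched for short and long texts) come down to a token-in-lexicon count. The sketch below shows the shape of that check with a tiny made-up word list in place of the proprietary LIWC dictionary, so the number it prints is meaningless; note that real LIWC matching also handles wildcard stems, which this naive version ignores.

```python
# Sketch of a dictionary-coverage check with a made-up mini-lexicon standing in
# for the proprietary LIWC word list; only the shape of the computation matters.
import re

lexicon = {"worried", "afraid", "family", "job", "money", "health", "scared"}

responses = [
    "I am worried about my job and my family's health.",
    "Mostly scared of losing money while the schools stay closed.",
]

tokens = [t for text in responses for t in re.findall(r"[a-z']+", text.lower())]
covered = sum(t in lexicon for t in tokens)
print(f"coverage: {covered / len(tokens):.2%} ({covered}/{len(tokens)} tokens in lexicon)")
```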
JQCYcdHfXyJ
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<JQCYcdHfXyJ>>```", "role": "user" }, { "content": "Title: Useful dataset\nReview Content: Summary: The paper introduces a self-reported dataset of 2500 short and long (each) texts about emotions of people during the COVID-19 pandemic. The paper then analyses the dataset in terms of LIWC properties, output of a topic model (most probable topics) and a linear regression model to predict emotion based on TFIDF and POS-based features. The dataset will certainly be useful. The fact that it is self-reported and has both short and long texts are noteworthy.\n\nSuggestions:\n1) Short and long texts by the same participant do not necessarily have to be 'parallel'/'analogous', it seems. If this is indeed the case, I would suggest mentioning so.\n\n2) It would be good to know the reason behind picking worry as the key emotion. (In other words, the choice of calling this a 'worry dataset' and not an 'emotion dataset' is not clear). The question asked to the participants (when they draft their text) does not mention 'worry' explicitly. They are asked to rate how worried they feel. However, in addition, the participants are also asked to record other emotions.\n\n3a) Section 2.2: The description accompanying worry clubs it with 'anxiety'. However, table 1 shows that only 55% participants reported anxiety.\n\n3b) Please elaborate \"The participants’ self-reported ability to express their feelings in the long text\". Was it a part of the form?\n\n4) It is not clear if the number of tokens is the same as vocabulary. It would be useful to know the vocabulary sizes of the datasets.\n\n5) The github repo also includes LIWC statistics of the texts. This could also potentially be useful.\n\n6) There is a low correlation between the 'concerns' (work, money, death?) and worry. In contrast, the top topics from the model include job and family. Is it surprising?\n\n7) It would be useful to add statistics on how frequently the participants reported using Twitter. This would be helpful to understand the quality of the short text.\n\n8) Observation: The classifier is not state-of-the-art. 
It would be useful to add citations to papers which use linear regression for emotion analysis.\n\n9) Was the linear regression model also trained using spaCy?\n\n10) Is the low MAE for worry related to the fact that worry was central to the annotation? Could there have been a bias in the annotations? The stdev for worry is also the lowest (as shown in Table 1).\n\nThe dataset would certainly be useful for future work. The analysis of the dataset (LIWC, topic models) is very interesting. The paper is easy to follow as well.\n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer\nThanks for the encouraging and helpful review.\n\nPlease find our response re. your original points in order below:\n\n1. The participants received the same instructions (β€œto write a text that expresses their feelings about COVID-19”) for both types of text. If they simply shortened the longer text for the Tweet-size version, we would have expected to see the same kind of topics. We will add a clarification about this in the revision.\n2. We agree that the focus on β€œworry” does not fully align with the instructions the participants received. While worries emerged as a general issue, the focus of calling the dataset β€œemotion dataset” is more adequate indeed. We will take this into account for the revision (as this was also raised by reviewer 2) and also change the name of the dataset to more adequately reflect the focus on emotions.\n3a) We will adjust the focus to β€œemotions” more generally. We included worry to be judged as participants might have a broader idea of worries compared to anxiety. However, these two are likely highly correlated (we will add this analysis). The revision will clarify this.\n3b) We add the data of self-reported ability to section 2.3 Descriptive statistics\n4. Agreed - we will add voc. size, tokens, and TTR to the corpus descriptives. \n5. -\n6. We should have clarified this better. The β€œconcerns” from the LIWC do not necessarily relate to negatively connotated concerns but more broadly β€œareas that concern people” rather than something they worry about. This could be why the correlation is low. That these areas are also emerging from the topics is interesting but needs clarification. We include this in the revision.\n7. Yes, fully agreed. We add these statistics to section 2.3 Descriptive statistics.\n8. Agreed. We will add references here and explore better models. The revision will include these.\n9. We used sklearn and will clarify.\n10. We will clarify (see also point 2 above) in the revision better that the core focus was on emotions. Another explanation could be that it naturally was so ingrained in the texts (i.e. people were predominantly worried), that the model was biased towards predicting high worry values. We will explore this further and add insights to the revision.\n", "role": "user" }, { "content": "I value the authors' response. Most of my concerns have been addressed. I continue to recommend this work for acceptance.", "role": "assistant" } ]
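Points 8–10 above concern the TF-IDF-based linear regression and its MAE on the worry score. A minimal sketch of such a baseline in scikit-learn (the library the authors say they used), with invented texts and ratings standing in for the real data:

```python
# Illustrative sketch of a TF-IDF + linear regression worry-score baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy placeholders, not the paper's dataset.
texts = [
    "worried about money and my job",
    "feeling fine, just bored at home",
    "scared for my elderly parents",
    "anxious about the news every day",
]
worry = [8.0, 3.0, 9.0, 7.0]  # continuous self-reported worry ratings

X_train, X_test, y_train, y_test = train_test_split(
    texts, worry, test_size=0.5, random_state=0
)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearRegression())
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```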
M4wgkxaPcyj
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<M4wgkxaPcyj>>```", "role": "user" }, { "content": "Title: Simple but effective method to detect COVID-19-related videos containing misinformation\nReview Content: This work proposes a simple but effective method to detect COVID-19 misinformation YouTube videos by using the percentage of conspiracy comments. While the authors did not examine video contents at all, the classification accuracy reached 82.2 by considering video title and the percentage of conspiracy comments only.\n\npros:\n- The approach is simple but works well.\n- The complete YouTube comments dataset and the Colab notebook file are publicly available.\n- Great efforts were made to construct COVID-19-related misinformation video datasets. \n\ncons:\n- While I have some minor concerns/suggestions, none of them are serious flaws. \n\ncomments:\n- Details in the labeling process should be added: For video labeling, did two authors independently label videos, and were all the labels agreed? For comments labeling, more explanations on the actual labeling process are required. For example, how many labels are collected per comment? \n- Comparison with a bag-of-words model: As the authors already have all the text of comments, building a bag-of-words model based on the first 100 comments (videos with less than 100 comments were easy to predict) is not super hard but will make the author’s approach more concrete by adding another baseline. As there are several attempts to infer the political leaning of news articles based on their comments, bag-of-words models employing comments could be a solid baseline in this case as well. \n- Agree & Conspiracy labels in Table 1: drawing 2x2 table might be helpful to understand the characteristics of Agree and Conspiracy as the authors mentioned β€˜their different linguistic properties.’ \n- Crowdtangle’s historical data of public Facebook posts (Section 3.1): citations should be added.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Great suggestions! 
Now implemented\n**Thank you very much for the great suggestions!** They allowed us to achieve better results and a more concrete approach. Here are our changes:\n\n> Details in the labeling process should be added\n\nWe added the necessary information about the labeling process. We had previously omitted it for spacing reasons. I agree that it is important to be transparent about labeling.\n\n> Comparison with a bag-of-words model\n\nThank you for the great impulse! This changed the implications of our study. Using the first 100 comments has a good performance to classify COVID19 misinformation videos (83.9%). Adding the conspiracy percentage the new accuracy is 89.4%. We updated the discussion, abstract, and contributions to take into consideration these new insights. \nThe results suggest that early detection of YouTube misinformation videos is possible, and could allow that videos like \"plandemic\" do not go viral and unnoticed for a long time. \n(Note: we used tf-idf instead of a bag of words model to tokenize the data. We tried both and tf-idf gave slightly better results. )\n\n> make the author’s approach more concrete by adding another baseline.\n\nThis made us reflect on the conspiracy comment classifier. We did not provide baselines here either. We decided to change this and included two baseline models, one with LWIC and another one with Multinomial Naive Bayes. The difference between baseline and transformer models is better appreciated on a new precision-recall plot that we introduced for this classifier. \n\n> As there are several attempts to infer the political leaning of news articles based on their comments\n\nThis is true! I added new citations on studies using user comments for prediction. Interestingly, Jiang and Wilson (2018) also used comments to detect misinformation. However, they only use linguistic signals and find no strong prediction value from user comments. (They did not separate between agree and misinformation labels for example)\n\n> Agree & Conspiracy labels in Table 1\n\nFirst, we eliminated Table 1 as the information was already in the text. We did not proceed to create the 2x2 table, but we performed $\\chi^2$ tests on the bag of words of the agree and conspiracy comments. Less than 1% of the tokens appear in both datasets in a significantly similar manner. We hope this discussion helps to understand why the two types of comments are different linguistically.\nOur motivation for dividing the labels is that previous studies have used simply misinformation or believe in misinformation as a label and reported results on classifying such post. The problem with not dividing between comments such as \"I agree with what you say\" and \"#pizzagate Justin Bieber is also part of the ring\" is that the classifier will learn more to differentiate between agreement and real misinformation content.\n\n> Crowdtangle’s historical data of public Facebook posts\n\nWe added the citation\n\nAdditionally the code has been updated on GitHub.\n\n*With the above modifications, the paper is now 5 pages long... I know that for ACL short papers it is allowed to have 5 pages to address the reviewers' comments...I hope this is not a problem, otherwise, I would remove some plots to keep the 4 pages limit*\n\n\n", "role": "user" }, { "content": "Title: Thanks for your R&R\nThanks for your quick r&r to address my concerns.\n\nI'm glad that the authors achieved better performance, believing that the study becomes more solid.\n\nI increased my rating. 
Thanks!\n\nRegarding the paper length, please directly contact the workshop chairs.\n\nI'm not sure whether the resubmission's extra length will matter for the final decision.\n", "role": "assistant" } ]
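The revision described in this exchange classifies videos from their first 100 comments with TF-IDF features, optionally adding the percentage of conspiracy-labelled comments as an extra signal. A rough sketch of that setup with made-up comments, labels, and percentages (the real pipeline and the transformer-based comment classifier are not reproduced here):

```python
# Illustrative sketch: TF-IDF over a video's earliest comments, augmented with
# the share of comments flagged as conspiratorial, feeding a simple classifier.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# One document per video: its first (up to) 100 comments joined into one string.
video_comments = [
    "this is a hoax planned by elites wake up sheeple 5g towers",
    "thank you doctor for the clear explanation stay safe everyone",
    "they are hiding the truth plandemic microchip agenda",
    "great summary of the who guidelines very informative",
]
conspiracy_pct = np.array([[0.62], [0.03], [0.71], [0.05]])  # invented fractions
labels = [1, 0, 1, 0]  # 1 = misinformation video, 0 = not (invented)

vec = TfidfVectorizer(max_features=5000)
X_text = vec.fit_transform(video_comments)
X = hstack([X_text, csr_matrix(conspiracy_pct)])  # append the numeric feature

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

Thresholding the classifier's predicted probability rather than its hard label is one way to trade precision against recall, which is the kind of comparison the added precision-recall plot would make visible.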
PlUA_mgGaPq
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```", "role": "user" }, { "content": "Title: Useful resources collected and introduced\nReview Content: This work is an introduction to a suite of resources and their corresponding ideas revolving around the COVID data provided by the Allen institute. Authors in particular provide Neural Covidex which allows exploration of information retrieval and NLP techniques over the COVID dataset.\n\nThe positives of this manuscript is as it also says gathering all the information and resources one would need to start exploring this dataset and further the techniques without having to spend weeks to put them all together. It is significant effort to get this far and of course authors are relying on their previous work over many years. There are some good pointers for some of the recent work in biomedical IR that is useful to know. I personally learnt from the content and enjoyed reading it.\n\nThis work however is written in haste (as it was needed due to the time limit of the current situation) and it is lacking the formal language of a scientific paper and of course evaluation. Obviously evaluation on this particular dataset is not straightforward given for example TREC is only starting on that effort. However though some initial analysis of the dataset could strengthen it. I was half expecting to see the discussion on evaluation metrics be a bit more mature and be given a separate section at least. This work as it stands, while very useful for the IR/NLP community it needs some reworking of the content to get to the state of publication in an ACL workshop. \n\n\n\n\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Yup, we'll update with TREC-COVID results when they become available! \nDear Reviewer, thanks for your feedback.\n\nYour comments are very much in line with those of reviewers #1 and #2. 
We'll update the paper with TREC results as soon as they are available.\n\nBest,\nJimmy (on behalf of all the co-authors)\n", "role": "user" }, { "content": "I value the authors' response. The response covers nearly all of the issues I raised. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
PlUA_mgGaPq
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```", "role": "user" }, { "content": "Title: Laudable effort, but currently more preprint material\nReview Content: This paper describes the rapid deployment of an IR system for the CORD-19 dataset. The system borrows state-of-the-art pieces, such as T5 and BioBERT, with training on MS MARCO. A search interface is shown that includes potential answer highlighting. Absent hard evaluation data, the authors provide more of a narrative of the system development, from initial conceptualization, early builds, deployment, and social media advertisement.\n\nFirst and foremost, the authors are to be congratulated for this work. The effort is most laudable.\n\nThe style and tenor of the paper should be described as somewhere between a Medium post and a work-in-progress preprint. While normally this would be out of place, it is less out of place in a workshop such as this one. The casual language is fine and almost cute, while the frankness is quite welcome, but the other preprint-like or web post-like aspects of the paper are more problematic.\n\nLet's start with the lack of evaluation, which the authors acknowledge that \"It is, of course, expected that papers today have an evaluation section that attempts to empirically quantify the effectiveness of their proposed techniques\". Of course, this empirical validation is what makes it science, as opposed to engineering/marketing. I certainly understand that the resource commitment to a TREC-style evaluation is beyond one group's ability under a pandemic situation. But the authors present the solution within their own paper: from footnote 14 it appears that the TREC-style evaluation is underway, and that should have preliminary results available well before this workshop's deadline. So why not simply issue this as a preprint to flagplant their admittedly laudable effort, then wait until some empirical results are available and submit *that* paper to the workshop? 
If the authors plan to submit a work with results as a separate publication, this should be made more clear and a more compelling argument needs to be made for the scientific usefulness of the \"behind the scenes\" story of this search engine.\n\nThe authors make the claim that instead of the importance of search result ranking (despite Section 3 being about how great the system is at ranking on other tasks), that instead what is really important are the interface improvements that increase the usability of the system. However, every real-world IR evaluation has come to exactly this conclusion: experts always request various bells and whistles. First, note that users can't see the relevant articles that were missed, so they have little choice but to comment on the interface. But more importantly, good ranking != good usability, this is no novel claim. But usability evaluation is a well-honed science, and the authors do not perform any kind of usability evaluation either. These are less resource-intensive as well, typically requiring just an IRB protocol and some experts willing to provide feedback.\n\nMy main stylistic issue is that in many places this paper comes across as a brag-fest for the accomplishments of the group. These accomplishments are beyond doubt, of course, which is all the more reason why the authors can avoid tangents to cite unnecessary prior work (e.g., most of the BERT stuff in the first paragraph of Section 3.3, as this is not even used in the system). The implications in Section 5 that the authors' group's \"culture\" and use of good software practices, unlike other poor-coding researchers, is also unnecessary (why is a \"faculty advisor\" reference even there?). One can extol the virtues of good culture and software engineering without coming across as bragging, so I would recommend a re-write of those parts of this paper to take a more scientific tone.\n\nMinor:\n- In Section 3.1 it is unclear what the \"1\" in \"n+1\" comes from. Do the authors also include a document that is just the title/abstract without the paragraphs?\n- \"who are engage in\" -> \"who are engaged in\"\n\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Yup, we'll update with TREC-COVID results when they become available!\nDear Reviewer, thanks for your feedback.\n\nAs we've responded to reviewer 1 already, we are participating in TREC-COVID and will update the paper to incorporate results from round 1 when they become available.\n\nBest,\nJimmy (on behalf of all the co-authors)\n", "role": "user" }, { "content": "I value the authors' response. It answers most of my concerns, and I will likely adjust the score higher.", "role": "assistant" } ]
PlUA_mgGaPq
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<PlUA_mgGaPq>>```", "role": "user" }, { "content": "Title: Evaluation of a rapidly deployed search system for the CORD-19 data set\nReview Content: The authors have quickly stood up a conventional and neural search system based on the CORD-19 dataset, which is a commendable task given the global crisis caused by the Covid-19 pandemic. However, this paper is just a description of their system and design decisions, with some additional discussion of why a conventional TREC system-oriented retrieval evaluation would be limited. I agree with the authors on that point.\n\nHowever, it would be helpful to have some sort of evaluation of their system to gauge whether their approach offers any novelty beyond the multiple other systems that been stood up and linked to from the Allen Institute site where the data is housed.\n\nWhile I agree that a system-oriented evaluation approach would be limited, it would also be helpful to, say, compare the baseline and neural approaches. The first round of the TREC-COVID challenge evaluation is being conducted as I write this, and the first set of results will be available by late April or early May. I believe a better approach would be to include these results and then discuss their limitations.\n\nIf the authors do not believe the system-oriented results are important, they can explain why. They could provide some usage statistics for their system and describe other real-world use. 
They can also propose better evaluation studies, including those that involve users as they allude to in their paper.\n\nOverall this is good work, but it could be much better, and we will hopefully learn more as the system is used, the test collection grows, and more complex tasks beyond ad hoc retrieval are evaluated with it.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Yup, we'll update with TREC-COVID results when they become available!\nDear Reviewer, thanks for your feedback.\n\nIt is not the case that we believe that system-oriented results are not important, rather that, as you note and agree with, \"a system-oriented evaluation approach would be limited\" - for the reasons we have articulated in the current paper.\n\nNevertheless, we are participating in TREC-COVID. We shall update the paper to incorporate results from round 1 when they become available, which is a bit less than two weeks from now.\n\nBest,\nJimmy (on behalf of all the co-authors)\n", "role": "user" }, { "content": "I'm grateful for the authors' response. It satisfactorily addresses the major issues I had. I continue to recommend this work positively.", "role": "assistant" } ]
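The three reviews of this paper all turn on the missing system-oriented evaluation and the pending TREC-COVID rounds. For reference, the basic ad hoc retrieval metrics such an evaluation reports can be computed directly from a system run and a set of relevance judgments; the ranking and qrels below are invented placeholders:

```python
# Minimal sketch of two standard ad hoc retrieval metrics.
def precision_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

def reciprocal_rank(ranked_doc_ids, relevant_ids):
    """1 / rank of the first relevant document (0 if none is retrieved)."""
    for i, d in enumerate(ranked_doc_ids, start=1):
        if d in relevant_ids:
            return 1.0 / i
    return 0.0

run = ["doc7", "doc2", "doc9", "doc4", "doc1"]  # system ranking for one topic
qrels = {"doc2", "doc4"}                        # judged-relevant documents

print(precision_at_k(run, qrels, k=5))  # 0.4
print(reciprocal_rank(run, qrels))      # 0.5
```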
SPxaJuM4Hbz
ACL 2020 Workshop NLP-COVID
AnonReviewer4
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer4, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```", "role": "user" }, { "content": "Title: Study of automatic classification of LitCovid categories.\nReview Content: This article presents a comparison of methods for the classification of articles from the LitCovid collection. From all the evaluated methods, one based on BioBert performs better compared to other evaluated methods. The contribution of the paper is a comprehensive study of a broad range of methods to the multi-class classification of LitCovid. The classifiers are evaluated on a small set of papers from the CORD-19 collection. The article is clearly written and plenty of details. The authors provide the data and code, which would support reusability and reproducibility.\n\nThe performance on CORD-19 is significantly lower compared to the reported numbers for LitCovid. This might be due to several reasons, including the limited size of the CORD-19 articles used or the way manual annotation done by the authors might differ from the one performed in LitCovid.\n\nThe LitCovid collection already provides the classification, thus the classifier might have limited used unless a specific use case is specified in which this work would be used.\n\nThe title might be willing to include the specific collection on which the methods have been tested on, since the collections and the labels are provided by the LitCovid data set.\n\nThe article has been recently uploaded and currently has almost 28k PubMed articles. It might be relevant to mention when the documents were obtained. The authors mention 8k, thus probably it was obtained sometime in April.\n\nIn table 1, 2861 β€”> 2,861\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 3\nWe are grateful for the reviewer taking the time to read and review our paper. We address some concerns below:\n\n> The article has been recently uploaded and currently has almost 28k PubMed articles. It might be relevant to mention when the documents were obtained. 
The authors mention 8k, thus probably it was obtained sometime in April.\n\nThe version of the LitCovid dataset we evaluated on was obtained on May 20th and is available on our GitHub. It contained around 14,000 articles at the time, of which we used the 8,000 that were either full articles or abstracts. The rest consisted only of article titles, since their abstracts were not available on PubMed. We are currently running experiments on the most up-to-date version of the dataset and include scripts to download the latest LitCovid collection to make future evaluation straightforward.", "role": "user" }, { "content": "I value the authors' response. Most of my concerns have been addressed. I continue to recommend this work for acceptance.", "role": "assistant" } ]
SPxaJuM4Hbz
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```", "role": "user" }, { "content": "Title: Overall a good exploration of previous document classification algorithms applied to Covid-19.\nReview Content: The paper presents an exploration of the main document multi-classification algorithms, training and developing on the LitCovid dataset and testing on a small manually annotated portion of the much larger CORD-19 dataset. The LitCovid dataset includes labels for each document to 8 categories with about 1/3 of documents including multi-labels. The significance lies in the usage of these multi-classification algorithms for researchers, clinicians, epidemiologists, and policy makers to find the most relevant articles in a timely manner given the urgency of the pandemic. \n\nThe paper is clear and well-written. The application of document classification algorithms to Covid-19 is novel, but the algorithms themselves are not.\n\n\nPros:\n- An in-depth exploration of many different algorithms along with tuning for best performance.\n- Contains both intrinsic and extrinsic evaluations.\n- The discussion on data efficiency is quite interesting and can have a large impact on how much data is needed to annotate for future creation of gold standard datasets.\n- An error analysis that suggest future work and updates.\n- Important to be applying all research we have so far on these new datasets to help with Covid-19.\n\n\n\nCons:\n- The manual annotation of the subset of CORD-19 seems to not be reliable as 34% of errors were annotation related.\n- Guidelines and annotation process not provided for the CORD-19 set.\n- The CORD-19 set is small and the performance does drop, suggesting that the models are not generalizable.\n- No discussion of providing these models to LitCovid to help with the annotation labeling, even if it is a starting point.\n- The accuracy is quite low for most algorithms and there is no discussion about this.\n- Very little discussion on the very different distributions of the categories between LitCovid and CORD-19. 
\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 2 \nWe are grateful for the reviewer taking the time to read and review our paper. We are delighted that you found our work to be significant in addressing the current crisis and interesting. We address some concerns below:\n \n> The manual annotation of the subset of CORD-19 seems to not be reliable as 34% of errors were annotation related.\n \nThe error analysis discussed in the paper was on LitCovid errors and not CORD-19 errors. Even though we did not explore the amount of annotation related errors in the CORD-19 test set we are confident that our annotation process was reliable due to the annotation process we discuss below.\n \n> Guidelines and annotation process not provided for the CORD-19 set.\n \nSince the categories are generally intuitive for biomedical experts annotation guidelines were not explicitly provided. The experts (all with years of biomedical research experience) were given a random sample of LitCovid articles for each category to familiarize themselves with the annotations.\nTo come up with our annotations, as discussed in Section 2.2, we asked biomedical experts to label a small set of 100 articles found only in CORD-19. Each article was labelled independently by two annotators. For articles which received two different annotations (around 15%), a third, more senior annotator broke ties.\"\n \n> No discussion of providing these models to LitCovid to help with the annotation labeling, even if it is a starting point.\n \nWe have reached out to the LitCovid team and would love it if these models would help make their annotation process easier, faster and more accurate.\n \n> The accuracy is quite low for most algorithms and there is no discussion about this.\n \nWe agree that the accuracy scores are quite low for these models and believe that they can be partially explained by the same spurious label correlations we discussed in Section 4.3.\n \n> Very little discussion on the very different distributions of the categories between LitCovid and CORD-19.\n \nWe appreciate your feedback, the discrepancy in label distributions between LitCovid and CORD-19 should be mentioned as a possible explanation for the generalization failure that should be further explored. To elaborate a bit more on where the distribution discrepancy originates. Many articles in CORD-19 were published before the COVID-19 pandemic and focus on general coronaviruses and previous epidemics. Therefore, a large portion of the CORD-19 articles study the disease mechanisms of other coronaviruses. However, since LitCovid is exclusively about the ongoing COVID-19 pandemic, there is a large number of articles discussing the transmission, prevention, and case reports, hence the distribution discrepancy. ", "role": "user" }, { "content": "I acknowledge the authors' response. The reply answers almost all of my questions. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
SPxaJuM4Hbz
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<SPxaJuM4Hbz>>```", "role": "user" }, { "content": "Title: Review of Document Classification for COVID-19 Literature\nReview Content: This paper provides an empirical evaluation of several standard models' performances on a covid-19 related document classification task. Specifically, Covid-19 research papers are classified into one or more of eight subcategories based on their topic. Datasets for this task are constructed from LitCovid, and the Covid-19 Open Research Dataset. These datasets are shared online. Finetuned pre-trained language models are found to outperform more traditional models, such as SVM, CNN, or LSTM. In particular, BioBERT and Longformer are found to be most effective.\n\nThe paper introduces a multilabel document classification dataset which categorizes Covid19 related articles into one or more of eight subcategories. This dataset is distinct in that it represents a large body of work in the medical domain, united by a common subject. The data is drawn largely from LitCovid, with a supplemental test set drawn from the COVID-19 Open Research Dataset (CORD-19). The paper notes that models trained on the LitCovid data generalize poorly to the CORD-19 data, and theorizes that this is due to the models failing to ignore the overarching COVID-19 topic. It is unclear if this is the case, however, as, while CORD-19 includes papers related to other coronaviruses, COVID-19 is the overarching topic of both datasets.\n\nThis paper provides a thorough analysis of many different models' performance with thorough documentation. However, the efficacy of fine-tuned pretrained language models on document classification is well established in a large body of research. 
While this particular classification task is somewhat novel, demonstrating that finetuned pretrained language models are most effective upon it is a relatively limited contribution.\n\nPros\n - introduces a multilabel document classification task related to COVID-19\n - provides a dataset to support this classification task\n - provides a clear and thorough empirical evaluation of many different models' performances on this task\n\nCons\n - the empirical performance evaluation largely confirms trends shown by previous research\n - models trained on the provided dataset are shown to generalize poorly. The reasons for this are underexplored.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 1\nWe are grateful for the reviewer taking the time to review our work. We are glad that you found the proposed classification task to be novel and our empirical evaluation to be clear and complete. To address some of your concerns:\n \n> The paper notes that models trained on the LitCovid data generalize poorly to the CORD-19 data, and theorizes that this is due to the models failing to ignore the overarching COVID-19 topic. It is unclear if this is the case, however, as, while CORD-19 includes papers related to other coronaviruses, COVID-19 is the overarching topic of both datasets.\n \nEven though CORD-19 is seen as primarily a COVID-19 dataset, around half the articles in its first version were about older coronaviruses and other epidemics. Additionally, the 100 articles in our test set were drawn from the ones in CORD-19 but not in LitCovid, so their overarching theme is not COVID-19 but the general coronavirus family. In fact, this test set contains the term \"COVID-19\" in only one of the articles and is mostly unrelated to COVID-19. This setting aims to test whether the models have learned general signals discriminative for the categories or superficial signals specific to COVID-19. Nonetheless, we agree with the reviewer that it is worth further investigation to ascertain that the models' inability to ignore overarching COVID-19 information is the main reason why they fail to generalize.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I continue to recommend this work for acceptance.", "role": "assistant" } ]
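The reviews of this paper compare fine-tuned language models against traditional classifiers for multi-label category assignment. A compact sketch of one such traditional baseline (TF-IDF with one-vs-rest logistic regression); the abstracts and the LitCovid-style labels below are invented for illustration, not drawn from the dataset:

```python
# Illustrative multi-label baseline: TF-IDF + one-vs-rest logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

abstracts = [
    "We report transmission dynamics of the outbreak in a long-term care facility.",
    "A case report of pneumonia treated with supportive care and antivirals.",
    "We model the basic reproduction number to forecast epidemic spread.",
    "The mechanism of viral entry via the ACE2 receptor is characterized in vitro.",
]
labels = [
    ["Transmission"],
    ["Case Report", "Treatment"],
    ["Transmission", "Forecasting"],
    ["Mechanism"],
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per category

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(abstracts, Y)
pred = clf.predict(abstracts)
print("micro-F1:", f1_score(Y, pred, average="micro"))
```

Given the skewed and shifting label distributions discussed above, the choice between micro- and macro-averaged F1 matters: macro averaging weights rare categories such as case reports as heavily as the dominant ones.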
XOkm8xdns5R
ACL 2020 Workshop NLP-COVID
AnonReviewer2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer2, and the following are the paper you need to review:\n```<<XOkm8xdns5R>>```", "role": "user" }, { "content": "Title: Fairly sound approach, but several serious concerns\nReview Content: This manuscript discusses the use of crowdsourcing to label sub-sentence segments with labels that look a lot like the ones used for structured abstracts.\n\nWith few exceptions, I found the manuscript to be clear. The task is a reasonably interesting one (though it would be a good idea to motivate it a bit more by showing what it could be used for) and the methods are reasonable and I am reasonably convinced that the approach described would produce the results shown. This paper is a pretty good example of the sort of work that can be done with crowdsourcing. I am not aware of other work approaching similar classification tasks with crowdsourcing, but I did not go looking. One very important point that seems to be buried is the fact that the annotation took 10 days. Every annotation project I have ever been involved with that used experts took months; this is a strong argument in favor of this approach.\n\nThere are quite a few things that concern me, however.\n\nFirst, the methods are all straightforward applications of fairly well known work. For the machine learning that is ok, but there are few insights in the crowdsourcing methods or the results that would not be strongly expected. In other words the authors need to make much more clear any claims to originality or significance. \n\nThe paper neglects to mention that many of the abstracts in CORD-19 are structured. For example, the abstract for document e9cgnhbq (PMID 20153051) has the following sections: Objective, Methods, Results, Conclusion. While the correlation may not be 1-to-1 to the annotation scheme, it seems that the mapping might be pretty high. How much does this help the crowd workers? Would it help the classifiers?\n\nThe claim that data sparsity is a challenge in general is of course true - there will never be enough annotated data to train a system to do every task we could want at near-human performance levels - but somewhat irrelevant. 
If there are no existing datasets that can be used to perform this task, then it is a challenge and the comments about the difficulty of annotating for other tasks is irrelevant. If there are existing datasets for the task needed, then the challenge is not lack of data but whether the dataset really does allow the task to be addressed. More likely, there are datasets that solve a similar task, and these should be evaluated and tested.\n\nWhich version of CORD-19 was used? A new version is released every week.\n\nWhile it is appropriate to provide compensation that roughly approximates 'fair', it is not mentioned how long each abstract took on average and therefore what the actual pay rate was. It would be useful to hear how long it took the experts as well. When I do annotation - admittedly for a more complex task - I almost always have to read the text multiple times.\n\nThis is essentially a text classification task, a task that multinomial Naive Bayes performs quite well on (probably not as well as BERT, but still). See:\nMcCallum, Andrew, and Kamal Nigam. \"A comparison of event models for naive bayes text classification.\" AAAI-98 workshop on learning for text categorization. Vol. 752. No. 1. 1998. \nTackling the poor assumptions of naive bayes text classifiers.\" Proceedings of the 20th international conference on machine learning (ICML-03). 2003.). \n\nThere is a strong but imperfect ordering of the sections: \"Background\" will almost always come before \"Method\". Note that at least one of the cited papers mentions this: Huang 2017. So why was that information not used in the classification models? The way to use it would be different for most of the models, but for naive bayes, the way to use it would be to switch to using a hidden markov model, which handles sequential information. \n\nStructured abstracts provide a great deal of information that is not being exploited here. In addition to being used as a feature directly, they could be used to train a second classifier that would serve as a feature: the structured abstracts will provide section labels and your existing TF-IDF vectorizer provides samples of vectors from segments. If these are run over a large corpus - not even necessarily covid-related - then a multinomial naive bayes model (or your favorite high dimensional instance classifier) could be created that predicts the section label from the TFIDF vector (importantly, this would work even if the abstract being predicted was not structured). There would be some section labels that are rare, these can simply be removed from the model. Then any TF-IDF vector for a segment could have section labels predicted, with that prediction fed as a feature for the classification of your annotation scheme.\n\nThere is no analysis provided to show whether 9x annotation is better than, say 5x annotation for this task. There is no analysis provided of how large the corpus needs to be, ie an ablation study with the classifiers. If the classifiers have the same performance on 11k abstracts that they would have on 5.5k abstracts, then maybe we don't need 11k. 
If 5x annotation of 5.5k abstracts provides close enough performance, then a sufficient corpus could have been created for 1/4 the cost.\n\nIt is not made clear why the annotation scheme will help fight COVID-19: what use cases do research aspects enable?\n\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Revision Submitted\nWe just submitted a new version of the paper, in which we addressed most of your comments. Please let us know if you have more questions. Thanks!", "role": "user" }, { "content": "I'm grateful for the authors' response. It answers most of my concerns, and I will likely raise the rating.", "role": "assistant" } ]
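Among the baselines the review above asks for is a multinomial naive Bayes text classifier over the sub-sentence segments. A minimal sketch of that suggestion, with invented segments and research-aspect labels:

```python
# Illustrative sketch of the suggested multinomial naive Bayes baseline for
# labelling abstract segments with research aspects.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

segments = [
    "Little is known about the seroprevalence of the virus in the general population.",
    "We aimed to estimate household transmission rates.",
    "Serum samples were collected and tested by ELISA.",
    "Seroprevalence was 4.6% overall.",
]
aspects = ["Background", "Purpose", "Method", "Result"]  # invented gold labels

nb = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
nb.fit(segments, aspects)
print(nb.predict(["Samples were analysed with RT-PCR."]))
```

The reviewer's sequential-ordering point (Background tends to precede Method) is not captured by this independent per-segment model; an HMM or CRF over the segment sequence would be the natural extension.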
XOkm8xdns5R
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<XOkm8xdns5R>>```", "role": "user" }, { "content": "Title: Potentially important topic, but a number of questionable claims and an overall lack of contact with relevant literature detract from the paper quite a bit.\nReview Content: OVERALL\n\nThank you for the opportunity to review this interesting paper about annotation of abstracts from the CORD-19 corpus. \n\nOVERALL STRENGTHS:\n\n1.The paper is written in the official language of the conference.\n\n2. The paper is within the page limits of the conference.\n\n3. The abstract accurately reflects the contents of the paper.\n\n4. The discussion of worker wages and total cost is very nice. The paper could do a better job of showing the reader why it is important by putting it into the context of Fort et al.’s paper on the ethics of using AMT for NLP data (see the References section below), as well as Cohen et al.’s paper on the relationship between Turker pay rates and actual Turker earnings (from that work, there appears not to be one). Additionally, the paper mentions a target hourly wage, but does not report the ACTUAL hourly wage. This seems like an omission.\n\n5. The data quality assessment is very nice.\n\nOVERALL AREAS FOR IMPROVEMENT:\n\nFor visibility, consider adding β€œCOVID-19” to the title of the paper.\n\nThe literature review is missing quite a bit of relevant work. This is an important gap in the paper because adequate coverage of related work would help in making the significance and innovation arguments.\n\nThere are a couple of claims in the Introduction that are either not relevant or not supportable, and coming in the Introduction, they reduce a reader’s confidence in the paper as a whole. Specifics:\n\nThe statement that producing annotations for thousands of papers can be a prolonged process if only expert annotators are used and that their availability is limited is not connected with the rest of the paper. There is also no evidence that it is true. In fact due to the virus and attendant shutdown, expert annotators might have actually been MORE available at the time this work was done than at any time since the beginning of corpus annotation. 
I don’t have data on that, but the paper doesn’t present any data for the opposing hypothesis, either, and since there probably is none available, I would advise simply dropping any critique of manual annotation by experts. If the authors feel that it must be included, then be balanced--point out the benefits of expert annotation (see, for example, Amber Stubbs’s dissertation on the topic--sorry I couldn’t find the reference for this one; I suggest contacting her directly).\n\nThe claim that data sparsity is a major challenge for biomedical text mining is not supported, and probably not supportable. There is actually a LOT of data available for BioNLP research--that is why most people got into the field in the first place (see Cohen and Demner-Fushman’s book β€œBiomedical natural language processing)”. Tons of projects have produced quite a bit of data--e.g., Cohen et al.’s paper on multi-model annotation in the CRAFT corpus lists 25 (twenty-five) biomedical corpus constructions just in the five years before the publishing of that paper.\n\nThe scientific premises of the paper include the idea that the annotation task would typically be thought of as requiring expert knowledge; that is why the paper uses it to study the use of non-expert annotators. The approach to such a research question is reasonable, but the data and the task are not suitable to answering the question. The annotation scheme (Background/Purpose/etc.) does not seem like one that is necessarily difficult for non-experts, for example in the case where they are marked explicitly in abstracts. So, either demonstrate that they’re not marked, or improve the argumentation with a reasoned claim that the task would EVER be thought of as requiring expert annotators.\n\nThe paper’s description of the dataset is inadequate. It samples the CORD-19 corpus, but the CORD-19 corpus has a number of versions (commercial, non-commercial, for example), and the paper does not tell us from which part the CODA corpus was sampled.\n\nIn Table 2, it is not clear to me that F1 is the right measure of agreement here. Consider supporting an argument that it is.\n\nThe paper reports kappa, but does not say how P(e)n was calculated. That makes it difficult to interpret the kappa score at all.\n\nThe result given at the end of Section 5 regarding which model gave the β€œbest” classification performance is not relevant. The point of reporting classification scores in a paper on a corpus is as a measure of annotation quality and quantity--the argument would be that if a classifier can build a good model, then the annotations are probably consistent and the quantity of data is probably adequate (see the Pustejovsky and Stubbs reference). Which model performs β€œthe best” is not actually well-evaluated in the paper, and it is not relevant to the paper, so I would strongly suggest removing that small paragraph altogether.\n\nAs academic courtesy if nothing else, I strongly recommend that you mention the classic work on crowdsourcing for production of NLP data.\n\nREFERENCES (sorry for all of the Cohen references--he has done a lot of work in this area)\n\nFort, KarΓ«n, Gilles Adda, and K. Bretonnel Cohen. \"Amazon mechanical turk: Gold mine or coal mine?.\" Computational Linguistics 37.2 (2011): 413-420.\n\nCohen, K. Bretonnel, et al. \"Ethical issues in corpus linguistics and annotation: Pay per hit does not affect effective hourly rate for linguistic resource development on amazon mechanical turk.\" LREC... 
\n\nInternational Conference on Language Resources & Evaluation:[proceedings]. International Conference on Language Resources and Evaluation. Vol. 2016. No. W40. NIH Public Access, 2016.\n\nCohen, Kevin Bretonnel, and Dina Demner-Fushman. Biomedical natural language processing. Vol. 11. John Benjamins Publishing Company, 2014.\n\nCohen, K. Bretonnel, Karin Verspoor, KarΓ«n Fort, Christopher Funk, Michael Bada, Martha Palmer, and Lawrence E. Hunter. \"The Colorado Richly Annotated Full Text (CRAFT) corpus: Multi-model annotation in the biomedical domain.\" In Handbook of Linguistic Annotation, pp. 1379-1394. Springer, Dordrecht, 2017.\n\nPustejovsky, James, and Amber Stubbs. Natural Language Annotation for Machine Learning: A guide to corpus-building for applications. \" O'Reilly Media, Inc.\", 2012.\n\n\n\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Revision Submitted \nWe just submitted a new version of the paper, in which we addressed most of your comments. Please let us know if you have more questions. Thanks!", "role": "user" }, { "content": "I'm grateful for the authors' response. It satisfactorily addresses most of my concerns, and I will consider raising the rating score.", "role": "assistant" } ]
ZQ_HvBxcdCv
ACL 2020 Workshop NLP-COVID
AnonReviewer3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer3, and the following are the paper you need to review:\n```<<ZQ_HvBxcdCv>>```", "role": "user" }, { "content": "Title: straightforward but interesting case study in building NLP system from scratch for covid-19\nReview Content: This paper describes the creation of a system from scratch for processing clinical text at the VA to detect positive Covid-19 cases. While lab tests done at the VA will be in structured data, patients who may get some care elsewhere would have positive Covid results from other care recorded only in the free text, and the authors want to be able to capture those cases.\n\nThe challenges are that there was no existing data to build training sets for machine learning, there are many mentions of Covid and related terms in contexts that do not represent positive cases (negative test results, family members, describing the reason for a televisit).\n\nThe authors of this work created an NLP pipeline built on Spacy, including Context for negation/uncertainty/experiencer, sectionizer, etc. In the end, the decision of positive/negative was made based on finding a target term that was not uncertain, negated, or experienced by someone besides the patient, and that did have a positive modifier, occur in a diagnosis section, or was used to modify another condition (\"Covid-19 pneumonia\").\n\nSince this is an IR-like problem it is difficult to evaluate recall but straightforward to evaluate precision. For recall what they do is evaluate at the patient level on the subset of patients for whom they do have lab results they can use as a gold standard.\n\nWhile the NLP described here is fairly straightforward rule-based methods, this is an interesting use case of bootstrapping an NLP system from nothing for a new problem where there is no retrospective data available to build ML systems. The authors tease at the end that now that they've done this bootstrapping work they have essentially a labeled dataset that could be used for an ML system. 
A little more focus on this aspect would've been interesting to this reviewer, even if there are no ML results to report yet.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer #3\nWe thank the reviewer for reading our paper and for providing comments. We agree that machine learning offers a potential to improve system performance, but we identify some challenges to implementing such a model which need to be addressed first. A revised manuscript will include text such as the paragraph below which offers more information on our plans for machine learning and the challenges in training a classifier for our task: \n\nβ€œIn future work, we plan to evaluate machine learning methods to improve identification of positive cases. A machine learning classifier could potentially improve our current system by improving document classification accuracy or identifying high-probability cases for review. This was not feasible in early stages of the response since there were very few known cases and no existing reference set. We have now identified thousands of possible cases which could be leveraged as the basis for a training set for a supervised classifier. However, as stated previously, our clinical review did not equate to creating a reference set and there are some challenges to using our dataset as a training corpus. Specifically, while many positive patients have been identified through chart review, clinical reviewers did not label negative cases, so negative labels may not be consistent. Additionally, labels are assigned at the patient level, rather than the document level, and confirmed positive patients may have a mix of both positive and negative documents. However, we believe that with additional some validation and review, a machine learning classifier has the potential to augment our system’s performance.” ", "role": "user" }, { "content": "I'm grateful for the authors' response. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
ZQ_HvBxcdCv
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<ZQ_HvBxcdCv>>```", "role": "user" }, { "content": "Title: A Natural Language Processing System for National COVID-19 Surveillance in the US Department of Veterans Affairs\nReview Content: The paper proposes a natural language processing pipeline for identifying positively diagnosed COVID19 patients, which has been deployed as part of the VA national response to COVID19. The objective of the NLP system is to simply detect if a clinical document contains a positive COVID19 case. \n\nThe paper is very USA-specific. One of the motivations of the study is the fact that community-level testing and drive-through testing are often not counted in numbers derived from healthcare systems. Thus, the authors propose an approach that detects these cases from clinical texts within the Department of VA. The motivation could be made stronger by including numbers of people whose COVID19 status are encoded in free text only. What percentage of the patients may potentially be missed by laboratory-based surveillance methods?\nIt is unclear if the problem of not counting certain positive cases also applies to the VA. Do VA health networks not report laboratory test based COVID19?\n\nPreprocessor: β€˜remove problematic template text’ and β€˜normalize lexical variants’ are very vague descriptions of what is actually being done. Normalizing lexical variants can be as simple as simply stemming and as complex as mappings multi-word variants to standard IDs (which of course is a very hard problem).\nTarget matcher: Is exact matching used? What is the rate of misspellings within the clinical notes (i.e., how many misspellings per number of tokens?)\nDocument Classifier: what are the positive modifiers? How many of them are there? How often can it capture actual expressions of the diagnoses. \nIn Document Processing, how many keywords are actually used? How many times are these not spelled correctly?\n\nWhat are the actual sections (incl. number of sections)?\n\nThe number of patients without laboratory evidence but with evidence only in clinical notes is quite largeβ€”and it justifies the development of such a pipeline. 
\n\nThe causes for the false negatives are quite straightforward and can be easily addressed. It would be good to ascertain how precision/recall would change if these simple issues (e..g, new lines) were fixed during the preprocessing phase. \n\nIt appears as though the authors did not consider inexact machine and/or semantic matching (e.g., via word2vec-type vectors). Recent research has shown that lexical similarity and semantic similarity based measures can generate data-centric variations/misspellings of noisy texts (e.g., from EHRs and social media), and such approaches have the potential to significantly improving performanceβ€”and they will not require machine learning (which would require large amounts of training data). \n\nSince the test results for a very large number of patients already available, it would be pretty straightforward to just use those in a supervised learning framework. \nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer #1\nWe thank the reviewer for reading our paper and for their thoughtful comments. Please see below for our responses to the points made in the review. We will include changes in an updated revision following any additional feedback or direction from reviewers and organizers.\n\n---\n**The motivation could be made stronger by including numbers of people whose COVID19 status are encoded in free text only. What percentage of the patients may potentially be missed by laboratory-based surveillance methods?**\n\n**Response:** While the true percentage is not known, we report in the Results 6,360 patients who were found to be COVID-19 positive but did not have laboratory evidence and were identified using free text. This accounts for 36.1% of known positive cases in VA. We will clarify the significance of this percentage in the abstract to make it clear that these are patients without laboratory evidence. \n\n---\n**It is unclear if the problem of not counting certain positive cases also applies to the VA. Do VA health networks not report laboratory test based COVID19?**\n\n**Response:** As described in the Introduction, laboratory test results can be captured and used for reporting when testing is performance within the reporting healthcare system. This holds in the VA as well, and per our findings ~67% of known positive cases were reported using lab results, the remainder being identified using NLP + clinical review.\n\n---\n**Preprocessor: β€˜remove problematic template text’ and β€˜normalize lexical variants’ are very vague descriptions of what is actually being done.**\n\n**Response:** We will revise this text in the manuscript to offer more explanation and clarity: \nβ€œPreprocessor: Modifies the underlying text before text processing. This step removes semi-structured templated texts and questionnaires which can cause false positives and replaces certain abbreviations or misspellings to simplify later processing steps.” \n\n ---\n**Target matcher: Is exact matching used? What is the rate of misspellings within the clinical notes (i.e., how many misspellings per number of tokens?)**\n\n**Document Classifier: what are the positive modifiers? How many of them are there? How often can it capture actual expressions of the diagnoses.**\n\n**In Document Processing, how many keywords are actually used? 
How many times are these not spelled correctly?**\n\n**What are the actual sections**\n\n**Response:** Due to the operational nature of this project, we do not collect corpus statistics such as misspelling counts or phrase frequency but agree that these would be interesting research questions for future work. \nAs far as the specifics of the target matching and modifying rules, we present a high-level summary of our system here due to space constraints but have made [our code available publicly](https://github.com/abchapman93/VA_COVID-19_NLP_BSV ). We hope that having access to the code can provide additional details about implementation to interested readers. \n\n--- \n\n**It would be good to ascertain how precision/recall would change if these simple issues (e..g, new lines)**\n\n**Response:** Some formatting issues, such as new lines, are difficult to fix because it is challenging to distinguish between new lines which are inserted as part of EHR formatting vs. actual new lines used to mark new sentences or sections. Preprocessing handles some cases, but applying this step too aggressively can lead to additional errors. \n\nWe also include in revisions some additional examples of false negatives which illustrate more causes of errors. For example, positive modifiers which are not included in the lexicon, leading a mention to incorrectly not be marked as positive. \n\n --- \n\n**It appears as though the authors did not consider inexact machine and/or semantic matching (e.g., via word2vec-type vectors)**\n\n**Response:** We agree that this would be a valuable step for future work in expanding the lexicon. While this was not feasible in early phases of the pandemic, where there were relatively few examples, we now have a much larger corpus which contains a great deal of variation and would benefit from this type of approach. \n\n--- \n\n**Since the test results for a very large number of patients already available, it would be pretty straightforward to just use those in a supervised learning framework.**\n\n**Response:** We agree that machine learning offers a potential to improve system performance, but we identify some challenges to implementing such a model which need to be addressed first. A revised manuscript will include text such which offers more information on our plans for machine learning and the challenges in training a classifier for our task. Due to space constraints in the OpenReview platform, please find our full response to this question in our Reply to Reviewer #3\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It resolves the majority of my concerns. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
mlmwkAdIeK
ACL 2020 Workshop NLP-COVID
AnonReviewer1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as AnonReviewer1, and the following are the paper you need to review:\n```<<mlmwkAdIeK>>```", "role": "user" }, { "content": "Title: nice application to new data set to be made available\nReview Content: This paper explores gender differences in linguistic productions between two groups of Redditors who self-identify as either \"male\" or \"female\". It examines a corpus of covid-19 pandemic threads in relation to two areas: emotion analysis (employing a VAD lexicon and word embedding representations); and topic analysis (employing the tool MALLET).\n\nThe paper's novelty is in the application of an established method to a new corpus that the authors have developed pertaining to covid-19 threads. As expected, the language usage for covid-19 posts had a lower Valence when compared to language used in a baseline corpus. There is also a general trend for the language used in the female sub-corpus scoring slightly higher in the Valence scale than male sub-corpus. The trends are reversed when Arousal and Dominance were examined: overall higher for men, and when comparing baseline to the covid-19 posts, the baselines score slightly lower for both male and female data.\n\nTo compare and contrast the different topics covered between the male and female authored posts, topic modelling was applied to each sub-corpus and the topics with the highest coherence scores were presented. However, applying topic modelling to the corpus as a whole and analysing the topic allocation of the male and female posts would give a better indication of similarities and differences of the topics covered in the sub-corpora. However, two different topic models for each sub-corpus were developed and the most cohesive topics were presented.\n\nIn general the VAD study is interesting, although unsurprsing. 
The goal of discovering if different or similar topics were covered in the two sub-corpora may be best approached by discovering the topics covered by the corpus as a whole and analysing the topic allocation of the sub-corpora.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer\nThank you for taking the time to read and review our paper, we appreciate the feedback!\n\nPerforming topic modelling on the corpus as a whole indeed seems like a strong alternative to our methodology. If time permits, we can conduct this analysis and include it in the camera-ready version of the paper.", "role": "user" }, { "content": "Thank you for the authors' feedback. The authors have answered almost all of my concerns. I maintain my positive recommendation for this work.", "role": "assistant" } ]

Introduction

This dataset is the largest consistency-ensured real-world peer-review dataset to date, covering the widest range of conferences and the most complete set of review stages: initial submissions, reviews, ratings and confidence, aspect ratings, rebuttals, discussions, score changes, meta-reviews, and final decisions.

Comparison with Existing Datasets

The table below compares our dataset with existing peer-review datasets. Only the datasets shown in [brackets] can guarantee that all provided papers are initial submissions, which is critical for the consistency and quality of the review data. In the "Task" column, AP stands for Acceptance Prediction, SP for Score Prediction, RA for Review Analysis, RG for Review Generation, and MG for Meta-review Generation. In the three count columns, "-" means the number is not reported here; please refer to the original dataset.

Dataset Name Data Source Task Reviews Aspect Scores Rebuttal Score Changes Meta-reviews Final Decision #Paper #Review Comments #Rebuttal
PeerRead ICLR 17, ACL 17, NeurIPS 13-17 AP, SP βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 14,700 10,700 0
DISAPERE ICLR 19-20 RA βœ“ βœ“ βœ“ 0 506 506
PEERAssist ICLR 17-20 AP βœ“ βœ“ 4,467 13,401 0
[MReD] ICLR 18-21 MG βœ“ βœ“ βœ“ βœ“ βœ“ 7,894 30,764 0
ASAP-Review ICLR 17-20, NeurIPS 16-19 RG βœ“ βœ“ βœ“ βœ“ 8,877 28,119 0
[NLPeer] ACL 17, ARR 22, COLING 20, CONLL 16 RA βœ“ βœ“ βœ“ βœ“ 5,672 11,515 0
PRRCA ICLR 17-21 MG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 7,627 25,316 -
Zhang et al. ICLR 17-22 RA βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 10,289 36,453 68,721
[ARIES] OpenReview RG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 1,720 4,088 0
AgentReview ICLR 20-23 RG, MG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 500 10,460 -
Reviewer2 ICLR 17-23, NeurIPS 16-22 (PeerRead & NLPeer) RG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 27,805 99,727 0
RR-MCQ ICLR 17 (from PeerRead) RG βœ“ βœ“ 14 55 0
ReviewMT ICLR 17-24 RG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 26,841 92,017 0
Review-5K ICLR 24 RG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 4,991 16,000 0
DeepReview-13K ICLR 24-25 RG βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 13,378 13,378 0
[Our dataset] 45 venues from 2017 to 2025 RG, MG, AP, SP βœ“ βœ“ βœ“ βœ“ βœ“ βœ“ 19,926 70,668 53,818

papers

The papers folder (./papers) contains the latest pre-review version of each paper, collected from 45 venues between 2017 and 2025, in both .tex and .md formats. For the .tex files, references and appendices have been filtered out.
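
For quick inspection, the pre-review papers can be indexed by paper ID; the sketch below is a minimal illustration that assumes a flat ./papers folder containing <paper_id>.tex and <paper_id>.md files (adjust the globbing if the actual folder layout differs).

from pathlib import Path

# Minimal sketch: index the pre-review paper files by paper ID.
# Assumes a flat ./papers folder with <paper_id>.tex / <paper_id>.md files;
# adjust if the actual layout (e.g., per-venue subfolders) differs.
papers_dir = Path("./papers")
papers = {}
for path in papers_dir.glob("*"):
    if path.suffix in {".tex", ".md"}:
        papers.setdefault(path.stem, {})[path.suffix.lstrip(".")] = path

print(f"{len(papers)} papers found")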

REVIEWS

We organize the reviews for each of the 19,926 papers into a per-paper dictionary and collect these dictionaries into a list. This list is split into two files, REVIEWS_train and REVIEWS_test. Specifically, for the Re²-Review dataset, we sample 1,000 papers along with their reviews to form the test set, while the remaining papers and their reviews form the training set.
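
For example, once downloaded locally, the two splits can be loaded with the standard json module; the sketch below assumes the splits are stored as JSON files named REVIEWS_train.json and REVIEWS_test.json (check the repository for the exact file names and extensions).

import json

# Loading sketch; the file names below are assumptions.
def load_split(path):
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)  # a list of per-paper dictionaries

train_reviews = load_split("REVIEWS_train.json")
test_reviews = load_split("REVIEWS_test.json")
# With 19,926 papers in total and 1,000 sampled for testing,
# this should print roughly 18926 and 1000.
print(len(train_reviews), len(test_reviews))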

Review Data Format

The format of the review data is below.

  • paper_id refers to the unique identifier of the paper on OpenReview
  • initial_score is the score before the rebuttal
  • final_score is the final score after the rebuttal
  • Fields ending with _unified represent the scores after unification
  • review_initial_ratings_unified and review_final_ratings_unified are the lists of all review scores before and after the rebuttal, respectively
[
    {
        "paper_id": "m97Cdr9IOZJ",
        "conference_year_track": "NeurIPS 2022 Conference",
        "reviews": [
            {
                "reviewer_id": "Reviewer_UuMf",
                "review_title": "null",
                "review_content": "<review_content>",
                "initial_score": {
                    "rating": "7: Accept: Technically solid paper, with high impact...",
                    "confidence": "3: You are fairly confident...",
                    "aspect_score": "soundness: 3 good\npresentation: 4 excellent\ncontribution: 3 good\n"
                },
                "final_score": {
                    "rating": "7: Accept: Technically solid paper, with high impact...",
                    "confidence": "3: You are fairly confident...",
                    "aspect_score": "soundness: 3 good\npresentation: 4 excellent\ncontribution: 3 good\n"
                },
                "initial_score_unified": {
                    "rating": "7 accept, but needs minor improvements",
                    "confidence": "The reviewer is fairly confident that the evaluation is correct"
                },
                "final_score_unified": {
                    "rating": "7 accept, but needs minor improvements",
                    "confidence": "The reviewer is fairly confident that the evaluation is correct"
                }
            },
            {
                "reviewer_id": "Reviewer_15hV",
                "review_title": "null",
                "review_content": "<review_content>",
                "initial_score": {
                    "rating": "5: Borderline accept: Technically solid paper where...",
                    "confidence": "1: Your assessment is an educated guess...",
                    "aspect_score": "soundness: 3 good\npresentation: 3 good\ncontribution: 3 good\n"
                },
                "final_score": {
                    "rating": "5: Borderline accept: Technically solid paper where...",
                    "confidence": "1: Your assessment is an educated guess...",
                    "aspect_score": "soundness: 3 good\npresentation: 3 good\ncontribution: 3 good\n"
                },
                "initial_score_unified": {
                    "rating": "5 marginally below the acceptance threshold",
                    "confidence": "The reviewer's evaluation is an educated guess"
                },
                "final_score_unified": {
                    "rating": "5 marginally below the acceptance threshold",
                    "confidence": "The reviewer's evaluation is an educated guess"
                }
            },
            {
                "reviewer_id": "Reviewer_hkZK",
                "review_title": "null",
                "review_content": "<review_content>",
                "initial_score": {
                    "rating": "7: Accept: Technically solid paper, with high impact..",
                    "confidence": "3: You are fairly confident in your assessment...",
                    "aspect_score": "soundness: 3 good\npresentation: 3 good\ncontribution: 3 good\n"
                },
                "final_score": {
                    "rating": "7: Accept: Technically solid paper, with high impact...",
                    "confidence": "3: You are fairly confident in your assessment...",
                    "aspect_score": "soundness: 3 good\npresentation: 3 good\ncontribution: 3 good\n"
                },
                "initial_score_unified": {
                    "rating": "7 accept, but needs minor improvements",
                    "confidence": "The reviewer is fairly confident that the evaluation is correct"
                },
                "final_score_unified": {
                    "rating": "7 accept, but needs minor improvements",
                    "confidence": "The reviewer is fairly confident that the evaluation is correct"
                }
            }
        ],
        "review_initial_ratings_unified": [
            7,
            5,
            7
        ],
        "review_final_ratings_unified": [
            7,
            5,
            7
        ],
        "metareview": "The paper proves universal approximation in...",
        "decision": "Accept"
    }
]
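
As a usage sketch, the unified rating lists make it easy to measure how scores moved during the rebuttal; the helper below is illustrative only and assumes records follow the structure shown above.

# Illustrative helper: summarize score changes for one per-paper record
# structured as in the example above.
def score_change_summary(record):
    initial = record["review_initial_ratings_unified"]
    final = record["review_final_ratings_unified"]
    deltas = [f - i for i, f in zip(initial, final)]
    return {
        "paper_id": record["paper_id"],
        "num_reviews": len(initial),
        "mean_initial": sum(initial) / len(initial),
        "mean_final": sum(final) / len(final),
        "num_score_changes": sum(d != 0 for d in deltas),
        "decision": record["decision"],
    }

# e.g. score_change_summary(train_reviews[0]) with the splits loaded above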

REBUTTALS

We model the rebuttal and discussion data as a multi-turn conversation between reviewers and authors. This conversational setup enables the training and evaluation of dynamic, interactive LLM-based reviewing assistants that can offer authors practical, actionable guidance for improving their work before submission. The rebuttal data is split into two files, REBUTTAL_train and REBUTTAL_test. Specifically, for the Re²-Rebuttal dataset, we select 500 papers along with their rebuttals as the test set, while the remaining data is used for training.
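
The rebuttal splits can be loaded the same way as the review splits; the sketch below also counts reviewer and author turns per thread, assuming JSON files named REBUTTAL_train.json and REBUTTAL_test.json (the exact file names are an assumption; check the repository).

import json

# File names below are assumptions; check the repository for the exact ones.
with open("REBUTTAL_train.json", "r", encoding="utf-8") as f:
    rebuttal_train = json.load(f)

# Each record is one reviewer-author thread. The first two messages are the
# system prompt and the "trigger" user message that provides the paper.
for record in rebuttal_train[:3]:
    turns = record["messages"][2:]
    n_reviewer = sum(m["role"] == "assistant" for m in turns)
    n_author = sum(m["role"] == "user" for m in turns)
    print(record["paper_id"], record["reviewer_id"], n_reviewer, n_author)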

Rebuttal Data Format

The format of the rebuttal data is below.

  • paper_id refers to the unique identifier of the paper on OpenReview
  • messages is formatted as a multi-turn conversation
  • final_score is the final score after the rebuttal
  • When the role is set to system, it defines the overall context for the entire rebuttal multi-turn dialogue
  • The first message with the role of user serves to trigger the review process, providing the model with the paper information and confirming its identity as a reviewer
  • Subsequent messages with the role of user serve as the author's responses
  • Messages with the role of assistant serve as the reviewer's comments or replies during the discussion
[
    {
        "paper_id": "N-eul1pdagX",
        "conference_year_track": "ICLR 2023 Conference",
        "reviewer_id": "Reviewer_s4Ez",
        "messages": [
            {
                "role": "system",
                "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n"
            },
            {
                "role": "user",
                "content": "Now your reviewer ID is assigned as Reviewer_s4Ez, and the following are the paper you need to review:\n```<<N-eul1pdagX>>```"
            },
            {
                "role": "assistant",
                "content": "<review_content>"
            },
            {
                "role": "user",
                "content": "Title: Response to Reviewer s4Ez\nThank you for the extensive review. We address the questions and concerns piecewise:\n\n**Q1**: Fig. 1 - can you add some details on the sampling?\n\n**Response**: As suggested, we have now included some clarifying text..."
            },
            {
                "role": "assistant",
                "content": "Title: Post Review Discussion\nThank you for clarifying.\n\nI did not notice any changes to the text after equation 5, maybe accidentally not added?..."
            },
            {
                "role": "user",
                "content": "Title: Response to Follow-up and Post Reviewer-Discussion Update\nThank you for following up.\n\n> I did not notice any changes to the text after equation 5, maybe accidentally not added?\n\nApologies, indeed, we created this sentence, but it was not added in the upload above due to a version control error on our part..."
            },
            {
                "role": "assistant",
                "content": "It was a pleasure to engage in the above discussion with the author, which has added considerable value to this work."
            }
        ],
        "final_score": "<final_score>"
    }
]
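
Because each record is already in chat format, it can be converted into supervised multi-turn training examples directly; the sketch below builds (conversation history, next reviewer reply) pairs and reflects one reasonable training setup rather than a prescribed recipe.

# One possible way to derive supervised targets from a rebuttal record:
# every "assistant" (reviewer) message becomes a target, conditioned on all
# earlier messages. This is a sketch, not a prescribed training recipe.
def to_training_pairs(record):
    pairs = []
    messages = record["messages"]
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant":
            pairs.append({
                "paper_id": record["paper_id"],
                "history": messages[:i],   # system prompt, paper, prior turns
                "target": msg["content"],  # reviewer review, comment, or reply
            })
    return pairs

# e.g. pairs = [p for rec in rebuttal_train for p in to_training_pairs(rec)]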