Column schema:
- paper_id: string (length 9-13)
- conference_year_track: string (139 distinct values)
- reviewer_id: string (length 13)
- messages: list of message turns (length 5-21)
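A minimal sketch of how rows with this schema might be read, assuming the records are exported as JSON Lines; the file name `akbc2021_reviews.jsonl` and the JSONL storage format are assumptions for illustration, not something the dump itself specifies. The per-turn "role"/"content" keys match the records shown below.

```python
import json

# Hypothetical file name and storage format (JSON Lines): assumptions, not stated in the dump.
DATA_FILE = "akbc2021_reviews.jsonl"

with open(DATA_FILE, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        paper_id = record["paper_id"]            # string, 9-13 characters
        track = record["conference_year_track"]  # one of 139 distinct values
        reviewer_id = record["reviewer_id"]      # string, 13 characters
        messages = record["messages"]            # list of 5-21 turns
        # Each turn carries a "role" (system/user/assistant) and free-text "content".
        for turn in messages:
            print(paper_id, reviewer_id, turn["role"], len(turn["content"]))
```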
paper_id: 7pUJHzNNhvy
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_vVo1
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_vVo1, and the following are the paper you need to review:\n```<<7pUJHzNNhvy>>```", "role": "user" }, { "content": "Title: A well-crafted and thought-provoking study of vagueness with limited outcomes.\nReview Content: # General comments\nIn this paper the authors discuss the problem of vagueness in NLP, present a new categorisation scheme for vague noun phrases (NPs) and demonstrate through a case study annotation of Wikipedia passages that their categorisation is intuitive enough to lead to high inter-annotator agreement. \n\nThe paper is a well-crafted resource for reviewing the literature in the field and the case study (and subsequent analysis) is well designed and potentially useful as a resource in itself. My only concern is that if we consider this exclusively as a position paper, other than \"vagueness is still vague\" it's not clear what the position is. I would have liked to see some more concrete suggestions in the final section on how the recommended categorisation scheme advances the research in the field, especially when it comes to practically representing vagueness in knowledge graphs and other NLP applications.\n\n\n# Specific comments/questions\nIn the Definition paragraph of Section 2, it's not clear why \"head\" is not vague under the vagueness doctrine of American constitutional law. If this a hypothetical example, I would imagine a similar argument could be made for \"heap\" (where a court ruling or a law would define \"heap\" in terms of e.g. weight or countable items). If it is a concrete legal example, I would like to see some more details as to why \"head\" was able to defined but a term like \"heap\" wasn't.\n\nThe example of \"beauty\" is used to justify the rejection of previous categorisations but the reasons for that rejections are themselves vague. It's not clear why neither the \"referent unclear\" or the \"overlapping of disjoint but overlapping concepts\" definitions fit the type of vagueness described by \"beauty\". It would be helpful to see what types of disagreements or coverage gaps appeared during the annotation effort when using one of the existing categorisations. 
\n\nThe terms \"meaning\" and \"interpretation\" used in the paragraph \"Practical Considerations\" should be defined.\n\nIs the proposed categorisation system adhering to either definition given in Section 2? In particular the vagueness doctrine seems to provide a very useful functional definition, even for the example used in the first section. I can imagine for instance a word like \"sentiment\" being operationalised for a specific legal case. In addition, it would be an interesting exercise to annotate patents or other legal documents with the same annotation scheme applied to Wikipedia and compare the prevalence of vague words in the two fields.\n\nA few questions on the Wikipedia annotation:\n- Are named entities (proper nouns) included in the set of NPs considered for annotation? If not, how were they identified and removed?\n- What tool was used for identifying the different PoS tags used the NP defintion?\n- How many semantic classes were used in total?\n- Were annotators allowed to choose more than one type of vagueness?\n\nThe result of the baseline classifier are already higher than the reported IAA (0.78 vs 0.744). This implies that the dataset is already \"solved\", unless there were efforts to ensure higher agreement post-hoc and the adjudication of disagreements was itself consistent.\n\nThe work by Kees van Deemter \"Not Exactly: In praise of vagueness\" is a great reference that needs to be included, not only for its coverage of different definitions and categorisations, but also because of the discussion of the practical aspects of dealing with vagueness in AI.\n\n\n# Minor comments\n\"Weasel word\" is used (Section 3) before being formally defined (Section 4)\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and feedback! We have uploaded a revised version of the paper with your suggestions.\n- \"*I would have liked to see some more concrete suggestions when it comes to practically representing vagueness in knowledge graphs and other NLP applications.*\" Section 5 presents several formalisms to this end. Section 6 discusses how these could be combined. However, our work is a position paper, whose goal is to entice the community to look into the problem. To this end, we provide a survey of existing building blocks, but we explicitly keep the solution open-ended. \n- \"*I would like to see some more details as to why \"head\" was able to defined but a term like \"heap\" wasn't.*\" The word \"head\" appears in legal contracts such as insurance policies to protect people from [head injuries](https://www.google.com/search?q=%22head%22+accident+insurance). Hence, the \"head\" is not vague. The word \"heap\" is not vague either, if it is accompanied by measures of size or weight (as you suggest). Without these, it is vague, and could not appear in a contract.\n- \"*It would be helpful to see what types of disagreements or coverage gaps appeared during the annotation effort when using one of the existing categorizations.*\" We did not encounter any coverage gaps. Disagreements arose mainly when annotators forgot that a noun phrase has a subjectively vague component in addition to other types of vagueness. 
\n- \"*The terms \"meaning\" and \"interpretation\" used in the paragraph \"Practical Considerations\" should be defined.*\" The term \"meaning\" stands for what the sender (the writer) wants to convey, while \"interpretation\" concerns what the receiver (the reader) understand from the text. The paper has been updated.\n- \"*it would be an interesting exercise to annotate patents or other legal documents with the same annotation scheme*\" Indeed, this would be a very interesting exercise, e.g., on the privacy corpus of [Reidenberg et al., 2016].\n- \"*Are named entities (proper nouns) included in the set of NPs considered for annotation? If not, how were they identified and removed?*\" Yes, named entities have been manually identified and kept. We added an example to the paper.\n- \"*What tool was used for identifying the different PoS tags used the NP definition?*\" We used **nltk** to tag the text.\n- \"*How many semantic classes were used in total?*\" **11 classes** were used. The paper has been updated accordingly.\n- \"*The result of the baseline classifier are already higher than the reported IAA (0.78 vs 0.744). This implies that the dataset is already \"solved\"*\" The inter-annotator agreement (IAA) is not a percentage, and thus cannot be compared to the accuracy.\n- \"*The work by Kees van Deemter \"Not Exactly: In praise of vagueness\" is a great reference that needs to be included*\" Thank you for the pointer! We now cite it to cement the observation that vagueness is both unavoidable and useful.\n- \"*\"Weasel word\" is used (Section 3) before being formally defined (Section 4)*\" You're right, the error has been corrected.\n Global/general Response:\n```\nReviewer vVo1 = R1\n\nReviewer ygk9 = R2\n\nReviewer vazn = R3\n\nWe thank the reviewers for their thoughtful remarks. We are encouraged they found our study about vagueness well-crafted (R1), fascinating (R2) and fun (R3). We are glad they found our categorization intuitive, the case study well designed and useful (R1), and the problem of vagueness important (R3). We will now reply to each question.\n\n**@R1,R2: What is our vision?** Our position is that vagueness is currently ignored when it comes to knowledge bases and their applications such as question answering systems, and that vagueness should be taken into account. We have made this clearer in the \"vision\" section. More specifically, we define a problem, categorize its instances, analyze its nature, provide a dataset, discuss the approaches that have done something similar, and call for the community to look into the problem. \n\n**@R1,R2: Why is beauty vague?** Even a single observer considers that beauty applies to a certain degree (\"Alice is more beautiful than Bob\"). Yet, there is no consensus on how to measure this degree (someone else may find Bob more handsome than Alice). This makes the concept subjectively vague. In general, we consider vague any noun phrase that cannot appear in a contract, and beauty qualifies here.\nLassiter and Goodman [2017] see just scalar vagueness and “everything else”, under which they also group nouns such as “bird” (presumably because of a lack of precision). Bennett [2005] defines a form of vagueness as the \"overlap of disjoint but overlapping concepts\". For example, Paris in France is both a city and a department, and both objects are the same w.r.t. geography, but are different from a legal perspective. Again, this definition doesn't apply to beauty. 
Hence the need to introduce a new category, which is what our work does. We find that this category is actually very frequent: It concerns a large number of noun phrases that carry an opinion (like \"sophisticated\" or \"fame\"), and/or that cannot be assessed objectively (like \"power\" or \"great ability\"). \n\n**@R2,R3: In which context do you analyse vagueness?** We treat each document top down, and judge the vagueness of a word at the point where it appears, given all previous text. We did not treat the case where a later explication treats the vagueness that was encountered previously, as the word would still appear vague to human readers when they first encounter it.\n\n**@R1,R3: What about multiple forms of vagueness in a noun phrase?** Annotators were allowed to tag the NPs with multiple types of vagueness. For example, \"complex societies\" is both subjectively vague and quantitatively vague. Our paper discusses this in Section 2, but we also updated the use case section to be clearer. There are 79 NPs with multiple types of vagueness (3.22% of all NPs, or 12.37% of vague NPs). All combinations of vagueness exists in our dataset.\n\nThe paper has been updated to take into account all of these points.\n```\n", "role": "user" }, { "content": "Title: Thanks for the clarifications\nI would like to thank the authors for addressing my questions and adding some clarifications regarding concepts.\n\nRegarding the vision and motivation of the paper, I am still not entirely convinced by the author's response. I agree that the KB community has largely ignored vagueness, but that's not because they aren't aware of its existence, but rather that it is very hard to address concretely and at scale. The current work does indeed provide a valuable contribution in this area by systematising the categories and providing quantitive results (and a useful resource) but I don't see the thesis of the position paper itself as the main contribution.\n\nFinally a minor clarification on the following:\n\n> The inter-annotator agreement (IAA) is not a percentage, and thus cannot be compared to the accuracy.\n\nThis is merely a technicality. Cohen's kappa is chance-corrected agreement which itself is just a percentage. My point was if two experts (or any humans) agree only about 74% of the cases, this means that the highest possible score achievable by a human (against a fully-adjudicated version of the dataset) is 74%. ", "role": "assistant" } ]
paper_id: 8smkJ2ekBRC
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_UK5j
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_UK5j, and the following are the paper you need to review:\n```<<8smkJ2ekBRC>>```", "role": "user" }, { "content": "Title: A new and interesting benchmark, but more details should be included\nReview Content: Summary: This paper proposed a novel benchmark for relation extraction from very long documents and a new method -- KnowFi to deal with the problem. The dataset is constructed from Wikia containing long fictional texts. The method, KnowFi, generally contains two modules -- passage ranking and multi-context neural extraction. They first rank the source passages via a similarity function and then feed the top-$k$ passages to a BERT-based multi-context neural extraction module to get its relation. Experiments showed that KnowFi achieved strong performance on this new benchmark and two conventional short-text benchmarks.\n\nStrengths:\n1. The benchmark proposed in this work is novel and worth exploring.\n2. The proposed method -- KnowFi is simple yet effective.\n3. Strong results on this benchmark and two conventional short-text benchmarks.\n\nWeaknesses:\n1. The statistics of the constructed dataset -- LoFiDo, are missing. How many samples do you use for training and evaluation? How many passages are given for each sample?\n2. Lacking ablations of the Multi-Context Neural Extraction module, how does the similarity threshold which is set to 0.5, affect the performance?\n\nQuestions:\n1. Have you tried any other baselines for passage ranking? For example, matching SPO by word embeddings or contextualized word embeddings.\n2. What do you mean by \"For context selection, we rely on TF-IDF-based bag-of-words similarity, choosing the top-100 tokens per relation as its context.\"?\n3. Have you tried the comparison of the proposed method with GLRE + the passage ranking? \nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your valuable feedback!\n\n> The statistics of the constructed dataset -- LoFiDo, are missing. How many samples do you use for training and evaluation? 
How many passages are given for each sample?\n\nAnswer: The dataset includes 81,025 instances for training and 20,257 for validation. On average, each sample contains ~1.5 passages. For the test data, we only test on five specific universes, which take the input from both books and Wikia texts. The total number of instances in the test data from Wikia texts is 14,610 and from books is 64,120. Thanks for this comment, we have added the statistics on LoFiDo in appendix B (LoFiDo Statistics) in the revised paper.\n \n\n> Lacking ablations of the Multi-Context Neural Extraction module, how does the similarity threshold which is set to 0.5, affect the performance?\n\nAnswer: We pick 0.5 as the default threshold (possible values from 0 to 1). Based on your suggestion, we conducted an ablation study on the similarity threshold. We have added the results of this study in appendix D (Additional Experiments) in the revised paper. The results show that, with a similarity threshold around 0.5, the model achieves the best F1. By increasing the similarity threshold, the model can achieve higher precision but lower recall and vice versa.\n\n\n\n> Have you tried any other baselines for passage ranking? For example, matching SPO by word embeddings or contextualized word embeddings.\n\nAnswer: Thanks for this suggestion. We have now conducted another study on the ranking step, where we use contextualized embeddings. The details about this study have been added in appendix D (Additional Experiments). As one can see, using embedding-based ranking, the model can somewhat capture the case of synonyms in the context, hence achieves higher recall. However, in general, both techniques are on par, and embeddings do not improve the results in terms of F1-score.\n\n> What do you mean by \"For context selection, we rely on TF-IDF-based bag-of-words similarity, choosing the top-100 tokens per relation as its context.\"?\n\nAnswer: Context selection is the general term for passage selection/ranking. We compute TF-IDF-based similarity between the relation context and the passage. We choose top-100 tokens (based on their TF-IDF scores) as the relation context. We rephrased this in the revised paper.\n\n> Have you tried the comparison of the proposed method with GLRE + the passage ranking?\n\nAnswer: Thanks for this suggestion. We conducted a new experiment on GLRE + our passage ranking, which has been added to Appendix D (Additional Experiments). The results show that, by using passage ranking to filter the inputs (for both train and test), GLRE is able to improve its precision and recall. However, this enhanced variant is still inferior to KnowFi by a substantial margin.\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It satisfactorily addresses the major issues I had. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
paper_id: 8smkJ2ekBRC
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_NafS
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NafS, and the following are the paper you need to review:\n```<<8smkJ2ekBRC>>```", "role": "user" }, { "content": "Title: A new task and benchmark for fictional relation extraction, but missing some details\nReview Content: Strengths:\n1. A new task and benchmark: This paper focuses on a new task to extract relations from long fictional texts, and release a new benchmark with KB annotations from Wikia or manual annotations. \n2. Model design is reasonable: To solve the sparse issue of distant supervision, the authors rank passages based on TF-IDF; to tackle the long text input issue, multiple passages are encoded and are then concatenated. \n3. The experiments show the superiority of the proposed model.\n\nWeakness and detailed comments:\n1. Missing model and experiment details. \n - 1.1 Lack of details of \"prototype representation\". How long is the \"short list of synonyms and close hyponyms or hypernyms\"? Where did you get the \"hyponyms or hypernyms\"? \n - 1.2 According to \"the aggregation layer is configured to concatenate the representations of all passages\", the number of passages should be fixed. Otherwise, it cannot match the dimension of the following classification layer. However, in the experiment, it is said that \"we select all passages with cosine above 0.5 as positive training samples\", which seems that the number of passages is not fixed, but is decided by the similarity threshold.\n - 1.3 What do your mean by \"context selection\" in the parameter setting. There is no such terminology in the model sections. Do you mean the \"Passage ranking\"?\n - 1.4 Also, I cannot follow \"chosing the top-100 tokens per relation as its context\". \n2. Missing dataset details. \n - 2.1 There are no data statistics in both the main paper and the Appendix, which is important for a newly released dataset.\n - 2.2 Will the passages that you selected be released?\n - 2.3 About \"20% of the triples from each of these universes are withheld for testing.\", how about the validation set? It also does not mention whether parameters are selected on the validation set. \n3. Questions about the experiments.\n - 3.1 Why do you choose TF-IDF to rank passages? Did you compare with embedding-based ranking method? 
It seems that the ranking step is the most crucial step in your model, but TF-IDF model cannot handle synonyms.\n - 3.2 In section 8, \"we pursued a user study to compare entity summaries, one by KnowFi and one by a baseline\", which conflicts with \"We compare KnowFi to all baselines.\" Does the first sentence is written by mistake?\n4. Additional questions: To deal with long text using Transformer-based encoders, the models designed for long text such as Longformer should be a natural thought. Have you tried that?\n\nTypos:\n- \"e1,e2 for which (e1,r,e2 does not hold\" --> \"e1,e2 for which (e1,r,e2) does not hold\"\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your valuable feedback!\n**Missing model and experiment details.**\n> 1.1 Lack of details of \"prototype representation\". How long is the \"short list of synonyms and close hyponyms or hypernyms\"? Where did you get the \"hyponyms or hypernyms\"?\n\nAnswer: We manually select the synonyms/hypernyms/hyponyms for each relation, based on WordNet and the relation filtering step (e.g. relation grouping - Appendix A). On average, each list contains about 3 synonyms/hypernyms/hyponyms.\n\n> 1.2 According to \"the aggregation layer is configured to concatenate the representations of all passages\", the number of passages should be fixed. Otherwise, it cannot match the dimension of the following classification layer. However, in the experiment, it is said that \"we select all passages with cosine above 0.5 as positive training samples\", which seems that the number of passages is not fixed, but is decided by the similarity threshold.\n\nAnswer: The passages are first pre-filtered by the similarity threshold. Then, if more than k passages survive this filter, we pick only top-k passages as input of the neural model (k = 5 in our experiment)\n\n> 1.3 What do you mean by \"context selection\" in the parameter setting. There is no such terminology in the model sections. Do you mean the \"Passage ranking\"?\n\nAnswer: Yes, it should be \"passage ranking\" or \"passage selection\". We revised the terminology in the paper.\n\n> 1.4 Also, I cannot follow \"choosing the top-100 tokens per relation as its context\".\n\nAnswer: For each relation, we select top-100 tokens (based on their TF-IDF scores) as the context. We clarified this in the paper.\n\n**Missing dataset details.**\n> 2.1 There are no data statistics in both the main paper and the Appendix, which is important for a newly released dataset.\n\nAnswer: The dataset includes 81,025 instances for training and 20,257 for validation, extracted from Wikia. For the test data, we use five specific universes, which take input from both books and Wikia texts. The total number of instances in the test data from Wikia texts is 14,610 and from books is 64,120. Thanks for this comment, we have added the statistics on LoFiDo in appendix B (LoFiDo Statistics) in the revised paper.\n\n \n> 2.2 Will the passages that you selected be released?\n\nAnswer: The passages will be released with the final paper, along with the code.\n\n> 2.3 About \"20% of the triples from each of these universes are withheld for testing.\", how about the validation set? 
It also does not mention whether parameters are selected on the validation set.\n\nAnswer: Please refer to the answer of question 2.1.\n\n**Questions about the experiments.**\n> 3.1 Why do you choose TF-IDF to rank passages? Did you compare with embedding-based ranking method? It seems that the ranking step is the most crucial step in your model, but TF-IDF model cannot handle synonyms.\n\nAnswer: Based on your suggestion, we also conducted another study on the ranking step, where we use contextualized embeddings. The details about this study have been added in appendix D (Additional Experiments). As one can see, using embedding-based ranking, the model can somewhat capture the case of synonyms in the context, hence achieves higher recall. However, in general, both techniques are on par, and embeddings do not improve the results, in terms of F1-score.\n\n> 3.2 In section 8, \"we pursued a user study to compare entity summaries, one by KnowFi and one by a baseline\", which conflicts with \"We compare KnowFi to all baselines.\" Does the first sentence is written by mistake?\n\nAnswer: Each summary pair is generated by picking one from KnowFi and one by randomly drawing from one of the three baselines. We clarified this in the paper. \n\n> Additional questions: To deal with long text using Transformer-based encoders, the models designed for long text such as Longformer should be a natural thought. Have you tried that?\n\nAnswer: Thanks for the suggestion. This is left for future work.\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The authors have answered almost all of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
paper_id: 8smkJ2ekBRC
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_2wDt
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_2wDt, and the following are the paper you need to review:\n```<<8smkJ2ekBRC>>```", "role": "user" }, { "content": "Title: A great step towards KB construction from long docs.\nReview Content: The paper focuses on the relation extraction task from long fictional texts (i.e., book-level RE), which is an important step towards KB construction from books. The authors present a neural architecture to address the challenges such as sparsity, distant-supervision, size limitation, etc. Although the proposed method is not particularly novel, the paper presents a nice first work in this direction. The authors also provide a new benchmark LoFiDo for evaluating different methods, which is also a great contribution.\n\nI don't have any major concerns about this paper. \nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you very much!\nThank you for your comments on our work, we appreciate your time and effort!\n\n\n\n", "role": "user" }, { "content": "I value the authors' response. The reply answers almost all of my questions. I maintain my positive recommendation for this work.", "role": "assistant" } ]
paper_id: AJKd0iIFMDc
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_z3eg
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_z3eg, and the following are the paper you need to review:\n```<<AJKd0iIFMDc>>```", "role": "user" }, { "content": "Title: Can automated dialog summarization summarize concepts that are not immediately available in the input? \nReview Content: # Summary\nThe authors introduce a new problem, inferencing Concepts Out of DIalogue Context (CODC) wherein a ML model is tasked with producing concepts that are not explicitly mentioned in the input. The authors define this problem concretely and introduce metrics to study it. Particularly they define concepts as noun phrases that follow a very specific set of rules within wordnet. The authors propose metrics and a model to test the ability of systems on these CODC problems. \n\n# Strengths\n- The paper is well written.\n- I like the rules that are clearly defined to be concepts. They are intuitive and make the phrase concepts seem less abstract.\n- Adding human evaluations strengthens the paper. \n- Modelling approaches are intuitive and well presented. \n\n# Weakness\n- This appears to be a very hard problem. The image is first \"summarized\" by dialog which must then be further summarized as a part of this setup. Since summarization almost guarantees a loss of information, I wonder how much scope there is to actually improve on the results on this dataset. \n- The related work section does not do a good job of placing the work in the context of related works. The current work should be situated more clearly within the field as much as possible.\n- I do not really like the phrase \"concept\" as it comes across as a very abstract term to me. Changing this is not critical but I think it would help to introduce the definition of what the author's mean exactly by 'concepts' asap in the text. \n- The phrases \"abstractive\" and \"extractive\" models should be defined early on. I personally have never come across these phrases before. A simple sentence each should be sufficient. \n- One line descriptions of the baseline models in the main text would strengthen the paper. \n\n# Other thoughts\n- Figure 1 in the appendix helps develop the intuitions for the reader. 
Consider moving more examples from the paper in to the figure as it should enhance readability.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer's evaluation is an educated guess\n", "role": "assistant" }, { "content": "Title: Response to review 1\nThanks for the review. \n\n- Related works \n\nThe problem of inferencing out-of-context concepts is a new problem where no identical related works have been proposed. In this sense, we divide the related works section in our paper into two parts, dialogue summarization and pragmatics, where the first part is the concrete task in our work and the second one is where out-of-context inference is originated from. We believe this can provide enough background knowledge related to our work. \n\n- Concepts \n\nIt’s been stated in the Introduction and the beginning of section 3.1 that we narrow down the definition of `concept’ to a noun word that satisfies specific restrictions using WordNet. It’s indeed valuable to have an overall definition at the beginning of the paper anyway. Thanks for pointing it out. \n\n- Abstractive and Extractive \n\nAbstractive and extractive summarization are two mainstream approaches for summarization models in NLP. We have already briefly explained the differences at the end of the first paragraph in the Introduction. \n\n- Baseline \n\nSure, thanks for the suggestion. The one-line descriptions of baselines are in the appendix and we will move them to the main text if space permits. ", "role": "user" }, { "content": "Thanks to the authors for their clarification. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
paper_id: AJKd0iIFMDc
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_EVVb
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_EVVb, and the following are the paper you need to review:\n```<<AJKd0iIFMDc>>```", "role": "user" }, { "content": "Title: Interesting task and a solid model and evaluation, but unclear if the dataset is a good fit for the task\nReview Content: This paper introduces a new dialogue summarization task that requires summaries to mention new concepts that are not explicitly mentioned in -- but may be implied by -- the dialogue. The task, CODC, repurposes the VisDial dataset (itself based on the MSCOCO captioning dataset), which involved information-seeking question-and-answer dialogues about an image, where the dialogue participants were prompted by the image's caption. The captions frequently mention some concepts that are not mentioned in, but are implied by, the dialogue. In CoDC, the task is to produce the reference caption (which is treated as a dialogue summary) from the dialogue (without using the images). Summarization models are evaluated on standard text overlap with the references, as well as an WordNet-based F1 metric that measures whether the model's output has inferred concepts in the references which were not mentioned in the dialogue.\n\nThe paper proposes a Trans-KnowAttn model, which uses the co-occurrences and word relatedness between concepts between dialogues and summaries in the training data to propose candidate concepts for a given dialogue, then attends to these concepts in a neural sequence-to-sequence abstractive summarization system. The proposed model outperforms a variety of reasonable neural sequence-to-sequence summarization baselines, as measured by both of the metrics above, and also in pairwise human judgements between Trans-KnowAttn and the best baseline system.\n\n*Strengths*\n1. Predicting inferrable knowledge (and more generally pragmatic implicatures, which this paper indirectly touches on) is an exciting and underexplored direction.\n2. The model was well-motivated, combining a variety of sources of knowledge (co-occurrences in training data and lexical similarity) and approaches (classifiers and attentive neural models) in a reasonable and appropriate way. Ablations show that most of the components are valuable.\n3. 
For a task that seems potentially very challenging (see weaknesses below for some questions about how difficult / well-defined the task is, as constructed using the dataset), the results seemed strong, with the proposed model outperforming a range of reasonable neural baselines.\n\n*Weaknesses*\n1. While I appreciated the motivation for the task, and that reusing VisDial allows this work to draw on a large body of work analyzing VisDial, I'm not totally convinced that VisDial is a good fit for this task. In the VisDial dataset collection, both dialogue participants saw the caption, and then produced the dialogue (with one participant seeing the image and answering questions asked by the other participant). The caption seems likely to have complementary information to the dialogue, but it's not totally clear to me that captions (or their novel concepts) are inferrable from the dialogue or can be reasonably called summaries. The example dialogues the paper shows did help a bit to convince me, but it's unclear whether these dialogues are cherry-picked. The paper would be much stronger if it evaluated the dataset quality with a human ceiling, by e.g. having people annotate summaries for some fraction of the dialogue and compute the CoDC metrics between these human annotations and the reference captions, or compute human inter-annotator CoDC.\n2. While the paper was overall very clearly structured and easy to understand, a few details about the evaluation and models were unclear to me (see below), and there were a lot of minor writing issues throughout which could be fixed by a thorough proofreading (see below).\n\n*Questions*\n1. In Sec 3.1 CoDC definition, \"shortest path between any pair of synsets\": does distance use hypernym/hyponym relations, or synonym relations?\n2. should $R_{CODC}$ also use max (as $P_{CODC}$ does)?\n3. What is the copy mechanism used in Trans-Copy?\n4. The performance of BertSum is surprisingly low. Was it fine-tuned on this dataset? \n5. Are the hyperparameters of Trans-KnowAttn the same as Trans-Copy?\n6. Human evaluation: how many times did annotators chose the neutral comparison option? Since these neutral results seem to be excluded from Table 2, it's unclear how many times Trans-KnowAttn was rated better than Trans-Copy, so it's especially hard to evaluate significance. It would help to do a statistical significance test here.\n7. Can more details be given for the Randomization Test -- is this a permutation test? 
if so how was it performed?\n\n*Suggestions*\n- Be more specific about the CODC-based metric in the intro: does it compare against references?\n- Figure 1 would be clearer with the role of images (they're just used in the VisDial construction) specified.\n- Using new concepts is not exactly lexical entrainment, which (as defined by Brennan 1996) means two people coming to use the same terms when repeatedly discussing an object.\n- \"gold summarization $y$ is provided by an annotator\" => it's a gold caption, not really a gold summary, and it was provided by an annotator in the captioning task.\n- It would help to give more detail on how VisDial was constructed: both participants saw the caption, but only one person (\"the answerer\") saw the image, while the questioner is supposed to ask questions to find out more detail about the image (which often extends what's given in the caption).\n- It's a bit difficult to reason about likely values for P_{CODC} and R_{CODC}, given the relatively complex (but reasonable) definitions of the functions, which rely on the structure of WordNet. Some example calculation of these or their reasonable ranges (e.g. give the statistics for an example dialogue) might be helpful to build intuition.\n- A few more details should be given about the training procedure for clarity: I'm assuming the models need to generate the full reference caption (rather than just the CODC words), are trained to maximize log likelihood of captions given dialogues, and that \"All five references are used in the training process\" means a given dialogue is used in five different training instances, one time with each caption. This should all be clarified given more space though (especially if I'm misunderstanding).\n- Unclear why using both GloVe and Word2Vec hurts performance, but could be due to some details about how the embeddings are combined (details on this are missing from the paper).\n- Sec 6: \"pragmatics may be hurt when people are using less precise and consistent languages\". I'm unclear on how to interpret this: pragmatic modeling hurts the system? Pragmatic interpretations are difficult (for people? or systems?) when language is less precise and consistent?\n\n*Typos*\n- last para of page 2: extandable => extendable, \"co-occurance\" => \"co-occurrence\"\n- \"ability of inferencing\" => \"ability to infer\"\n- Sec 3: \"by concept, we mean a noun word\": noun word => noun phrase\n- Sec 3: \"most of examples\" => \"most examples\"\n- Sec 4: \"while are out\" => \"while being out\"\n- \"groud-truth\" => \"ground-truth\"\n- \"typical similarity measurement\" => \"typical similarity measurements\"\n- \"attend to knowledge embedding list\" => \"attends to the knowledge embedding list\"\n- \"*a* copy mechanism is used\"\n- \"from which human will easily inference\" => \"from which a human will easily infer\"\n- \"fails to inference such new\" => \"fails to infer this new\"\n- \"yield such inference\" => \"yield such an inference\"\n\n*** Update after response ***\n\nThanks to the authors for the response! Although I still think that the paper would be much stronger if it added a human ceiling (see suggestions in weakness 1 in my original review) to see how well-defined / challenging the dataset is, the response helped with my concerns, and I'm more positive about the paper. 
I've raised my score to a 6 (from a 5).\n\nI would feel better about the framing if the captions weren't referred to as \"summaries\", since they're not summaries in the sense that's often used in NLP (written while referring to the document being summarized) and this may be a bit misleading to people (it was to me), since this really is a new task in a sense. But I think this could easily be changed.\n\nThanks too for the answers which clarified my other questions! It would also help to add a significance test to the human evaluation results.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review 2\nThanks for the careful review. \n\n \n\nResponse to Weakness: \n\n \n\nIt is the fact that both participants are aware of the caption that contributes to our assumption that contributes to an inference gap between dialogues and captions. As captions are explicitly accessible to both participants, participants don't have to express the caption context explicitly in their conversations, and we hope to address this gap. The statistics on the dialogues where inference is needed are shown in Figure 2 (a) of our paper. \n\n \n\nResponse to the questions: \n\nThe paths use synonym relations to filter out synonyms. \n\nThe R_CODC uses max already, while in a different way. We include the ground-truth summary where the generated summary can achieve the maximum recall in the numerator, as indicated in equation (3). \n\nThe copy mechanism is the same as in Pointer Generator Network (PGN) [1], which has been introduced in the appendix. \n\nYes, it’s fine-tuned on this dataset. BERT may not be good enough at such generation task where out-of-context inference is required. \n\nYes, they are the same. \n\nFor the 500 comparisons, 74 votes are neutral. votes favoring Trans-Copy, Trans-KnowAttn, and neutral: (206, 220, 74) \n\nYes, it’s a permutation test. Denote the summaries generated by system A as {x_A_1, x_A_2, …, x_A_n} and system B as {x_B_1, x_B_2, …, x_B_n}. Denote the score generated by the specific evaluation metric, say, CIDEr, of system A as f(A). We would like to know whether f(A)-f(B) is significant. For each data entry with index i, permutate {x_A_i, x_B_i} to form different simulated systems A’ and B’, which yield 2^n different system pairs (A’, B’). Sample K permutations from all (A’ B’) pairs and calculate f(A’)-f(B’) for each of them. The f(A’)-f(B’) scores form a distribution histogram. The p-value can be simulated by checking the CDF position of f(A)-f(B) on the histogram. \n\n \n\n[1] Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization \n\nwith pointer-generator networks. ", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised.", "role": "assistant" } ]
paper_id: AJKd0iIFMDc
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_sXci
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_sXci, and the following are the paper you need to review:\n```<<AJKd0iIFMDc>>```", "role": "user" }, { "content": "Title: Well-written paper that identifies a new problem; model innovation is limited but overall a good paper\nReview Content: The paper tackles the problem of dialogue summarization, where a generated summary should include concepts that are not literally mentioned in the dialogue but can be inferred based on one’s commonsense. The paper provides precise definition of such concepts (called CODC in the paper) and evaluation metrics. It then proposes a new model that retrieves relevant concepts from ConceptNet, and feeds them to the text generator. Experiments show small improvements over a range of baselines in automatic evaluation, CODC evaluation, and manual human evaluation.\n\nStrength\n* The paper is generally well written and is easy to follow.\n* The problem is generally well-motivated, and the definition of CODC and evaluation metrics are all reasonable.\n* The model is clearly described and is well-executed.\n* Experiments are comprehensive and useful, including a comparison with a range of baselines, ablations that show impact of different model components, and human evaluation.\n* The paper includes good conclusions/discussions, e.g., \n\nWeakness:\n* While I think the problem of CODC is generally well-motivated in the paper, it would still be good to see more evidence on whether inferencing CODCs is practically useful, i.e., whether humans actually prefer the summary with more CODCs. Human evaluation in the later part of the paper is not the fair evidence because crowdworkers were explicitly asked to consider the CODC aspect.\n* Innovation in the model is limited. Specifically:\n * The specific way Knowledge Attention Network retrieves concepts is tightly coupled with the specific way CODC metrics are defined, which possibly overly favors Knowledge Attention Network in CODC metrics. For example, CODC metrics only consider new concepts which (1) appear in ConceptNet, and (2) are connected with concepts in the dialogue with depth of at most 4, which are fairly specific definitions. 
These definitions are exploited by the model based on Section 4.1 (e.g., in step 1, initial pool of concepts are retrieved based on the definition of CODC in Section 3).\n * Improvements over the baseline (Trans-Copy) are not substantial. Even based on a CODC metric (F1), the improvement is 0.6% absolute. (As a side note, I believe that the most important metric has to be F1, rather than precision or recall. High precision can be achieved by generating concepts as little as possible, and high recall can be achieved by generating concepts as many as possible.) Although the paper includes human evaluation, it is hard to solely rely on human evaluation results due to lack of details such as instructions and the agreement rate by crowdworkers.\n\nQuestions:\n* During human evaluation, are annotators given groundtruth CODCs? Or are they only given the groundtruth summary and the generated summary?\n* Based on the ablation, it looks like retrieval of concepts heavily relies on “co-occurrence”, which is a fairly naive method. I am curious to see the performance of the baseline which only uses co-occurrence, without leveraging the graph structure of ConceptNet. One specific way of implementing this baseline would be to keep a set of concepts from ConceptNet, and get the top-K concepts with the most co-occurrence with concepts in the dialogue. If this baseline is as good as the Knowledge Attention Network, it may mean that the role of “knowledge” in the model is limited.\n* It would be useful to plot a graph of metrics (both standard metrics and F1_CODC) with varying numbers of CODCs at inference time. It would be ideal to see that improvements come from examples with many CODCs.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review 3\nThanks for the review! \n\n \n\n- Human evaluation \n\n \n\nThe annotators are not given ground-truth CODCs. They are only given the dialogue, ground-truth, and generated summaries. The annotators are not given suggestive questions and are asked to compare the two results fairly. \n\n \n\n- Ablation on Co-occurrence \n\nIf we use Co-occurrence feature only for inferencing, the retrieved knowledge list can only reach F1_CODC score 11.23, compared to the figures in Table 2 (b), where when using all features the F1_CODC can reach 14.85. By feeding the retrieved knowledge using co-occurrence only to the summarization model, the resulting CIDEr is 102.89, compared with 104.00 when using all features. With these results, we can conclude that the CODC inference is not a naïve problem that can be solved using Co-occurrence solely and requires more sophisticated features like external knowledge from WordNet. \n\n \n\n- Figure with performance v.s. number of CODCs. \n\n \n\nThe result is already shown in Figure 4 (a). The x-axis is the number of CODCs leveraged by the summarization model. The trend of the figures shows that within a certain threshold, the more inferred CODC the better the summarization performance. ", "role": "user" }, { "content": "I acknowledge the authors' response. Most of my concerns have been addressed. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
paper_id: GF8wO8MFQOr
conference_year_track: AKBC 2021 Conference
reviewer_id: Reviewer_nKEs
messages:
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_nKEs, and the following are the paper you need to review:\n```<<GF8wO8MFQOr>>```", "role": "user" }, { "content": "Title: New Setting for Temporal Knowledge Graph Link Prediction, but with limited experiments\nReview Content: ### Summary:\n\nThe paper addresses the problem of one-shot link prediction for temporal knowledge graphs (TKG). Specifically, it focuses on relations with very few triples. The proposed approach uses the framework of few-shot learning and episodic training on TKG using a temporal neighborhood encoder. The experiments demonstrate the proposed approach's effectiveness on three datasets against existing TKG models and few-shot learning models for static graphs.\n\n\n### Strengths:\n1. The paper presents a new setting for TKG link prediction. The paper also adapts the existing TKG datasets for the few-shot setting.\n2. The proposed approach performs well across three datasets.\n3. The problem is well-motivated, and the paper is well-written and easy to follow.\n\n### Weaknesses:\n1. Although it is a new setting, the temporal neighborhood encoder is similar to existing works (e.g., TeMP). Therefore, the novelty of the proposed method is limited. Further, it does not compare with alternate temporal encoders. Many TKG encoders are present in the literature, and it would have been interesting to evaluate their effectiveness in this setting.\n2. The evaluation dataset only consists of relations with frequency >= 50. It would have been interesting to see the method's effectiveness on highly infrequent relations.\n\n\n### Questions to the author:\n- Did you also try relations with frequency <= 50? If yes, how did it perform?\n\n### Presentation Improvements:\n- The experiments section in the main paper is small. Probably, it can include concise versions of some experiments from the appendix.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Clarification on evaluation datasets. We ran more baselines; please see our general response above for more details. \nThank you for your detailed comments and insightful feedback. 
Please see our general comment in response to the points raised about the novelty and the experiments. \n\n**Evaluation Dataset:** The entire training/validation/test is done over the relations with a frequency between 50 and 500. Please note we have followed the setup in Xian et al. 2018. Relations with frequency > 500 are used as metadata and, as you noted, the relations with frequency < 50 are ignored. The reason for ignoring relations with frequency < 50 was that when the quadruples of a relation only appear 50 times over the course of a two-year period, it is highly likely that they appear fewer than 5 times during the test/validation period (3 months of data in our experiments), thus they wouldn’t have a significant effect on the final results.\n\n**Presentation:** We thank the reviewer for the suggestion and will provide a concise summary of experiments in the main body of the paper while keeping details in the appendix. \n\n Global/general Response:\n```\nWe thank the reviewers for their detailed comments and insightful feedback. Two repeating themes in the reviews were concerns about the limited novelty of our work and the insufficient number of baselines used in our experiments. Below we provide our response on these two issues. Other questions raised by the reviewers will be addressed via individual responses. \n\n### **Novelty:**\nWe stress that the main novelty of our work is the formulation of the one-shot link prediction problem for temporal knowledge graphs and the introduction of a novel one-shot episodic learning scheme for TKGs. Furthermore, while the attention mechanism for temporal encoding was suggested before, empirically validating the effectiveness of self-attention for a one-shot setting is also an innovative aspect of our work. \n\n### **Insufficiency of the baselines:**\nFirst, we note that the baselines used in our work were state-of-the-art methods at the time of finalizing our experiments, e.g., ReNet had the best performance and many of the recent TKG methods had not been peer-reviewed or did not have a proper implementation to work with. \n\nWe now elaborate on specific methods mentioned by the reviewers: \n\n* **ATiSE:** Following the reviewer’s suggestion, we have run ATiSE (Xu, Chengjin, et al. “Temporal knowledge graph embedding model based on additive time series decomposition”, arXiv preprint arXiv:1911.07893, 2019) on ICEWS14 and ICEWS17. Our preliminary results suggest that our method outperforms ATiSE in all metrics for both datasets. A summary of the results is as follows:\n\n| | ICEWS14 | | | | ICEWS17 | | | |\n|-------|---------|-------|-------|-------|---------|-------|-------|-------|\n| | H@1 | H@5 | H@10 | MRR | H@1 | H@5 | H@10 | MRR |\n| **ATiSE** | 0.064 | 0.325 | 0.456 | 0.196 | 0.031 | 0.248 | 0.357 | 0.137 |\n| **OAT-R** | **0.164** | **0.330** | **0.538** | **0.268** | **0.191** | 0.479 | 0.641 | **0.325** |\n| **OAT-T** | 0.084 | 0.323 | 0.429 | 0.177 | 0.170 | **0.519** | **0.743** | 0.323 |\n\nNote: We have used the hyper-parameters reported for ATiSE on ICEWS14, except for the embedding size, which we set to 50 to be consistent with our model embedding size. \n\n* **TeMP:** We did not compare our approach to TeMP because the latter considers an interpolation setup for graph completion (smoothing), whereas our work targets an extrapolation/forecasting setup, where one predicts links for future timestamps. We note that the authors of the TeMP paper themselves did not compare their approach to extrapolation-based methods such as ReNet, e.g.
in Section 4.3 of their paper, they state that: \n_“We don’t compare with RE-NET, GHN (Han et al., 2020), DartNet (Garg et al., 2020) and Know-Evolve since these work focus on graph extrapolation task.”_ \n We are currently trying to compare TeMP to our method but having issues running their code for the extrapolation setting. We will update our response if we are able to generate results in time. We also note that except for the self-attention mechanisms, TemP is very similar to RE-Net, so we don’t expect any significant difference in the results. \n\n* **DySAT:** As you pointed out, DySAT uses a self-attention mechanism for temporal graphs, however, it is not directly applicable for temporal knowledge graphs with different relation types. They do not conduct experiments on any of the common TKG benchmarks such as GDELT and ICEWS nor do they compare with any state-of-the-art TKG model. We have cited their papers in the related work as it can be related to the self-attention encoder of our model. However, we consider DySAT to be more related to the dynamic graphs literature. \n\n\n\n\n\n\n\n\n\n```\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. It satisfactorily addresses the major issues I had. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
GF8wO8MFQOr
AKBC 2021 Conference
Reviewer_Ljqx
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Ljqx, and the following are the paper you need to review:\n```<<GF8wO8MFQOr>>```", "role": "user" }, { "content": "Title: TKG Completion in one-shot setting\nReview Content: The work proposes a principled approach to knowledge completion in temporal knowledge graphs. This is achieved by two novelties — 1) a one-shot learning framework for TKGs that is based off similar approaches in static KGs and 2) a temporal neighborhood encoder that allows modeling of graphical and temporal dependencies, perhaps the key contribution of the work. Further, the work reports detailed exports highlighting the utility of their approach in addition to providing two additional benchmark datasets. \n\nComments and Questions:\n- Caption under Fig 3 can be improved — e.g. explaining the notation / input / output. \n- The paper is generally well-written and the described method is easy to follow\n- The two architectural contributions proposed are not novel by themselves. However, their application in this setting is interesting and their utility, as demonstrated by experiments can be of use to the relevant community.\n- Do the authors have intuition on the pros/cons of OAT-R or OAT-T for different types of data? Are there certain temporal properties that might favor one vs other? This might be useful to someone interested in applying the methodology for their problem. \n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: More intuition on model design and architecture\nThank you for your time and valuable feedback. \n\n* We have updated the caption for Fig 3.\n\n* As you mentioned, the main purpose of the paper is to introduce a novel one-shot link prediction setting for temporal knowledge graphs as well as a novel time-dependent episodic training for TKGs. We empirically show that the combination of a self-attention temporal encoder with episodic training would significantly enhance the link prediction performance \nperformance for infrequent relations in temporal knowledge graphs. 
\n\n\n* **OAT-R vs OAT-T**: Our intuition is that, the more non-stationary the data is, the more OAT-T would be helpful. To be more specific, it would be difficult for the model to generalize from only one sample, to the quadruples that are selected from a period of three/two years of data, if the distribution changes drastically over the course of three years. \n\n\n Global/general Response:\n```\nWe thank the reviewers for their detailed comments and insightful feedback. Two repeating themes in the reviews were concerns about the limited novelty of our work and the insufficient number of baselines used in our experiments. Below we provide our response on these two issues. Other questions raised by the reviewers will be addressed via individual responses. \n\n### **Novelty:**\nWe stress that the main novelty of our work is the formulation of the one-shot link prediction problem for temporal knowledge graphs and the introduction of a novel one-shot episodic learning for TKGs and. Furthermore, while the attention mechanism for temporal encoding was suggested before, empirically validating the effectiveness of self-attention for a one-shot setting is also an innovative aspect of our work. \n\n### **Insufficiency of the baselines:**\nFirst, We note that the baselines used in our work were state-of-the-art methods at the time of finalizing our experiments, e.g., ReNet had the best performance and many of the recent TKG methods had not been peer-reviewed or did not have a proper implementation to work with. \n\nWe now elaborate on specific methods mentioned by the reviewers: \n\n* **ATiSE:** Following the suggestion by Reviewer, we have run ATiSE (Xu, Chengjin, et al. “Temporal knowledge graph embedding model based on additive time series decomposition”, arXiv preprint arXiv:1911.07893, 2019) on ICEWS14 and ICEWS17. Our preliminary results suggest that our method outperforms ATiSE in all metrics for both datasets. A summary of the results are as follows:\n\n| | ICEWS14 | | | | ICEWS17 | | | |\n|-------|---------|-------|-------|-------|---------|-------|-------|-------|\n| | H@1 | H@5 | H@10 | MRR | H@1 | H@5 | H@10 | MRR |\n| **ATiSE** | 0.064 | 0.325 | 0.456 | 0.196 | 0.031 | 0.248 | 0.357 | 0.137 |\n| **OAT-R** | **0.164** | **0.330** | **0.538** | **0.268** | **0.191** | 0.479 | 0.641 | **0.325** |\n| **OAT-T** | 0.084 | 0.323 | 0.429 | 0.177 | 0.170 | **0.519** | **0.743** | 0.323 |\n\nNote: We have used the hyper-parameters used in ATiSE for ICEWS14, except for the embedding size that we set to 50 to be consistent with our model embedding size. \n\n* **TeMP:** We did not compare our approach to TeMP because the latter considers interpolation setup for graph completion (smoothing), whereas our work targets extrapolation/forecasting setup, where one predicts links for future timestamps. We note that the authors of the TeMP paper themselves did not compare their approach to extrapolation-based methods such as ReNet, e.g. in Section 4.3 of their paper, they state that: \n_“We don’t compare with RE-NET, GHN (Han et al., 2020), DartNet (Garg et al., 2020) and Know-Evolve since these work focus on graph extrapolation task.”_ \n We are currently trying to compare TeMP to our method but having issues running their code for the extrapolation setting. We will update our response if we are able to generate results in time. We also note that except for the self-attention mechanisms, TemP is very similar to RE-Net, so we don’t expect any significant difference in the results. 
\n\n* **DySAT:** As you pointed out, DySAT uses a self-attention mechanism for temporal graphs, however, it is not directly applicable for temporal knowledge graphs with different relation types. They do not conduct experiments on any of the common TKG benchmarks such as GDELT and ICEWS nor do they compare with any state-of-the-art TKG model. We have cited their papers in the related work as it can be related to the self-attention encoder of our model. However, we consider DySAT to be more related to the dynamic graphs literature. \n\n\n\n\n\n\n\n\n\n```\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. The response covers nearly all of the issues I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
GF8wO8MFQOr
AKBC 2021 Conference
Reviewer_Bk3F
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Bk3F, and the following are the paper you need to review:\n```<<GF8wO8MFQOr>>```", "role": "user" }, { "content": "Title: A new setting for temporal knowledge graphs, \nReview Content: This paper proposes a new setting for temporal knowledge graphs (\"TKG\") --- one-shot link prediction for TKGs. The motivation is that knowledge graphs are long-tail (thus we need one-shot/few-shot link prediction) and for temporal knowledge graphs, the distribution of relational facts can be highly heterogeneous. The authors formalize the one-shot link prediction as an episodic learning problem, where in each episode there is a support set and a query set. The quadruples are divided into meta-train, meta-val and meta-test sets based on the timestamp. The authors also propose a new method to tackle the problem, including a neighborhood encoder to get entities' representation, with consideration of the graphs at previous timesteps, and a similarity network to get the similarity between query and support instances. The authors compare their model with several TKG or one-shot link prediction baselines on three datasets and show that the proposed model (OAT) achieves superior performance. The ablation study verifies the effectiveness of each component of the method.\n\nStrength:\n\n1. The paper is well motivated and the newly proposed setting is challenging and meaningful.\n2. The proposed method is well described and is very intuitive.\n3. The results are very strong and the ablation study is very comprehensive.\n\nWeakness:\n\n1. It would be nicer to provide some examples of relations and quadruples, especially for such a new setting.\n2. The paper mainly talks about the one-shot setting. A few-shot setting and method would be more generalizable and more realistic in applications. \n3. Although the authors have already compared several baselines, it would be better to compare the proposed method to more recent TKG baselines (it is even more necessary considering the paper is setting a new direction).\n\nQuestion:\n\n1. Do you have any idea why OAT-T is much worse than OAT-R on ICEWS14 where it is better on the other datasets?\n\nMisc.:\n\n1. Missing space on page 3: \"Xiong et. al. 2018,Du et. al. 2019\"\n2. Eq. 
(1): isn't Ti~T and Si, Qi the same thing?\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Further clarification and examples on model setting, and design, We also ran more baselines (details in general response above)\nThank you for your time and your valuable and encouraging comments, which we think will help to improve the paper. Please also see our general comment regarding the novelty and the experiments. \n\n**Examples of Support & Query Set:** We have added examples and a pictorial demonstration on how the support set and the query set are created in meta training/validation/test for the episodic training. Since OpenReview does not allow figures in the response, here’s an example as structured text:\n\n\nEpisodic Training : \n```\n Meta-Train : { \n Training Task 1: { \n\t\t-Support set : {(US, Fight, Syria, 04-14-18)} \n\t\t-Query set : { \n\t\t\t(Turkey, Fight, Iraq, 07-04-18) \n (Israel, Fight, Syria, 05-01-18) \n (US, Fight, Syria, 04-14-18) \n .... \n\t\t}}\n\tTraining Task 2: { \n\t\t- Support set: {(US, Consult, Germany, 04-14-17)} \n\t\t- Query set: { \n (India, Consult, Syria, 05-25-18) \n (China, Consult, Japan, 01-01-19) \n ....\n\t\t}} \n ....\n } \n```\n\n\n```\nMeta Test: { \n\tTest Task 1: { \n\t\t- Support set : {(Greece, Yield, Spain, 03-16-19) } \n\t\t- Query set: { \n (India, Yield, Yemen, 07-19-19) \n (Iraq, Yield, India, 01-29-19) \n (France, Yield, UK, 02-10-19) \n ....\n\t\t}} \n .....\n} \n```\n\n\n**Few-shot learning:** We agree that the few-shot setup is more general but extension to this set is beyond the scope of our work. Please also note that the one-shot setup has practical implications, especially when the data is sparse and there is only one example. In addition, the episodic training framework introduced in the paper and the definition of support set and query set are naturally extendable from one-shot to few-shot setup. \n\n**Response to you question, OAT-T vs OAT-R:** Our intuition is that the more non-stationary the data is, the more OAT-T would be helpful. To be more specific, it would be difficult for the model to generalize from only one sample, to the quadruples that are selected from a period of three/two years of data, if the distribution changes drastically over the course of three years. \n\n**Typos & Misc:** Thank you for reading the paper carefully. We have corrected the typos. \nEq. (1): isn't $T_i \\sim T$ and $S_i$, $Q_i$ the same thing? This equation is meant to describe few-shot learning in general terms for classification tasks. In that case, $T_i \\sim T$ consists of a set of labeled data, and at each iteration, $S_i \\sim T_i$ and $Q_i \\sim T_i$ are sampled to calculate the optimization objective. We have changed $S_i$, $Q_i$ to $S_i \\sim T_i, Q_i \\sim T_i$ in the paper.\n\n\n Global/general Response:\n```\nWe thank the reviewers for their detailed comments and insightful feedback. Two repeating themes in the reviews were concerns about the limited novelty of our work and the insufficient number of baselines used in our experiments. Below we provide our response on these two issues. Other questions raised by the reviewers will be addressed via individual responses. 
\n\n### **Novelty:**\nWe stress that the main novelty of our work is the formulation of the one-shot link prediction problem for temporal knowledge graphs and the introduction of a novel one-shot episodic learning for TKGs and. Furthermore, while the attention mechanism for temporal encoding was suggested before, empirically validating the effectiveness of self-attention for a one-shot setting is also an innovative aspect of our work. \n\n### **Insufficiency of the baselines:**\nFirst, We note that the baselines used in our work were state-of-the-art methods at the time of finalizing our experiments, e.g., ReNet had the best performance and many of the recent TKG methods had not been peer-reviewed or did not have a proper implementation to work with. \n\nWe now elaborate on specific methods mentioned by the reviewers: \n\n* **ATiSE:** Following the suggestion by Reviewer, we have run ATiSE (Xu, Chengjin, et al. “Temporal knowledge graph embedding model based on additive time series decomposition”, arXiv preprint arXiv:1911.07893, 2019) on ICEWS14 and ICEWS17. Our preliminary results suggest that our method outperforms ATiSE in all metrics for both datasets. A summary of the results are as follows:\n\n| | ICEWS14 | | | | ICEWS17 | | | |\n|-------|---------|-------|-------|-------|---------|-------|-------|-------|\n| | H@1 | H@5 | H@10 | MRR | H@1 | H@5 | H@10 | MRR |\n| **ATiSE** | 0.064 | 0.325 | 0.456 | 0.196 | 0.031 | 0.248 | 0.357 | 0.137 |\n| **OAT-R** | **0.164** | **0.330** | **0.538** | **0.268** | **0.191** | 0.479 | 0.641 | **0.325** |\n| **OAT-T** | 0.084 | 0.323 | 0.429 | 0.177 | 0.170 | **0.519** | **0.743** | 0.323 |\n\nNote: We have used the hyper-parameters used in ATiSE for ICEWS14, except for the embedding size that we set to 50 to be consistent with our model embedding size. \n\n* **TeMP:** We did not compare our approach to TeMP because the latter considers interpolation setup for graph completion (smoothing), whereas our work targets extrapolation/forecasting setup, where one predicts links for future timestamps. We note that the authors of the TeMP paper themselves did not compare their approach to extrapolation-based methods such as ReNet, e.g. in Section 4.3 of their paper, they state that: \n_“We don’t compare with RE-NET, GHN (Han et al., 2020), DartNet (Garg et al., 2020) and Know-Evolve since these work focus on graph extrapolation task.”_ \n We are currently trying to compare TeMP to our method but having issues running their code for the extrapolation setting. We will update our response if we are able to generate results in time. We also note that except for the self-attention mechanisms, TemP is very similar to RE-Net, so we don’t expect any significant difference in the results. \n\n* **DySAT:** As you pointed out, DySAT uses a self-attention mechanism for temporal graphs, however, it is not directly applicable for temporal knowledge graphs with different relation types. They do not conduct experiments on any of the common TKG benchmarks such as GDELT and ICEWS nor do they compare with any state-of-the-art TKG model. We have cited their papers in the related work as it can be related to the self-attention encoder of our model. However, we consider DySAT to be more related to the dynamic graphs literature. \n\n\n\n\n\n\n\n\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The response addresses most of my concerns. I maintain my positive recommendation for this work.", "role": "assistant" } ]
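Editor's note: the rebuttal above describes the one-shot episodic setup only in prose and structured text. The minimal Python sketch below illustrates how such support/query tasks could be assembled from a list of quadruples under the assumptions stated in the thread (tasks are built from relations with frequency between 50 and 500, one support quadruple per task). It is an editor's illustration, not the authors' implementation; all function and variable names are hypothetical.

```python
# Illustrative sketch only (not the authors' code): build one-shot episodic tasks
# from temporal KG quadruples, mirroring the support/query example in the rebuttal.
import random
from collections import defaultdict

def build_one_shot_tasks(quadruples, min_freq=50, max_freq=500, seed=0):
    """quadruples: iterable of (head, relation, tail, timestamp) tuples."""
    rng = random.Random(seed)
    by_relation = defaultdict(list)
    for quad in quadruples:
        by_relation[quad[1]].append(quad)
    tasks = {}
    for rel, quads in by_relation.items():
        # Per the rebuttal: very frequent relations serve as background/metadata,
        # very rare ones (fewer than 50 quadruples) are ignored.
        if not (min_freq <= len(quads) <= max_freq):
            continue
        idx = rng.randrange(len(quads))
        tasks[rel] = {"support": [quads[idx]],                 # one-shot support set
                      "query": quads[:idx] + quads[idx + 1:]}  # remaining quadruples
    return tasks
```

In each training episode, one task would be sampled and the model scored on its query set given the single support quadruple, which is the episodic objective the authors sketch with Eq. (1).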
GF8wO8MFQOr
AKBC 2021 Conference
Reviewer_PPaE
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_PPaE, and the following are the paper you need to review:\n```<<GF8wO8MFQOr>>```", "role": "user" }, { "content": "Title: The paper introduces a novel task and benchmark datasets, but lack of comparisons with several recent baseline methods\nReview Content: The paper first introduces the task of one-shot learning for temporal knowledge graph and constructs two benchmark datasets to conduct experiments and make comparisons. For addressing the problem, the author proposes One-shot Attention Temporal Graph Learning (OAT) which utilizes self-attention to encode temporal interactions and a scorer network for link predictions. The experimental results and ablation study confirm the superiority of the proposed OAT approach against several TKG and FSL baselines.\n\nStrengths: \n1) The paper is generally well-written and easy to follow, the code and data are both provided for reproducibility.\n2) This paper is well-motivated. One-shot learning for temporal knowledge graphs is an interesting research topic with practical scenarios and a wide range of applications. Two new publicly available benchmarks are also introduced by this paper and will benefit future research in this direction.\n3) Compared with existing TKG models and few-shot learning methods for static graphs, the proposed OAT shows better performances on both GDELT and ICEWS benchmarks especially for sparse relations under the one-shot learning setting.\n\nWeakness and Questions: \n1) Although cited in the paper, the author forgets to compare with two recent competitive baselines in the experiment section. They are TeMP (Wu, Jiapeng, et al. “TeMP: temporal message passing for temporal knowledge graph completion”, EMNLP 2020) and ATiSE (Xu, Chengjin, et al. “Temporal knowledge graph embedding model based on additive time series decomposition”, arXiv preprint arXiv:1911.07893, 2019). Moreover, another approach called DySAT (Sankar, Aravind, et al. 
“Dysat: Deep neural representation learning on dynamic graphs via self-attention networks.” WSDM 2020) is also a recent competitive TKG model, but the author didn’t cite and compare it with DySAT in the paper.\n2) TeMP and DySAT both apply self-attention to encode/integrate temporal information from the graph entity representations. The only difference is the similarity network of OAT, which is designed for the one-shot learning setting. Hence, the technical novelty of the paper is relatively limited.\n3) The format (mainly the font and header) of the paper didn’t follow the original AKBC template and should be updated.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: We ran ATiSE under one-shot setup and we will add more experiments if appropriate (more details in general response above)\nThank you for your comments and insightful feedback. We applied your comments and tried to improve the paper based on your feedback.\n\nYou had raised points about the novelty and the limited experiments of the paper. Please see our detailed response in the general comment above. To summarize, we have added one more baseline (ATiSE) to the experiments based on your suggestion and will continue to work on adding TeMP to the revised version. \n \nWe will make sure that the format is fixed in the revisions. \n\n Global/general Response:\n```\nWe thank the reviewers for their detailed comments and insightful feedback. Two repeating themes in the reviews were concerns about the limited novelty of our work and the insufficient number of baselines used in our experiments. Below we provide our response on these two issues. Other questions raised by the reviewers will be addressed via individual responses. \n\n### **Novelty:**\nWe stress that the main novelty of our work is the formulation of the one-shot link prediction problem for temporal knowledge graphs and the introduction of a novel one-shot episodic learning for TKGs and. Furthermore, while the attention mechanism for temporal encoding was suggested before, empirically validating the effectiveness of self-attention for a one-shot setting is also an innovative aspect of our work. \n\n### **Insufficiency of the baselines:**\nFirst, We note that the baselines used in our work were state-of-the-art methods at the time of finalizing our experiments, e.g., ReNet had the best performance and many of the recent TKG methods had not been peer-reviewed or did not have a proper implementation to work with. \n\nWe now elaborate on specific methods mentioned by the reviewers: \n\n* **ATiSE:** Following the suggestion by Reviewer, we have run ATiSE (Xu, Chengjin, et al. “Temporal knowledge graph embedding model based on additive time series decomposition”, arXiv preprint arXiv:1911.07893, 2019) on ICEWS14 and ICEWS17. Our preliminary results suggest that our method outperforms ATiSE in all metrics for both datasets. 
A summary of the results are as follows:\n\n| | ICEWS14 | | | | ICEWS17 | | | |\n|-------|---------|-------|-------|-------|---------|-------|-------|-------|\n| | H@1 | H@5 | H@10 | MRR | H@1 | H@5 | H@10 | MRR |\n| **ATiSE** | 0.064 | 0.325 | 0.456 | 0.196 | 0.031 | 0.248 | 0.357 | 0.137 |\n| **OAT-R** | **0.164** | **0.330** | **0.538** | **0.268** | **0.191** | 0.479 | 0.641 | **0.325** |\n| **OAT-T** | 0.084 | 0.323 | 0.429 | 0.177 | 0.170 | **0.519** | **0.743** | 0.323 |\n\nNote: We have used the hyper-parameters used in ATiSE for ICEWS14, except for the embedding size that we set to 50 to be consistent with our model embedding size. \n\n* **TeMP:** We did not compare our approach to TeMP because the latter considers interpolation setup for graph completion (smoothing), whereas our work targets extrapolation/forecasting setup, where one predicts links for future timestamps. We note that the authors of the TeMP paper themselves did not compare their approach to extrapolation-based methods such as ReNet, e.g. in Section 4.3 of their paper, they state that: \n_“We don’t compare with RE-NET, GHN (Han et al., 2020), DartNet (Garg et al., 2020) and Know-Evolve since these work focus on graph extrapolation task.”_ \n We are currently trying to compare TeMP to our method but having issues running their code for the extrapolation setting. We will update our response if we are able to generate results in time. We also note that except for the self-attention mechanisms, TemP is very similar to RE-Net, so we don’t expect any significant difference in the results. \n\n* **DySAT:** As you pointed out, DySAT uses a self-attention mechanism for temporal graphs, however, it is not directly applicable for temporal knowledge graphs with different relation types. They do not conduct experiments on any of the common TKG benchmarks such as GDELT and ICEWS nor do they compare with any state-of-the-art TKG model. We have cited their papers in the related work as it can be related to the self-attention encoder of our model. However, we consider DySAT to be more related to the dynamic graphs literature. \n\n\n\n\n\n\n\n\n\n```\n", "role": "user" }, { "content": "Thank you for the authors' detailed response. It satisfactorily addresses most of my concerns, and I'm open to increasing the score.", "role": "assistant" } ]
HozL9BGbnKr
AKBC 2021 Conference
Reviewer_NgVF
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NgVF, and the following are the paper you need to review:\n```<<HozL9BGbnKr>>```", "role": "user" }, { "content": "Title: Interesting approach for jointly learning structural and textual embeddings for KGs, comprehensive experiments and analysis\nReview Content: The paper proposed a joint approach to learn embeddings for KG entities from both descriptions and KG structure, for the task of open-world knowledge graph completion. The main idea is separate modules for obtaining structural embeddings (of entities and relations in a KG) and description embeddings (of descriptions of possibly new entities), and a description projection module to project the description to a space where the structure and description embeddings are aligned (KG embeddings). \n\n(+) The approach was evaluated on three different datasets, showcasing the robustness of the approach.\n\n(+) Comprehensive analysis of the proposed approach compared with several prior work on the topic.\n\n(+) The differences with considered baselines are well explained.\n\n(-) Some design choices of the approach need justification (see Q1).\n\nQuestions:\n1. What is the disadvantage of using transformer-based model to represent an entity description when the description is short (ten of fewer words)? Why is there a need to use two different encoders depending on the description length?\n2. In Section 6.2, four tasks were introduced but the last one was without any elaboration unlike the other three?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: A note of thanks, addressed the concerns, have clarified and included experiment on geometric properties\nWe thank the reviewer for their insightful and positive comments. We are glad that R1 finds our experiments and analysis comprehensive. Collating results from multiple disparate KG codebases was a considerable challenge and we are encouraged that R1 finds our comparisons adequate. We are also pleased that R1 finds our differences with existing state-of-the-art baselines well explained.\n\nWe address R1’s questions below:\n1. 
We already report the results of using a transformer for short descriptions in Table 2. The simpler single-phase approach results in empirical improvements on most metrics. In addition to the modest empirical improvement, the simpler approach converges faster (8hrs vs 48hrs) and has a smaller peak memory requirement. For these reasons, we think it is important to modify our algorithm to adapt to different KG datasets.\n\n2. We partly studied geometric properties in Figure 2 of the original paper. To avoid confusion, we have now removed the reference to this fourth task. We have also added an additional experiment to the supplementary materials that study the geometric properties of the embeddings explicitly.\n\n Global/general Response:\n```\nWe thank all the reviewers for their useful suggestions. We are happy that the reviewers find our experiments and analysis complete, our improvements clear and our method robust across datasets. We are also encouraged that most reviewers find our differences from our baselines clearly explained.\n\nWe are glad that most of the concerns brought up have to do with the clarity of the method section. In our original submission, we ran up against the 10-page limit. Given the additional page, we’ve managed to incorporate most if not all of the suggestions brought up by the reviewers. \n\n```\n", "role": "user" }, { "content": "Thanks for the authors' response. It resolves the majority of my concerns. I maintain my positive recommendation for this work.", "role": "assistant" } ]
HozL9BGbnKr
AKBC 2021 Conference
Reviewer_mGET
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_mGET, and the following are the paper you need to review:\n```<<HozL9BGbnKr>>```", "role": "user" }, { "content": "Title: Interesting fusion of text and conventional KG Embeddings, consistent improvements over baselines for all datasets/metrics\nReview Content: The authors propose a new method for computing Knowledge Graph Embeddings on KGs where textual descriptions of entities are available. They evaluate the effectiveness of their method on a number of downstream tasks, including open-world KG completion, closed-world KG completion and entity classification. They show that their method (FOlK) outperforms baselines across all tasks in key metrics.\n\nThe authors also perform several ablations to show effectiveness of different parts of their model. They also highlight areas in which their model is better than baselines such as OWE, and provide intuition for it.\n\nHowever, there are several shortcomings in the way the paper is written and organized. Some of these are:\n\n1. 'Cross domain alignment' Sec 4.3 is hard to understand. Authors say they 'treat an entity’s immediate graph neighbourhood as its context and use skipgram objective to maximize the probability of the entity given the neighbourhood entities'. However, this is nowhere reflected in the loss functions used. Loss function $\\mathscr{L}$ as well which is used in eqn 2 and 3 hasn't been defined in the main text which makes things even more confusing. $\\mathscr{L_{proj}}$ is not described anywhere. Training algorithm should be part of main text, since it is a vital differentiator from OWE.\n\n2. DSA score is hard to understand, given description is not enough to be able to reproduce it.\n\n3. 'FOlK is the first framework to use a transformer-based encoder to embed open-world entities'\nKG-BERT (2019) uses BERT for encoding closed-world entities. Extending it to open-world entities is a straight forward solution and not a novelty.\n\n4. 'the score for any triple must be of the order of the embedding dimensions' Section 1, point 3-'Efficient Ranking'. What does this mean?\n\n5. Whats the difference between FOlK(s) and FOlK(l)? It's not specified\n\n6. A lot of the content that is being referred to in the main text lies in the supplementary material. 
The authors should move at least the most relevant stuff (such as training algorithm, loss functions) to main text.\n\nIt seems that the paper has been written in a hurry and is lacking sufficient description for the method, given that the main contribution is the proposed new method FOlK. A better method section and reorganizing is needed.\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Made significant changes to clarify method section with additional space\nWe thank the reviewer for their comments. We are encouraged that R2 finds our improvements over the state-of-the-art baselines consistent. We are also happy that R2 has taken note of our efforts to delineate our differences from the baselines clearly.\n\nWe address R2’s concerns below:\n\n1. We acknowledge the comment and have edited Sec. 4.3 to improve clarity. In BiSkip, the authors\nuse a skipgram objective where the probability of a target language word given the source language context is maximised and vice versa. We cite BiSkip to support our decision to include cross-modality objectives in addition to paired $\\ell_{2}$ projection loss. Since we use the same definition for $\\mathscr{L}$ as other closed-world models, we thought defining it explicitly in the supplementary materials was sufficient. Given the additional space, we have now moved the definition of the loss term used by complex to the main text. With the additional space, we have now added a definition of $\\mathscr{L_{proj}}$ and training algorithm to the revised version of the paper (Section 4.3).\n\n2. We have added more details about the DSA score (Section 5) to the revised version\n\n3. We strongly disagree that extending transformers to open-world entities is straightforward for two reasons. (1)Unlike KG-BERT, in the open-world embedding generation problem we are working with two modes of information - text and KG structure. Aligning the two is a significant challenge that we study in detail in this work. (2)KG-BERT isn’t a KG embedding technique. We outline the differences between KG Completion and embedding a KG entity in the introduction. KG-BERT only performs the task of KG completion. So, methods like KG-BERT don't produce entity embeddings that can be used by downstream tasks. Also, using large language models at inference instead of using embeddings is very inefficient (Section 6.3 RQ1).\n\n4. By 'order of the embedding dimensions' we are referring to the time complexity of scoring a triple. That is, that the time complexity must be O(d) where d is the dimension of the entity and relation embeddings. Models like FOlK that use a simple embedding scoring function at inference time can perform entity ranking that is several times faster (22x as noted in Sec. 6.3 RQ2) than models like ConMask that require large language models to score the likelihood of a triple. The ranking performed by models like ConMask isn't O(d) but rather depends on the language model and the size of the descriptions. This is a very important difference as ranking is a fundamental operation in KGs, that we have to perform every time we want to discover a new connection / add a new entity to our KG. By 'efficient ranking' we simply mean ranking that is faster and has lower peak-memory requirements. We have modified the Introduction section to make this clearer.\n\n5. 
In section 4.2 we indicate that we use s & l to indicate the short and long descriptions. FOlK(l) is the transformer-based model we use for long descriptions and FOlK(s) is the CBOW-based model we use for short descriptions. We have now added information to Section 4.3 that makes the notation explicit.\n\n6. We acknowledge that a lot of content lies in the supplementary material. However, we disagree that the paper was written in a hurry. This work has gone through several careful revisions. We made a conscious decision to abridge certain definitions and move less important information to the supplementary section to meet AKBC’s 10-page limit.\n\nWe thank R2 for their suggestions. With the additional page that has now been allowed, we have been able to make the method section much clearer and have added the algorithm in main paper. We respectfully request that the reviewer give the new method section a read and revise their score if they see fit.\n\n Global/general Response:\n```\nWe thank all the reviewers for their useful suggestions. We are happy that the reviewers find our experiments and analysis complete, our improvements clear and our method robust across datasets. We are also encouraged that most reviewers find our differences from our baselines clearly explained.\n\nWe are glad that most of the concerns brought up have to do with the clarity of the method section. In our original submission, we ran up against the 10-page limit. Given the additional page, we’ve managed to incorporate most if not all of the suggestions brought up by the reviewers. \n\n```\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. The response addresses most of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
HozL9BGbnKr
AKBC 2021 Conference
Reviewer_Rvrn
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Rvrn, and the following are the paper you need to review:\n```<<HozL9BGbnKr>>```", "role": "user" }, { "content": "Title: The paper is a decent focused contribution. However, the experimentation and the writing can be significantly improved.\nReview Content: This paper presents an approach to learn representations of (open-world) entities in a KG with given textual descriptions of open-world entities. This paper describes a technique that jointly learns embeddings for KG entities from descriptions and KG structure for open-world knowledge graph completion. The technique is experimentally validated on the YAGO3-10-Open and WN18RR-Open datasets and it beats previous open world entity representation learning methods.\n\nI think this paper is a decent focused contribution. However, I think it can be significantly improved in its presentation quality:\n1. The drawbacks of existing approaches is not clearly described and thus it is hard to understand the contributions of this work\n2. The method description is not complete. For example L_proj is not described. It is hard to understand the approach.\n3. The descriptions of the DSA score is also fuzzy. It is not clear how I would implement this with just this description.\n\nThe method is taken from BiSkip. But it is hard to understand how the application is different here.\n\nStatistical significance numbers should help in table 4.\n\nI also have the following questions:\n1. Why do we need a 2 phase learning approach. How does it compare to a one phase learning model?\n2. how do you find the optimal value of alpha?\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Addressed all the comments and have detailed few sections in the paper as per recommendation\nWe are happy that the reviewer finds our contribution focused. We address R3’s concerns below while pushing back on the idea that our work needs major changes.\n\n“However, the experimentation and the writing can be significantly improved. 
”\n\nWe would like to point out that both reviewers R1 and R2 acknowledge that our experiments are comprehensive, the improvements over the baselines are clear and that the differences from earlier methods are clearly explained. Below we address the few concrete comments by R3.\n\nWe acknowledge that the method section is a point of concern shared by R2 and have made use of the additional space to clarify key details. We also address R3's concerns about differences with previous approaches while pointing out that both R1 and R2 find the differences clearly explained.\n\n“The drawbacks of existing approaches is not clearly described and thus it is hard to understand the contributions of this work”\n\nWe discuss the limitations of existing approaches in section 2. We go into further limitations in Sec. 6.3 RQ1. To make things even clearer, we have now added a table 9 to the supplementary materials that clarifies the differences further.\n\n“For example $L_{proj}$ is not described”\n\nWe have clarified $L_{proj}$ in the revision to the paper. $L_{proj}$ is the $\\ell_{2}$ distance between the projected structural embeddings of the entities with their corresponding description embeddings\n\n“The descriptions of the DSA score is also fuzzy”\n\nWe have clarified the DSA score in our revision\n\n“The method is taken from BiSkip. But it is hard to understand how the application is different here.”\n\nWe think it is a mischaracterization to say that we have 'taken' our method from BiSkip. BiSkip is used for a completely different task- that of bilingual lexicon induction and the algorithms are completely different. We only bring up BiSkip to indicate that the practice of using cross-modality where we minimise more than just the paired projection loss has support in other domains. We have now expanded upon the section to further clarify things. \n\n“Statistical significance numbers should help in table 4.”\n\nWe outperform all our baselines on all metrics. The differences between FOlK and our baselines range from 16.6% at the low end to 86.1% at the high end. The test sets in question have between 3000 and 39000 examples. Given the magnitude of the difference and the size of the test sets, we didn't feel the need to report statistical significance numbers \n\n“Why do we need a 2 phase learning approach. How does it compare to a one phase learning model?”\n\nWe report exactly this in tables 2, 4 and 12. Wherever we report the results of FOlK(l), we also include the performance improvement across iterations mentioned in Section 6.1. The motivation behind using a 2-phase model is to allow the KG structure and entity descriptions to inform each other. This is elaborated in the 'Joint Training' section of our introduction. We also explain the need for a phase-wise training approach in section 4.4. We find that empirically this is beneficial for both open-world (Tables 2 and 4) and closed-world (Table 12) performance.\n\n“how do you find the optimal value of alpha?”\n\nWe performed a geometric search and varied alpha in [1, 0.1, 0.01, 0.001]. We find that a value of 0.001 works well across all datasets. We have now added this detail to the paper.\n\nWe request the reviewer to revise their rating if they see fit. We are happy to address additional concerns.\n\n Global/general Response:\n```\nWe thank all the reviewers for their useful suggestions. We are happy that the reviewers find our experiments and analysis complete, our improvements clear and our method robust across datasets. 
We are also encouraged that most reviewers find our differences from our baselines clearly explained.\n\nWe are glad that most of the concerns brought up have to do with the clarity of the method section. In our original submission, we ran up against the 10-page limit. Given the additional page, we’ve managed to incorporate most if not all of the suggestions brought up by the reviewers. \n\n```\n", "role": "user" }, { "content": "Thank you for the authors' reply. It satisfactorily addresses most of my concerns, and I'm likely to revise the rating.", "role": "assistant" } ]
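Editor's note: the response above defines the projection loss only in words (“the ℓ2 distance between the projected structural embeddings of the entities with their corresponding description embeddings”). The following sketch shows one way to compute such a paired ℓ2 loss; the linear projection, dimensions, and names are the editor's assumptions, not the paper's architecture.

```python
# Illustrative sketch only: paired l2 projection loss between projected structural
# embeddings and description embeddings, as described in the authors' response.
import numpy as np

def projection_loss(structural_emb, description_emb, W, b):
    """structural_emb: (n, d_s); description_emb: (n, d_t); W: (d_s, d_t); b: (d_t,).
    Rows are aligned, i.e. row i of both matrices refers to the same entity."""
    projected = structural_emb @ W + b       # project structure space into description space
    residuals = projected - description_emb  # paired differences per entity
    return float(np.mean(np.linalg.norm(residuals, axis=1)))  # mean l2 distance
```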
JQHqeGx6qFw
AKBC 2021 Conference
Reviewer_6Wgd
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_6Wgd, and the following are the paper you need to review:\n```<<JQHqeGx6qFw>>```", "role": "user" }, { "content": "Title: Some insights, but only minor improvements obtained.\nReview Content: Knowledge Graph Embedding (KGE) methods solve link prediction problems by learning node embeddings and a score function such that the correct triples are ranked the highest. On the hand, AnyBURL is a symbolic alternative which learns rules for ranking triples. The paper proposes a simple aggregation approach for combining the ranking results from KGE methods with those from AnyBURL. The motivation of this approach is to ultimately harness the strengths of both symbolic and sub-symbolic methods. The method is evaluated by studying the improvement in link prediction performance using benchmark datasets.\n\nOverall, the idea presented in the paper is interesting because of its simplicity. Despite the weak results obtained in the empirical study, I think it presents an exciting new direction for the integration of symbolic and sub-symbolic methods. However, I strongly advice the authors to improve the text to make it read better.\n\nStrengths:\n* The aggregation function (in Section 5) is simple and the intuition behind this approach is easy to understand.\n* Structure of the paper is easy to follow.\n* The analysis presented in Section 4.2 are insightful.\n* The results from the rule-based approach are explainable. However, this feature has not been thoroughly studied.\n\nWeaknesses:\n* The writing style of the paper could be improved to make it read better. There are many awkwardly phrased sentences throughout the text: “We abstain here from...“; “Unfortunately, AnyBURL is to our knowledge the only symbolic approach...“; “Within this work we have not presented a sophisticated new knowledge base completion method.”\n* The results reported in the paper show only very little improvements (1-3% improvement in Mean Reciprocal Rank). 
Therefore, I feel that the title of the paper is misleading.\n\nMinor Typos:\nSection 4.2 (page 5) “We first look at two predictions” → “three predictions”\nAbstract (page 1) “embbedings” → “embeddings”\nRelated Work (page 2) “the the” → “the”\nSection 6.1 (page 8) “Further” → “Furthermore”\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your feedback\nFirst, we thank the reviewer for the valuable comments. \n\nWe thank the reviewer for pointing to problems related to the readability of the paper. We rephrased all the mentioned passages and fixed all mentioned typos. We also plan to check the paper with the help of an external person that was not involved in the writing. \n\nAside from readability and typos the following point has been raised:\n\n**Reviewer Comment**\n\n*The results reported in the paper show only very little improvements (1-3% improvement in Mean Reciprocal Rank). Therefore, I feel that the title of the paper is misleading.*\n\n**Our Answer** \n\nWe measured the improvements in terms of percentage points, and we made this more explicit in the revised version. For example, ComplEx improved its MRR on CoDEx-L from 29.4% to 32.9% which is a plus of 3.5 percentage points. The improvement in terms of percentage points sounds small, however, this is an improvement of 11.9 percent. We added HittER* to our experiments to ensure that the improvements can also be observed for the latest state-of-the-art. Here we have an improvement on CoDEx-L from 32.2% to 34%, which is a plus of 1.8 percentage points and an improvement of 5.5 percent. Please also note that the average improvement over all datasets and models is 2.6 percentage points if we exclude TransE (including TransE would increase the average improvement to 4.24 percentage points) which is quite substantial. \n\nThere are many research publications that report on small incremental improvements based on complex architectures, complicated theoretical frameworks, or a specific modification within these architectures. Within our experiments, we observed constant and significant improvements over all models and datasets that we used. We found the consistency of these results noteworthy (maybe even surprising) given that our method is very simple (almost naive). \n Global/general Response:\n```\nThank you for your comments! Based on these comments, we uploaded a revised version of our paper. The most important improvement is related to a new Section 6.3, which contains an ablation study that analyses the impact of the beta parameter.\n\nFurthermore, we give a specific answer to each reviewer in the comments below.\n```\n", "role": "user" }, { "content": "Thanks for the authors' response. The reply answers almost all of my questions. I continue to recommend this work for acceptance.", "role": "assistant" } ]
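Editor's note: the review and rebuttal above discuss combining AnyBURL's rule-based rankings with KGE scores but do not spell out the aggregation formula (the beta-weighted scoring function is mentioned in the next thread). The sketch below shows one plausible form of such a linear aggregation for a single query; it is an editor's illustration of the general idea, not the paper's exact function, and all names are hypothetical.

```python
# Illustrative sketch only (not the paper's exact formula): re-rank candidate tail
# entities by adding a beta-weighted rule confidence to the embedding model's score.
# With beta = 0 the rules act purely as a candidate filter and the KGE score ranks.
def aggregate_candidates(kge_scores, rule_confidences, beta=0.5):
    """kge_scores, rule_confidences: dicts mapping candidate entity -> float."""
    combined = {cand: kge_scores.get(cand, 0.0) + beta * conf
                for cand, conf in rule_confidences.items()}
    return sorted(combined, key=combined.get, reverse=True)

# Example: aggregate_candidates({"Q1": 0.8, "Q2": 0.3}, {"Q1": 0.6, "Q3": 0.9}, beta=0.2)
```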
JQHqeGx6qFw
AKBC 2021 Conference
Reviewer_hMxP
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_hMxP, and the following are the paper you need to review:\n```<<JQHqeGx6qFw>>```", "role": "user" }, { "content": "Title: An interesting paper with well-motivated examples and a simple aggregation approach\nReview Content: The paper presents a comparative study on symbolic methods and embedding methods for knowledge graph completion. From the manually observed ranking results of two kinds of approaches, the authors find that rule-based method AnyBURL tends to perform well on complicated rules that express transitivity of multiple relations, and that KG embedding methods sometimes will output meaningless result due to its learned similarity measure, which may be remedied by rule-based methods. Accordingly, a simple aggregation reranking approach is also proposed, and it shows promising performance gain for several embedding approaches in the KG completion task. \n\nI find qualitative analysis in comparing symbolic and embedding-based methods is insightful. The illustrated examples are closely related to the proposed approach, which is simple yet shown to be effective in boosting prediction performance. In general, the paper is well organized and easy to follow. Besides, I still have some questions regarding the analysis and experiments.\n\n- It is intuitive to examine the cherry-picked examples and discover the potential patterns of various methods, but from the macro perspective, it would be better to provide more global statistical information, for instance, how many examples are there similar to the Darwin example? If it accounts for a very large proportion, it is not surprising the proposed method can work well. If there are only very limited similar cases, the performance gain may be caused by other reasons. In other words, larger-scale of analysis is preferred to demonstrate global statistical information.\n\n- Since KG embedding methods can be also used to predict head entity or relation, is the proposed aggregation method able to be applied to these two tasks as well?\n\n- It seems that the coefficient beta in the scoring function plays an important role in balancing the contribution of rule-based confidence scores and embedding-based scores. 
It would be more convincing to show the ablation study on this parameter.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your feedback\nWe thank the reviewer for the valuable comments. The following critical points have been raised. \n\n**Reviewer Comment**\n\n1. *It is intuitive to examine the cherry-picked examples and discover the potential patterns of various methods, but from the macro perspective, it would be better to provide more global statistical information, for instance, how many examples are there similar to the Darwin example? If it accounts for a very large proportion, it is not surprising the proposed method can work well. If there are only very limited similar cases, the performance gain may be caused by other reasons. In other words, larger-scale of analysis is preferred to demonstrate global statistical information.*\n\n**Our answer:** \n\nWe agree that a statistical overview would be helpful to estimate the impact of our method; however, it is not easy, and might even be impossible, to define an automated procedure to generate these numbers. Understanding a small example such as the Darwin example is in itself not trivial at all. It requires checking the complete neighbourhood of the involved entities, but even more importantly, in our analysis we used common sense knowledge about the types of the entities (some entities are territories, some are countries that can be divided into territories, and so on). Without this knowledge, it would not be possible to understand that these types, which are implicitly encoded in sets of triples, make certain entities “similar” to each other. This is another reason why an automated statistical analysis is beyond the goal that we can achieve with the paper. \n\nWhat we checked with all relevant statistics in Table 4 in the Appendix is the impact of using the rule-based ranking as a filter (by setting beta to 0). This is one aspect of an automated method and it helped us to understand that ¼ to ½ of the positive impact can be explained in terms of filtering. \n\nWe also analysed the impact of suppressing self-referential predictions, as we found this type of error/problem in some of the examples. Here we could provide a full statistical analysis as this type of error can be automatically detected. Results are shown in Table 6 in Appendix B. \n\nHowever, we are aware that we cannot quantify exactly how frequently each of the observed types of behaviour appears. \n\n**Reviewer Comment**\n\n2. *Since KG embedding methods can be also used to predict head entity or relation, is the proposed aggregation method able to be applied to these two tasks as well?*\n\n**Our answer:** \n\nIndeed, the aggregation method can be applied to directions other than the tail direction. In fact, the main results in the paper are based on the joint MRR in both head and tail directions. We have now clarified this in the introduction of the updated paper version. \n\nWe abstained from evaluating the model on answering queries about relations in order to be consistent with the common evaluation protocols and to make our results comparable to and usable for a wide research community. Nevertheless, we acknowledge that this would be an interesting direction. \n\n**Reviewer Comment**\n\n3. 
*It seems that the coefficient beta in the scoring function plays an important role in balancing the contribution of rule-based confidence scores and embedding-based scores. It would be more convincing to show the ablation study on this parameter.*\n\n**Our answer:** \n\nWe agree with the reviewer that a more detailed ablation study regarding the hyperparameter beta would benefit the work. For that reason, we added a new Subsection 6.3 where we focussed on several experiments, reporting results for fixed beta values and for restricted sets of possible values. Overall, we added five new tables to the paper. \n\nNote that the first part of the new section contains material we put in the previous version of the paper at the end of Section 6.2. However, with the new structure it fits better at the beginning of Section 6.3. The second part of Section 6.3 contains new material. This also holds for Table 3 (in the main text) and Tables 9 to 12 in the Appendix. We believe the new ablation material improved the paper and thank the reviewer for the comment. \n Global/general Response:\n```\nThank you for your comments! Based on these comments, we uploaded a revised version of our paper. The most important improvement is related to a new Section 6.3, which contains an ablation study that analyses the impact of the beta parameter.\n\nFurthermore, we give a specific answer to each reviewer in the comments below.\n```\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. It satisfactorily addresses the major issues I had. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
JQHqeGx6qFw
AKBC 2021 Conference
Reviewer_Bzsz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Bzsz, and the following are the paper you need to review:\n```<<JQHqeGx6qFw>>```", "role": "user" }, { "content": "Title: Well motivated ensemble of symbolic and soft KB completion models.\nReview Content: The authors investigate ways to ensemble symbolic and soft KB completion models. The symbolic approach infers rules from the training data, and the soft approach learns to embed the KB into a geometric space.\nThe authors start by analyzing the behavior of the two approaches and specify cases where each of them fail.\nThis analysis is used to come up with several heuristics for combining the predictions of the two approaches. \nOn several datasets, the authors show that the proposed ensemble is better (although not drastically) than the two approaches on their own.\n\nThe paper is well motivated, and can give insights about the task and its leading models. The paper is also well written and easy to follow.\n\nHaving said that, the improvements the paper shows are somewhat limited (few MRR points), and ensembling two different approaches together is standard for pushing the performance.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your feedback\nWe are happy about the positive impression that the paper made and thank the reviewer for the valuable comments. \n\nTwo critical points have been raised, that we address in the following: \n\n1. *the improvements the paper shows are somewhat limited (few MRR points)*\n2. *ensembling two different approaches together is standard for pushing the performance.* \n\n(1) The improvements of our aggregation methods are indeed within the scope of a few MRR points for many of the presented results. However, the results for the respective models before using our aggregation method are already state-of-the-art or close to the state-of-the-art. A few MRR points is a significant improvement when starting from an already strong level. At the same time, our approach strongly improves models that perform notably worse such that the resulting ranking is competitive with the best performing models. 
For instance, quite surprisingly, the aggregation method improves TransE on WNRR from 0.228 to 0.503, which is only 0.003 below the best performing model and 0.006 better than AnyBURL. \n\nAs a similar point has been raised by Reviewer 6Wgd, we also refer to the answer we give there. \n\n(2) This point is also true in general. However, instead of constructing an ensemble from two arbitrarily chosen models, we investigate the complementary nature of latent representations and a model that uses symbolic representations in terms of rules. Moreover, each candidate that appears in the final ranking has been proposed by the symbolic method. This means that the result of the joint model can still be explained in terms of the rules that generated this candidate. Thus, we keep most of the explanatory power of the symbolic method while exploiting the benefit of the better aggregation of the embedding-based method. In that sense, our method is more specific than a standard ensemble. From a different point of view, our work can also be understood as a technique to explain the results of an embedding-based KGE model. We tried to make this point a bit clearer by adding some additional sentences related to this aspect (e.g., last sentences of the introduction, last bullet point in the conclusion). \n Global/general Response:\n```\nThank you for your comments! Based on these comments, we uploaded a revised version of our paper. The most important improvement is related to a new Section 6.3, which contains an ablation study that analyses the impact of the beta parameter.\n\nFurthermore, we give a specific answer to each reviewer in the comments below.\n```\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It resolves the majority of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
OFLbgUP04nC
AKBC 2021 Conference
Reviewer_oV86
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_oV86, and the following are the paper you need to review:\n```<<OFLbgUP04nC>>```", "role": "user" }, { "content": "Title: Interesting new task with dataset and metrics\nReview Content: Summary:\n- The paper proposes the new task of hierarchical cross document coreference resolution (HCDCR), where mention clusters are organized in a hierarchy. This can enable applications such as faceted query where the user can explore papers that refer to similar (but not necessarily the same) concepts (via a hierarchy). To support this task, the authors contribute both a dataset (SciCO consisting of ~20K documents and ~26K mentions) and metrics for evaluation. The dataset is constructed in a semi-automatic fashion, annotators are asked to annotate mention clusters and concept hierarchy, starting from candidate mention sets that are selected for annotators to label (by using a mix existing hierarchy of ML terms, automatically extracted hypernyms, and small curated list of lexically diverse concepts), while . Baselines (a coreference and a hierarchical model) and a unified model (predict both coreference and hierarchy) is proposed and compared on the dataset to provide an initial set of benchmark results. \n\nQuality: The submission appears to be technically sound with well-thought out metrics and a reasonable set of experiments and natural initial models for the proposed task.\n\nClarity: The paper is very well written with clear description of the proposed task, metrics, data annotation process and experiments.\n\nOriginality: The paper contributes several original ideas (the HCDCR task, evaluation metrics, and proposed models - the models are mostly based on prior work and applied to the new task)\n\nSignificance: The work could stimulate research in a more fine-grained connections between documents.\n\nPros:\n- The paper presents and interesting new task with dataset and metrics\n- A initial model and set of experiments is provided \n\nCons:\n- From a quick read over the paper, without careful examination of the data, it is difficult to judge the quality of the annotations and the dataset, and potential issues with the metrics. 
\n- The automated construction of the dataset may introduce biases that favor algorithms that use the same resources/methods.\n- It's unclear if the task would be of interest to the broader community or just a very specialized subset\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and feedback!\nThank you for your time reviewing our paper!\n\nWe are encouraged that the reviewer finds that we contributed original ideas and that our tasks and metrics are interesting and sound.\n\n*RE quality of the dataset\n\nWe would like to recall that the inter annotator agreement is relatively high according to both coreference and hierarchical links, indicating sufficient quality in SciCo. Notably, the IAA on coreference is even higher than the prominent Ontonotes corpus (as mentioned in Section 3.4), despite the specific challenges of the cross-document setting and the scientific domain. Additionally, we provide the full dataset in supplementary material for further examination of the annotation. \n \n\n*RE semi-automated construction of SciCo and bias\n\nWe agree that the semi-automated construction of the dataset may introduce biases in SciCo. However, as discussed in the paper (Section 3.1), showing random mentions from a large corpus is ineffective as they will rarely be related. In fact, any non-random strategy for making the data collection feasible will introduce some form of bias in the dataset. In our work, we follow previous work on cross-document coreference annotation (e.g., Ravenscroft et al. “CD2CR: Co-reference Resolution Across Documents and Domains”; EACL 2021) and bootstrap the annotation. We use several resources (PwC, hypernym graph, and curated lists) in order to broaden the scope of the data, thus limiting possible biases. \n\n\n*RE: interest of broader community or specialized subset\n\nWe believe our work could be useful for various research communities. The field of scientific information extraction has seen growing interest (Luan et al., EMNLP 2018, Hou et al, ACL 2019; Jain et al., ACL 2020; Hou et al., 2021), and coreference is an important and challenging subtask for IE, with no existing resources for cross-document coreference in science. SciCo also has the potential to enhance downstream applications at the corpus level, such as automated knowledge base construction (Hope et al., NAACL 2021; Mondal et al., NAACL 2021), citation recommendation (Bhagavatula et al., NAACL 2018), plagiarism detection (Zhou et al., EMNLP 2020), explaining relationships between documents (Luu et al., ACL 2021), etc. \n\nIn addition, our task of hierarchical CDCR can serve as a general benchmark for several cross-document NLU challenges such as coreference and hierarchical relations based on multiple documents. As hierarchical relations are ubiquitous in language (e.g., whole-part, sub-events, etc.), our task could be explored in different settings from broader domains. \n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. The reply answers almost all of my questions. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
OFLbgUP04nC
AKBC 2021 Conference
Reviewer_EBkC
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_EBkC, and the following are the paper you need to review:\n```<<OFLbgUP04nC>>```", "role": "user" }, { "content": "Title: An interesting reformulation of CDCR accompanied by a robust dataset and models \nReview Content: This paper proposes a reformulation of CDCR to include an hierarchical component (H-CDCR), whereby an hierarchy of co-reference clusters is jointly \ninferred with the clusters. \n\nAlong with the new task, two evaluation metrics are proposed, the cluster-level hierarchy score that evaluates hierarchical links while not penalizing co-reference errors repeatedly (TP when at least one Parent -> Child relation between mentions in common between the system output and the gold cluster hierarchy), the path-based score average path-distance ratios over all pairs of mentions. \n\nThe authors propose SciCo, a resource and benchmark for (H-)CDCR thrice the size of the previous reference dataset (ECB+).\nThe dataset is constructed following a robust protocol that follows established conventions for the annotation of coreference data and that is based on two main components: a large corpus of scientific papers annotated with mentions and a set of seed taxonomies to select mentions that are likely to have hierarchical relations. \n\nThe corpus of mentions is build on a corpus of 10M computer science abstracts + SciREX using a state-of-the-art information extraction model DyGIE++. The model is trained on SciERC, a reference corpus of manually verified mention annotations. \n\nThe selection of likely cluster candidates is performed using the papers with code taxonomy, hypernym relations extracted by DyGIE++ (aggregated by concept across all papers). The candidates are augmented with a manually curated list to add diversity in the dataset. \nCandidates are grouped into one-parent-multiple-children hierarchies and candidate mentions are retrieved for each group to form the CDCR \"topics\" using a vector-based retrieval of mentions from the large corpus annotated with mentions. \n\n4 annotators (PhD or graduate) were hired (upwork) to perform the annotations. The corpus shows good agreement (well above system performance, leaving room for improvement). 
\n\nThe authors then perform an evaluation of several baseline systems adapted for the new formulation of the task (SOTA Coreference model and a textual entailment model (for the hierarchy)) and implement their own joint model that significantly outperforms the baseline models. \nThe joint model casts the association of a pair of mentions as a text classification problem, where two mentions can belong to the following classes: coreference, m1 > m2, m2 > m1, unrelated. \n\nThe evaluation is complemented with an ablation study that examines performance with coreference and hierarchy only variants. The impact of lexical diversity is also studied. \n\n\nStrengths: \n - Well written paper\n - Very strong contributions, both data-wise and methodology-wise\n\nShortcomings: \n - The paper feels somewhat compressed to fit into the format, some of the elements in the appendices should really be in the paper as the sole descriptions in the paper wouldn't allow to fully reproduce the work. I really needed to read the appendices to fully understand some aspects. \n\n\n\n\nOther comments and questions: \n\n\"SciREX has the advantage of introducing mentions from full paper texts, vetted by a human annotator for quality\" -> do you mean SciERC?\n\nFormatting issue with the reference for \"Hierarchical grouping to optimize an objective function.\" \n\nAbout the annotators, it's common to report whether the annotators were compensated fairly for the work and the associated workload. \n\nThe textual description for the path-based measure is somewhat difficult to follow, a formula or small algorithm that summarizes the computation would be a welcome addition. \n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and feedback! We have uploaded a revised version of the paper and moved important details from the appendices to the main paper.\nThank you for your time reviewing our paper and helpful feedback! \n\nWe are encouraged that the reviewer finds strong contributions in our paper, both data-wise and methodology-wise. See below the answers to your questions:\n\n*RE \"SciREX has the advantage of introducing mentions from full paper texts, vetted by a human annotator for quality\" -> do you mean SciERC?\n\nWe actually meant SciREX. This sentence explains why we chose SciREX as an additional data source for SciCo. In the large corpus of 10M abstracts (Lo et al., 2020), mentions appear only in the abstract and they are automatically extracted using the Dygie++ model. Using SciREX as a data source enriches the diversity and quality of SciCo because (1) mentions also appear in the rest of the paper and (2) mention spans were annotated by human annotators. \n\n*RE compensation of the annotators\n\nAs mentioned in Appendix A.3, annotators were paid with an hourly reward between 20 and 30 dollars, where each topic took an average of 30 minutes. We have moved this information in the main paper.\n\nFinally, we added a formula for the path-distance metric. Also, given the additional page, we moved some important details from the appendices to the main paper, including (1) full description of our unified model in Section 4.2 and (2) more details about the annotation process in Section 3.3.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The response covers nearly all of the issues I raised. 
I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
OFLbgUP04nC
AKBC 2021 Conference
Reviewer_Y8ng
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Y8ng, and the following are the paper you need to review:\n```<<OFLbgUP04nC>>```", "role": "user" }, { "content": "Title: An interesting expansion on cross-document coreference, rigorously annotated dataset and good baselines\nReview Content: # General comments\nThis paper presents a new dataset and baselines for the task of cross-document coreference resolution for concepts in CS papers. In addition to the regular coreference clusters, the authors introduce a hierarchical element, adding parent-child relations between the concepts. Overall, this is a rigorous annotation workflow and the baseline experiments are well designed show the challenging nature of the dataset. I only have a few issues.\n\n\n# Specific comments/questions\nWhile it is certainly novel in its current formulation (and in the context of scientific concepts), I would argue that the task of hierarchical cross-document coreference has been studied in the past, e.g. the work of Signh et al. (2011) \"Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models\".\n\nIn Section 3.2 there is a mention of a \"parent concept\". How is that determined for the automatically generated groups and where is it specified for the curated list presented in Section 3.1?\n\nThe example (and justification) given for the referential hierarchy (Section 3.3 and Appendix A.3) isn't entirely convincing. From an information retrieval point of view, I wouldn't mind if a query for Penn TreeBank POS tagging returned documents related to POS tagging in general (in fact it might be more useful to place a specific tagging scheme in the context of its research area).\n\nWhy is the Longformer needed in Section 4.2 if the pairwise scorer f(.,.) accepts only the mentions as input? Are the contexts of each also used in the scorer?\n\n# Minor comments\n- Flip the order of the presentation of the metrics in Figure 2 to match the order of their introduction in the text.\n- I assume the annotators hired (Section 3.3) were from the CS field?\n- The CoNLL F1 metric is used in Section 3.4 before its definition in Section 5.\n- Are \"deep neural networks\" and \"neural networks\" equivalent (as stated in Section 4.2)? 
I would argue the two terms have exactly the same relationship as \"PTB POS tagging\" and \"POS tagging\" (one being a subset of the other).\n- For clarity, add the human IAA scores in Table 3.\n- The names of the Ablation experiments (Table 4) don't match the names/descriptions used in the text.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and feedback! We have uploaded a revised version of the paper with your suggestions.\nThank you for your time reviewing our paper and helpful feedback! \n\nWe are encouraged that the reviewer finds interesting our expansion of cross-document coreference resolution with hierarchical relations, along with rigorous annotation workflow and well-designed experiments. See below the answers to your specific questions:\n\n\n\n\n*RE comparison to Singh et al. (2011): “Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models”.\n\nSingh et al. (2011) address the problem of cross-document coreference resolution (CDCR) on large-scale corpora. To overcome the computation complexity, they propose a hierarchical _model_ where mentions are distributed across several machines (super-entities) and initial coreference clusters are computed in parallel in each machine (sub-entity). These sub-entities are finally merged to form the entity clusters.\nIn contrast, we introduce a novel _task_ of hierarchical cross-document coreference resolution, where final coreference clusters have hierarchical relations between them, explicitly annotated in our dataset (e.g., sentiment analysis → aspect-based sentiment analysis). \n\n\n\n*RE “parent concept” for the automatically generated groups and curated list\n\nThe “parents\" referred to in Section 3.2 are either taken directly from the PwC graph (which has explicit parent links) or from the hypernym links extracted by Dygie++. For example, given the sentences (from two different papers): “The Travelling Salesman Problem (TSP) is one of the major problems in graph theory” and “Graph isomorphism is a major problem in graph theory”, the Dygie++ model identified the relations graph theory → Travelling Salesman Problem (TSP) and graph theory → Graph isomorphism. The curated list does not have explicit parent relationships, and is instead treated as a flat list of concepts.\n\n\n\n\n*RE Penn TreeBank POS tagging vs. POS tagging.\n\nIn our referential hierarchy, a concept A is an ancestor of another concept B in the hierarchy if and only if a mention of B is also implicitly a reference to A. For example, a text that mentions the concept “BERT model” also (implicitly) invokes several other concepts (“neural language model”, “neural nets”, “NLP”, etc.) but not others (“robotics,” “RoBERTa model”). The information retrieval setting In section 3.3 and Appendix A.3 is used for grounding this definition with a faceted search application, used to help annotators reach substantial agreement. We certainly agree that when searching for a specific variant of POS tagging it could also be useful -- in certain use cases -- to view more general POS tagging results as well. 
In hierarchical faceted search, this information would be presented in the form of concept hierarchies that can be browsed (Marti Hearst, CACM 2006), such that POS tagging would appear as a parent concept for specific variants.\n\n*RE Why is the Longformer needed in Section 4.2 if the pairwise scorer f(.,.) accepts only the mentions as input? Are the contexts of each also used in the scorer?\n\nYes, as mentioned in 4.2, we use Longformer to encode the mentions with their corresponding paragraphs as context. \n\n*RE: minor comments\n\nWe addressed your comments and integrated your suggestions in an updated version of the paper. The hired annotators were indeed from the CS field and we added this detail in the paper. \n", "role": "user" }, { "content": "Title: Thanks for the clarifications\nI would like to thank the authors for their clarifications. The distinction regarding Singh's model and your task definition makes sense. I would appreciate if your detailed explanations about my other comments made it into the final version of the paper.", "role": "assistant" } ]
Ozp1WrgtF5_
AKBC 2021 Conference
Reviewer_8qLK
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_8qLK, and the following are the paper you need to review:\n```<<Ozp1WrgtF5_>>```", "role": "user" }, { "content": "Title: Nice survey on some of the limitations of modern transformer-based models\nReview Content: Authors survey the literature on probing the understanding, reasoning, and commonsense properties of transformer-based models, outlining several limitations of current models. They cover mispriming, negations, the tendency of such models to learn simple heuristics, under-sensitivity to word order, reasoning with patterns resembling Horn clauses, commonsense reasoning, reasoning with temporal knowledge. Furthermore, they also cover several natural language understanding tasks such as natural language inference and mathematical reasoning.\n\nIt would be nice to have snippets from the different datasets, to get a better understanding of what's out there for training and evaluating our models, but I understand the space limitations.\n\nThere are a few missing references:\n- On the Power of Saturated Transformers: A View from Circuit Complexity (https://arxiv.org/abs/2106.16213)\n- A couple of systems focus on solving CLUTRR using dedicated architectures: https://arxiv.org/abs/2007.06477 and https://github.com/ML-KULeuven/deepproblog\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review and valuable feedback! We have uploaded an updated version of the paper with your suggestions. \nWe thank you for your feedback! We are glad that you found our survey useful! Thank you also for providing us with the missing references! \n\n**1. It would be nice to have snippets from the different datasets, to get a better understanding of what's out there for training and evaluating our models, but I understand the space limitations.**\n\nThank you for your suggestion. We have now added an appendix to the paper, which includes the descriptions of the basic models, a description of the challenging datasets, and a table of the model performances. The appendix will be part of the Arxiv version of our paper.\n\n\n**2. 
There are a few missing references:**\n\n**On the Power of Saturated Transformers: A View from Circuit Complexity (https://arxiv.org/abs/2106.16213)**\n\nThank you for this reference! We have added in the conclusion a proposal for future work, which is to test the saturated attention mechanism on our natural tasks (light switch and cake tasks).\n\n**A couple of systems focus on solving CLUTRR using dedicated architectures: https://arxiv.org/abs/2007.06477 and https://github.com/ML-KULeuven/deepproblog**\n\nThank you, we have added these references in Section 3.4 as neuro-symbolic approaches! \n\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable feedback! We are happy that all reviewers appreciated our survey paper! \n\nOur article traces the boundary of the reasoning that is possible with transformer-based models in the current state of the art. It takes a breadth-first approach: It provides an overview of reasoning tasks, categorizes them, and describes the approaches that have been used to tackle them. It thus serves two purposes: First, the audience is made aware of the weaknesses of current models, so as to sensibilize them to pitfalls they may encounter in applications. Second, the audience can use our work to find interesting research questions that are still open. Our work is also the first survey of such reasoning with transformer-based models.\n\nA common request by the reviewers was to provide more details on the dataset and models. We have added an appendix to the paper, which includes the descriptions of the basic models, a description of the challenging datasets, and a table of the model performances. This appendix can be found in the updated version of the paper. Should the paper get accepted, the appendix will be published in the Arxiv version of the paper.\n\nWe also reply individually to each reviewer.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The reply answers almost all of my questions. I maintain my positive recommendation for this work.", "role": "assistant" } ]
Ozp1WrgtF5_
AKBC 2021 Conference
Reviewer_6i26
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_6i26, and the following are the paper you need to review:\n```<<Ozp1WrgtF5_>>```", "role": "user" }, { "content": "Title: Review\nReview Content: ### Summary\n\nThis paper surveys the performance of BERT on tasks that requires logical reasoning. The survey first introduces basic building blocks that a model should have in order to perform reasoning on natural language (e.g. handling negations, stability to word-order, etc). Next it describes a series of reasoning tasks and report performance on BERT-based models. BERT-based model performs well on reasoning tasks when all information required to perform deductive reasoning, such as facts and rules, are provided (e.g. logical reasoning). However, when certain information needs to be inferred (background knowledge), these models fail (text understanding, mathematical reasoning). Finally, the survey discusses two tasks (Parity and Dyck-2) that has been theoretically proved that transformers cannot perform and the paper comes up with two example reasoning tasks and shows that transformer models indeed fail on them.\n\n### Strengths:\n\n1. The paper presents a comprehensive list of reasoning tasks and cites many relevant work and can serve as a go-to survey for the BERT-based model's performance on natural language reasoning tasks\n\n### Weaknesses:\n\n1. The survey currently reads like a list of reasoning tasks which BERT mostly fail at and one or two example, demonstrating where it failed. It would be more helpful to have a discussion around each of the failed reasoning tasks. For example, why is BERT not able to perform those reasoning tasks? What % of examples need this reasoning, does the model fail at? Do they fail at all examples?\n2. I think the paper would benefit if it clearly lays down what the audience will learn after reading the survey. 
A discussion about how we can make the transformer model perform reasoning could also be helpful\n\nMissing citations:\nBoratko et al 2020 ProtoQA: a question answering dataset for prototypical common-sense reasoning shows another case where pre-trained LMs don't perform well because of the lack of background knowledge.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review and valuable feedback! We have uploaded an updated version of the paper with your suggestions. \nWe are happy to see that you found our list of reasoning tasks comprehensive, our citations extensive, and the survey in general to be a reference for the performance of transformer-based models on natural language reasoning tasks! We reply to your suggestions below.\n\n**1. The survey currently reads like a list of reasoning tasks which BERT mostly fail at and one or two example, demonstrating where it failed. It would be more helpful to have a discussion around each of the failed reasoning tasks. For example, why is BERT not able to perform those reasoning tasks? What % of examples need this reasoning, does the model fail at? Do they fail at all examples?**\n\nWe agree that such an analysis is of utmost interest. To some degree, the individual articles that we cite actually provide such an analysis. Our article, in contrast, takes a breadth-first approach: It provides an overview of the tasks, categorizes them, and describes the approaches that have been used to tackle them. It thus serves as an entry point to the cited papers, rather than as a discussion of individual performances. A general interpretation of why and where transformers fail is in Section 3.6. \n\n**2. I think the paper would benefit if it clearly lays down what the audience will learn after reading the survey. A discussion about how we can make transformer model perform reasoning could also be helpful.**\n\nOur article traces the boundary of the reasoning that is currently possible with transformer models. Our paper thus serves two purposes: First, the audience is made aware of the weaknesses of current models, so as to sensibilize them to pitfalls they may encounter in applications. Second, the audience can use our work to find interesting research questions that are still open. As you requested, we also provide an outlook of possible research avenues in the last paragraph of Section 3.6.\n\n**3. Missing citations: Boratko et al 2020 ProtoQA: a question answering dataset for prototypical common-sense reasoning shows another case where pre-trained LMs don't perform well because of the lack of background knowledge.**\n\nThank you! We added this reference in Section 3.2.\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable feedback! We are happy that all reviewers appreciated our survey paper! \n\nOur article traces the boundary of the reasoning that is possible with transformer-based models in the current state of the art. It takes a breadth-first approach: It provides an overview of reasoning tasks, categorizes them, and describes the approaches that have been used to tackle them. It thus serves two purposes: First, the audience is made aware of the weaknesses of current models, so as to sensibilize them to pitfalls they may encounter in applications. Second, the audience can use our work to find interesting research questions that are still open. 
Our work is also the first survey of such reasoning with transformer-based models.\n\nA common request by the reviewers was to provide more details on the dataset and models. We have added an appendix to the paper, which includes the descriptions of the basic models, a description of the challenging datasets, and a table of the model performances. This appendix can be found in the updated version of the paper. Should the paper get accepted, the appendix will be published in the Arxiv version of the paper.\n\nWe also reply individually to each reviewer.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. The authors have answered almost all of my concerns. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
Ozp1WrgtF5_
AKBC 2021 Conference
Reviewer_Nrsx
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Nrsx, and the following are the paper you need to review:\n```<<Ozp1WrgtF5_>>```", "role": "user" }, { "content": "Title: Survey paper on recent work applying transformers for a variety of reasoning tasks.\nReview Content: Overview: \nThis paper provides a survey on recent work in the area of reasoning using transformer based models. It first describes common issues that are faced by transformers such as falling prey to lexical overlap and indifference to word order, and then goes on to describe different kinds of reasoning that can be tackled using transformers and the relevant work tackling each. Finally, the paper gives two examples of types of tasks that are beyond the capabilities of transformers. \n\nHigh level comments: \nThe paper is well written, easy to read and covers many sub-types of reasoning tasks. It would benefit from being more detailed on the models that are being evaluated on each of the datasets, maybe in some form of aggregated table so the reader has a sense of which models perform better on which types of reasoning tasks. Since the different models - GPT-3/BERT/etc. are all trained on different amounts of data as well as different types of data, it would be very useful to know what scale of data and size of model is required for good performance on these various tasks. I believe this would be a more useful addition to the paper than section 3.5.\n\nDetailed comments: \nI would rename the first section as “basic building blocks” doesn’t seem appropriate when one of the subtitles is “mispriming”. This section seems to cover the artifacts that the model needs to learn to be resilient to, such as not falling prey to mispriming, not overly relying on pattern heuristics, not only taking into account words without using order information etc. Maybe a better section title would be “Commonly observed pitfalls and ways to avoid them.” ?\nSimilarly, I feel the section 2 title would be more informative if you named it “Types of reasoning with transformers” since just “Reasoning with transformers” is not very informative given that it’s the topic of the whole paper. 
\n\nSection 3.5, It would be useful to the reader if you give some intuitions on *why* it’s not possible to solve tasks such as parity using transformers in addition to stating that a certain paper shows that it cannot. For instance, in parity tasks, it requires the model to aggregate information from each segment and since all the tokens are processed in parallel by self attention, it would need the network depth to increase proportionally to the length of the sequence, to be able to solve the task making it not possible for non-recurrent models like transformers solve the task. \n\nIn the lights example, it is mentioned that the model is trained for 50 iterations. Did the model learn the task in these 50 iterations? Did the training converge? It’s hard to make conclusions about why the test precision was 0.5 - whether it is because the model was unable to learn the task during training or if it has to do with the train test distribution shift which causes the model to not do well on the task, without this information. \n\nSame issue with the cake task: missing details about model convergence and performance on the training data. Further it would be interesting to know if changing the type of positional encodings (learnt embeddings/sinusoidal embeddings/CAPE [1] ) can change the results for generalizing to longer length.\n\nMissing citation: \nTransformers for symbolic math : Lample, Guillaume, and François Charton. \"Deep learning for symbolic mathematics.\", ICLR 2020.\n\nMinor typos: \nIntro last paragraph, make transformer-based reason -> make transformer-based models reason\nSection 3.4 Did you mean to say performer based models?\nSection 4.2 contining -> containing\n\n[1] CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review and valuable feedback! We have uploaded an updated version of the paper with your suggestions. \nWe would like to thank you for your review, and for your positive assessment of our paper! We are happy that you found the paper well written, easy to read, and extensive. We reply to your questions below.\n\n**1. High level comments: The paper is well written, easy to read and covers many sub-types of reasoning tasks. It would benefit from being more detailed on the models that are being evaluated on each of the datasets, maybe in some form of aggregated table so the reader has a sense of which models perform better on which types of reasoning tasks. Since the different models - GPT-3/BERT/etc. are all trained on different amounts of data as well as different types of data, it would be very useful to know what scale of data and size of model is required for good performance on these various tasks. I believe this would be a more useful addition to the paper than section 3.5.**\n\nThank you for this suggestion! We have now included an appendix in the paper, which describes the basic models, the challenging datasets, and the performance of the models.\n\n**2. Detailed comments: I would rename the first section as “basic building blocks” doesn’t seem appropriate when one of the subtitles is “mispriming”. 
This section seems to cover the artifacts that the model needs to learn to be resilient to, such as not falling prey to mispriming, not overly relying on pattern heuristics, not only taking into account words without using order information etc. Maybe a better section title would be “Commonly observed pitfalls and ways to avoid them.” ? Similarly, I feel the section 2 title would be more informative if you named it “Types of reasoning with transformers” since just “Reasoning with transformers” is not very informative given that it’s the topic of the whole paper.**\n\nYes we agree. We have replaced ”basic building blocks” by “common pitfalls for BERT-like models”, as you suggested, and we have replaced the title of Section 2 by “Types of reasoning with transformers-based models.”\n\n**3. Section 3.5, It would be useful to the reader if you give some intuitions on why it’s not possible to solve tasks such as parity using transformers in addition to stating that a certain paper shows that it cannot. For instance, in parity tasks, it requires the model to aggregate information from each segment and since all the tokens are processed in parallel by self attention, it would need the network depth to increase proportionally to the length of the sequence, to be able to solve the task making it not possible for non-recurrent models like transformers solve the task.**\n\nCorrect. Thank you for suggesting these intuitive explanations! We have added them to Section 4.1!\n\n**4. Same issue with the cake task: missing details about model convergence and performance on the training data.**\n\nWe have added the performance of the models on the training and validation sets. Each time, the models then achieve an F-score > 0.99.\n\n**Further it would be interesting to know if changing the type of positional encodings (learnt embeddings/sinusoidal embeddings/CAPE [1] ) can change the results for generalizing to longer length.**\n\nThis would indeed be interesting to know! Unfortunately, we could not verify this in the frame of the rebuttal period. But we have added this idea in the conclusion.\n\n**5. Missing citation: Transformers for symbolic math : Lample, Guillaume, and François Charton. \"Deep learning for symbolic mathematics.\", ICLR 2020.**\n\nThank you, we added this work to Section 3.5 (Mathematical Reasoning Tasks).\n\n**6. Minor typos: Intro last paragraph, make transformer-based reason -> make transformer-based models reason Section 3.4 Did you mean to say performer based models? Section 4.2 contining -> containing**\n\nThank you for spotting these typos! We fixed them. In Section 3.4, we meant indeed “transformer-based models.”\n\n\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable feedback! We are happy that all reviewers appreciated our survey paper! \n\nOur article traces the boundary of the reasoning that is possible with transformer-based models in the current state of the art. It takes a breadth-first approach: It provides an overview of reasoning tasks, categorizes them, and describes the approaches that have been used to tackle them. It thus serves two purposes: First, the audience is made aware of the weaknesses of current models, so as to sensibilize them to pitfalls they may encounter in applications. Second, the audience can use our work to find interesting research questions that are still open. 
Our work is also the first survey of such reasoning with transformer-based models.\n\nA common request by the reviewers was to provide more details on the dataset and models. We have added an appendix to the paper, which includes the descriptions of the basic models, a description of the challenging datasets, and a table of the model performances. This appendix can be found in the updated version of the paper. Should the paper get accepted, the appendix will be published in the Arxiv version of the paper.\n\nWe also reply individually to each reviewer.\n\n```\n", "role": "user" }, { "content": "I value the authors' response. It satisfactorily addresses the major issues I had. I continue to recommend this work for acceptance.", "role": "assistant" } ]
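To make the parity and light-switch discussion in the record above more concrete, here is a minimal data-generation sketch of a length-generalization probe: train on short action sequences, evaluate on strictly longer ones, and label each example with the parity of the toggle count. The sentence wording, the helper names (`light_switch_example`, `make_split`), and the length ranges are illustrative assumptions, not the task definition used in the paper under review.

```python
import random

def light_switch_example(n_ops, rng):
    # One synthetic example: the label is the final lamp state, i.e. the parity
    # of the number of toggle actions among the n_ops actions (hypothetical phrasing).
    actions = [rng.choice(["flips the switch", "claps twice"]) for _ in range(n_ops)]
    toggles = sum(a == "flips the switch" for a in actions)
    text = "The lamp is off. " + " ".join(f"Then Mary {a}." for a in actions) + " Is the lamp on?"
    return text, toggles % 2

def make_split(lengths, n_per_length, seed=0):
    # Build a split whose examples have action-sequence lengths drawn from the given range.
    rng = random.Random(seed)
    data = [light_switch_example(n, rng) for n in lengths for _ in range(n_per_length)]
    rng.shuffle(data)
    return data

# Length-generalization probe: train on short action sequences, test on strictly longer ones.
train_data = make_split(range(1, 16), n_per_length=200)
test_data = make_split(range(16, 21), n_per_length=200, seed=1)
print(train_data[0])
```

A classifier trained on such a split only sees short sequences, so test accuracy on the longer split directly measures the kind of length generalization the reviewer asks about.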
Ozp1WrgtF5_
AKBC 2021 Conference
Reviewer_RsKf
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_RsKf, and the following are the paper you need to review:\n```<<Ozp1WrgtF5_>>```", "role": "user" }, { "content": "Title: A good overview of how transformer models perform on logical reasoning tasks, though the scopes of \"transformer models\" and \"logical reasoning\" are not well-defined.\nReview Content: **Summary:** The paper surveys the ability of pretrained transformer-based models on logical reasoning tasks.\n\n* Section 2 explores common reasoning abilities required to solve logical reasoning tasks.\n* Section 3 considers different logical reasoning tasks, including both standard and adversarial variants. It also outlines the performance on such tasks of different model variants (i.e., pre-training only, fine-tuned on the target task, and fine-tuned with additional training objectives or symbolic components).\n* Finally, Section 4 touches on tasks that are theoretically impossible for a transformer-based model to completely solve. It also presents natural language versions of such theoretical tasks.\n\n**Strengths:**\n\nThe content of this survey is comprehensive and appropriate for the page limit. It gives a good overview of the major logical reasoning tasks and how well the models perform on them. Section 3.6 in particular provides a good summary: the models perform well when all information is explicit, perform reasonably on commonsense reasoning, and struggle on adversarial tasks but can be remedied by additional techniques.\n\n**Weaknesses:**\n\n1. The **scope of the models** explored is not well-defined.\n - \"Transformer-based models\" is a very wide scope. Even when considering only BERT-like models (BERT, RoBERTa, T5, GPT-3, etc.) without additional modifications, the model performance still depends on many factors such as the model architecture, model size, pre-training data, pre-training task (mask filling, generation, sentence-level objective), fine-tuning data, and fine-tuning methods.\n - Most works cited in this paper use BERT or its close variants such as RoBERTa. 
It is unclear if the results carry on the models trained on generative objectives such as T5 and GPT-3.\n - The paper often makes claims that can be considered too general (e.g., \"Pretrained transformers-based models **cannot** differentiate between positive and negative statements\" in Section 2.1, or \"mispriming works only when ...\" in Section 2.2) while they might only apply to certain model variants.\n - The paper also mixes the results of models with and without fine-tuning, especially in Section 2. While these can be discerned from careful reading, it would be great to make this more clear to prevent readers from drawing incorrect conclusions.\n\n2. Likewise, the **scope of the tasks** is not well-defined.\n - It is not explained what \"logical reasoning\" means, and the paper contains many examples that are not \"logical reasoning\" in the traditional sense. Examples include having world knowledge (e.g., that the iPhone was produced by Apple, as mentioned in the introduction), having commonsense knowledge (e.g., that ravens can fly, as mentioned in Section 3.2), and doing arithmetic (e.g., adding two numbers in Section 3.5). While these skills are useful for some logical reasoning tasks, I wouldn't call them logical reasoning on their own.\n - Mispriming (Section 2.2) does not seem to be a building block for any logical reasoning task. Though one could argue that the ability to distinguish useful from distracting contexts in general is an essential building block.\n - \"Text understanding\" (Section 3.4) also sounds too broad and encompasses the other types of reasoning in prior sections.\n\n3. (minor) The natural language version of impossible reasoning tasks (Sections 4.2 and 4.3) is not very informative.\n - BERT is known to not generalize well to long inputs, especially when it's longer than what the model have seen during training. When the model performs poorly on the proposed light switch and cake tasks, it is unclear whether it's due to the length problem above or the theoretical impossibility of the task.\n\n**Other comments:**\n\n1. Title: What does \"shallow reasoning\" mean? Some of the presented tasks, such as the Horn clause one, require quite deep reasoning.\n2. Section 2: The term \"building block\" is a bit ambiguous. I thought it meant model building blocks (e.g., a self-attention layer) during the first read. Maybe something like \"building blocks for logical reasoning tasks\", \"subtasks\", or \"logical phenomena the models need to handle\" would be clearer?\n3. Section 2.4: \"they will still reply\" --- reply correctly or reply randomly?\n4. Section 3.1: It would be nice to mention that all information is present in the task and no external knowledge is needed.\n5. Section 3.2: While the listing of datasets is helpful as references, it would be nice to give some descriptions of what each dataset aims to do, possibly in the appendix. This comment also applies to other sections.\n6. Section 3.4: \"e.g.,\" --> \"for example\" (should be spelled out here)\n7. Section 3.4: The provided example from LogiQA looks similar to the Horn clause examples in Section 3.1, yet the models struggle here. Why is this the case?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review and valuable feedback! We have uploaded an updated version of the paper with your suggestions. 
\nWe would like to thank you for your insightful and positive feedback. We are happy to learn that you found our survey comprehensive, and that it gives a good overview of the major reasoning tasks. We reply here in detail to your suggestions:\n\n**1. The scope of the models explored is not well-defined.**\n* That is true. We report on the performance of the best known models, as put forward by the cited works. Thus, our survey traces the current state of the art, as a lower bound of what is possible. We also briefly explain the architectures that were used to achieve these performances, but we cannot describe their entire configuration. For this, the reader is invited to read the respective papers. However, we now make explicit whether a model was finetuned or not.\n* This is true. We always cite the current best performing model. Unfortunately, we cannot establish whether another model would have had a better performance. This is something we leave to the individual papers that we cite. Our survey allows the reader to understand the current state of the model performance.\n* That is indeed true for Section 2. We have now made it clear that these comments apply mainly to BERT.\n* Indeed. We now specify for each model whether it was finetuned or not.\n\n**2. Likewise, the scope of the tasks is not well-defined**\n* Yes, we agree. We have replaced “logical reasoning” by “reasoning” because as you said not all the tasks fall under “logical reasoning.”\n* Exactly, the ability to distinguish useful from distracting contexts in general is an essential building block. Thank you for proposing this reformulation! We now use it in the paper.\n* Yes, we agree. In the updated version of the paper, we replaced “Text understanding” with “Implicit Reasoning”. The datasets in this section cover different logical reasoning types such as deductive, inductive and abductive reasoning. Different from the other subsections, they do not explicitly specify the rules, and they rely less on common sense knowledge. We have now made that explicit.\n\n**3. (minor) The natural language version of impossible reasoning tasks (Sections 4.2 and 4.3) is not very informative.**\n* There are some tasks where transformer-based models can generalize to longer inputs than those seen during training. The work “On the Ability and Limitations of Transformers to Recognize Formal Language” shows different such tasks (https://arxiv.org/pdf/2009.11264.pdf). \nFor the specific tasks of Dyck and Parity, however, we know that transformer-based models cannot generalize due to principal limitations. This is what our experiments also show in practice. Just for curiosity, we ran an experiment where we trained the Light-Switch-Task on sentences with 16-20 switch operations, and tested on sentences with 1-15 switch operations. The results are still dismal, as expected, with a precision of 50%.\n\n**Other comments:**\n\n**1. Title: What does \"shallow reasoning\" mean? Some of the presented tasks, such as the Horn clause one, require quite deep reasoning.**\n* We modified the title to “Reasoning with Transformer-based Models: Deep learning, _but_ Shallow Reasoning”, to point out that transformers in general cannot solve reasoning tasks in a way a human can. This does not exclude that they can solve some tasks of impressive complexity, including the Horn rule reasoning that you mention.\n\n**2. Section 2: The term \"building block\" is a bit ambiguous. I thought it meant model building blocks (e.g., a self-attention layer) during the first read. 
Maybe something like \"building blocks for logical reasoning tasks\", \"subtasks\", or \"logical phenomena the models need to handle\" would be clearer?**\n* Thank you! We updated the section title to “Common Pitfalls for BERT-like Models”.\n\n**3. Section 2.4: \"they will still reply\" --- reply correctly or reply randomly?**\n* In the example given in Section 2.4, “still reply” means “still reply correctly”. We have fixed this. Thank you!\n\n**4. Section 3.1: It would be nice to mention that all information is present in the task and no external knowledge is needed.**\n* Thank you! We added this observation to Section 3.1.\n\n**5. Section 3.2: While the listing of datasets is helpful as references, it would be nice to give some descriptions of what each dataset aims to do, possibly in the appendix. This comment also applies to other sections.**\n* Following your suggestion, we added an extensive appendix that describes the challenging datasets. \n\n**6. Section 3.4: \"e.g.,\" --> \"for example\" (should be spelled out here)**\n* Thank you, we fixed it.\n\n**7. Section 3.4: The provided example from LogiQA looks similar to the Horn clause examples in Section 3.1, yet the models struggle here. Why is this the case?**\n* In contrast to the Horn clauses examples in Section 3.1, the examples in Section 3.4 do not explicitly provide the facts and rules to the model. Hence, the model needs to perform implicit reasoning to solve such tasks. We have now clarified this in the paper.\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable feedback! We are happy that all reviewers appreciated our survey paper! \n\nOur article traces the boundary of the reasoning that is possible with transformer-based models in the current state of the art. It takes a breadth-first approach: It provides an overview of reasoning tasks, categorizes them, and describes the approaches that have been used to tackle them. It thus serves two purposes: First, the audience is made aware of the weaknesses of current models, so as to sensibilize them to pitfalls they may encounter in applications. Second, the audience can use our work to find interesting research questions that are still open. Our work is also the first survey of such reasoning with transformer-based models.\n\nA common request by the reviewers was to provide more details on the dataset and models. We have added an appendix to the paper, which includes the descriptions of the basic models, a description of the challenging datasets, and a table of the model performances. This appendix can be found in the updated version of the paper. Should the paper get accepted, the appendix will be published in the Arxiv version of the paper.\n\nWe also reply individually to each reviewer.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' feedback. Most of my concerns have been addressed.", "role": "assistant" } ]
StHCELh9PVE
AKBC 2021 Conference
Reviewer_MBrX
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_MBrX, and the following are the paper you need to review:\n```<<StHCELh9PVE>>```", "role": "user" }, { "content": "Title: Good paper - simple, well executed and well presented\nReview Content: This paper analyzes the emergence of commonsense knowledge in few-shot knowledge models. In particular, it attempts to answer whether and when is this knowledge learned: during pretraining or from fine-tuning on KG examples. To investigate this, various commonsense knowledge models are trained in few-shot settings. The results and analysis in the paper shows that commonsense knowledge models can adapt from limited examples. Moreover, similar to a recent line of work, this work confirms that KG fine-tuning learns an interface to the commonsense knowledge learned during pretraining. Finally, the paper also presents an analysis of absolute, angular, and distributional parameter changes during few-shot fine-tuning. However, the insights here are less convincing to me.\n\nPros:\n1. The paper is timely. It is presented well and answers some interesting question on the emergence of commonsense knowledge in these models in a very simple and intuitive way.\n2. The analysis (while not always convincing) mostly makes sense and all the conclusions seem reasonable.\n\nCons:\n1. The related work discussion seems too sparse to me.\n2. It would help if all the conclusions from the experiments in the paper were presented in one place e.g. in the intro.\n\nQuestions and suggestions:\n1. Why do we need to consider an encoder decoder model for this analysis? Why cant we directly analyse an encoder only or decoder only model?\n2. It might help to provide more motivation and intuition for the various parameter change metrics e.g. angular parameter change in the paper. This would help the readers understand the conclusions much more easily. This part is less clear in my opinion.\n3. This paper analyses commonsense knowledge. What would change in this paper if we were analysing declarative knowledge such as in other KGs? 
Is emergence of commonsense knowledge any different from other kinds of knoweldge?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 3\nWe thank the reviewer for their helpful comments and positive rating, in part stating that the work is timely, presented well, and answers interesting questions regarding the emergence of commonsense knowledge.\n\n**The majority of the interpretability analysis was done on T5. Why is T5 (an encoder-decoder model) the best model for analysis?**\n\nThe reviewer asks whether this analysis could have been suitable for other model types besides T5. We picked T5 as the model for our detailed analysis for the following reasons. First, T5 has the highest fully supervised and few-shot accuracy among knowledge models investigated in the fully-supervised setting (Table 3). This observation tracks with other commonsense benchmarks where T5 models typically achieve the highest performance among pretrained LMs. Second, encoder-only models (such as RoBERTa) are difficult to convert as text generators -- a requirement for this task. The masking scheme forces the model to either have the number of tokens of the output specified with a given number of [MASK] tokens, or be finetuned extensively to learn text-generation abilities, precluding the possibility of investigating few-shot behavior. Decoder-only models (e.g., GPT2) could more easily be investigated, but Hwang et al., 2021 showed they perform worse on the task than encoder-decoder models. Finally, there are available T5 models at multiple sizes (small, large, 11B), allowing us to investigate model scale as a principal focus of our analysis (which would not be possible with other encoder-decoder models such as BART). For these reasons, we chose the T5 class of models for our analysis, though investigating these patterns for other classes of models would be an interesting study for future work. \n\n**Could additional motivation be added to clarify the importance of analysis on angular change?**\n\nWe thank the reviewer for asking for clarification regarding the importance of angular distance. Observing relative differences in L1 change across parameter sets vs. the relative differences in angular change among the same parameter sets can be informative as to how the parameters are changing in these matrices. High L1 change for a particular parameter matrix with no corresponding increase in angular change relative to other parameters could be an indication that most of the learning reinforces previously learned knowledge by the parameters during pretraining (i.e., changing the scale of the output vectors of these transformations, rather than their directions). Meanwhile, a high relative angular change combined with low relative L1 change could be an indicator that changes in parameters in this matrix were concentrated on a few parameters. For example, in Figure 5, the output matrix (wo) in the decoder of the second layer of T5-11B has low L1 distance, but high angular distance, suggesting that there are large specialized rotational changes in certain rows of the parameter matrix.\n\n**The analysis is only on commonsense knowledge graphs. 
What would change if we analyzed other kinds of non-commonsense knowledge (for example, declarative knowledge)?**\n\nWe agree that comparing parameter changes between commonsense knowledge models and models trained to capture other kinds of knowledge is an interesting question. We predict that parameter changes might be more concentrated for declarative factual knowledge as commonsense knowledge is more distributional, potentially requiring broader updates to reflect learned relationships among related inferences. Meanwhile, declarative knowledge may have many functional relations where only a single tail entity can be correct for the corresponding head entity (e.g., DateOfBirth). Consequently, factual knowledge emergence may require learning to encode specific facts, rather than learning to encode more general rules, which may be reflected with more concentrated parameter updates. Confirming these hypotheses would be an interesting direction for future work.\n Global/general Response:\n```\n\nSummary of changes made to revised copy:\n\n- We added a table of relation templates for each of the relations mentioned in the paper in the Appendix.\n\n- We added a table to the appendix regarding the impact of prompt formulation (by paraphrasing each of the prompts in ATOMIC2020).\n\n- Regarding the above, we added a table of relation templates for each of the paraphrased prompts.\n\n- We added an example of the input to the language model for augmentation experiments mentioned in the paper.\n\n- We clarified the optimizer used by T5 in the experimental section of the paper.\n\n- We added a forward reference to experiments mentioned later in the paper at the end of section 4, to improve fluidity.\n\n- We added clarification in 5.3 on the importance of analyzing angular change in addition to L1 change.\n\n- We expanded our related work section on knowledge models and prompting.\n\n- We added context on our prompt-based experiments in the introduction section, completing the introduction with a brief summary of all experiments run in the paper.\n\n- We fixed minor typos and formatting issues per reviewer comments.\n```\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It resolves the majority of my concerns. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
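The L1-versus-angular comparison explained in the rebuttal above can be written down in a few lines. The sketch below only illustrates the two measures as they are described in this exchange (total absolute change, and the mean angle between corresponding rows of a weight matrix before and after fine-tuning); the toy matrices and function names are assumptions rather than the authors' implementation.

```python
import numpy as np

def l1_change(w_pre, w_ft):
    # Total absolute parameter change between pretrained and fine-tuned weights.
    return float(np.abs(w_ft - w_pre).sum())

def mean_row_angular_change(w_pre, w_ft, eps=1e-12):
    # Average angle (radians) between corresponding rows before and after fine-tuning.
    dots = (w_pre * w_ft).sum(axis=1)
    norms = np.linalg.norm(w_pre, axis=1) * np.linalg.norm(w_ft, axis=1) + eps
    return float(np.arccos(np.clip(dots / norms, -1.0, 1.0)).mean())

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
# Pure rescaling: large L1 change but near-zero angular change (row directions preserved).
print(l1_change(w, 2.0 * w), mean_row_angular_change(w, 2.0 * w))
# Redirecting a single row: much smaller L1 change, but a clearly nonzero average angle.
w_new = w.copy()
w_new[0] = 0.1 * w[0][::-1]
print(l1_change(w, w_new), mean_row_angular_change(w, w_new))
```

The two toy cases mirror the rebuttal's point: scale-only updates show up in L1 but not in angle, while changes concentrated in a few rows show up in angle even when the L1 change is modest.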
StHCELh9PVE
AKBC 2021 Conference
Reviewer_gh1k
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_gh1k, and the following are the paper you need to review:\n```<<StHCELh9PVE>>```", "role": "user" }, { "content": "Title: Well-motivated and insightful analysis that interpolates between zero-shot and fully-supervised\nReview Content: This paper analyzes fine-tuning of pre-trained generative language models on a commonsense knowledge task, investigating how much the strong performance of the models is due to pre-training or to task-specific learning (via fine-tuning or prompting/in-context learning). The task is to predict natural text for the tail of (head, relation, tail) relationships from the Atomic_20^20 knowledge graph (KG), e.g. (\"PersonX goes to the mall\", xIntent) -> \"to buy clothes\". The paper shows that a T5-based model obtains a substantial fraction of the performance of fine-tuning on the full Atomic dataset, using few-shot learning with only a small number of examples (3) per relation. Few-shot learning also obtains a substantial improvement over zero-shot performance for this model. The paper also presents several measures of parameter differences between pre-trained and fine-tuned models and performs an analysis with them, showing that more parameters are updated in the models' decoders than in their encoders. Taken together with past work, the evaluation results and analysis support the paper's claim that most of the knowledge useful for the KG task was already present in the pre-trained models.\n\n*Pros*:\n- I found the paper well-motivated -- it's a natural and sound way to investigate how much performance on the KG tasks is due to pre-training versus task-specific fine-tuning, and an indirect way of investigating how much knowledge (of the type useful for this KG task) is encoded in the pre-trained language models. I expect this will be of interest to the community.\n- The experimental results were overall very thorough (although see below for one possible gap), performing human evaluations, multiple trials, and evaluating a range of models.\n- The parameter analysis was particularly strong. This method is simple (which I think is a good thing!) and well-motivated, and the results were well-explained, convincing to me, and tied in well to past work. 
\n\n*Cons*:\n- The only model that is compared across all three conditions (zero-shot, few-shot, and fully-supervised) in the experimental results is GPT-2 XL, but for this model few-shot learning recaptures barely any of the performance of the fully-supervised condition (36.6% acceptability in zero-shot, at most 40.1% acceptability in few-shot, and 72.5% acceptability in fully-supervised). However, I suspect that this may be due to limitations of the T5 and GPT-3 models. Please see questions below.\n- A few minor things about the presentation could be improved, see suggestions below. But this isn't at all a crucial weakness and should be easily fixable.\n\n*Questions*\n- Is it possible to evaluate either GPT-3 or T5 in all of the zero-shot, few-shot, and fully-supervised settings, as GPT-2 XL doesn't do well in the few-shot condition? I suspect GPT-3 may not be fine-tunable (for the fully-supervised condition), and T5 may not work well in the zero-shot since it's an encoder-decoder model, but it would help to clarify this.\n- How is the \"automatic formatting\" of relations into prompts done (Section 3) -- was the prompt text for a given relation obtained from some source, or written by hand? Were prompts tuned on validation data in either the few-shot or zero-shot setting? This is relevant given Jiang et al. 2020 showing that performance is sensitive to subtle variations.\n- Could more details be given on the setup in the \"augmentation\" experiments? Do these include the n=3 training examples as part of the prompt, but without fine-tuning the model? Was the formatting of the prompt the same across both GPT-2 XL and GPT-3, and does the formatting have any hyperparameters / design decisions that needed tuning?\n\n\n*Suggestions*:\n- In the experimental setup it would help to clarify the optimizer used.\n- I was surprised when section 4 ended without varying n over a larger budget of training examples (which comes later in Section 5), given the focus on this in the intro. I'd suggest at least adding a forward reference to the text in Section 4.\n- The paper used both \"learning\" in Table 3 and \"fine-tuning\" elsewhere in the text to mean the same thing (I think); it would help to use the same term for both.\n- Could more motivation be given for using rows in angular parameter change (rather than e.g. columns, or flattening the matrices and treating them as a single vector)? Rows might be motivated by e.g. each row's correspondence to an individual element in the output vectors, but I don't know if this is true because I don't know whether inputs are row or column vectors. \n- I'm also not totally clear on the advantage of angular parameter change over l1 distance -- I found it a bit difficult to think about how l1 change could be low but angular change high (in section 5.3).\n- 5.1: \"large number of the learning\"\n- Super minor, but the paragraphs before equations (1-3) seem to have compressed line spacing. If these equations are wrapped in \\small, this might be fixed by inserting \\par\\nobreak or \\par\\nobreak\\vspace{-1em} between the paragraphs and the equations. 
(I once spent a while figuring out how to fix this.)\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 2\nWe thank the reviewer for their overall positive rating and helpful comments, and their comments that our paper is well-motivated, of interest to the community, and sharing well-explained and convincing results.\n\n**Out of the three settings (zero-shot, few-shot, fully-supervised), T5 is evaluated across few-shot and fully-supervised settings, and GPT-3 is evaluated in zero-shot and few-shot settings. Do you include an evaluation across all three settings?**\n\nThe reviewer’s intuition is correct that the two additional settings (fully supervised GPT-3, zero-shot T5) were considered but excluded for technical reasons. T5 models are not pretrained using a causal language modeling objective, so we felt reporting zero-shot evaluation performance in the main table for a multi-word generation task would be an unfair (and perhaps misleading) comparison. However, we did report limited numbers for zero-shot T5 tuple completions in the Appendix. For reference, T5-11B gets a BLEU-1 score of 6.7 using the masking objective, which is lower than GPT-2 and GPT-3 with their causal language modeling objectives. GPT-3 is difficult to evaluate in the fully-supervised setting, since the only option at the time of submission was to include example tuples in the prompt via augmentation.\n\n**The experiments mention relations formatted into natural language into prompts. How do you automatically format the prompts for each of the relations in ATOMIC2020?**\n\nWe manually wrote prompts for each of the 23 relations in ATOMIC2020 (see examples in Table 2). Following (Hwang et al., 2021), we did not tune prompts on validation data (which Perez et al., 2021 show artificially inflates performance). With respect to sensitivity to minor changes in the prompt, we mention Jiang et al., 2020’s conclusion in our motivation in Section 5.4. Driven by this sensitivity, we repeated few-shot experiments with a paraphrase of the prompt, but did not find significant differences in generated tuple quality between different prompt variations (Our paraphrased prompts for n=30 had quality scores of BLEU_1 = 30.5. For reference, the original prompts had quality scores of BLEU_1 = 31.9.) We concluded that while generating directly from prompts may be quite sensitive to the construction of the prompt, minimal fine-tuning on examples directly using the prompt removes some of this sensitivity. We will add the full results of our paraphrased prompts to the paper. \n\n**What are the advantages of measuring angular parameter change on top of L1 parameter change?**\n\nWe thank the reviewer for asking for clarification regarding the relationship between L1 and angular distance. While these two measures can often be correlated, interesting information can be gleaned in cases where they are not (i.e., relative differences in L1 change across parameter sets do not match differences in angular change among the same parameter sets). For example, high L1 change for a particular parameter matrix with no corresponding increase in angular change relative to other matrices could be an indication that most of the learning reinforces previously learned knowledge by the parameters during pretraining (i.e., changing the scale of the output vectors of these transformations, rather than their directions). 
Meanwhile, a high relative angular change combined with low relative L1 change could be an indicator that changes in parameters in this matrix were concentrated on a few elements of the parameter matrix. Consequently, complementary information can be gleaned from analyzing both measures together.\n\n**Angular parameter change is calculated by comparing and averaging the angular change between rows in the parameter matrix. Why compare rows, instead of columns?**\n\nThe reviewer asks about the motivation for using rows in angular parameter change and their intuition about rows was our same reasoning. Each row in the parameter matrix performs a dot product with the input representation, and has a correspondence to an individual element in the output vector.\n Global/general Response:\n```\n\nSummary of changes made to revised copy:\n\n- We added a table of relation templates for each of the relations mentioned in the paper in the Appendix.\n\n- We added a table to the appendix regarding the impact of prompt formulation (by paraphrasing each of the prompts in ATOMIC2020).\n\n- Regarding the above, we added a table of relation templates for each of the paraphrased prompts.\n\n- We added an example of the input to the language model for augmentation experiments mentioned in the paper.\n\n- We clarified the optimizer used by T5 in the experimental section of the paper.\n\n- We added a forward reference to experiments mentioned later in the paper at the end of section 4, to improve fluidity.\n\n- We added clarification in 5.3 on the importance of analyzing angular change in addition to L1 change.\n\n- We expanded our related work section on knowledge models and prompting.\n\n- We added context on our prompt-based experiments in the introduction section, completing the introduction with a brief summary of all experiments run in the paper.\n\n- We fixed minor typos and formatting issues per reviewer comments.\n```\n", "role": "user" }, { "content": "I value the authors' response. Most of my concerns have been addressed. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
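Because much of the exchange above hinges on how (head, relation) pairs are verbalized and how demonstration tuples are prepended in the augmentation setting, a small formatting sketch may help. The templates and helper names below are hypothetical stand-ins; they are not the authors' ATOMIC2020 prompt templates, which the rebuttal says were added to the paper's appendix.

```python
# Hypothetical relation templates; the wording of the actual ATOMIC2020 prompts may differ.
TEMPLATES = {
    "xIntent": "{head}. PersonX did this because they wanted",
    "xWant": "{head}. As a result, PersonX wants",
    "ObjectUse": "A {head} is typically used for",
}

def to_prompt(head, relation):
    # Verbalize a (head, relation) pair into a natural-language prefix for tail generation.
    return TEMPLATES[relation].format(head=head)

def few_shot_prompt(examples, query_head, relation):
    # Prepend a handful of completed tuples before the query, as in prompt-augmentation runs.
    demos = [f"{to_prompt(h, relation)} {tail}" for h, tail in examples]
    return "\n".join(demos + [to_prompt(query_head, relation)])

print(few_shot_prompt([("PersonX goes to the mall", "to buy clothes")],
                      "PersonX studies all night", "xIntent"))
```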
StHCELh9PVE
AKBC 2021 Conference
Reviewer_YU6U
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_YU6U, and the following are the paper you need to review:\n```<<StHCELh9PVE>>```", "role": "user" }, { "content": "Title: Interesting Commonsense Analysis Using Few-Shot Knowledge Models\nReview Content: Summary:\n\nThe paper demonstrates that commonsense knowledge models can learn declarative commonsense relationships very quickly in a few shot setting.\nThis paper aims to address the question of whether commonsense knowledge is directly encoded in pretrained LMs or whether the encoding is learned during finetuning with a favorably initialized pretrained model. \nBy varying the number of training examples in a few shot setting, the authors show that models seem to learn an interface to knowledge they already contain with only a few commonsense examples.\nFurthermore, the paper performs analysis wrt parameter change to support that the model seems to be learning an interface to knowledge it already knows, showing that 1) few shot training largely updates parameters in the decoder attention heads and not the large FF networks (that have been showed to embody memory storage during pretraining) and 2) that larger models exhibit parameter change across a concentrated set of parameters, showing that their larger size allows for less entangled knowledge retention. \n\n\nStrengths:\n\n-Good evaluation metrics, extensive analysis, and interpretable methods to support claims in the paper\n\n-Very interesting use of LMs as few shot learners to investigate commonsense knowledge abilities \n\n-Useful findings on prompt performance in few shot commonsense interface learning \n\n-Paper is very well written and structured\n\n\nWeaknesses:\n\n-The paper only considers ATOMIC as a KG, it could be useful to experiment w/ more KGs. Extending this to ConceptNet should not be very difficult given the overlap of relations in ATOMIC and ConceptNet. Would be helpful and interesting to see if the findings generalize to other types of commonsense knowledge or whether the findings are specific to the type of knowledge found in ATOMIC. 
\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 1\nWe thank the reviewer for their helpful comments and overall positive rating, in part commenting on our extensive analysis, useful findings on prompt performance, and the structure and clarity of the paper.\n\n**The analysis in the paper focuses on ATOMIC2020, a large-scale commonsense knowledge graph. Should other commonsense knowledge graphs be considered also (ConceptNet)?**\n\nWe appreciate the reviewer’s question about running our emergence analysis on other commonsense knowledge graphs beyond ATOMIC2020 (e.g., ConceptNet). We note that ATOMIC2020 is one of the most comprehensive commonsense knowledge resources available, subsuming examples from many other known KGs (Hwang et al., 2021). The ATOMIC KG (Sap et al., 2019), for example, is completely contained in ATOMIC2020. Meanwhile, ATOMIC2020 also contains a high-quality subset of 172k tuples from ConceptNet that covers 17 of the commonsense relations present in ConceptNet. According to Hwang et al., 2021, these 17 relations are consolidated into overlapping relation types (for example, the ConceptNet CausesDesire relation is merged into the ATOMIC2020 relation of xWant). Also, the parts of ConceptNet excluded by this merge are geographical facts, dictionary-based knowledge, lexical relationships, and relations based on lexical co-occurrence (Hwang et al., 2021), which are not the focus of this work. Analyzing the emergence of these types of facts in language models would be an interesting task on its own, though.\n Global/general Response:\n```\n\nSummary of changes made to revised copy:\n\n- We added a table of relation templates for each of the relations mentioned in the paper in the Appendix.\n\n- We added a table to the appendix regarding the impact of prompt formulation (by paraphrasing each of the prompts in ATOMIC2020).\n\n- Regarding the above, we added a table of relation templates for each of the paraphrased prompts.\n\n- We added an example of the input to the language model for augmentation experiments mentioned in the paper.\n\n- We clarified the optimizer used by T5 in the experimental section of the paper.\n\n- We added a forward reference to experiments mentioned later in the paper at the end of section 4, to improve fluidity.\n\n- We added clarification in 5.3 on the importance of analyzing angular change in addition to L1 change.\n\n- We expanded our related work section on knowledge models and prompting.\n\n- We added context on our prompt-based experiments in the introduction section, completing the introduction with a brief summary of all experiments run in the paper.\n\n- We fixed minor typos and formatting issues per reviewer comments.\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. It adequately addresses the points I raised. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
U2adryQpew5
AKBC 2021 Conference
Reviewer_mZbd
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_mZbd, and the following are the paper you need to review:\n```<<U2adryQpew5>>```", "role": "user" }, { "content": "Title: Interesting Theoretical Analysis on Different Models. \nReview Content: **Paper summary**:\nThis paper tackles the problem of modeling monotonic and non-monotonic reasoning rules in different embeddings spaces, mostly vectors, but under different score functions used by the vector embeddings. The paper first provides the definition of monotonic reasoning, then iterates through a few vector-based embedding and label functions to check if they are able to model the monotonic reasoning part. Then the paper proposed a Relu based model that is able to model both monotonic and non-monotonic reasoning. \n\n\n**Strength**:\n- This paper did a good theoretical analysis on monotonic and non-monotonic kinds of reasoning for word embeddings. \n- They iterate different embeddings and label functions for Euclidean vector embeddings. For cases that the proposed methods did not work, they give enough counterexamples and provide proof. \n- The paper also includes a Relu based model that is able to model both monotonic and non-monotonic reasoning.\n\n**Weakness / Questions / Suggestions**\n- There are many counterexamples in the paper to show that existing embedding and score functions can not model monotonic reasoning. Still, there are limited real-world examples to reflex the effect except for the one example after counterexample 2. If such counterexample does not or seldom exist in real data, the claim and the effect of the counterexample will be weakened. \n- The different combinations of the vector embedding model and label score function are limited. \n- The writing of the paper is a bit hard to follow, probably due to my limited familiarity with propositional logic. But it would be helpful to refine the writing, especially the part at the end of the introduction where monotonic reasoning and non-monotonic reasoning are first brought up. 
One specific suggestion is that it would be helpful to move the example of “parent, female and mother” right after the definition of the specific embedding and label functions, to make it easier to see why unwanted dependencies are created due to these functions. \n- Echoing the first point, this is a theoretical analysis paper, in which a lot of the examples are hypothetical. Whether such reasoning exists in real data, and how much of it is there, remains a question. In addition, for models where you can model monotonic and non-monotonic reasoning, it remains a question of how easy it is to reach the perfect solution, since no experiments are performed, not to mention that the ReLU function tends to have a dead-gradient issue, which makes training harder. \n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review\nRegarding the real-world relevance of the counterexamples, one point to note is that Counterexample 3 and its variants rely on a more intricate set of dependencies than Counterexample 2. This suggests that the theoretical limitations of the corresponding strategies might indeed be somewhat less problematic in practice (compared to the strategy underpinning Counterexamples 1 and 2). However, while the types of reasoning that are required for Counterexample 3 may feel somewhat artificial, there could be other sets of dependencies that cannot be modelled with the corresponding strategies either, and which are more likely to occur in practice. More generally, the extent to which the theoretical limitations of particular strategies are problematic in practice is likely to be application-dependent.\n \nRegarding the considered combinations of embedding and labelling functions, clearly the analysis cannot be exhaustive in this respect. However, the considered combinations include the most commonly used approaches, including averaging based strategies and skipgram-like embeddings. Moreover, it should be noted that some of the results could straightforwardly be generalised. For instance, the Sigmoid based embeddings only rely on the fact that entity embeddings are a conical combination of attribute embeddings. The labelling functions include the common case where properties are predicted using a linear classifier (or a sigmoid layer) and a distance-based approach.\n\nWe appreciate the suggestion for improving the readability of the paper.\n\nIt is indeed the case that strategies which can successfully simulate monotonic and non-monotonic reasoning in theory may not work well in practice. The aim of these “positive” results is merely to show that it is possible to simulate reasoning, even for relatively simple embedding strategies, rather than to recommend particular embedding strategies. In fact, it should still be possible to simulate monotonic and non-monotonic reasoning when replacing ReLU by sigmoid, although the proof would be “messier”. Indeed, for the entity embedding $(y_1,…,y_{l+1})$ in the proof of Proposition 1, we have that $\sigma(y_i)$ can be made arbitrarily close to 0 by choosing $\delta$ sufficiently large. Many other types of non-linearities can be used as well. 
For instance, if we guarantee that all coordinates are strictly positive, we can choose $f(x_1,…,x_{l+1}) = (x_1^{-k},…,x_{l+1}^{-k})$ for $k>1$, again using a very similar argument to the one in Proposition 1.", "role": "user" }, { "content": "Thank you for the authors' feedback. It adequately addresses the points I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
U2adryQpew5
AKBC 2021 Conference
Reviewer_ceQ9
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ceQ9, and the following are the paper you need to review:\n```<<U2adryQpew5>>```", "role": "user" }, { "content": "Title: Excellent paper exploring fundamental questions related to representational capacity\nReview Content: ### Summary\nThis paper provides a thorough analysis of the representational capacity of KB embedding models, analyzing the extent to which they can capture logical dependencies of a knowledge base. The authors consider a large range of common scoring functions, and both monotonic and non-monotonic dependencies. The authors make salient points as to the relevance of this sort of analysis - i.e. if a model is incapable of representing such dependencies then it cannot possible capture them from data - however there are a few limitations. For one, the authors are focusing on a setting where entity embeddings are pooled representations of attribute representations, which is not particularly standard (as acknowledged by the authors). Furthermore, there is a question of the extent to which the counterexamples as presented often manifest in real data, and how poorly a model which compromises to meet its objective would perform on various relevant queries. 
Both of these are reasonable limitations, however, in that they make the theoretical analysis possible and suggest conclusions that may reasonably apply to real-world data settings.\n\n### Questions\n\nAfter equation (5): \"Here, we need to add this term, which amounts to imposing a Gaussian prior, to ensure that $Emb_{sig}$ is well-defined.\" Is it trivial to see that this actually ensures $Emb_{sig}$ is well-defined?\n\n\n### Suggestions\n\nFor the counterexamples, it may be clearer to read if they are presented as statements and proofs, for example:\n\n> Counterexample #: The knowledge base $K = \\{\\text{ (rules) }\\}$ cannot be represented using $E_\\text{something}$, $L_\\text{something}$.\n> **Proof:** (details here)\n\nThe statement that the set of $K$ which are representable using $Emb_{norm}$ are equivalent for $Lab_{dot}$ and $Lab_{dist}$ (following Counterexample 3) could be made into a proposition.\n\n### Typos\n\nAfter Counterexample 2: \"result\" -> \"results\"\n\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to review\nOur setting does not cover approaches where entity embeddings are learned from text descriptions using deep neural networks, for instance, but the idea of learning entity embeddings by pooling attribute vectors is nonetheless rather common. First, this setting encompasses skipgram-like models, which remain popular, even in combination with transformer models. For instance, [1] proposes a model for representing entities in which “attribute vectors” correspond to embeddings of sentences that mention the entity. It follows from our analysis of the sigmoid based embeddings that this model cannot capture arbitrary propositional dependencies. Second, averaging based approaches are commonly used in NLP (e.g. representing topics as weighted averages of document vectors, or document/sentences as weighted averages of word vectors). Finally, while we have focussed on the propositional setting, the limitations we identified are also relevant for knowledge graph embedding. This is clearly the case for inductive knowledge graph embedding methods, which often proceed by computing the embedding of an unknown entity by inferring an “attribute vector” from each of the triples mentioning that entity, and then taking a weighted average of these vectors. Moreover, many standard knowledge graph embedding methods essentially proceed by iteratively updating entity vectors in a similar fashion. We therefore suspect that the results from our paper can be used to identify theoretical limitations of standard knowledge graph embedding models, although a formal analysis of such limitations is left for future work.\n\nWe agree that the theoretical limitations we have identified may not tend to be problematic in practice, although this will be task-dependent. Our personal motivation for studying these limitations is inspired by applications where embeddings are used as knowledge bases, where guarantees on expressiveness are perhaps more important. Please also see our response to the first review on this point. \n\nThe regularisation term in $\\textit{Emb}_{\\textit{sig}}$ ensures that the maximum is attained, which was the intended purpose. 
However, this does not guarantee uniqueness, which we will clarify in the final version; if multiple maxima exist, the results hold regardless of which one is chosen.\n\nWe appreciate the suggestions for improving the readability of the paper.\n\n[1] Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski:\nLearning Cross-Context Entity Representations from Text. CoRR abs/2001.03765 (2020)", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It resolves the majority of my concerns. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
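The pooling-based setting at the center of this review and response (entity vectors built from attribute vectors and scored by a linear or sigmoid labelling function) can be illustrated with a short sketch. The attribute names, the choice of averaging as the pooling operation, and the dot-product scorer below are generic assumptions for illustration; they are not the paper's actual definitions of its embedding and labelling functions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
attributes = ["parent", "female", "mother", "employed"]
attr_vecs = {a: rng.normal(size=dim) for a in attributes}

def embed_entity(attrs):
    # Pooling-based entity embedding: here, the average of the entity's attribute vectors.
    return np.mean([attr_vecs[a] for a in attrs], axis=0)

def label_score(entity_vec, attribute, bias=0.0):
    # Dot-product labelling function passed through a sigmoid (a linear per-attribute classifier).
    z = float(entity_vec @ attr_vecs[attribute]) + bias
    return 1.0 / (1.0 + np.exp(-z))

e = embed_entity(["parent", "female"])
# Whether dependencies such as "parent and female imply mother" can always be captured
# by such embedding/labelling pairs is the kind of question the paper studies formally.
print(label_score(e, "mother"))
```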
U2adryQpew5
AKBC 2021 Conference
Reviewer_wDBJ
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_wDBJ, and the following are the paper you need to review:\n```<<U2adryQpew5>>```", "role": "user" }, { "content": "Title: Decent paper with a good contribution. However, the writing can be significantly improved.\nReview Content: This paper looks at entity embeddings and theoretically tries to answer which kinds of semantic dependencies can be modeled by them. Mainly focusing on settings where the entity embeddings are obtained by pooling the embeddings of its attributes (although this is a strong assumption), this paper presents various theoretical limitations of the embedding approaches in learning logical/structured attribute representations with known logical or structured dependencies in practice. The paper shows that some of the most popular embedding models are not able to capture basic logical rules.\n\nI am not an expert in this topic. However, some of the assumptions made in the paper seem too strong to me e.g. that entity embeddings are obtained by pooling the embeddings of its attributes seems like a strong assumption. Most popular embedding approaches do not seem to follow this assumption. I also found the paper hard to follow. The authors can perhaps work a bit on making this paper accessible to those not working exactly on this problems. For example, the assumptions and the results and how it relate to well known entity embedding approaches can be made more clear. The introduction is too long and already presents technical details, so rewriting it might help.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer's evaluation is an educated guess\n", "role": "assistant" }, { "content": "Title: Response to review\nThe assumption that entity embeddings are obtained by pooling attribute vectors indeed restricts the scope of the framework. However, this setting still covers a wide range of popular approaches. Please also see our response to Reviewer 2.\n\nWe will attempt to make the paper more accessible, and would welcome any further suggestions on this point.", "role": "user" }, { "content": "Thank you for the authors' feedback. Most of my concerns have been addressed. I continue to recommend this work positively.", "role": "assistant" } ]
V61-62OS4mZ
AKBC 2021 Conference
Reviewer_ygGy
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ygGy, and the following are the paper you need to review:\n```<<V61-62OS4mZ>>```", "role": "user" }, { "content": "Title: Well-written paper with interesting idea but the evaluation could be strengthened\nReview Content: This paper studies the problem of knowledge base (KB) reasoning and propose to learn new concepts via clustering vectorized (propositionalized) entities and use the resulted entity-concept membership information as data augmentation for neural link prediction models.Two methods of finding such concepts and entity-concept memberships are proposed, one based on hard clustering of entity embeddings, and one based on an EM-style joint learning with the base neural link prediction model. Experiments are mainly conducted on FB15K-237 and WN18RR, which shows that the proposed methods can improve over ComplEx and DistMult, especially when sparsity is high.\n\nStrengths:\n\n- The paper is well-written and pretty easy to follow\n- The proposed concept formation idea is interesting and plausible\n- Some of the experiment designs are well thought-out and the results seem supportive\n\nWeaknesses:\n\n- I have reservations about the whole discussion around the concepts in existing knowledge graphs/bases. Concepts (or types/classes) are a central part of many KBs/KGs. I find it surprising that for the lack of quantitative intrinsic evaluation of clustering quality, it's claimed that \"to the best of our knowledge there exists no large KG for which such information is available\". Many KGs have a type system. For example, Freebase, of which FB15K-237 is a subset, has a strong type system. There's polymorphism and one entity could belong to multiple types, which is inconsistent with the hard partition assumption of the proposed methods (that is a strong assumption that could be easily violated), but I think one could at least use the Notable_Types in Freebase to do a reasonable large-scale quantitative evaluation of clustering quality. I'm not sure if the Freebase types and entity-type memberships are retained in FB15K-237 or are pruned in the downsampling process. If pruned, it will be interesting to see a comparison with the original Freebase type information added back to FB15K-237. 
Note that it's okay if the comparison is not favorable to the proposed method, because such type information is not always available for any KG, but simply omitting its existence is not okay.\n\n- The improvement over baselines, while statistically significant, is somewhat marginal, and it's not clear whether the same improvement would translate to more advanced base models proposed more recently.\n\n- It's not entirely clear to me why directly using embeddings learned by a neural link prediction model for clustering doesn't work well, and why using the method from Das et al [2020] would work instead. The provided explanation (\"This could be due to the latent concept information which we explicitly introduce via augmentation being already captured by the neural link predictor embeddings\") is hand-wavy. \nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer ygGy\nWe would like to thank the reviewer for their insightful comments and suggestions, and for deeming our proposed concept formation framework interesting and experiments well thought-out.\n\n> If pruned, it will be interesting to see a comparison with the original Freebase type information added back to FB15k-237.\n\nFollowing the reviewer’s suggestion, we have tried to make this comparison but were unable to retrieve any type information for Freebase IDs from Wikidata or Freebase dumps.\n\n> The improvement over baselines, while statistically significant, is somewhat marginal, and it's not clear whether the same improvement would translate to more advanced base models proposed more recently.\n\nFirst, we would like to point out that, when combined with modern training procedures (as we do), ComplEx continues to achieve state-of-the-art performance compared to more recent alternatives -- please refer to [1] for a recent overview.\n\nSecond, we believe our performance gains are not marginal, especially when looked at under the light of the sparsification experiment, which is the main motivation to use concept information. \n\nThird, please also note that our gains are on the same order of magnitude as other widely adopted regularization methods for neural link predictors. For example, our approach (Table 4) is either on par or outperforms the gains obtained by the reciprocal relation augmentation proposed by Lacroix et al, 2018, which is ubiquitously used nowadays. (Furthermore, as we show in Table 4, in most cases, combining the two augmentations is possible and yields the largest improvement).\n\n[1] D. Ruffinelli et al. - You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings. ICLR 2020\n\n> It's not entirely clear to me why directly using entity embeddings doesn't work well, and why using the method from Das et al. [2020] would work instead.\n\nThank you for your comment - please see our joint response and the revised paper where we added an empirical evaluation (Appendix D, Table 10) where we investigated performing ConFormA with clustering over ComplEx embeddings.\n Global/general Response:\n```\nWe would like to thank all the reviewers for their insightful suggestions and comments. 
In addition to more detailed individual replies, we clarify some common themes here.\n\n\n**ConFormA: clustering entity embeddings learned by neural link predictors**\n\nReviewers FyiY and ygGy both inquired about empirical evidence regarding the statement that performing ConFormA via directly clustering entity embeddings does not result in performance improvements in downstream link prediction tasks, and the reasons why this is the case.\nFirst, we would like to make it clear that it is not that clusters learned directly on latent embeddings are not meaningful but rather that they are not improving the accuracy of the neural link predictor -- which is the key objective of our work. In fact, the concept-membership information contained in the embedding representations is already implicitly exploited by the neural link predictor, and reifying it adds little information to the model.\n\nTo demonstrate this, we have updated the paper (Appendix D, Table 10) to include an empirical investigation in which we augment both WN18RR and FB15K-237 with concepts by clustering ComplEx embeddings. We report MRR results for a range of embedding sizes and number of clusters and find that in most cases the augmentation either degrades the accuracy of the neural link predictor, or yields no significant improvements. \n\nAs for why performing ConFormA using random paths representations leads to an improvement, sometimes even surpassing ConFormAE, we point to [1] where authors already observed that the “strengths of latent and graph-based models are often complementary.” Using graph-based features rather than latent features for computing the concept membership can provide additional information, improving the prediction accuracy in downstream link prediction tasks. \n\n[1] Nickel, M. et al. - A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104(1): 11-33 (2016) - https://arxiv.org/abs/1503.00759\n\n```\n", "role": "user" }, { "content": "Title: Follow-up Discussion\nAre you using the `type.object.type` property to pull type information from the Freebase dump? You could also try `type.object.notable_type` which will give you a smaller set of notable types. Considering that FB15K was based on a rather old version of Freebase, it’s likely that entities are still identified using their `en.` IDs like `en.barack_obama`, which was later deprecated in Freebase and replaced with uniform machine IDs such as `m.xxxxxx`. So If you were trying to find type information using the `en.` ID of entities, you won’t be able to fetch any.\n\nAlong the same line, I think the paper could be strengthened by adding a careful discussion of the ontology/concepts/type systems of existing knowledge graphs/bases. As I indicated, concepts are a core part of KGs/KBs. The fact that the commonly used KB embedding benchmarks don’t happen to include the type system (I’m not entirely sure but it seems so according to the authors) doesn’t mean that it doesn’t exist. As the proposed technique is claimed to be rooted in concepts, a careful discussion is necessary.", "role": "assistant" }, { "content": "Title: Freebase entity types, role of concepts in KGs\n> Freebase type information\n\nWe sincerely thank the reviewer for the detailed information on how to retrieve the entity types. We have been able to extract the type property from the Freebase dump for every entity in FB15k-237, which has allowed us to perform a range of insightful quantitative and qualitative experiments. 
We perform a similar analysis on UMLS for which we have also been able to extract ‘semantic group’ information. \nIn Appendix G of the updated paper, we provide a quantitative analysis (Normalized Mutual Information and Jaccard Index) of the clusterings learned by ConFormA and ConFormAE against the gold clusters coming from the retrieved types (Freebase) or semantic groups (UMLS) and compare against the other baseline that reviewers mentioned in their comments: directly clustering neural link predictor embeddings. \n\nHere we summarize our findings (see also updated paper):\nFor UMLS, we find that ConFormA with random paths and ConFormAE outperform clustering neural link predictor embeddings and recover the group information rather faithfully, with ConFormAE achieving the best scores overall.\nFor FB15k-237, we find that across all 3 approaches the NMI is relatively low, indicating that all the methods struggle to recover the concepts as specified by types.\nConFormA with relatively low-dimensional random paths representations (d=474) consistently outperforms clustering neural link predictor embeddings, even of significantly larger dimensions.\nConFormAE initialized with random paths representations improves over all other approaches.\n\n> Discussion of the ontology/concepts/type systems of existing knowledge graphs/bases.\n\nThank you for this suggestion as well: in the revised version we discuss how concepts are first-class citizens in the knowledge representation formalisms used by Knowledge Graphs, like RDF and RDFS.\n\n Global/general Response:\n```\nWe provide in Appendix G of the last uploaded revision a quantitative analysis of the clusterings delivered by ConFormA and ConFormAE on UMLS (based on the semantic groups of [1]) and on FB15K-237 (based on the `notable_type` attributes of the FB dump as suggested by reviewer ygGy.\n\nWe evaluate the quality of clusterings by the Normalized Mutual Information score (or its generalized version to deal with the overlapping clusters found in FB) between the clusterings learned by ConFormA and ConFormAE against the golden clusters and compare against the other baseline that reviewers mentioned in their comments: directly clustering neural link predictor embeddings.\n\nSee the updated revision for details. In summary: on UMLS, ConFormA and ConFormAE outperform clustering neural link predictor embeddings and recover the group information rather faithfully, with ConFormAE achieving the best scores overall. For FB15k-237, we find that across all 3 approaches the NMI is relatively low, indicating that all the methods struggle to recover the concepts as specified by types. \n\n[1] Olivier Bodenreider and Alexa T. McCray. Exploring semantic groups through visual approaches.J. Biomed. Informatics, 36(6):414–432, 2003\n```\n", "role": "user" }, { "content": "I am delighted to have engaged in the above conversation with the author, which undoubtedly enhances the strength of this paper.", "role": "assistant" } ]
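The cluster-quality evaluation described in this thread — scoring a learned hard clustering against gold type labels such as UMLS semantic groups or Freebase notable types with Normalized Mutual Information — can be sketched as follows; the entities, gold labels, and cluster ids are toy values rather than the paper's data.

```
from sklearn.metrics import normalized_mutual_info_score

# Gold concept label per entity (e.g., a UMLS semantic group or a Freebase notable type).
gold      = ["city", "city", "person", "person", "disease", "disease"]
# Hard cluster id assigned to the same entities by the method being evaluated.
predicted = [0, 0, 1, 1, 1, 2]

nmi = normalized_mutual_info_score(gold, predicted)
print(f"NMI = {nmi:.3f}")  # 1.0 would mean the clustering recovers the gold types exactly
```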
V61-62OS4mZ
AKBC 2021 Conference
Reviewer_d4i2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_d4i2, and the following are the paper you need to review:\n```<<V61-62OS4mZ>>```", "role": "user" }, { "content": "Title: Well-written paper, but more analyses on clustering quality would be nice\nReview Content: The paper aims to improve the performance of neural link predictors on sparse KGs, by augmenting KGs with clusters of concepts. It proposes two versions of augmentations: (1) ConFormA which inserts the clustered concept, and (2) ConformAE which jointly learn the concepts and embeddings through an EM scheme. The authors show that the augmentation helps improve performance of two baselines. on different datasets.\n\nStrength:\n\nS1. The research topic is interesting.\n\nS2. The paper has clear motivations for each section. I especially like Q1-Q4 set at the beginning of the experiment.\n\nS3. The paper explains its results nicely, with hypotheses/explanations for all the phenomena.\n\nS4. The paper provides detailed appendices with additional experiment setups and results which are important for reproducibility.\n\nWeaknesses / questions:\n\nW1. The quality of clusters. \n\nFrom the method description, It looks like the augmentation quality is greatly impacted by the clustering quality, and so I would appreciate it if the authors can offer more detailed analyses on the clustering results. For example, would the clustering also work less stably for more sparse concepts? Are their ambiguous entity that can belong to more than one concept which may cause the hard clustering to backfire?\n\nI understand it might be difficult to do scaled quantitative analysis, but on top of the (nice) qualitative tables in the paper, I wish to at least see some more rigorous manual inspection. For example, if we manually inspect 20 concepts, in the top-10/ last-10 clusters, what's the ratio of correctly clustered entities? Are the sparse entities correctly clustered? Are there some quality difference between concepts from ConFormA and ConFormAE?\n\nW2. Augmentation method.\n\nI'm curious why the authors only choose to add ISA relationships. If the concept C for an entity E1 is indeed correct, would it work better/worse if the we also extended links between (E1,E2) to (C, E2)?\n\nW3. 
(minor) the paper content can be better organized.\n\ne.g. Table 1/2 are referred in the text after Table 3, and so it would also be nice if they occur closer to when they are explained in the text.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer d4i2 and Additional Quantitative Results on Cluster Quality\nWe would like to thank the reviewer for their valuable comments and suggestions. We are glad that the reviewer finds the work interesting and our hypotheses and experiments to be well-explained and foster reproducibility. Below we address the points raised by the reviewer.\n\n**W1**: Quality of Clusters\n \nWe concur with the reviewer’s concern that it is important to examine cluster quality in a quantitative manner - the primary challenge we have encountered in attempting to do this revolves around the lack of availability of ground-truth cluster information for large KGs, such as WN18RR or FB15K-237. The sheer scale of these KGs renders manual analysis challenging, as cluster size can range from a few dozen to over a thousand entities. \n> “For example, if we manually inspect 20 concepts, in the top-10/ last-10 clusters, what's the ratio of correctly clustered entities? Are the sparse entities correctly clustered?” \n\nGiven that we do not have access to ground-truth information of cluster assignments, how does the reviewer suggest we define ‘top-10, last-10’ clusters? By showing whole clusterings as results, we demand this qualitative judgment from the reader. \n\n> “Are there some quality differences between concepts from ConFormA and ConFormAE?”. \n\nHow does the reviewer propose we do this? In Table 8 we have provided the entire clustering of UMLS for ConFormAE - we have now also included the entire clustering of UMLS using ConFormAE (Appendix C, Table 9), with random paths representations. It is worth noting that the number of clusters in ConFormAE changes dynamically during training, while in ConFormA the number of clusters stays constant - this renders comparing the two clusterings challenging. \n\n> “Are there ambiguous entities that can belong to more than one concept which may cause the hard clustering to backfire?” \n\n The focus of this work is on hard clustering, extending it to soft-clustering would require a soft-augmentation scheme, which is definitely an interesting avenue for future research. If an entity naturally belongs to multiple clusters, a good hard clustering would select only one of them, the most likely. While some information is lost, it is hard to see how this could “backfire”.\n\n**W2:** Augmentation Method\n> “If the concept C for an entity E1 is indeed correct, would it work better/worse if we also extended links between (E1, E2) to (C, E2)?” \n \nWe tinkered with this possibility but realized that concept augmentations are meaningful depending on the type of relationship between entities. Consider for instance the relationship ‘capital_Of’. For a triple like (Rome, capital_Of, Italy), and (Rome, is_A, City) being the reified concept triple -- adding a triple (City, capital_Of, Italy) would be meaningless. 
Equivalently, we could not automatically add (Italy, is_A, City).\nDevising how and when a richer automatic relation-based augmentation scheme can be obtained is an interesting future research direction.\n\n**W3.** We thank the reviewer for suggesting improving the order of the tables to improve readability, we have updated the current version accordingly.\n\nAs you suggested, we are providing a quantitative analysis that should help understand in a more rigorous way how good the clusters learned by ConFormA/E are.\nPlease see our most recent global answer and our updated revision (Appendix G) for details.\n Global/general Response:\n```\nWe would like to thank all the reviewers for their insightful suggestions and comments. In addition to more detailed individual replies, we clarify some common themes here.\n\n\n**ConFormA: clustering entity embeddings learned by neural link predictors**\n\nReviewers FyiY and ygGy both inquired about empirical evidence regarding the statement that performing ConFormA via directly clustering entity embeddings does not result in performance improvements in downstream link prediction tasks, and the reasons why this is the case.\nFirst, we would like to make it clear that it is not that clusters learned directly on latent embeddings are not meaningful but rather that they are not improving the accuracy of the neural link predictor -- which is the key objective of our work. In fact, the concept-membership information contained in the embedding representations is already implicitly exploited by the neural link predictor, and reifying it adds little information to the model.\n\nTo demonstrate this, we have updated the paper (Appendix D, Table 10) to include an empirical investigation in which we augment both WN18RR and FB15K-237 with concepts by clustering ComplEx embeddings. We report MRR results for a range of embedding sizes and number of clusters and find that in most cases the augmentation either degrades the accuracy of the neural link predictor, or yields no significant improvements. \n\nAs for why performing ConFormA using random paths representations leads to an improvement, sometimes even surpassing ConFormAE, we point to [1] where authors already observed that the “strengths of latent and graph-based models are often complementary.” Using graph-based features rather than latent features for computing the concept membership can provide additional information, improving the prediction accuracy in downstream link prediction tasks. \n\n[1] Nickel, M. et al. - A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104(1): 11-33 (2016) - https://arxiv.org/abs/1503.00759\n\n```\n\n Global/general Response:\n```\nWe provide in Appendix G of the last uploaded revision a quantitative analysis of the clusterings delivered by ConFormA and ConFormAE on UMLS (based on the semantic groups of [1]) and on FB15K-237 (based on the `notable_type` attributes of the FB dump as suggested by reviewer ygGy.\n\nWe evaluate the quality of clusterings by the Normalized Mutual Information score (or its generalized version to deal with the overlapping clusters found in FB) between the clusterings learned by ConFormA and ConFormAE against the golden clusters and compare against the other baseline that reviewers mentioned in their comments: directly clustering neural link predictor embeddings.\n\nSee the updated revision for details. 
In summary: on UMLS, ConFormA and ConFormAE outperform clustering neural link predictor embeddings and recover the group information rather faithfully, with ConFormAE achieving the best scores overall. For FB15k-237, we find that across all 3 approaches the NMI is relatively low, indicating that all the methods struggle to recover the concepts as specified by types. \n\n[1] Olivier Bodenreider and Alexa T. McCray. Exploring semantic groups through visual approaches.J. Biomed. Informatics, 36(6):414–432, 2003\n```\n", "role": "user" }, { "content": "I acknowledge the authors' response. It adequately addresses the points I raised. I continue to recommend this work for acceptance.", "role": "assistant" } ]
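The augmentation step discussed in this thread — reifying each entity's hard cluster assignment as an extra isA triple while leaving ordinary relations such as capital_Of untouched — can be roughly sketched as follows; the cluster names and triples are invented for illustration, not taken from the paper.

```
# Hard cluster assignment per entity (toy values).
cluster_of = {"Rome": "concept_0", "Paris": "concept_0", "Italy": "concept_1", "France": "concept_1"}

def augment_with_concepts(triples, cluster_of, isa_relation="isA"):
    """Add one (entity, isA, concept) triple per entity; ordinary relations are not propagated."""
    concept_triples = sorted((e, isa_relation, c) for e, c in cluster_of.items())
    return list(triples) + concept_triples

kg = [("Rome", "capital_Of", "Italy"), ("Paris", "capital_Of", "France")]
for triple in augment_with_concepts(kg, cluster_of):
    print(triple)
```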
V61-62OS4mZ
AKBC 2021 Conference
Reviewer_FyiY
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_FyiY, and the following are the paper you need to review:\n```<<V61-62OS4mZ>>```", "role": "user" }, { "content": "Title: Contributions not exactly novel\nReview Content: The authors propose two algorithm for unsupervised concept learning in knowledge graphs: ConFormA and ConFormAE, where the latter is iteratively optimizing the concepts. In their experiments, the authors investigate the semantic meaningfulness of the concepts (qualitatively), the effect on link prediction performance, the impact in low-data regimes, and the generalization ability for relation types of different frequency.\n\nStrengths:\n- The research questions are interesting and the experimental design is solid.\n\nWeaknesses:\n- The main weakness of the paper is that both contributions (1. task of unsupervised concept formation, 2. algorithms based on entity clustering and KG augmentation) are not novel but have been investigated before, e.g., in Gad-Elrab et al. (ISWC 2020, https://openreview.net/forum?id=wbVVXSziQFQ).\n- In general, the related work section of the paper seems to be outdated as the latest cited paper in that section is from 2018.\n- The meaningfulness of the concepts is evaluated only qualitatively by manually looking at some examples. This leaves room for questions about their actual quality. Although a large quantitative evaluation might not be possible due to missing gold data, there are other possibilities, such as user studies or manual annotations of a subset of clusters.\n\nOther comments:\n- Although the authors say that clustering KG embeddings did not lead to improvements, it would be valuable to add a comparison to this approach to the paper.\n- Do the authors have an explanation why the joint training (i.e., ConFormAE) did not clearly outperform ConFormA? This is a bit unexpected. Could one reason be that the E and M steps are only loosely connected (in the form of the augmented graph but not in the form of the used embeddings for instance)? 
In general, I see only a loose connection of Algorithm 2 to \"Expectation-Maximization\" as the M-step optimizes a parameter set that is not used in the E-step.\n- It is not clear how the number of clusters was chosen (neither the range nor how it was optimized) and which number was used in which experiments. Since it can be expected that this choice has a large impact on the performance, this would be very important to clarify.\n- The introduction should also cite TransE as this was a link prediction approach clearly before the one that was cited.\n- I don't follow the first reason why ConFormA should improve link prediction performance. Can the authors elaborate on that? Which knowledge does ConFormA add that is not implicitly contained in the embeddings for link prediction already? (This is even given as one reason for not using embeddings for clustering)\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer FyiY and More quantitative results\nThank you for your valuable feedback and comments! Here we address your concerns.\n\n> Relationship with “ExCut: Explainable Embedding-Based Clustering over Knowledge Graphs”\n\nThank you for sharing the “ExCut: Explainable Embedding-Based Clustering over Knowledge Graphs” (https://resources.mpi-inf.mpg.de/d5/excut/ExCut_TR.pdf) paper -- we were not aware of this work, it’s definitely related! We will make sure to incorporate it in the revised version of this paper.\n\nThe first difference with Gad-Elrab’s work lies in our different objective: they aim at performing clustering for the sake of unsupervised explanations, whilst we focus on improving the accuracy of a downstream neural link prediction model. Our research question is whether representation learning models for Knowledge Graphs can learn better representations when such concept memberships are materialized in the Knowledge Graph.\n\nA similarity between our work and theirs lies in their iterative process for refining cluster memberships (Fig. 2 in the ExCut paper). This can be thought of as a particular instance of ConFormA where K-Means is used on latent embeddings and when the reification strategy of Ex-Cut materializes isA relationships. While we allow for other clustering routines, Ex-Cut allows for additional reification strategies, combining both could be an interesting future research direction. We will discuss it in the camera-ready version of this paper.\n\nAnother difference can be found in our additional contribution, ConFormAE, as a way to overcome the choice of a particular clustering mechanism and as a principled probabilistic way to elicit entity concepts. There is no equivalent of the EM-based optimization loop of ConFormAE in Ex-Cut, which is heuristics in nature and lacks a clear optimization objective. We speculate that one could extend Ex-Cut to use a ConFormAE-like iterative scheme to learn clusters and explanations.\n\n> Experiments with ConFormA and KG embeddings: “do the authors have an explanation why the joint training (ConFormAE) did not clearly outperform ConFormA?”\n\nThank you for your comments -- we added the experimental results for ConFormA with KG embeddings in the revised version of the paper (Appendix D, Table 9). 
Please see also our answer to this question in the joint answer we posted to all reviewers.\n\n> Relationship between Algorithm 2 and Expectation-Maximization: the M-step optimizes a parameter set that is not used in the E-step.\n\nIn our framework, the E-step materializes the hard concept memberships, which depend on the current set of parameters (including embeddings for entities, clusters and ISA relationships). This is shown in the maximization problem we are solving over the predicted scores (please see line 7 of Algorithm 2). Then, the M-step updates all these parameters.\n\n> It is not clear how the number of clusters was chosen (neither the range nor how it was optimized) and which number was used in which experiments.\n\nAll the hyperparameters, including the range for each dataset and how it was optimized (using the MRR on the held-out validation set), are specified in Appendix F. Moreover, the impact of varying the number of clusters on MRR for both WN18RR and FB15K-237 has been investigated and reported in Figure 4. \n\n\n> The introduction should also cite TransE as this was a link prediction approach clearly before the one that was cited.\n\nThank you for pointing this out, indeed a few neural link prediction models are not being cited in the paper, while we referred to the recent survey of Nickel et al [1]. We addressed this issue in the revised version.\n\n> Which knowledge does ConFormA add that is not implicitly contained in the embeddings for link prediction already?\n\nClustering via random paths in ConFormA adds long-range information that is generally not captured by latent embeddings. This has been already observed in [1] where authors state that the “strengths of latent and graph-based models are often complementary.” \nIn fact, in our experiments, we show that, when concept memberships are initialized with graph-based features of the entities, such as their membership to paths, this can significantly improve the accuracy of downstream neural link prediction models.\n\n[1] Nickel, M. et al. - A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104(1): 11-33 (2016) - https://arxiv.org/abs/1503.00759\n\nThanks to the suggestion of Reviewer ygGy, we were able to provide an additional quantitative study over the quality of clusters learned by ConFormA/E.\nIn a nutshell: ConFormA and ConFormAE deliver better clusters than directly clustering neural link predictor embeddings according to normalized mutual information scores and for different embedding sizes, base neural link prediction (ComplEx, DistMult) and across initializations with different number of clusters. Please see our updated revision, Appendix G, as well as our answers to other reviewers.\n Global/general Response:\n```\nWe would like to thank all the reviewers for their insightful suggestions and comments. In addition to more detailed individual replies, we clarify some common themes here.\n\n\n**ConFormA: clustering entity embeddings learned by neural link predictors**\n\nReviewers FyiY and ygGy both inquired about empirical evidence regarding the statement that performing ConFormA via directly clustering entity embeddings does not result in performance improvements in downstream link prediction tasks, and the reasons why this is the case.\nFirst, we would like to make it clear that it is not that clusters learned directly on latent embeddings are not meaningful but rather that they are not improving the accuracy of the neural link predictor -- which is the key objective of our work. 
In fact, the concept-membership information contained in the embedding representations is already implicitly exploited by the neural link predictor, and reifying it adds little information to the model.\n\nTo demonstrate this, we have updated the paper (Appendix D, Table 10) to include an empirical investigation in which we augment both WN18RR and FB15K-237 with concepts by clustering ComplEx embeddings. We report MRR results for a range of embedding sizes and number of clusters and find that in most cases the augmentation either degrades the accuracy of the neural link predictor, or yields no significant improvements. \n\nAs for why performing ConFormA using random paths representations leads to an improvement, sometimes even surpassing ConFormAE, we point to [1] where authors already observed that the “strengths of latent and graph-based models are often complementary.” Using graph-based features rather than latent features for computing the concept membership can provide additional information, improving the prediction accuracy in downstream link prediction tasks. \n\n[1] Nickel, M. et al. - A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104(1): 11-33 (2016) - https://arxiv.org/abs/1503.00759\n\n```\n\n Global/general Response:\n```\nWe provide in Appendix G of the last uploaded revision a quantitative analysis of the clusterings delivered by ConFormA and ConFormAE on UMLS (based on the semantic groups of [1]) and on FB15K-237 (based on the `notable_type` attributes of the FB dump as suggested by reviewer ygGy.\n\nWe evaluate the quality of clusterings by the Normalized Mutual Information score (or its generalized version to deal with the overlapping clusters found in FB) between the clusterings learned by ConFormA and ConFormAE against the golden clusters and compare against the other baseline that reviewers mentioned in their comments: directly clustering neural link predictor embeddings.\n\nSee the updated revision for details. In summary: on UMLS, ConFormA and ConFormAE outperform clustering neural link predictor embeddings and recover the group information rather faithfully, with ConFormAE achieving the best scores overall. For FB15k-237, we find that across all 3 approaches the NMI is relatively low, indicating that all the methods struggle to recover the concepts as specified by types. \n\n[1] Olivier Bodenreider and Alexa T. McCray. Exploring semantic groups through visual approaches.J. Biomed. Informatics, 36(6):414–432, 2003\n```\n", "role": "user" }, { "content": "Thank you for the authors' reply. It answers most of my concerns, and I will consider raising the rating score.", "role": "assistant" } ]
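The baseline raised in this thread — clustering the entity embeddings learned by a neural link predictor such as ComplEx and treating the hard assignments as concepts — amounts to something like the sketch below; the embedding matrix is random placeholder data and the number of clusters is an arbitrary illustrative choice.

```
import numpy as np
from sklearn.cluster import KMeans

num_entities, dim, num_concepts = 1000, 64, 20
# Stand-in for the entity embedding table of a trained link predictor (e.g., ComplEx).
entity_embeddings = np.random.default_rng(0).normal(size=(num_entities, dim))

kmeans = KMeans(n_clusters=num_concepts, n_init=10, random_state=0)
concept_of = kmeans.fit_predict(entity_embeddings)  # one hard concept id per entity
print(concept_of[:10])
```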
VX0swzJEzpg
AKBC 2021 Conference
Reviewer_e1h2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_e1h2, and the following are the paper you need to review:\n```<<VX0swzJEzpg>>```", "role": "user" }, { "content": "Title: Nice extension of multi-instance learning for biomedical RE supported by strong results\nReview Content: ### Summary\n\nThis paper proposes a variant of multi-instance learning (MIL) for biomedical entity extraction called abstractified multi-instance learning (AMIL), which uses entity type information to construct bags of instances. In the paper, the authors first point out that a long-tail distribution of fact triples (i.e., (entity1, relation, entity2)) diminishes the advantage of the standard MIL since many bags might end up with duplicated instances (i.e., heavy upsampling is required to fill all the bags). This is particularly a problem in the biomedical data given that 50+% of fact triples appear in less than three sentences in PubMed. Instead, AMIL groups training instances by UMLS semantic types, which are shared across different entities. This approach results in more diverse instances in the bags, maximizing the benefit of MIL. For evaluation, the authors construct their own training/dev/test split from PubMed and UMLS. The BioBERT-base RE models are trained with AMIL and compare with the prior work (Amin et al., 2020) retrained on the same data. The main results show that AMIL models outperform the baseline by large margin. The authors further investigate performance on rare/common triples and 17 different relationship representations.\n\n\n### Overall Review\n\nOverall, this paper is clearly written and carefully explains the proposed methods and motivations behind them. The idea of AMIL is quite simple, but backing off to more general information is a clever solution to alleviate the data sparsity issues in biomedical RE data. Evaluations are fair, and the strong experimental results support the AMIL’s effectiveness. It’s interesting to see that diverse bags by AMIL (using type-level information) can gain over the standard MIL (using entity-level information) without losing performance. In addition, improvement on rare triples indicates that having diverse sentences in bags is crucial. 
Although extensive architecture search on relationship representations is useful, it’s hard to see why “L” works best among others given the model’s black-box nature. The instance grouping is briefly mentioned in Section 4.1, but more detailed discussion would be helpful for reproducibility.\n\n\n### Pros\n\n1. Proposing AMIL that shows advantages on biomedical RE.\n2. Experimental setup seems to be fair and appropriate. Analyzing performance on rare/common triples is informative.\n3. Results are quite strong and promising. \n\n\n### Cons\n\n1. AMIL assumes access to high-quality KBs (e.g., UMLS), which might not be available in some data settings.\n2. Although instance grouping is a key step of AMIL, it is not discussed in detail.\n\n\n### Questions / Suggestions\n\n1.\t> \"The UMLS semantic network is curated by human experts for decades and provides a rich ontology of biomedical concepts...”\n\n How would you evaluate the annotation quality of semantic types? What’s the average number of types per entity? Are they always consistent? Or does it depend on the editor? It would be nice to show some entities with types (possibly in appendix). \n\n2. When you group training instances, do you always use all type combinations? Have you investigated good/bad type combinations? Do you end up with similar bags if many entity pairs share the same type sets?\n\n3. Have you tried your version of the MIL baseline (matching everything except grouping)?\n\n\n### Minor points\n\n- I see “Amin et al.” in many places (missing (2020)).\n- In Table 2, the numbers from Dai et al. (2019) and Amin et al. (2020) are not directly comparable with your numbers (correct?). It would be nice to say it in the caption explicitly.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer e1h2\nResponse to general comments:\n\nWe agree; it is hard to see why relationship representation type \"L\" outperforms all other representations. Type \"L\" is a combination of the middle mention pool and entity end markers, both of which perform well on their own. In the paper, we lend intuition as to why the middle mention pool performs well but cannot do so with end entity markers. Determining why such an embedding performs best for biomedical relationship extraction would be an interesting area for future work. \n\nWe will expand on the explanation of the instance grouping and we will release all of our code to ensure future efforts can easily recreate all of our experiments.\n\nThe point that AMIL assumes access to high-quality KBs such as UMLS, which might not be available in some data settings, is a valid one and a key limitation of this approach and distant supervision in general. The method excels in the biomedical domain thanks to UMLS and could only generalize to other domains that have access to high-quality KBs. However, if approaches that rely on high-quality KBs, like AMIL, mature and become successful, a case can be made to divert human intelligence and resources to constructing KBs like UMLS rather than labeling text. We believe that the AKBC community will be interested in the idea of learning from a KB to construct a KB instead of learning from labeled text like what the NLP community is attempting. 
Learning from a KB to construct a KB forms a positive feedback loop which we believe is a very promising direction for the field, in general.\n\nQuestion 1:\nOur understanding is that the UMLS semantic types are consistent and of reasonable quality. The Semantic Network is regularly maintained and updated by experts multiple times a year.  For entity types, we use the immediate parent of an entity. Each entity in the Semantic Network has a unique parent concept type. If accepted, we will add an example of an abstractified bag of entities to the Appendix.\n\nQuestion 2: \nThis is an area we hope to explore in future work. We believe performance could be improved by identifying patterns of underperforming, or \"bad,\" groups and potentially have multiple models to handle different grouping schemes. Further research is also needed to identify the level of similarity or dissimilarity between bags -- this will be an interesting area to look into.\n\nQuestion 3: \nYes, we include the Amin et al. (2020) model with improved sentence segmentation (e.g. SciSpacy instead of the NLTK English) as a baseline biomedical relation extraction model that uses MIL without grouping. This baseline also uses the original relationship representation embeddings. As such, we believe it is a good baseline model to compare both AMIL and AMIL with improved relationship embeddings (e.g. Relationship Embedding Type \"L\").  \n\nResponse to Minor Points:\nWe will address all these points in our revised manuscript. We will also add a formal problem formulation to the manuscript as Reviewer #2 also noted its absence. ", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It resolves the majority of my concerns. I continue to recommend this work positively.", "role": "assistant" } ]
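The bag-construction difference at the heart of this thread — keying bags by the entities' semantic types rather than by the entity pair — can be illustrated with the following sketch; the type map, relation, and sentences are invented toy data, not the paper's corpus.

```
from collections import defaultdict

# Toy UMLS-style semantic types for a few entities.
semantic_type = {"aspirin": "Pharmacologic Substance", "ibuprofen": "Pharmacologic Substance",
                 "headache": "Sign or Symptom", "fever": "Sign or Symptom"}

# (head entity, relation, tail entity, supporting sentence) instances from distant supervision.
instances = [
    ("aspirin", "treats", "headache", "Aspirin relieved the patient's headache."),
    ("ibuprofen", "treats", "fever", "Ibuprofen was given to reduce the fever."),
]

def make_bags(instances, abstractify=False):
    bags = defaultdict(list)
    for h, r, t, sent in instances:
        key = (semantic_type[h], r, semantic_type[t]) if abstractify else (h, r, t)
        bags[key].append(sent)
    return dict(bags)

print(len(make_bags(instances)))                    # 2 entity-level bags, one sentence each
print(len(make_bags(instances, abstractify=True)))  # 1 type-level bag holding both sentences
```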
VX0swzJEzpg
AKBC 2021 Conference
Reviewer_3SjD
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_3SjD, and the following are the paper you need to review:\n```<<VX0swzJEzpg>>```", "role": "user" }, { "content": "Title: Valid contribution and encouraging results\nReview Content: The paper presents explores the benefits of abstractifying relationships, i.e., taking into account the type signatures of its instances, to improve relation extraction from biomedical text corpora. The paper applies this technique on a state-of-the-art method for relation extraction based on BioBERT. The experimental evaluation on a handful of relation representations suggests that abstractified multi-relational extraction with middle mention attention pools and entity end markers outperforms the state of the art, and is particularly useful for tail relationships, i.e., relationships for which there are not many sentences that assert them. The results show that for those triples, both the aforementioned techniques provide a performance boost (Table 3).\n\nThe paper is overall easy to read and the experimental results are encouraging. The experimental evaluation is also comprehensive. I have nevertheless a few minor remarks.\n\n- Problem formulation\n\nI think the paper lacks a clear and proper problem formulation, e.g., something like, given a text corpus T and a knowledge graph G, our goal is to extract a set X of triples, etc., etc. I managed to have a clearer picture of the problem rather late (when reading the experiments). Readers of the paper may benefit from an early formal problem formulation in order to know to which extent they could apply the techniques proposed in the paper.\n\n- Datasets\n\nWhile Section 3 suggests that the relationships \"narrower\" and \"broader\" were excluded for extraction, the paper enhances the underlying embedding model with abstractified triples, i.e., predicate signatures. Where do exactly those signatures come from? Is there an rdf:type/type predicate for entities?\n\nMoreover, why did the authors exclude the relationships \"narrower\", \"broader\", and \"synonymous\". Is it because they are trivial to extract? I think the paper should make clearer the rationale behind this decision.\n\nAlso, the paper should provide some hints of the quality of the corpus annotation process. 
How accurately are the entity mentions annotated? (Section 4.1). Studying this aspect might be interesting and useful for settings where structured knowledge is scarce and/or noisy.\n\n- Evaluation\n\n-- The paper states: \"We also compare to Dai et al.(2019) but, due to a difference in data, comparisons to AUC and F1 are not possible but we are able to compare precision on the top K predictions.\"\nI would recommend the authors to provide a more detailed justification.\n\n-- The difference between corpus-level and sentence-level evaluation is not clear. Is this difference only based on the distinction between rare and common triples?\n\n-- A website or git repository with access to the data and source code would be great!\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 3SjD\nProblem formulation: \n\nThank you for the suggestion. We will add a problem formulation early in the manuscript so readers understand the problem without relying on other sections of the paper. \n\n\nDatasets:\n\nAll the entity types and relationships come directly from the UMLS Semantic Network. In the Semantic Network, each entity has a unique entity type. Typically “broader”, “narrower”, and “synonymous”  are well-known static relationships between entities. For our task, we are interested in the more informative and potentially unknown relationships between biomedical entities to further scientific understanding. This decision is also informed by previous works on these tasks and allows for a more fair comparison between our model and past models. We will clarify this in the Dataset section of the manuscript. \n\nWe appreciate the point made requesting a report on how accurately the entity mentions are annotated. We rely on the UMLS Metathesaurus, which contains concepts and their lexical variations, to identify entities from text. We believe it performs reasonably well and that a deeper analysis of its performance would require additional tests with a labeled dataset which is out of the scope of this paper. Analyzing the quality of the extracted concepts would indeed be informative and it is something we hope to do in the future.\n\n\nEvaluation:\n\nWe were unable to attain the code and data used by Dai et al. (2019). Ideally, we would have trained and tested the Dai et al. model with the same data we used for our other experiments. Since this was not an option, we provide the results of the Dai et al. model as reported by the authors. Without access to data from their experiments, we believe a direct comparison is unfair. However, the precision at K indicates the model's overall ability to extract the true triples from a hold-out set of triples found in a test corpus. Because we use similar data (e.g. the UMLS knowledge graph with raw text from PubMed abstracts), we believe this metric allows for a good, but not perfect, comparison.  We will update the manuscript with the above explanation.\n\nWe will work to clarify the difference between corpus-level and sentence-level evaluation. The key distinction is that corpus-level evaluation evaluates the model's ability to predict a hold-out set of unique triples from a test corpus. Corpus-level performance is used to gauge the model's usefulness in knowledge graph completion -- that is, predicting missing edges in a knowledge graph with high confidence. 
Sentence-level evaluation, or, in our case, bag-level evaluation, evaluates the model's performance on predicting the relationship for each bag of sentences. It's not typically used in biomedical relation extraction but it allows us to decompose triples by their frequency (e.g. the number of supporting sentences) into \"rare\" and \"common\" triples and report performance on each subset of triples. We include sentence-level evaluation to highlight the benefits of AMIL and confirm our hypothesis that grouping entities by entity type has a greater impact on the model's ability to predict \"rare\" triples since, with abstractified bags, we do not need to rely on upsampling as much as previous works do. \n\nWe plan to release all of our code on GitHub as well as the instructions for obtaining the data. ", "role": "user" }, { "content": "Title: Thanks for the clarifications\nI thank the authors very much for the clarifications!", "role": "assistant" } ]
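The corpus-level precision-at-K metric mentioned in this thread — ranking predicted triples by confidence and checking how many of the top K appear in the held-out set of true triples — can be sketched as follows; the scores and triples below are toy values, not results from the paper.

```
def precision_at_k(scored_triples, gold_triples, k):
    """Fraction of the K highest-scoring predicted triples that appear in the gold set."""
    top_k = sorted(scored_triples, key=lambda x: x[1], reverse=True)[:k]
    return sum(1 for triple, _ in top_k if triple in gold_triples) / k

gold = {("aspirin", "treats", "headache"), ("ibuprofen", "treats", "fever")}
predictions = [(("aspirin", "treats", "headache"), 0.95),
               (("aspirin", "causes", "fever"), 0.40),
               (("ibuprofen", "treats", "fever"), 0.88)]
print(precision_at_k(predictions, gold, k=2))  # 1.0: both top-2 predictions are gold triples
```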
VX0swzJEzpg
AKBC 2021 Conference
Reviewer_Jwtr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Jwtr, and the following are the paper you need to review:\n```<<VX0swzJEzpg>>```", "role": "user" }, { "content": "Title: Interesting contribution on distantly supervised relation extraction for biomedical domain\nReview Content: Summary\nThis papers studies distantly supervised biomedical RE, where the data have a long-tail distribution. \nThe authors hypothesize that formulating bags using distinct entity pairs results in many bags with too few sentences, which may hinder performance. \n\nThe authors then propose abstractified multi-instance learning (AMIL) that formulates bags using UMLS semantic type (i.e., biomedical named entity type). AMIL outperforms the previous state-of-the-art method in Amin et al. 2020. Most of the performance gain comes from \"rare\" tuples, validating the hypothesis in the beginning. The author also empirically compared 17 different methods to construct the representation, where they find the type L achieves the best performance.\n\nStrength\n- The proposed method AMIL is a intuitive and effective extension to MIL targeting long-tail data distribution in distantly supervised RE. AMIL outperforms previous state-of-the-art in Biomedical RE.\n- The authors did a careful analysis on pre-processing and construction of representation. These small decisions actually influence performance a lot. Such empirical advice can be useful for other researchers.\n- The paper is well-organized. The figures are clear and informative.\n\nWeakness\n- Dataset creation and evaluation protocol may have defects. See the question below.\n- The work may be further strengthened with additional baselines. Since AMIL benefits from the UMLS semantic types, one baseline method I can think of is to include trainable embeddings for these types and include them in constructing relation representation (which is used in pre-BERT methods such as PCNN). \n\nQuestion\n- Since the construction of bag is different in AMIL, is it possible that sentence-level attention (Lin et al., 2016) becomes beneficial for AMIL?\n- I wonder whether sentence-level evaluation is reasonable and whether it gives a good estimate of the performance. 
The test set is taken from a bigger dataset constructed with distant supervision, so the test data can still be very noisy. \n- I'm not sure if listing performance from Dai et al. (2019) and Amin et al. (2020) is reasonable. The dataset construction in this paper seems to be different, so the numbers are not comparable.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Jwtr\nResponse to adding additional baselines:\n\nThank you for this comment. This is a good point. We could have used embeddings from a PCNN, instead of R-BERT with BioBERT pre-trained weights, to form our relationship representations but Amin et al. (2020) show large improvements over Dai et al.'s (2019) work which makes use of the PCNN architecture. We believe an additional baseline using a PCNN would primarily illustrate the benefits of using embeddings from a large, pre-trained transformer over a PCNN, and it would only weakly support our case for abstractified multi-instance learning.\n\nQuestion 1:\n\nWe experimented with sentence-level attention and observed similar results as Amin et al. (2020). This goes against our intuition and we hope to explore why an attention mechanism is not beneficial in this case and what can be done to make it more beneficial. \n\nQuestion 2:\n\nThis is a good point -- for this task, sentence-level evaluation is not a good estimate of overall model performance. We only include sentence-level evaluation to highlight the benefits of AMIL over traditional multi-instance learning methods for predicting rare triples. For overall model performance, we use corpus-level evaluation. \n\nQuestion 3:\n\nThis is true. The best and most fair comparison is between our proposed AMIL model and the previous Amin et al. (2020) model with SciSpacy since both models are trained and tested on identical data. The comparisons to Dai et al. (2019) and the original Amin et al. (2020) only help to illustrate the overall gains from our method using improved sentence segmentation, abstractified entities, and more informative relationship representations.", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
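The bag-construction idea discussed in this exchange (standard entity-pair bags versus abstractified, entity-type bags) can be made concrete with a short sketch. The sketch below is illustrative only: the `Mention` fields and the UMLS-style type strings are assumptions made for the example, not the paper's actual data schema or code.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Mention:
    sentence: str
    head: str        # surface form of the head entity
    tail: str        # surface form of the tail entity
    head_type: str   # e.g. a UMLS-style semantic type
    tail_type: str
    relation: str    # distantly supervised relation label

def standard_bags(mentions):
    """Classic multi-instance learning: one bag per (head, tail, relation)
    triple, so rare triples end up in bags with very few sentences."""
    bags = defaultdict(list)
    for m in mentions:
        bags[(m.head, m.tail, m.relation)].append(m.sentence)
    return bags

def abstractified_bags(mentions):
    """Type-level grouping: one bag per (head type, tail type, relation),
    so sentences from many rare entity pairs share a single, larger bag."""
    bags = defaultdict(list)
    for m in mentions:
        bags[(m.head_type, m.tail_type, m.relation)].append(m.sentence)
    return bags

if __name__ == "__main__":
    data = [
        Mention("Aspirin relieves headache.", "aspirin", "headache",
                "Pharmacologic Substance", "Sign or Symptom", "treats"),
        Mention("Ibuprofen relieves migraine.", "ibuprofen", "migraine",
                "Pharmacologic Substance", "Sign or Symptom", "treats"),
    ]
    print(len(standard_bags(data)))       # 2 singleton bags
    print(len(abstractified_bags(data)))  # 1 bag containing both sentences
```

With only these two mentions, pair-level grouping produces two singleton bags while type-level grouping produces one bag of two sentences, which is the effect the rebuttal attributes to the gains on rare triples.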
W0mr06PxTHp
AKBC 2021 Conference
Reviewer_Nx4p
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Nx4p, and the following are the paper you need to review:\n```<<W0mr06PxTHp>>```", "role": "user" }, { "content": "Title: Solid improvement, but I'm skeptical about the setting\nReview Content: This paper tackles the semi-supervised relation classification problem with only one manually annotated example for each relation. The method is largely based on the BERT matching model (Baldini Soares et al., 2019). The authors propose to (1) also take entity types as input, (2) use BERT matching to get noisy labels on the unlabeled data, (3) take a noisy channel and an entity type reconstruction objective to train the model on the self-labeled data. The setting is new, so the authors compared their method to BERT matching and a previous bootstrap method. The improvement compared to those selected baseline models is significant, but the performance is in general very low and there is a large gap to the supervised model. There is an ablation study showing the effectiveness of the entity type reconstruction and the noisy channel. \n\nStrength:\n\nThe proposed method is very intuitive (noisy channel and self-labeling) and the performance is strong (compared to the baselines).\n\nWeakness: \n\n1. In the experiment, the authors discard all the \"no relation\" sentences. \"No relation\" or N/A data are the most challenging part of relation extraction/classification problems, and ignoring them makes it a very toy setting.\n\n2. This one-shot + weakly-supervised setting is not well motivated. First, I didn't quite understand why the authors take the \"one-shot\" setting instead of taking the \"few-shot\" setting. Few-shot is in general more applicable in the real world. Second, the authors emphasizes that they are taking a \"true few-shot\" setting, but they actually take a \"toy\" weakly supervised setting, where they take the sentences from the supervised dataset, instead of taking an open corpus. Considering the authors also exclude the no-relation instances from the dataset, it makes the setting even easier.\n\n3. This is a minor point: I feel the writing and the structure of the paper could be further improved. The setting is not clearly explained at the beginning. \n\nQuestion: \n\n1. 
I'm curious about why the entity type reconstruction can help improve the accuracy. The input already includes the entity types (in the case of TACRED, those are even gold entity types), then why does it matter to \"reconstruct\" the entity types? Shouldn't the model be able to directly use this information?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: response \nWe would like to thank the reviewer for thoughtful comments and questions. We're especially grateful for the criticisms, and the writing suggestion. \n\n**Re the weaknesses**\n\n> 1. In the experiment, the authors discard all the \"no relation\" sentences. \"No relation\" or N/A data are the most challenging part of relation extraction/classification problems, and ignoring them makes it a very toy setting.\n\nWe are fully aware of the importance of NA. However, as our setting is very challenging, we take it step by step. The decision of temporarily ignoring NA is in line with the few-shot relation classification research; for instance, wiki80, reconstructed from FewRel version 1, ignores NA. BERT matching [Baldini Soares et al. 2019] also ignores it in the few-shot setting. As we mention in our conclusion section, we will tackle the NA prediction problem in our future work (in line with FewRel version 2).\n\n\n> 2. a) I didn't quite understand why the authors take the \"one-shot\" setting instead of taking the \"few-shot\" setting. \n\nWe believe that our NoelA is applicable to few-shot because the core idea doesn't depend on how many examples per relation. In fact, the more examples we have, the better we can denoise the annotation. We experimented with one-shot in this paper because we aim at minimizing the user's effort in providing examples. \n\n\n> 2. b) The authors emphasize that they are taking a \"true few-shot\" setting, but they actually take a \"toy\" weakly supervised setting, where they take the sentences from the supervised dataset, instead of taking an open corpus. \n\nWe would like to argue that this is not a \"toy\" setting because most aspects of our used datasets (except the NA issue which we intentionally left out) are natural. Evaluation on an open corpus shouldn't be more natural because if we manually annotate it, the corpus will be a supervised dataset like the ones we use. \n\n\nHere, we rely on the two supervised datasets because their available manual annotations allow us to analyze the weak labels predicted by BERT, without the need for recruiting human annotators to evaluate the predicted labels.\n\n\n> 2. c) Considering the authors also exclude the no-relation instances from the dataset, it makes the setting even easier.\n\nAs BERT matching's performance is very modest on both datasets, 15.46% and 27.48% on TACRED and reWiki80, respectively, we would like to emphasize that our setting is not easy at all. (This is also noticed by reviewer fjSn.) \n\n\n**Re the question**\n\n> The input already includes the entity types (in the case of TACRED, those are even gold entity types), then why does it matter to \"reconstruct\" the entity types? Shouldn't the model be able to directly use this information?\n\nAlthough the encoder takes entity types as input, it has a freedom to ignore this information. 
This is very similar to what happens in an auto-encoder: if we don't reconstruct the input, the encoder has no motivation to \"distill\" the whole input into a vector. \n\n\nEvidently, Tran et al. (2020) show that a very simple model reconstructing entity types outperforms much more complicated models taking entity types as input without reconstructing them. \n", "role": "user" }, { "content": "Thanks to the authors for the clarification. It satisfactorily addresses most of my concerns.", "role": "assistant" } ]
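The reconstruction argument above (an encoder that receives entity types as input is still free to ignore them unless it is also trained to reproduce them) corresponds to a standard multi-task setup. The sketch below is a minimal illustration under assumed names and an assumed loss weight `alpha`; it is not the NoelA implementation, and the encoder that produces `pair_repr` is left abstract.

```python
import torch
import torch.nn as nn

class RelationHeadWithTypeReconstruction(nn.Module):
    """Two-head model: the main head predicts the relation, while an auxiliary
    head reconstructs the entity types of the head and tail entities, which
    pushes the encoder to retain type information in its representation."""

    def __init__(self, hidden_dim, num_relations, num_entity_types):
        super().__init__()
        self.relation_head = nn.Linear(hidden_dim, num_relations)
        self.head_type_head = nn.Linear(hidden_dim, num_entity_types)
        self.tail_type_head = nn.Linear(hidden_dim, num_entity_types)

    def forward(self, pair_repr):
        # pair_repr: (batch, hidden_dim) representation of the entity pair in context
        return (self.relation_head(pair_repr),
                self.head_type_head(pair_repr),
                self.tail_type_head(pair_repr))

def multitask_loss(outputs, relation_labels, head_types, tail_types, alpha=0.5):
    rel_logits, h_logits, t_logits = outputs
    ce = nn.CrossEntropyLoss()
    return (ce(rel_logits, relation_labels)
            + alpha * (ce(h_logits, head_types) + ce(t_logits, tail_types)))

# toy usage with random stand-in "encoder" outputs
model = RelationHeadWithTypeReconstruction(hidden_dim=8, num_relations=4, num_entity_types=3)
loss = multitask_loss(model(torch.randn(2, 8)),
                      torch.tensor([0, 2]), torch.tensor([1, 0]), torch.tensor([2, 2]))
loss.backward()
```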
W0mr06PxTHp
AKBC 2021 Conference
Reviewer_Rzkz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Rzkz, and the following are the paper you need to review:\n```<<W0mr06PxTHp>>```", "role": "user" }, { "content": "Title: Solid Contribution to one-shot relation classification\nReview Content: This paper considers the problem of one-shot relation classification, in which a model is given a sentence with two entities identified, along with the entity types. The model must predict the label of the relationship, given only unlabeled data and a single labeled example from each possible relationship label. The proposed technique, NoelA, first uses weak supervision by labeling the unlabeled data to train a classifier. Each unlabeled example is embedded with a language model such as BERT and given the label of the nearest neighbor among the labeled examples. Then, the weak labels are used to train a multi-task model. One task head predicts the relationship type. The other is an auto-encoder task that predicts back the entity types of the entities in the sentence. The model also includes dispersion regularization.\n\nExperiments on TACRED and reWiki80 using three language models (BERT, GPT2, and SpanBERT). NoelA offers significant improvements over the baselines of using language models directly for labeling, or training a classifier alone on the weak labels. Ablations also show that the three components of the loss function all contribute to the positive results.\n\nThe combination of bootstrapping and multi-task learning is a nice experiment. It's a straightforward pipeline of two common techniques (unless learning to reconstruct entity types is more novel, see question below).\n\n### Questions and comments\n- Bootstrapping from one-shot supervision is nice, but how directly tied is it to the benefit of learning to reconstruct entity types. Do any of the references [Hancock et al., 2018, Ma et al., 2019, Tran et al., 2020] also use an autoencoder component to learn from entity types?\n- Something that could strengthen the paper is a discussion of how even a few labeled examples per class could fit into the framework. Some few-shot learning methods can easily handle a variety. 
Do the authors have an opinion about how best to apply NoelA in the few-shot setting?\n- Regarding the claim \"Therefore, we force the model to capture the entity type bias by\nreconstructing the entity types of h, t.\" in Section 4.2, is there any way to know that this is really encoding a bias specific to entity types? How do we know, for example, it's not just a generic type of regularization?\n- In section 5.1, it says \" We manually created a data-agnostic exemplar for each relation in which head and tail entities were randomly selected and mostly unseen in the two datasets.\" It would be nice to clarify what \"mostly unseen\" means.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: response\nWe would like to thank the reviewer for thoughtful comments and questions.\n\nRe the questions\n\n \n> Bootstrapping from one-shot supervision is nice, but how directly tied is it to the benefit of learning to reconstruct entity types. Do any of the references [Hancock et al., 2018, Ma et al., 2019, Tran et al., 2020] also use an autoencoder component to learn from entity types?\n\nSince entity types limit the possible relation types between an entity pair (e.g. \"father_of\" can only be between two persons), our entity type reconstruction component is proposed to encourage the model taking into account such information. To see how reasonable this is, please refer to section 6.4. Besides, table 1 shows the empirical effect of using entity type reconstruction: when removing the entity type reconstruction component, the performance drops 3.25 and 0.69 % in terms of accuracy on TACRED and reWiki80, respectively. \n\n\nTran et al (2020) shows that a discrete-state variational autoencoder model reconstructing entity types outperforms much more complicated models taking entity types as input without reconstructing them. Hancock et al., (2018) and Ma et al., (2019) show that entity types impose relation types.\n\n> Do the authors have an opinion about how best to apply NoelA in the few-shot setting?\n\nWe believe that our NoelA is applicable to few-shot because the core idea doesn't depend on how many examples per relation. In fact, the more examples we have, the better we can denoise the annotation. We chose one-shot because we aim at minimizing the user's effort in annotation. \n\n\n> Regarding the claim \"Therefore, we force the model to capture the entity type bias by reconstructing the entity types of h, t.\" in Section 4.2, is there any way to know that this is really encoding a bias specific to entity types? How do we know, for example, it's not just a generic type of regularization?\n\nThis entity type reconstruction is indeed a type of regularisation as we expect the encoder to behave in some certain way. As the problem of predicting entity types is as difficult as named-entity recognition, the encoder should have some bias specific to entity types. \n\n\n> In section 5.1, it says \" We manually created a data-agnostic exemplar for each relation in which head and tail entities were randomly selected and mostly unseen in the two datasets.\" It would be nice to clarify what \"mostly unseen\" means.\n\nBy \"mostly unseen\", what we meant is as follows. Considering Wiki80, 51.72% entities are seen in the dataset but only 1.25% (1 pair of entities in the exemplars) occur in the training dataset. 
Regarding TACRED, 28.81% of entities are seen in the dataset but no pairs of entities in the exemplars occur in the training set.\n", "role": "user" }, { "content": "Thank you to the authors for the detailed response. It adequately addresses the points I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
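The one-shot weak-labeling step discussed throughout this thread (assign each unlabeled sentence the relation of its most similar exemplar) reduces to a nearest-neighbour lookup over embeddings. The sketch below uses random vectors and invented relation names as stand-ins; any sentence or entity-pair encoder could supply the embeddings.

```python
import numpy as np

def weak_label_by_nearest_exemplar(sentence_embs, exemplar_embs, exemplar_relations):
    """Give every unlabeled sentence the relation of its closest exemplar
    under cosine similarity; the resulting noisy labels are what a
    denoising model is later trained on."""
    s = sentence_embs / np.linalg.norm(sentence_embs, axis=1, keepdims=True)
    e = exemplar_embs / np.linalg.norm(exemplar_embs, axis=1, keepdims=True)
    sims = s @ e.T                 # (num_sentences, num_exemplars)
    nearest = sims.argmax(axis=1)  # index of the most similar exemplar
    return [exemplar_relations[i] for i in nearest]

rng = np.random.default_rng(0)
labels = weak_label_by_nearest_exemplar(
    rng.normal(size=(3, 16)),      # 3 unlabeled sentences
    rng.normal(size=(2, 16)),      # 2 one-shot exemplars
    ["founded_by", "born_in"])
print(labels)
```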
W0mr06PxTHp
AKBC 2021 Conference
Reviewer_fjSn
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_fjSn, and the following are the paper you need to review:\n```<<W0mr06PxTHp>>```", "role": "user" }, { "content": "Title: Review\nReview Content: ### Summary\n\nThis paper proposes a weakly supervised relation classification model using pre-trained language models (LM). It considers the very challenging setting in which only one example of relation type is available. However, LM matching-based predictions which align a test sentence to a relation exemplar, are usually incorrect. This paper proposes to learn from those noisy labeling. \n\nGiven a sentence and two entities (and their types), their model first encodes the sentences and then mean pools over the entity span. It then concatenates the entity representation with the representation of their types and passes through a feedforward network (with ReLU). Since the LM-based predictions are usually incorrect, the paper takes a noisy-channel approach in which they model the co-occurrence between the predictions of the noisy and the true label by the model. Since the correct relation is not known, they marginalize over the random variable representing true relation. The intuition behind this loss is that for relations that are easily confused by the model, the loss function increases the probability of the true relations as well (assuming LM-based predictions would be somewhat meaningful and will assign reasonable weight to the ground truth relation as well). Since entity types are helpful for relation classification, the paper has another objective for reconstructing the entity types of the two entities.\n\nThe paper conducts experiments on two datasets — TACRED and reWiki80 using multiple LMs (Bert, GPT2, SpanBert). The method achieves modest improvements over the BERT matching system (6% - 9%). However, the absolute performance is pretty low in general (24% for TACRED and 33% for reWiki80). The difference between a model trained with supervision is much higher (> 80% for TACRED and >70% for reWiki80), suggesting that their setup is indeed very hard. \n\n### Strengths:\n\n1. This paper considers a very hard (true one-shot) setting for weakly supervised relation extraction. \n2. 
The noisy channel approach is pretty interesting and has not been applied to few-shot relation extraction before.\n3. The paper is well written in general.\n\n### Weakness:\n\n1. Few modeling details are not well motivated (e.g. dispersion regularization term)\n2. The bootstrapping baseline should be moved from the appendix to the main section of the paper.\n\n### Question for Authors:\n\n1. Side note: Please number the equations, it makes referring to them easier.\nSince the equation for computing the probability of r' marginalizes over all relations, what is stopping from all the p(r | s, h, t) to increase? How do you ensure that the prob of only relations which has high q(r' | r) increase? Would it make sense to also include a term that decreases the entropy of the p(r | h, s, t), i.e. favors peaky distributions?\n2. In the loss function, there is a dispersion regularization term. I think the paper lacks the detail regarding why that is required. What would cause the encoder to predict the same relation across all instances?\n3. In the end of section 3, you mention a way of encoding the input as [x_h;x_t]. However, later in Sec 4, you redefine how the input is encoded (e.g. ReLU(Linear(Concat(x_h, x_t, x_eh, x_et)))). I am a little confused regarding the initial definition in Sec 3. Is it being used anywhere?\n4. \"The observation here indicates an interesting behaviour of NoelA: it seems to adjust its\nattention according to the hidden gold relation distribution.\" —- Why do you think this is the case? How is NoelA adjusting performance wrt the gold relation distribution?\n5. Suggestion: It would be informative to plot the C (or q(r'|r)) matrix to visualize if BERT also allocates reasonable scores (if not highest) to the correct relation. Edit (I see the confusion matrix is presented in Appendix D, but I think it could make sense to bring it to the main section of the paper.\n6. Paper organization suggestion: I think the paper would be more convincing / readable if the analysis in sec 6.1 and 6.2 regarding Bert matching along with the confusion matrix (in Appendix D) is presented to motivate the modeling decision rather than at the end of the paper. Also, please move the details of bootstrapping to the main section of the paper. I am not sure if an additional page is allowed in the camera-ready. If not, to make up for the additional space, I believe the section 6.4 could be moved to the appendix. Similarly size of table 1 could be shortened (fontsize).\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: response\nWe would like to thank the reviewer for the thoughtful comments and questions. We're especially grateful for the writing suggestions (e.g. reconstructing the draft, numbering the equations), and will change our draft accordingly. \n\n**Re the weakness**\n\n> Few modeling details are not well motivated (e.g. dispersion regularization term) (and question 2)\n\nFigure 3 shows that the relation distributions of the noisy data are peaky. Learning from the peaky distributions may lead the model biased towards frequent predicted relations and result in predicting a subset of the relation types. To prevent this issue, we employ the dispersion regularizer that encourages the model to predict a diverse set of relations.\n\n\n**Re the questions,**\n\n> 1. a) What is stopping from all the p(r | s, h, t) to increase? 
\n\nBecause we use a softmax to compute p(r|s,h,t) (please look at \"relation classifier\", section 4.2), this p is probabilistic. Therefore, when p(.|s,h,t) increases for one r, it has to decrease for some other r's. \n\n\n> 1. b) How do you ensure that the prob of only relations which has high q(r' | r) increase? \n\nThanks to the backpropagation, if p(r'|r1) > p(r'|r2) then p(r1) increases faster than p(r2) (the conditions s,h,t are temporarily removed). To see that, we can look at the gradient r.t p(r): \n \n dL / dp(r) = dL / dp'(r') * dp'(r') / dp(r) = dL / dp'(r') * p(r'|r). \n\nThus, if p(r' | r1) > p(r' | r2), we have |dL /dp(r1)| > |dL/dp(r2)| and hence p(r1) will change faster than p(r2). \n\nWhen we increase p'(r') as we observe r' in the dataset, we also try to increase p(r) where p(r'|r) > 0 (otherwise, the loss is getting worse). Because |dL /dp(r1)| > |dL/dp(r2)|, p(r1) will increase faster than p(r2). \n\nAnd now, thanks to the fact that p(r) is probabilistic, p(r) can only increase for some r's whose p(r'|r) are highest.\n\n\n> 1. c) Would it make sense to also include a term that decreases the entropy of the p(r | h, s, t), i.e. favors peaky distributions?\n\nAlthough decreasing the entropy of p(r) might be helpful, we don't consider that because we don't want to make the model over-confident here. This is because, even without decreasing entropy, neural networks are notorious for being over-confident with their predictions. Being over-confident in our case is extremely dangerous because the training data is noisy. As a result, we are afraid that the model will be more prone to the data noise if we decrease the entropy of p(r). \n\n> 2. In the loss function, there is a dispersion regularization term. I think the paper lacks the detail regarding why that is required. What would cause the encoder to predict the same relation across all instances?\n\nPlease see the answer to the weakness above. \n\n\n> 3. In the end of section 3, you mention a way of encoding the input as [x_h;x_t]. However, later in Sec 4, you redefine how the input is encoded (e.g. ReLU(Linear(Concat(x_h, x_t, x_eh, x_et)))). I am a little confused regarding the initial definition in Sec 3. Is it being used anywhere?\n\nWe use the encoding [x_h,x_t] for our baselines (e.g. BERT matching, used by Baldini Soares et al. [2019]’s). The encoder described in Section 4 is of our proposed model. \n\n> 4. \"The observation here indicates an interesting behaviour of NoelA: it seems to adjust its attention according to the hidden gold relation distribution.\" —- Why do you think this is the case? How is NoelA adjusting performance wrt the gold relation distribution?\n\nWe think this is the case because of the used noisy channel. In the noisy channel, we explicitly model p(r|s,h,t) for the true, hidden relation r. As a result, NoelA might be aware of the fact that the true, hidden distribution of the gold label p(r) is different from the distribution of the noisy label p'(r'). Therefore, NoelA is able to differ p(r) from p'(r'). (Again, s, h, t are temporarily removed for brief.)\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The authors have answered almost all of my concerns. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
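The marginalization and gradient argument in this rebuttal can be checked numerically with a tiny noisy-channel example. The confusion-matrix values below are invented for illustration; the point is only that the gradient of the negative log-likelihood of the observed noisy label favours the candidate true relations with the largest q(observed | r), as argued above.

```python
import torch

num_relations = 3
logits = torch.zeros(num_relations, requires_grad=True)  # stand-in for the classifier output
p_true = torch.softmax(logits, dim=0)                     # p(r | s, h, t)

# q[r_prime, r]: probability the weak labeler outputs r' when the true relation is r
q = torch.tensor([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])

p_noisy = q @ p_true             # p'(r' | s, h, t) = sum_r q(r' | r) * p(r | s, h, t)
observed = 0                      # the noisy label we actually saw
loss = -torch.log(p_noisy[observed])
loss.backward()

# The most negative gradient sits on the logit of the relation with the largest
# q(observed | r), so a descent step increases that p(r) fastest.
print(logits.grad)
```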
a4-fFL7aCi0
AKBC 2021 Conference
Reviewer_CJ6w
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_CJ6w, and the following are the paper you need to review:\n```<<a4-fFL7aCi0>>```", "role": "user" }, { "content": "Title: Interesting experiments, but containing some overly strong claims\nReview Content: This paper presents several experiments that make use of combined knowledge from language models and unstructured knowledge bases. The paper is mostly well-written, though some choices in presentation made it difficult to follow (leading me to go back and forth). I will provide some suggestions to improve this below. The paper also contains some claims that, in my opinion, are not supported by the evidence. Since the paper does not need these claims in order to provide a nice contribution, I suggest they be adapted.\n\nThe claims of which I am not convinced based on this research are the following:\n\n- the claim that the results show the models capability of reasoning with implicit knowledge. \n=> what it means for a model to do this is a very complicated question and requires much more reflection on what implicit knowledge is and what reasoning with it means (and how that can be shown). I do not believe that solving something despite lack of lexical overlap provides evidence for this, since language models are known to capture similarity in meaning.\n- \"The significant improvement over BERT (3-14%) shows the ability for these methods to utilize implicit knowledge, which is especially useful for low-resource languages, target domains where we can pre-train using fewer data and use ad-hoc knowledge to solve a target task and have smaller vocab and params.\"\n=> BERT is still huge and the quality of similar models for real low-resource languages will be much lower. For such a claim, I would expect experiments that play with very small language models and limited training data (because this also tends to be the case for low-resource languages). What is more relevant from these results is that indeed the results with BERT come closer to those of RoBERTa and that this is therefore potentially interesting for reducing training costs. 
It also leads to the future research question of what size vs knowledge is doing (this current work does not dive into this question, but it seems the most logical thing to examine: what mistakes does each method still make and are they similar). \n\nAs mentioned earlier, I think the paper provides sufficient contributions without these claims and I therefore suggest they be revised.\n\nSuggestions for improvement on the presentation.\n\nAbout the structure:\n\n- the choice of spreading the general approach over various sections (Section 2 about the sets, Section 3 about the knowledge sources and their preparation and Section 4 with the rest of the method made the overall setup more difficult to follow. The first sentence in Section 4 illustrates this (it is starting with `After' which shows there is a strong connection with the previous sections). I suggest merging the three sections and use the saved space to take the reader a bit more by the hand (replacing the headers by short motivations). \n- I almost missed that both validation and a separate test set were present since only the size of the validation set was originally reported and had already made a note about using separate validation for tuning: this could be prevented by introducing all three components of the data right away and clearly stating how each set is used.\n- I am still a bit confused about what has actually been tested to support the hypotheses and this is largely due to the use of the term `baseline' for what, if I understand correctly turn out to be the actual experiments. I recommend removing the full table for tuning and use the additional space to explain better what the reported results are.\n- The paper states \"As a result, the knowledge does not overlap with the gold answer, making the task trivial.\" This statement (in particular the choice of the word ``trivial'') can be a bit confusing, since trivial is also often used to state something is extremely obvious. I assume the problem is in fact the opposite; that using knowledge correctly would lead to the wrong outcome and that therefore the outcome on these instances do not contribute to insights on the methods?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to R1\nR1.1. **\"the claim that the results show the models capability of reasoning with implicit knowledge.\"**: We understand the concern. We meant given our setup of commonsense QA with implicit retrieved knowledge sentences, there is a significant improvement in the performance. As these datasets require commonsense reasoning, this improvement can be attributed to the increased ability to reason with such implicit knowledge sentences as guidance. We have updated the section accordingly. \nWe have studied how the model behaves for different types of knowledge, i.e, explicit, implicit and irrelevant. This analysis is present in the supplementary material. But we don't have all knowledge variants for the same particular question. We will include an additional analysis of how the model prediction changes when we provide explicit, implicit and irrelevant knowledge to better understand if the model is able to answer correctly for both the explicit and implicit cases.\n\nR1.2. 
**\"The significant improvement over BERT (3-14%) shows the ability for these methods to utilize implicit knowledge, which is especially useful for low-resource languages”**: We agree that compared to methods that did not use any pre-training on any source of text present online, BERT is indeed a larger model. However, BERT was trained only with 13GB data, v/s 160 GB for RoBERTa, and GPT-3, T5 and all using C4 and super-massive datasets (> 1TB). We agree that for very low resource languages that do not even possess ~10GB of data for pre-training a BERT model, it is difficult. We have added this perspective and limitation. We agree the proposed experiments are better suited for the low-resource setting.\n\nR1.3. **“the choice of spreading the general approach over various sections”**: Thanks for the suggestion. We have incorporated reviewer suggestions to the best we could during the limited time. We will address this in the final camera ready version.\n\nR1.4. **“I almost missed that both validation and a separate test set were present”**: Test set samples are blind for these 3 challenge datasets. We uploaded our model to the evaluation server to get test set scores. We have mentioned this in the relevant section.\n\nR1.5. **“I am still a bit confused about what has actually been tested to support the hypotheses”**: We used \"baseline\" for the 3 methods, Concat, Max, Simple-Sum as they are stronger methods compared to the simple-baseline of no knowledge. We have clarified this in the introduction and the experiments section.\n\nR1.6. **“The paper states \"As a result, the knowledge does not overlap with the gold answer, making the task trivial.\"”**: We have rephrased this sentence. We meant, the knowledge does not overlap with the gold answer, because if it did significantly overlap, the model will use lexical overlap as a hint to solve the task instead of performing the intended reasoning.\n Global/general Response:\n```\nWe are thankful to the reviewers for their insightful feedback. We are encouraged that reviewers found our paper to be well-written (R1), well-executed and solid work (R2, R3), has sufficient contributions (R1) and interesting experiments/exploration/results (R1, R2, R3), convincing and detailed empirical analyses (R2, R3), builds an experimental framework (R3). We have incorporated reviewer suggestions to the best we could during the limited time. We will address the remaining later in the final camera ready version. We answer specific concerns to each of the reviewers.\n\n```\n", "role": "user" }, { "content": "Thanks for the authors' response. The authors have answered almost all of my concerns. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
a4-fFL7aCi0
AKBC 2021 Conference
Reviewer_QRDW
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_QRDW, and the following are the paper you need to review:\n```<<a4-fFL7aCi0>>```", "role": "user" }, { "content": "Title: An interesting exploration of how retrieved knowledge can be useful for commonsense reasoning QA tasks\nReview Content: This paper asks how to improve commonsense question answering by infusing or providing potentially useful knowledge to BERT-style models. Authors explore using finetuning on and/or retrieving of useful or related knowledge for questions, and experiment with multiple ways to combine the useful knowledge with the questions. Results show that simultaneously finetuning on related knowledge source and retrieving a few examples for each question improves performance the best. \n\nThe paper is well-executed, provides convincing experiments and analyses, and yields interesting results about the usefulness of external-ish knowledge.\n\nSome questions or comments:\n- According to the paper, the finetuning is done on both the MLM and the NSP objectives; however, the RoBERTa paper showed that the NSP objective isn't that useful, and wasn't used to train RoBERTa. I believe the results will not change that much, but it would probably be better if authors re-ran their experiments only using the MLM objective.\n- The method section is a little hard to follow; I wish the authors had included a high-level modelling diagram that illustrates how each of the {concat, max, ...} strategies work exactly, and how they are combined to feed into BERT/RoBERTa.\n- [Minor] the introduction mentions the WSC example and some retrieved passage that is supposedly helpful to answer the WSC example. I didn't fully understand why it was the same kind of reasoning, so it would be useful to point out directly why the example is useful.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to R2\nR2.1. **“According to the paper, the finetuning is done on both the MLM and the NSP objectives”**: We follow the protocols defined for pre-training in BERT and RoBERTa respectively. We do MLM and NSP only for BERT, and only MLM for RoBERTa. We have updated it in the paper.\n\nR2.2. 
**”The method section is a little hard to follow”**: Thanks for your suggestion! We have added a high level diagram.\n\nR2.3. **“[Minor] the introduction mentions the WSC example and some retrieved passage”**: We have added an explanation of the necessary reasoning needed to answer the question. The reasoning from the retrieved knowledge is \"the person who is refusing, is the one who is fearing.\"\n Global/general Response:\n```\nWe are thankful to the reviewers for their insightful feedback. We are encouraged that reviewers found our paper to be well-written (R1), well-executed and solid work (R2, R3), has sufficient contributions (R1) and interesting experiments/exploration/results (R1, R2, R3), convincing and detailed empirical analyses (R2, R3), builds an experimental framework (R3). We have incorporated reviewer suggestions to the best we could during the limited time. We will address the remaining later in the final camera ready version. We answer specific concerns to each of the reviewers.\n\n```\n", "role": "user" }, { "content": "I value the authors' response. The response addresses most of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
a4-fFL7aCi0
AKBC 2021 Conference
Reviewer_urSv
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_urSv, and the following are the paper you need to review:\n```<<a4-fFL7aCi0>>```", "role": "user" }, { "content": "Title: A solid work with detailed empirical analysis.\nReview Content: The paper presents a detailed study on the commonsense reasoning ability of pre-trained Transformer-based LMs such as BERT and RoBERTa. The authors build an experimental framework for evaluating different models with different knowledge sources under different infusion strategies, e.g., continually pre-training (revision) or retrieval-based learning (openbook), or both. The authors present a few methods that can improve the results and reduce the gap between smaller LMs and larger ones. \n\nQuestions and suggestions\n\n- I suggest the authors consider adding the CommonsenseQA task [1] and an additional dataset and use the OMCS corpus (the text-version of ConceptNet) or GenericsKB [2] as the knowledge resource. Also, the OpenCSR [3] seems to share the similar goal of the openbook setup for answering commonsense questions. \n\n- The authors claim in the conclusion that the proposed methods \"surpass complex graph-based methods\"? I don't find the supporting facts from the experiments. \n\n- I would suggest the authors do not claim the methods are part of the novelty of the paper as they are very trivial. I would rephrase the Introduction and Section 4.1 so that the key contribution is the analysis and the evaluation protocol. The proposed methods here are just some default options as baselines in the framework. \n\n- The experiment section does not show any qualitative analysis or case studies. It is unclear how well the retrieved sentences are and how they can improve the LMs for reasoning.\n\n- The retrieval module can be replaced with pre-trained DPR [4], which is generally better than BM25.\n\n[1] CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. \nAlon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant. NAACL-19\n\n[2] https://allenai.org/data/genericskb\n\n[3] Differentiable Open-Ended Commonsense Reasoning. \nBill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen. NAACL-21\n\n[4] Dense Passage Retrieval for Open-Domain Question Answering. 
EMNLP 2020\n\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to R3\nR3.1. **”I suggest the authors consider adding the CommonsenseQA task.”**: Thanks for the suggestion. We did evaluate CommonsenseQA as one of the test dataset sources. However, CommonsenseQA authors have requested not to use ConceptNet and OMCS, as it have been extensively used for creation of the dataset, and hence, the commonsense knowledge has a large overlap, and would become explicit.They also have a separate leaderboard for those methods that use explicit commonsense knowledge.\n\nR3.2. **“The authors claim in the conclusion that the proposed methods \"surpass complex graph-based methods\"?”**: We have cited the graph based methods from the table which we outperform, KagNet and GBR. \n\nR3.3. **“I would suggest the authors do not claim the methods are part of the novelty of the paper as they are very trivial.”**: We have rephrased our contribution to emphasize more on the analysis.\n\nR3.4. **“The experiment section does not show any qualitative analysis or case studies.”**: More details are present in the supplementary. We have moved one section to the main paper. We did analyse the quality of the retrieved knowledge sentences. Please see the new Table 5 in the main, which was present earlier in the supplementary.\n\nR3.5. **“The retrieval module can be replaced with pre-trained DPR [4], which is generally better than BM25.”**: DPR needs ground-truth relevant passages for training. For datasets like, Natural Questions, the ground-truth paragraphs are provided which can be used as positive samples. We do not possess the actual relevant knowledge for any of the tasks currently. In future, we will work on creating a ground-truth commonsense retrieval dataset containing implicit knowledge, to enable training of such neural retriever models.\n Global/general Response:\n```\nWe are thankful to the reviewers for their insightful feedback. We are encouraged that reviewers found our paper to be well-written (R1), well-executed and solid work (R2, R3), has sufficient contributions (R1) and interesting experiments/exploration/results (R1, R2, R3), convincing and detailed empirical analyses (R2, R3), builds an experimental framework (R3). We have incorporated reviewer suggestions to the best we could during the limited time. We will address the remaining later in the final camera ready version. We answer specific concerns to each of the reviewers.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. It satisfactorily addresses the major issues I had. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
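R3.5 contrasts BM25 retrieval with a trained dense retriever such as DPR. A minimal sparse-retrieval sketch using the rank_bm25 package is shown below; the three-sentence corpus is invented, and the paper's actual retrieval corpus and preprocessing may differ.

```python
from rank_bm25 import BM25Okapi

# tiny stand-in for a commonsense knowledge corpus (e.g. OMCS- or GenericsKB-like sentences)
corpus = [
    "Jellyfish are found in the ocean.",
    "People who fear something often refuse to do it.",
    "A heap is a large pile of things.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "why would someone refuse out of fear"
top_knowledge = bm25.get_top_n(query.lower().split(), corpus, n=2)
print(top_knowledge)  # the fear/refuse sentence should rank first
```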
j920k2uqQNz
AKBC 2021 Conference
Reviewer_NVb5
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NVb5, and the following are the paper you need to review:\n```<<j920k2uqQNz>>```", "role": "user" }, { "content": "Title: Nice paper which could be better with some more discussion\nReview Content: The paper investigates the utility of dependency graph (DG) representation in converting questions to Question Decomposition Meaning Representation (QDMR) annotations. As an intermediate step, the paper proposes to convert QDMR annotations to logical form (LF) representation (this was also done in the original QDMR paper). This LF representation is used to then extract a DG representation from it in a deterministic manner. The DG can also be converted to LF in a similar manner. Additionally, the paper also proposes a LF based evaluation metric which apparently is a better measure for semantic equivalence between QDMRs as compared to the previously proposed metric. \n\nTo utilize DGs, the paper proposes two ways. (1) a graph-based parser that predicts all the DG edges simultaneously with some ILP based inference to ensure that the predicted DG is valid. This parser is not the most accurate but extremely fast. (2) Latent-RAT: Use the DG as side-information in a seq2seq transformer model by using RAT transformer layers [Shaw et al., 2018, Wang et al., 2020] to explicitly model relations between tokens. The model directly outputs QDMR annotation.\n\nExperimentally, the paper shows that the Latent-RAT model achieves competitive performance compared to the previous CopyNet+BART model in an in-domain test setting. In a domain adaptation setting (train on 7 domains in QDMR dataset, test on the remaining one), both graph-based parser and Latent-RAT achieve significant improvements compared to the baseline. The performance on each domain is still way behind when in-domain training data is used though. Analysis experiments show that graph-based parser learns better in low-data scenarios, and that the latent-RAT model performs better on longer LFs.\n\nOverall, I think the ideas presented in the paper are reasonable and well tested empirically that it should warrant acceptance. I do not find any concerning issues with the paper to advocate for rejection. 
Though, I do have various questions that I think the authors should try to address in the paper: \n1. What are the issues/challenges in converting QDMR/LF annotations to DG representations? Step 1 of token alignment seems an avenue where it is easy to make mistakes.\n2. Are the resulting DGs always projective or do they have crossing edges? \n3. Did the authors think about using a Shift-Reduce parser to predict the DGs? Is there any intuition on the expected performance?\n4. Similarly, do you think the model will achieve better performance if predicting LFs in the latent-RAT model? It seems that the structured nature of LFs might give better inductive bias while learning.\n5. I think the paper is missing an important experiment of using the exact same transformer architecture but not using the additional loss from DGs. This will help tease apart the contribution from the NN architecture and the additional benefit from the DG signal.\n6. The second last line of section 4. states “We now describe the details of the LF→DG conversion” but I do not find this anywhere. Maybe I’m missing something?\n7. In the main paper and A.2, the paper does not describe the cases in which LF-EM gets higher coverage than NormEM and in which ways it is lacking. Discussion around this will promote better future research by the readers.\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer NVb5\nThank you for finding our new approach interesting and for your detailed review!\n\n“issues/challenges in converting QDMR/LF annotations to DG”:\nFor brevity, we describe the main challenges of this conversion in the Appendices (A3, A6). We split these challenges into two types: (1) Token Alignment ambiguity (A3, see Fig. 8); (2) Graph extraction (A6), especially dealing with implicit information that is missing in the question, and with \"shared\" tokens of different execution paths - which make these paths indistinguishable from each other (see Fig.9).\n\n“DGs always projective”: \nOur dependency graphs may have crossing edges (see Fig.4 bottom right, the pink edge crosses the red and the orange edges).\n\n“Shift-Reduce parser to predict the DGs”:\nWe haven't tried it. We were interested in this work in a non-autoregressive approach to take advantage of its fast inference. We agree that designing a shift-reduce system that will produce QDMR dependency graphs is an interesting direction for future work, and might improve performance and consistency (see Error Analysis, A.8, Table 8, \"wrong global/step structure\").\n\n“ better performance if predicting LFs “:\nYes, The seq2seq models, including Latent-RAT, perform better (~2 LF-EM points) when trained to predict a sequential representation of the LF directly. We didn't include this part in the paper as it is difficult to compare it fairly to previous QDMR parsers. \n\n“the contribution from the NN architecture and the additional benefit from the DG signal”:\nThanks for this question! \n\nWe note that without the graph loss the relation embeddings don’t reflect the graph structure, which is what we are looking for. \nTo tease apart the contribution of the architecture and the loss, we conducted an experiment that omits the loss function but instead uses embeddings of gold relations at *test time*. 
This led to very high accuracy (0.759 LF-EM), which shows that RAT-SQL is a good architecture for this task, but the challenge is learning the relations. We omitted this from the paper for brevity but will include it in the final version.\n\n“We now describe the details of the LF→DG conversion”\nThe details were moved to the noted appendices; we will remove that line. Thanks!\n\n“the cases in which LF-EM gets higher coverage than NormEM”\nAs both methods are based on exact-match evaluation of long phrases, it is hard to classify the exact gap. LF-EM uses a wider “synonym vocabulary” and defines a rich set of dedicated properties (such as “more-than-1” of BOOLEAN, see Table 5 in the appendices), and thus captures more syntactic equivalences. Moreover, it declares role-based argument names and is therefore agnostic to the ordering of equivalent arguments (e.g., ARITHMETIC[sum](arg=#1, arg=#2) ⇔ ARITHMETIC[sum](arg=#2, arg=#1); but ARITHMETIC[div](left=#1, right=#2) != ARITHMETIC[div](left=#2, right=#1) ).", "role": "user" }, { "content": "Title: Thank you for the response\nThanks for your clarifications and answers to my questions. Including response details in the paper will be very useful to readers IMO. ", "role": "assistant" } ]
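The graph-based parser discussed in this thread scores candidate dependency edges between question tokens; one standard building block for that is a biaffine scorer in the style of Dozat & Manning (2017). The sketch below is a generic biaffine edge scorer only: it omits edge labels and the ILP decoding step the paper uses to guarantee a valid QDMR graph, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class BiaffineEdgeScorer(nn.Module):
    """Produces a score for every directed token pair (head -> dependent);
    a graph-based parser can then decode the highest-scoring valid graph."""

    def __init__(self, hidden_dim, arc_dim=64):
        super().__init__()
        self.head_mlp = nn.Linear(hidden_dim, arc_dim)
        self.dep_mlp = nn.Linear(hidden_dim, arc_dim)
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)
        self.head_bias = nn.Linear(arc_dim, 1, bias=False)

    def forward(self, token_states):
        # token_states: (seq_len, hidden_dim) contextual encodings of the question tokens
        heads = torch.relu(self.head_mlp(token_states))   # (seq_len, arc_dim)
        deps = torch.relu(self.dep_mlp(token_states))     # (seq_len, arc_dim)
        bilinear = heads @ self.U @ deps.T                # (seq_len, seq_len) edge scores
        return bilinear + self.head_bias(heads)           # broadcast per-head bias over columns

scorer = BiaffineEdgeScorer(hidden_dim=16)
scores = scorer(torch.randn(5, 16))  # edge scores for a 5-token question
print(scores.shape)                  # torch.Size([5, 5])
```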
j920k2uqQNz
AKBC 2021 Conference
Reviewer_h9MV
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_h9MV, and the following are the paper you need to review:\n```<<j920k2uqQNz>>```", "role": "user" }, { "content": "Title: Solid approach to QDMR parsing using dependency graphs\nReview Content: The authors propose a novel view of decomposing questions into QDMR steps using dependency parsing. Specifically, they propose representing QDMR decompositions as a dependency graph in which the edges correspond to computation steps. Using this representation, they present two new methods for decomposing questions; a biaffine graph parser that directly predicts all of the edges, and a seq2seq parser using a relation-aware transformer to incorporate dependency information as additional supervision. The first model shows a significant speed-up at the cost of some accuracy, while the second model is shown to improve generalization performance on unseen domains. Additionally, the authors propose a new evaluation metric for QDMR parsing that better correlates with human judgements. \n\nPros:\n- Novel representation for QDMR as a dependency graph\n- Two models making use of the above representation to achieve specific improvements in runtime/accuracy\n- Good amount of explanation and analysis of the models' performance.\n\nCons:\n- The approaches don't provide a significant improvement on in-domain QDMR parsing when the full dataset is used.\n- Lacking analysis of the LF-EM evaluation compared to the prior NormEM. Specifically, I think it would be helpful to see some examples and explanation of the cases that LF-EM covers that a re not covered by NormEM.\n\nQuestions:\n- Is there a reason you didn't include any ensembled results? Based on model agreement, as you mention, combined with the fact that the Latent-RAT model specifically performs better on longer decompositions, it seems ensembling might be a way to improve over the SOA.\n- Have you run any extrinsic evaluations of these approaches? Given the error analysis that automatic evaluations of QDMR is missing a lot of correct cases, I'd be curious to see how the predictions of each of them affects the end QA performance. 
\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer h9MV\nThank you for finding our new representation is novel and for your interest in our proposed evaluation metric. We appreciate your detailed review!\n\nRegarding the ensembled results:\nWe tried a light black-box ensemble method, in which a sample is first classified by a BERT-based classifier with a specific model, and then the inference is done by that model. This technique didn’t lead to a significant improvement. We believe a white-box technique, in which the models' confidences are taken into account, will work better here, but decided to leave it for future work.\n\n“extrinsic evaluations of these approaches”\nIn this work we focused on comparing the “traditional” seq2seq-based approaches with our new graph-based approach using an automatic metric, just like the official one. We agree that showing the effect on QA performance could be a very positive signal for correctness, but it's not a guarantee for correctness - so we preferred to leave it for future work and validate the metric through human judgement. Moreover, we tried to be agnostic to the downstream use of QDMR, as QA is not the only usage it may have (e.g. as an auxiliary task in a QA system, or for data augmentation through example generation), similar to how meaning representations are typically evaluated (AMR, SRL, QA-SRL) using intrinsic measure.\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. It resolves the majority of my concerns. I continue to recommend this work for acceptance.", "role": "assistant" } ]
j920k2uqQNz
AKBC 2021 Conference
Reviewer_Y2Ua
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Y2Ua, and the following are the paper you need to review:\n```<<j920k2uqQNz>>```", "role": "user" }, { "content": "Title: Thorough work on dependency-based QDMR parsing\nReview Content: This paper is in the area of Question Decomposition Meaning Representation (QDMR), originally introduced by Wolfson et al (2020). The core idea proposed in this paper is a new representation based on dependency graphs; based on such a graph, QDMR logical forms can be computed using a deterministic procedure. In addition, the paper describes an evaluation protocol based on the logical forms. Finally, the paper also introduces two different automatic parsing methods that produce the dependency graphs, one of which uses the well-known biaffine approach by Dozat et al., and one using an autoregressive approach based on the Relation-Aware Transformer; the biaffine parser is faster but somewhat less accurate.\n\nThis is solid work that covers quite a bit of ground. It seems well-justified to pose the QDMR prediction task using a graph-based representation, and the evaluation protocol based on logical forms is well justified; the conversion algorithms are described thoroughly in the supplemental material.\n\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Y2Ua\nThank you for finding our new representation is reasonable, and for your interest in our work. We appreciate your review!", "role": "user" }, { "content": "I value the authors' response. It satisfactorily addresses the major issues I had. I will continue to recommend accepting this work.", "role": "assistant" } ]
jCt9S_3w_S9
AKBC 2021 Conference
Reviewer_nfuz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_nfuz, and the following are the paper you need to review:\n```<<jCt9S_3w_S9>>```", "role": "user" }, { "content": "Title: A focused contribution that shows strong empirical gains\nReview Content: This paper studies the problem of link prediction for knowledge graph completion with rule-based methods. Specifically, it builds on a recent strong rule-based method, AnyBURL, for mining rules from a KG, and mainly studies the application of the mined rules. Recognizing the limitations of the Maximum and Noisy-OR rule aggregation strategies used in prior work, this paper proposes SAFRAN, a rule application strategy that clusters rules of the same relation based on their similarity with the hope that redundant rules will be clustered together. The Maximum strategy is then used within each cluster and the Noisy-OR strategy is used to aggregate clusters. Empirical results on FB15K-237, WN18RR, and YAGO3-10 are strong, outperforming embedding-based methods as well as the Maximum and Noisy-OR baselines.\n\nStrengths\n- Rule-based link prediction is an important problem with clear advantages in interpret ability\n- This paper makes a focused and clear contribution in rule application\n- Strong empirical results\n\nWeaknesses\n- Sec 2.2, the description of AnyBURL is not very self-contained and one has to resort to the original paper to get a sufficient understanding for the rest of this paper. I understand that we don’t want to spend too much space on repeating prior work, but it could have been described more clearly with the same amount of space. Typos and confusing sentences in this section make it even harder. For example, “ In a rule r uppercase letters represent variables and lowercase letters represent variables.” I believe the second “variables” should be “constants” (after reading the original AnyBURL paper). The next sentence is also very confusing.\n- Examples may be taking a bit too much space throughout the paper. It’d be better to use some of the space to have a more in-depth, quantitative analysis of the clustering quality as well as error analysis and case studies. 
Currently the experiments are a bit thin.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your valuable feedback!\nThank you for your comments on our work, we appreciate your time and effort.\n\n> Sec 2.2, the description of AnyBURL is not very self-contained and one has to resort to the original paper to get a sufficient understanding for the rest of this paper. I understand that we don’t want to spend too much space on repeating prior work, but it could have been described more clearly with the same amount of space. Typos and confusing sentences in this section make it even harder. For example, “ In a rule r uppercase letters represent variables and lowercase letters represent variables.” I believe the second “variables” should be “constants” (after reading the original AnyBURL paper). The next sentence is also very confusing.\n\nThanks, we have revised this paragraph and made it a bit less abstract.\n\n> Examples may be taking a bit too much space throughout the paper. It’d be better to use some of the space to have a more in-depth, quantitative analysis of the clustering quality as well as error analysis and case studies. Currently the experiments are a bit thin.\n\nThis is a good point, we will include this in our future work. We want to add to that, that it is not too trivial to assess the quality of a clustering, as the quality of the clustering would have to be reviewed manually. We have added a remark to this in the conclusion.\n", "role": "user" }, { "content": "I value the authors' response. The authors have answered almost all of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
jCt9S_3w_S9
AKBC 2021 Conference
Reviewer_C4co
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_C4co, and the following are the paper you need to review:\n```<<jCt9S_3w_S9>>```", "role": "user" }, { "content": "Title: Generally a paper with good motivation and considerable performance improvements\nReview Content: The paper introduces SAFRAN rule application framework to improve upon the existing AnyBURL interpretable rule-based link prediction approach. SAFRAN helps to solve the redundancies that affect the aggregation step in AnyBURL and introduces Non-redundant Noisy-OR (NRNO) to detects and clusters redundant rules before the aggregation step. In several link prediction benchmarks, SAFRAN achieves SOTA performances for rule-based algorithms and in two out of three datasets even beats multiple embedding-based algorithms with good explainability.\n\nStrengths: \n1) The paper is generally well-written and easy to follow, the code is also shared for reproducibility.\n2) This paper is well-motivated, in order to reduce the redundant rules which limit the performance of AnyBURL, the author proposes to conduct the rule clustering first based on a redundant estimation metric, then aggregate rules in a cluster using the Maximum approach, and finally aggregate rules in different clusters with Noisy-OR method. The resulting aggregation technique is called Non-redundant Noisy-OR (NRNO), which possesses very good explainability as shown in the Figure 1 example.\n3) Experimental results show that SAFRAN achieves state-of-the-art performances over latent embedding-based and interpretable-based approaches on FB15K-237 and WN18RR datasets. While in YAGO3-10, SAFRAN beats previous interpretable baselines. Overall, the performance improvements look impressive.\n\nWeakness and Questions: \n1) Though effective, the NRNO approach proposed in the paper is just a combination of the well-known Maximum and Noisy-OR aggregation methods. Hence, the technical novelty of NRNO is relatively limited.\n2) SAFRAN relies on the threshold t_{hd}, which is a list of hyperparameters that may take additional effort to decide with grid search or random search.\n3) The paper forgets to compare with HAKE (Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 
“Learning hierarchy-aware knowledge graph embeddings for link prediction”, AAAI 2020), which is also a recent competitive baseline. Also, the ComplEx performance numbers in [Rossi et al., 2021] are slightly higher than in Table 6 of the NRNO paper.\n4) I didn’t very well understand the meaning of ”VS“ in Table 6. What do the numbers in the VS line indicate? Could you elaborate more on this? \n5) There are some minor typos, for example, on Page 4, the fourth last line - “Is is” should be “It is”; on Page 5, the tenth last line - “(i.e)” seems to be redundant, if not it should be “i.e.” instead.\n\n===== After Rebuttal =====\n\nThe author's response solves my concerns and I will keep my original rating (6).\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your valuable feedback!\nThank you for your time and careful review of our paper.\n\n> Though effective, the NRNO approach proposed in the paper is just a combination of the well-known Maximum and Noisy-OR aggregation methods. Hence, the technical novelty of NRNO is relatively limited.\n\nWe agree that the core algorithm of NRNO is rather straightforward. However we want to point out that the application of NRNO is not trivial, as it and symbolic rule based link prediction approaches in general need optimized implementations to achieve reasonable runtimes. As can be seen in Appendix C our implementation accomplishes a significant runtime speed-up in inferring predictions, which is necessary for the computationally expensive hyperparameter search.\n\n> The paper forgets to compare with HAKE (Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. “Learning hierarchy-aware knowledge graph embeddings for link prediction”, AAAI 2020), which is also a recent competitive baseline. Also, the ComplEx performance numbers in [Rossi et al., 2021] are slightly higher than in Table 6 of the NRNO paper.\n\nWe have added the results of HAKE, as well as corrected the rounding errors of ComplEx performance numbers.\n\n> I didn’t very well understand the meaning of ”VS“ in Table 6. What do the numbers in the VS line indicate? Could you elaborate more on this?\n\nThis experiment is explained in the last paragraph of Chapter 4. With the “VS” experiment we wanted to evaluate how important the optimal thresholds t_{hd} are within Non-redunant Noisy-OR. For this, we evaluated for each relation which aggregation approach (Maximum or Noisy-OR) generates better results on the validation set and apply the best approach for each relation to the test set. Our hypothesis was that this approach should generate better results than simply using the Maximum approach, however Non-redundant Noisy-Or and its combination of both Maximum and Noisy-OR approaches should still produce better results (which is supported by our findings of “VS” in Table 6).\n\n> There are some minor typos, for example, on Page 4, the fourth last line - “Is is” should be “It is”; on Page 5, the tenth last line - “(i.e)” seems to be redundant, if not it should be “i.e.” instead.\n\nThanks, we have corrected them!\n", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised.", "role": "assistant" } ]
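The aggregation scheme discussed in this record (Maximum inside a cluster of presumably redundant rules, Noisy-OR across clusters) can be sketched as follows. This is an illustrative reconstruction from the review text, not the SAFRAN implementation; the cluster assignments and confidences are hypothetical inputs.

```python
from itertools import groupby

def noisy_or(confidences):
    """Noisy-OR aggregation: 1 - prod(1 - c_i)."""
    p = 1.0
    for c in confidences:
        p *= (1.0 - c)
    return 1.0 - p

def nrno_score(rule_hits):
    """rule_hits: list of (cluster_id, confidence) for rules that fired on a
    candidate. Maximum is taken inside each cluster (rules assumed redundant);
    Noisy-OR is applied across clusters (treated as independent evidence)."""
    rule_hits = sorted(rule_hits, key=lambda x: x[0])
    per_cluster_max = [max(c for _, c in grp)
                       for _, grp in groupby(rule_hits, key=lambda x: x[0])]
    return noisy_or(per_cluster_max)

# Two redundant rules in cluster 0 and one independent rule in cluster 1:
print(nrno_score([(0, 0.8), (0, 0.7), (1, 0.5)]))  # 1 - (1-0.8)*(1-0.5) = 0.9
```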
jCt9S_3w_S9
AKBC 2021 Conference
Reviewer_Xz6v
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Xz6v, and the following are the paper you need to review:\n```<<jCt9S_3w_S9>>```", "role": "user" }, { "content": "Title: Good paper but can be improved\nReview Content: --------\n\n**Summary**\n\nThe paper proposes an interpretable rule-based approach for link prediction in knowledge graphs (KGs). The authors identify a common weakness (redundancy) in the following two prior ideas:\n* maximum aggregation [Anytime bottom-up rule learning for knowledge graph completion, IJCAI'19]\n* noisy-OR aggregation [Fast rule mining in ontological knowledge bases with amie+, VLDB'15].\n\nExperiments on real-world KG datasets demonstrate that a simple combination of the two ideas can overcome redundancy to an extent and compete with state-of-the-art embedding-based / deep-learning-based methods in terms of link prediction performance.\n\n--------\n\n**Quality**\n\nThe technical contents of the paper (problem definition, methodology, evaluation) are sound. Claims are well-supported by experiments. However, the authors need to carefully evaluate the limitations of their proposal.\n\n--------\n\n**Clarity**\n\nThe illustrations are helpful to understand the main ideas of the approach. The authors have shared the code. A pseudo code would enhance the clarity further and facilitate an expert to reproduce the results of the paper.\n\n--------\n\n\n**Originality**\n\nThe tasks and methods are not new. The main proposal (Non-redundant Noisy-OR) is a simple combination of max aggregation and Noisy-OR which are well-known. The fact that the simple combination overcomes the common weakness to an extent (Figure 1) is somewhat interesting.\n\n--------\n\n**Significance**\n\nThe paper is relevant to the AKBC community. The results seem significant in comparison to rule-learning baselines. 
However, the paper can be strengthened by \n* positioning with explainable KG embeddings, e.g., (i) Interaction Embeddings for Prediction and Explanation in Knowledge Graphs, WSDM'19, (ii) Faithfully explaining predictions of knowledge embeddings, ENIAC'19 and\n* conducting experiments on inductive relation prediction (learned rules can be easily applied to unseen entities at test time), e.g., (i) Communicative Message Passing for Inductive Relation Reasoning, AAAI'21, (ii) Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs, AAAI'21.\n\n--------\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Results on inductive link prediction and response to feedback\nMany thanks for your valuable comments!\n\n> However, the authors need to carefully evaluate the limitations of their proposal.\n\nWe have extended the Section “Experiments”.\n\n> A pseudo code would enhance the clarity further and facilitate an expert to reproduce the results of the paper.\n\nWe have added a pseudo code to the appendix of the paper.\n\n> positioning with explainable KG embeddings, e.g., (i) Interaction Embeddings for Prediction and Explanation in Knowledge Graphs, WSDM'19, (ii) Faithfully explaining predictions of knowledge embeddings, ENIAC'19 and\n\nWe included our positioning with explainable KG embeddings in the Related Work section.\n\n> conducting experiments on inductive relation prediction (learned rules can be easily applied to unseen entities at test time), e.g., (i) Communicative Message Passing for Inductive Relation Reasoning, AAAI'21, (ii) Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs, AAAI'21.\n\nWe tried to apply SAFRAN (Non-redundant Noisy-OR) to the datasets FB15k237_v(1-4) and WN18RR_v(1-4) used in (i) Communicative Message Passing for Inductive Relation Reasoning, AAAI'21, (ii) Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs, AAAI'21 which were introduced in (iii) Inductive Relation Prediction by Subgraph Reasoning ICML’ 2020. However we ran into some limitations:\n\n* As the set of entities of the training graph and the set of entities of the test graph is disjoint, only rules of type C (cyclic) rules can be used. Rules that contain a constant (regardless if in the head or the body of the rule) cannot be used in these datasets. Such rules are all rules of type AC1 and AC2.\n* Validation sets of these datasets are very small, which leads to overfitting of learned clusters to the validation set. SAFRAN is not able to recognize rule redundancies based on small validation sets. This leads to SAFRAN (NRNO) being not able to exceed the results of the maximum approach. Learning the clusters of rules on the union of validation and test set of the training graph improved the results, sometimes exceeding the Maximum approach for the bigger datasets. To clarify: The test set of the training graph is not to be confused with the test set of the test graph, that is used for evaluation. 
However, these joint sets are still very small and it is unclear if the authors of the dataset intended the use of the test set of the training graph besides filtering.\n\nAs can be seen in the results (following comment), the difference of the Maximum approach and NRNO is less on the bigger datasets (Datasets with a higher version number are bigger) and results on NRNO can exceed the Maximum approach on fb237_v3 and fb237_v4 when clusters of rules are learned on a bigger set of triples (union of validation and test set of the training graph), which is however still too small for NRNO to be able to recognize redundancies.\n\nWe also found it difficult to run fair evaluations to compare to results published in the literature, because to calculate hits@10 they compare the score of the positive triple to the scores of 50 randomly sampled negative triples. However, this set of negative triples was never published and using a different set of such triples can heavily impact the evaluation. The results above were evaluated using all possible negative triples in a dataset.\n\n\n```\n MRR hits1 hits10\nfb237_v1\nMax 0.377 0.317 0.466\nGrid (NRNO) 0.359 0.300 0.441\nGrid (NRNO) TestValid* 0.374 0.307 0.463\nNoisy 0.355 0.290 0.461\n\nfb237_v2\nMax 0.483 0.383 0.641\nGrid (NRNO) 0.447 0.354 0.606\nGrid (NRNO) TestValid* 0.462 0.356 0.632\nNoisy 0.433 0.316 0.645\n\nfb237_v3\nMax 0.452 0.346 0.623\nGrid (NRNO) 0.444 0.334 0.619\nGrid (NRNO) TestValid* 0.454 0.342 0.628\nNoisy 0.407 0.284 0.619\n\nfb237_v4\nMax 0.427 0.322 0.616\nGrid (NRNO) 0.427 0.318 0.615\nGrid (NRNO) TestValid* 0.433 0.326 0.620\nNoisy 0.380 0.269 0.589\n\nWN18RR_v1\nMax 0.705 0.652 0.809\nGrid (NRNO) 0.699 0.644 0.803\nGrid (NRNO) TestValid* 0.700 0.646 0.803\nNoisy 0.676 0.606 0.801\n\nWN18RR_v2\nMax 0.695 0.633 0.769\nGrid (NRNO) 0.691 0.627 0.769\nGrid (NRNO) TestValid* 0.695 0.633 0.769\nNoisy 0.689 0.625 0.769\n\nWN18RR_v3\nMax 0.400 0.352 0.466\nGrid (NRNO) 0.400 0.352 0.467\nGrid (NRNO) TestValid* 0.400 0.352 0.468\nNoisy 0.400 0.352 0.466\n\nWN18RR_v4\nMax 0.634 0.585 0.706\nGrid (NRNO) 0.634 0.584 0.706\nGrid (NRNO) TestValid* 0.634 0.584 0.706\nNoisy 0.626 0.572 0.704\n```\n*Clusters of rules learned on the combination of test and validation set of the training graph. Again: This is not to be confused with the test set of the test graph that is used for evaluation.", "role": "user" }, { "content": "Thanks for the authors' response. Most of my concerns have been addressed. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
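The evaluation-protocol concern raised in this response (ranking the positive triple against 50 sampled negatives versus against all possible corruptions) reduces to the following computation; this is a minimal sketch, not the code used by any of the cited papers.

```python
def hits_at_k(pos_score, neg_scores, k=10):
    """Return 1.0 if the positive triple ranks within the top k among itself and
    the given negative candidates (higher score = better)."""
    rank = 1 + sum(1 for s in neg_scores if s > pos_score)
    return 1.0 if rank <= k else 0.0

# Ranking against 50 sampled negatives (the protocol of the cited inductive-LP
# papers) is easier and depends on which negatives were drawn; ranking against
# all possible negative triples, as done for the numbers above, does not.
```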
jbnn1iNNsUj
AKBC 2021 Conference
Reviewer_F5Kc
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_F5Kc, and the following are the paper you need to review:\n```<<jbnn1iNNsUj>>```", "role": "user" }, { "content": "Title: Interesting task and systems for multilingual fact linking, some shortcoming \nReview Content: The contribution of this paper is two-fold. \nFirst, the authors create (and make available) IndicLink, a dataset of multilingual fact linking in 5 indian languages + English. The dataset is translated from English, from a portion of WebRED by annotators. The authors attempt to coin \"multilingual fact linking\" and propose a formalization of the problem. \nSecond, the authors evaluate current bi-encoder and cross-encoder retrieval and reranking architectures for fact linking using an multilingual transformer model to encode facts (subject, predicate and object names) and additionally \npropose a system following the mGenre paradigm where in the standard bi/dual encoder + reranking cross-encoder, the reranking cross-encoder is replaced by a generative (constrained) decoder.\nThe authors propose an evaluation of their system on their dataset, against other current systems on all 5 languages and english and also perform various ablations of the model in order to determine the effect of the different components on the final results. \n\n\nStrong points: \n- Interesting system that shows promising results. \n- A new dataset that can be of interest to the community. \n- Relatively well written \n\nShortcomings: \n - Potential bias in the dataset construction due to the translation from English as pointed out by another reviewer. \n - Poor justification for the choice of the specific Indian languages used in the dataset. Some of the languages belong to different sub-families, which is good, but the selection criteria aren't clear. For example if the criterion was the number of speakers, then surely, a language like Bangla would have to be included. Other European languages could be included too as they are available in the source dataset. \n - The formalization for the task is difficult to parse and perhaps too arcane (doesn't follow usual conventions). As such the textual description is much easier to understand. 
\n - No error analysis (an appendix shouldn't be used to finish the paper; the error analysis has as much a place in the paper itself as the ablation studies). \n\nDetailed comments and questions: \n- Please clearly state the definition of \"fact\" for the task of fact interlinking. \"Fact\" has many conflicting and very different definitions in related subfields of computer science, even though the venue is clearly oriented towards fact as a synonym for assertion in a knowledge base, it's still important to say so in the introduction. \n- Please improve the formalization; that's a very strange way of modeling binary relations (predicates are a binary relation between subject and object). The set of facts is clearly contained in the cartesian product of R \\subseteq ExE, why isn't it defined as such? Equation 1 doesn't need to be so complex, if a fact is synonymous with triple, you can just state {F_i \\in R| T_m \\entails F_i}. \nSince there is no formal way of defining that a \"fact\" is contained or expressed in a textual fragment or an utterance, you could very well use the entailment symbol (just a suggestion, this is far from being the only way). It's also likely that a diagram would go a long way towards explaining the task in a precise yet clear manner. \n- What prevents the application of the methods to standard monolingual datasets for fact linking? An external benchmark besides the authors' own dataset would strengthen the approach. \n- Tables 3 onwards aren't placed optimally and are far from where they're commented.\n- Equations 3 and 4 would again benefit from being represented schematically. \n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Choice of languages, Monolingual fact linking and improved presentation\n**Effect of Translationese artifacts**:\n\nWe deliberated on the choice of native text vs translationese during IndicLink design, and decided to go ahead with translations noting the effort spent in creating WebRED (Ormandi et. al., 21) for English sentences and the difficulty in finding experts who can do the same in other languages. Therefore, we decided to proceed with translating already available WebRED English sentences into Indian languages. This allowed us to extend to more languages than would otherwise be possible. \n\n**Lack of non-Indian languages**:\n\nIndian languages have been historically underserved in the NLP community, particularly for Information Extraction tasks. In an effort to remedy this situation, we decided to use our available budget to exclusively focus on Indian languages. Apart from fact linking, IndicLink forms the first resource for evaluating important IE tasks such as entity linking and relation classification for web text in Indian languages. \n\n**Choice of Indian languages**:\n\nWe chose the six Indian languages based on diversity of language families and geographical distribution: \n1. Dravidian languages: Telugu, Tamil in South India, \n2. Indo-Aryan languages: Hindi, Urdu from North/Central India, Assamese and Gujarati from East and West, respectively. \nhttps://en.wikipedia.org/wiki/Indo-Aryan_languages#Groups provides a detailed geographical classification of Indo-Aryan languages.
\n\nWe particularly chose low-resourced Assamese (same sub-family as Bangla) because of the lack of available machine translation support in Google Translate, allowing us to evaluate the zero-shot performance of the ReFCoG model on Assamese sentences. \n\n**Application to Monolingual Fact Linking**:\n\nWe do apply the ReFCoG model to fact linking where both the input sentence and the fact labels are of the same language (Table 4: EL fact labels and EN text). However, we note the difficulty in comparing with prior datasets that are designed primarily for fact extraction and not linking. \n\nFor example, SemEval 2010 is used to extract facts from the sentence. But to pose it as fact linking, we need an oracle set of facts from which we can choose the correct subset relevant for the sentence. These facts don’t belong to a KG like Wikidata or DBPedia and hence are difficult to compare. \nOn the other hand, T-Rex facts do belong to Wikidata but are annotated using distant supervision and hence don’t necessarily represent the ground truth. WebRED overcomes all of these issues and hence we apply our technique on WebRED examples.\n\n**Presentation**:\n\nWe would like to thank the reviewer for pointing out ways to improve the presentation in the paper. We have incorporated the suggested changes in the revised draft: 1. improving the task formalization, 2. adding error analysis, 3. clarifying fact definition, 4. schematic representation of alternative cross encoder, 5. adjustment of tables.\n\nOrmandi, R., Saleh, M., Winter, E., & Rao, V. (2021). WebRED: Effective Pretraining And Finetuning For Relation Extraction On The Web. arXiv preprint arXiv:2102.09681.\n", "role": "user" }, { "content": "I acknowledge the authors' response. It adequately addresses the points I raised. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
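One way the reviewer's suggested formalization could be rendered is sketched below. This is illustrative only; the symbols E, P, F and the entailment-style relation are assumptions for the sketch, not the paper's actual notation.

```latex
% Entities E, predicates P, KG facts F as triples, and the facts expressed by a
% text fragment T_m selected via an entailment-style relation (reviewer's suggestion).
\[
  \mathcal{F} \subseteq \mathcal{E} \times \mathcal{P} \times \mathcal{E},
  \qquad
  \mathrm{MFL}(T_m) = \{\, F_i \in \mathcal{F} \mid T_m \models F_i \,\}.
\]
```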
jbnn1iNNsUj
AKBC 2021 Conference
Reviewer_dGrx
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_dGrx, and the following are the paper you need to review:\n```<<jbnn1iNNsUj>>```", "role": "user" }, { "content": "Title: Good multilingual fact linking corpus with interesting model, but missing an obvious baseline\nReview Content: This paper studies the problem of fact linking -- linking facts from a knowledge graph (KG) to a given sentence where the fact is mentioned. More specifically, recognizing the sparsity and skewness of KG entity and relation labels in terms of language coverage, this paper proposes to study the multilingual setting, where the input sentence is in an underrepresented language and one needs to link KG facts, often only expressed in more high-resource languages, to the input sentence. A new test set for 6 Indian languages is created by manually translating sentences from WebRED with professional translators. The paper also proposes a model for this task based on the Dual Encoder - Cross Encoder architecture, but unlike conventional Cross Encoder, it follows mGENRE and proposes a constrained autoregressive decoder to generate valid facts. It is shown that the proposed model outperforms re-ranking based Cross Encoders by a large margin on the new test set.\n\nStrengths\n- A new problem and dataset with an emphasis on underrepresented languages in NLP study\n- Strong baseline model with autoregressive decoder for fact linking\n- Empirical results seem promising\n\nWeaknesses\n- An obvious baseline is missing, where one uses some off-the-shelf machine translation tool to translate the English labels in the KG to the target language, which effectively turns the multilingual problem into monolingual. Discussion and demonstration of the inefficacy (if there's any) of that is needed.\n- It's surprising that constrained generation leads to worse performance than free-form generation, and there lacks a convincing analysis and explanation for that. In principle, the constrained version should be strictly better than the free-form version, unless there are unnecessary constraints or something happens with the optimization. 
Since it's a major contribution of this paper, it merits more thorough discussion.\n\nMinor\n- Table 2, missing \"K\" in the Assamese row?\n- Related work, \"Fact Extraction\", \"whereas fact extraction system do not\": \"system\" -> \"systems\"\n- I find it a bit odd to capitalize phrases like \"Entity Linking\", \"Coreference Resolution\", and \"Multilingual Fact Linking\" in the main text. Probably better to be lower-cased.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Machine Translation baseline, Constrained generation issue\n**Machine Translation baseline**:\n\nWe thank the reviewer for pointing out this important baseline. However, we note that this requires access to language-specific resources such as a translation system which may not be readily available at inference time or may be expensive to use. Translating the 4.5 million English facts into 6 languages can be rather challenging. Moreover, some low-resourced languages may not even have translation systems readily available. For example, among the IndicLink languages, Google Translate does not support Assamese. Therefore, in line with other multilingual tasks like TyDiQA (Clark et. al., 20), we hope to approach this problem by relying on the cross-lingual ability of the model. \n\n**Issue with constrained generation**:\n\nWe agree that the constrained version is expected to be strictly better than the free-form version and the current results are contradictory to this. On a closer look, we uncovered that the trie used for the constrained system was incomplete (missing some valid fact labels) and so the unconstrained system was allowed to produce more valid facts than the constrained system. Once we correct this issue, constrained generation does indeed outperform the unconstrained model by 0.6 P@1 and 2 R@5. We have updated these results in Table 3 and Table 4 of the latest draft. We note that all the other presented conclusions remain the same.\n \n**Minor Suggestions**:\n\nWe have incorporated the suggested changes in related work and removed capitalization of tasks like entity linking, relation classification in the latest draft. We note that in Table 2, there are indeed only 257 Assamese labels for facts, stemming from the lack of Wikidata fact labels in low-resourced languages. \n\n\n\nClark, J. H., Choi, E., Collins, M., Garrette, D., Kwiatkowski, T., Nikolaev, V., & Palomaki, J. (2020). TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8, 454-470.\n", "role": "user" }, { "content": "Thank you for the authors' feedback. It adequately addresses the points I raised. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
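The trie issue described in this rebuttal is easiest to see with a small sketch of prefix-constrained decoding in the style popularized by GENRE/mGENRE: at each step, only tokens that continue some valid fact label are allowed, so any valid label missing from the trie can never be generated. This is not the authors' code; the token ids are hypothetical.

```python
# Illustrative sketch of trie-constrained decoding over valid fact labels.

class Trie:
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def allowed_next(self, prefix):
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []          # prefix not in trie -> nothing is allowed
            node = node[tok]
        return list(node.keys())

# Hypothetical token-id sequences for two valid fact labels:
trie = Trie([[5, 7, 2], [5, 9, 2]])
print(trie.allowed_next([5]))      # [7, 9]
print(trie.allowed_next([5, 7]))   # [2]

# With a HuggingFace-style generate(), a function like trie.allowed_next would
# typically be plugged in via prefix_allowed_tokens_fn (assumed usage here).
```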
jbnn1iNNsUj
AKBC 2021 Conference
Reviewer_pfgr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_pfgr, and the following are the paper you need to review:\n```<<jbnn1iNNsUj>>```", "role": "user" }, { "content": "Title: Solid but limited multilingual fact linking resource\nReview Content: === Summary ===\n\nThis paper considers the task of multilingual fact linking, where the goal is to link abstract representations of facts to their language-specific representations in multiple languages. The challenge is that although \"worldly facts\" themselves are language-agnostic, annotations of how they appear in text form (e.g., described in a sentence or named) are often restricted to only a few languages. The proposed task of multilingual fact linking seeks to link KG facts with their mentions in multilingual text, even when the label of the fact doesn't match the language of the text (e.g., matching <NYC, country, USA> to a Hindi text that expresses this).\n\n=== Strengths ===\n\n- The paper is mostly well-written. It's a bit hard to follow exactly what the task setting is (which components are in language A and which are in language B). It would be much clearer if a simple, formal definition was given in the introduction. It also would be good to perhaps highlight the MFL row in table 1 as \"this paper\".\n\n- Overall, the task is well motivated and the introduced dataset, despite some limitations (see below), may be helpful for further multilingual development.\n\n- The paper proposes a retrieval + generation model (ReFCOG) that performs well as compared to retrieval + reranking models, which provides a good starting point for the task.\n\n=== Weaknesses ===\n\n- One of the biggest downsides is that the evaluation dataset (IndicLink) is (a) limited in typological scope to a select family of languages, and (b) translated from English. The question of \"why not translate\" is addressed nicely in Section 3.2 of Clark et. al. 2020 (https://arxiv.org/pdf/2003.05002.pdf). Thus, there is a concern that the artifacts introduced by \"translationese\" may considerably hurt generalization performance to purely native text.\n\n- The proposed ReFCOG model is composed of several existing components previously proposed in multi-lingual work, and seems more like a baseline than a purely \"novel\" approach. 
That's ok, as it's good to have documented performance of simple to slightly complex models when introducing a new dataset (but the messaging might be better off slightly altered to reflect that).\n\n=== Questions ===\n\n- Why does constrained generation *reduce* performance? It seems that by restricting generation to only valid facts, it wouldn't be filtering any beam candidates that reflect possible correct answers. So, for any unconstrained generation that would be correct, why would it not still be the top candidate from the constrained search? It would seem that the set of correct answers should be a perfect subset.\n\n=== Missing references ===\n\n- The related work could be more complete in included related work from multilingual fact retrieval, for example, Jiang et. al. (https://aclanthology.org/2020.emnlp-main.479.pdf).\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Choice of languages, Novelty and Constrained generation issue\n**Effect of Translationese artifacts**:\n\nWe deliberated on the choice of native text vs translationese during IndicLink design, and decided to go ahead with translations noting the effort spent in creating WebRED (Ormandi et. al., 21) for English sentences and the difficulty in finding experts who can do the same in other languages. Therefore, we decided to proceed with translating already available WebRED English sentences into Indian languages. This allowed us to extend to more languages than would be otherwise possible. \n\n\n**Lack of non-Indian languages**:\n\nIndian languages have been historically underserved in the NLP community, particularly for Information Extraction tasks. In order to make progress on this front, we decided to use our available budget to exclusively focus on Indian languages. Apart from fact linking, IndicLink forms the first resource for evaluating important IE tasks such as entity linking and relation classification for web text in Indian languages. \n\n**Novelty**:\n\nThe ReFCoG model demonstrates the benefits of augmenting generation with retrieval models for IE tasks. Prior entity linking systems (Botha et. al., 20) relied on retrieval based models while recent works (Nicola et. al., 21) have proposed constrained generation models as a replacement to retrieval models. \n\nReFCoG is the first system to successfully demonstrate that both the approaches can be combined to complement each other and achieve strong performance. We believe that this is a general observation that can also benefit entity linking. This can serve as a starting point for exploring end-to-end ways of combining both the retrieval and generation paradigms in the context of entity linking.\n \n**Issue with constrained generation**:\n\nWe agree that the correct facts output by unconstrained generation must be a proper subset of the constrained generation and the current results are contradictory to this. On a closer look, we uncovered that the trie used for the constrained system was incomplete (missing some valid fact labels) and so the unconstrained system was allowed to produce more valid facts than the constrained system. Once we correct this issue, constrained generation does indeed outperform the unconstrained model by 0.6 P@1 and 2 R@5. We have updated these results in Table 3 and Table 4 of the latest draft. 
We note that all the other presented conclusions remain the same.\n\n**Presentation and References**:\n\nWe thank you for pointing out the challenges involved in understanding the task formulation in the submitted version. We have added a better description of the task in the introduction, clarified the MFL task in Table 1 and simplified the formulation in Section 3 in the latest uploaded draft.\n\nWe have added the Jiang et. al., 2020 reference but we note that the work addresses a fundamentally different problem of “retrieving” facts from the memory of pretrained LM. Instead, we focus on retrieving only a subset of facts that are relevant to the sentence. \n\nBotha, J. A., Shan, Z., & Gillick, D. (2020). Entity linking in 100 languages, EMNLP, 2020\nDe Cao, N., Izacard, G., Riedel, S., & Petroni, F. (2020). Autoregressive entity retrieval, ICLR, 2021\nOrmandi, R., Saleh, M., Winter, E., & Rao, V. (2021). WebRED: Effective Pretraining And Finetuning For Relation Extraction On The Web. arXiv preprint arXiv:2102.09681.\n\n", "role": "user" }, { "content": "I appreciate the authors' reply. The reply answers almost all of my questions. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
kmVA04ltlG_
AKBC 2021 Conference
Reviewer_Bz1V
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Bz1V, and the following are the paper you need to review:\n```<<kmVA04ltlG_>>```", "role": "user" }, { "content": "Title: Interesting new source of weak supervision. Would be stronger with comparison to more recent work in zero-shot classification.\nReview Content: This paper presents a method of extracting document class labels from the Wikipedia category hierarchy; the set of sub-areas on StackOverflow; and the list of sub-reddits on Reddit. These pairs are then used to train a BERT-based binary classifier that predicts whether a category matches a document. Since categories are named with natural language, the model can be applied to new classification tasks that have natural language labels.\n\nWhen applied to a number of few-shot classification taskss, the model outperforms baselines that don't make use of large pre-trained language models. On average, it is beneficial to use all three sources of distant supervision, but this is not true for each individual task.\n\nThe results would be stronger with a comparison to a zero-shot approach that makes use of a large pre-trained LM but doesn't make use of the source of weak supervision. Currently, it is not clear where the accuracy improvements are coming from. An alterative way to show the importance of the NatCat data would be to report accuracy results for the BERT/RoBERTA models that have been trained on 1%/10%/50%/100% of the available dataset.\n\nStrengths:\n- Aggregation of three different sources of weak supervision is yielding nice results on zero-shot classification.\n- Data may be useful resource for future work.\n\nWeaknesses:\n- Experiments don't properly untangle impact of NatCat weak supervision vs. adoption of large pre-trained LM.\n- No comparison to recent work on zero-shot classification.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Performance of Pretrained LM without NatCat and Comparison to recent zero-shot classification. \nThanks for the feedback! Below are our responses: \n> Experiments don't properly untangle impact of NatCat weak supervision vs. 
adoption of large pre-trained LM.\n\nWhen classifying with the pretrained BERT model (not fine-tuned on NatCat), following the model setup described in section 4.1 with cosine similarity as the scoring function, we get model prediction accuracies of 30.2% for AGNews, 30.0% for DBPedia, and 19.0% for Yahoo. Compared to 75.6% for AGNews, 82.8% for DBPedia, and 54.9% for Yahoo from Table 4, fine-tuning on NatCat greatly improves model performance on text classification tasks. \n\n> No comparison to recent work on zero-shot classification.\n\nWe compared to Yin et al. at Table 8 in the appendix. We also compared to Puri and Catanzaro (GPT2) at Table 6 in the appendix. We could move that up to the main text in the final version. These were the zero-shot approaches we were aware of. Are there more works reviewers would suggest us to compare to? ", "role": "user" }, { "content": "I appreciate the authors' reply. The reply answers almost all of my questions. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
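The setup compared in this exchange (a NatCat-fine-tuned pair classifier versus an untuned BERT scored by cosine similarity) boils down to scoring every (document, candidate label) pair and predicting the highest-scoring label. The sketch below is illustrative, not the authors' code; `score_pair` stands in for whichever scorer is used, and the toy lexical-overlap scorer exists only to make the example runnable.

```python
# Illustrative sketch: zero-shot classification by scoring each candidate label.

def zero_shot_classify(document, candidate_labels, score_pair):
    """score_pair(document, label) -> higher means the label fits better."""
    scores = {label: score_pair(document, label) for label in candidate_labels}
    return max(scores, key=scores.get), scores

# Toy scorer (word overlap) just to exercise the function:
toy = lambda doc, lab: len(set(doc.lower().split()) & set(lab.lower().split()))
pred, _ = zero_shot_classify("sports fans celebrated as the team won the championship",
                             ["sports", "business", "science"], toy)
print(pred)  # "sports"
```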
kmVA04ltlG_
AKBC 2021 Conference
Reviewer_MExX
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_MExX, and the following are the paper you need to review:\n```<<kmVA04ltlG_>>```", "role": "user" }, { "content": "Title: Good general-purpose text classification resource with lack of proper validation\nReview Content: # General comments\nThis paper presents a dataset and evaluation benchmark for weakly supervised text classification. The main resource, NatCat, contains a large number of diverse real-world documents and categories, making it appealing for general-purpose off-the-shelf text classification. The way of generating weakly supervised data is efficient, intuitive and easy to reproduce. The baseline models trained on this resource easily outperform previous weakly supervised approaches.\n\nThe main weakness of this work stems from the lack of support for its main hypothesis. This is a training and evaluation resource but instead of comparing against other resources, the main comparison (Tables 4, 5) is between multiple models trained on the same resource. If the motivation for this work is that NatCat is a better resource for text classification, the experiments need to demonstrate exactly that. \n\nSimilarly for the CatEval benchmark - currently, the contribution is the consolidation of multiple existing evaluation datasets. There is nothing in the paper to test if CatEval is a better resource than existing benchmarks for this task or whether the datasets comprising CatEval are the best choices for the purpose.\n\nAnother practical consideration regarding any weakly supervised †raining resource is how much domain-specific fully supervised data does it take to get a better performance.\n\n\n# Specific comments/questions\nIt's not clear what the example is Table 1 illustrates. Are the highest-scoring categories from NatCat a good fit? As there is no overlap with the AGNews categories (including the true one) wouldn't the score of the example be 0? Furthermore, if the categories from AGNews are part of NatCat (as is claimed in Section 3.1) why are they not appearing even further down the list?\n\nNone of the datasets used in CatEval has nearly as many labels as there are in NatCat (especially the Wikipedia part). Does this cause problems for the models? 
Would it be better for the number of categories in NatCat to be consolidated to match those found in CatEval?\n\nWhy are only the common categories (subreddits) chosen from Reddit, but not Wikipedia? What are some examples of subreddits that fell below the threshold?\n\nThe result of \"superhuman\" performance is a bit suspicious. To confirm this is semantic drift from the annotators' side, the same annotation should be performed with training sessions for each annotator. The alternative possibility is that the categories aren't applied consistently among the users of the websites comprising the dataset (which is plausible in community-generated content).\n\nI disagree with the analysis in 5.1 claiming that Wikipedia is a better resource for training sentiment classifiers than Stack Exchange. Wikipedia's editorial style aims to be as objective as possible.\n\n\n# Minor comments\n- Incorrect capitalisation in the second paragraph of Section 1 (\"of NatCat, We\" -> \"of NatCat, we\")\n- Inconsistent non-textual citation style applied to some references ([Yogatama et al., 2017] instead of Yogatama et al. [2017])\n- Add the #words/doc statistic to Table 2.\n- I don't think median was the statistic used in reporting the results over 5 runs (Section 4.1). Maybe the authors meant mean? Also, the standard deviation/error should be reported for completeness.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Clarification of the NatCat resource and some experimental settings \nThanks for the valuable feedback! Below are our responses. \n\n> It's not clear what the example is Table 1 illustrates ...\n\nTable 1 is an example to show that NatCat can be used to score any document-category pairs for text classification tasks. The row “NatCat” shows categories from the NatCat resource. In particular, these categories are from the Wikipedia dataset. We put this row here to show that models trained on NatCat can be used to classify common categories (from Wikipedia). However, if we want to use NatCat for a particular text classification task, we may also just treat it as a scoring and ranking task, to first score document-category pairs of each category, and pick the highest scored category as the predicted label. Thanks for pointing this out, we will add more descriptions in the caption of Table 1, so the readers won’t be confused. \n\nAGNews categories do appear further down the list. The list is so long (millions of labels) and the AGNews labels are not as high scoring as the labels shown. The NatCat labels as shown in Table 1 are indeed closer to the news text compared to “International” (Label from AGNews). \n\n> None of the datasets used in CatEval has nearly as many labels as there are in NatCat (especially the Wikipedia part). Does this cause problems for the models? Would it be better for the number of categories in NatCat to be consolidated to match those found in CatEval?\n\nWe don’t think this is necessary. Our way of using NatCat to fine-tune BERT/RoBERTa models essentially builds models that are capable of scoring any document-category pairs. Presumably, it learns to capture the semantic relatedness or similarity between the document text and category name or descriptions. Our goal is to make such a model as general-purpose as possible and can be applied to a variety of downstream text classification tasks. 
With the NatCat resource constructed and made available, future work could explore several interesting follow-up questions, including whether it's beneficial to focus training on a small number of high-level categories.\n\n> Why are only the common categories (subreddits) chosen from Reddit, but not Wikipedia? examples of subreddits ...\n\nWe find subreddit names are generally more noisy, and a lot of them are abbreviations that only a certain group of people knows. Wikipedia category names are more formal and understandable by most people. Hence we decide to only keep the most common categories from Reddit. Below are some example category names from Subreddit (with its ranking by frequency): archeage (3001), Deutsche_Queers (10000), HartfordWhalers (20000), pyladies (30000), Religitards (40000), TheArnoldFanClub (50000), HomeSeer (60000), 001 (68627). \n\n> The result of \"superhuman\" performance is a bit suspicious ... \n\nThat’s a good point. We actually want to convey the message that “text classification” can suffer from overfitting, in a sense that if human beings cannot outperform machine-trained models, that “superhuman” performance may not be that meaningful. In this sense, NatCat fine-tuned models do not seek to outperform or even get close to state-of-the-art models on any particular text classification tasks, because a real person with reasonable world knowledge won’t outperform state-of-the-art machine-learned models. As you were getting at, we suspect the reason why human performance is below that of supervised models is that the categories were generated naturally by online communities rather than by annotators with annotation guidelines or training examples. We chose to conduct our human evaluation with the setting in which humans are solely given label names rather than guidelines or training examples, in order to reflect the setting of our NatCat-trained classifiers.\n\n> I disagree with the analysis in 5.1 claiming that Wikipedia is a better resource for training sentiment classifiers than Stack Exchange. Wikipedia's editorial style aims to be as objective as possible.\n\nYeah, that’s a good point. We were interpreting the results from Table 5 directly here. Wikipedia is likely to contain more knowledge about sentiment words and phrases, such as certain words or phrases belonging to a certain emotion, and that emotion can be associated with positive or negative sentiment. StackExchange categories are very engineering-focused and may provide less knowledge for sentiment classification tasks. However, Table 5 shows that none of Wikipedia, StackExchange, or Reddit is a perfect resource for sentiment classification tasks. We can soften the tone in the final version of the paper to say that Wikipedia provides more knowledge about sentiment in its category structure compared to Stack Exchange, but it is by relative measure. In general, NatCat is not ideal for sentiment tasks but good for topical classification. \n", "role": "user" }, { "content": "Title: Thanks for the detailed clarifications\nI would like to thank the authors for providing clear and detailed answers to my questions. Regarding subreddit naming your point makes a lot of sense and the examples are helpful to understand the reason of the threshold of 3k. However, I suspect there are subreddit names above the threshold that have the same problem of \"in-group meaning\" - (e.g. r/Superstonk or r/facepalm). 
A more in-depth analysis of the categories that are in the dataset would be interesting.\n\nI would also like to encourage you to have a look at my general comments, especially the point about providing evidence about the main hypothesis put forward by this work. While NatCat is undeniably a useful resource, I believe you can make a much stronger research paper if you address these issues.", "role": "assistant" }, { "content": "Title: Thanks for the advice! Add more response. \nThanks for the advice! Here are some more responses. \n\n> A more in-depth analysis of the categories that are in the dataset would be interesting.\n\nWe can work on that and add some analysis of category names in the paper as you suggested. \n\n> I would also like to encourage you to have a look at my general comments, especially the point about providing evidence about the main hypothesis put forward by this work. While NatCat is undeniably a useful resource, I believe you can make a much stronger research paper if you address these issues.\n\nWe could compare with Yin et al. However, Yin’s work is also based on Wikipedia (documents and their immediate categories). This is similar to what we demonstrated in Table 5, i.e., with the Wikipedia resource only. We could add some text in the paper saying that “WikiOnly” is close to Yin et al., except we did more processing of category names and parent categories, as described in “Section 2 → Wikipedia”. \n\nWe could also provide a comparison to other pretrained language models, as the other reviewer suggested. Such language models are trained purely on text without naturally annotated categories. When classifying with the pretrained BERT model (not fine-tuned on NatCat), following the model setup described in section 4.1 with cosine similarity as the scoring function, we get model prediction accuracies of 30.2% for AGNews, 30.0% for DBPedia, and 19.0% for Yahoo. Compared to 75.6% for AGNews, 82.8% for DBPedia, and 54.9% for Yahoo from Table 4, fine-tuning on NatCat greatly improves model performance on text classification tasks. We could add such results as another baseline in the paper. \n\n> Add the #words/doc statistic to Table 2.\n\nYes, we can add them. \n\n> I don't think median was the statistic used in reporting the results over 5 runs (Section 4.1). Maybe the authors meant mean? Also, the standard deviation/error should be reported for completeness.\n\nWe actually meant median. This was following the approach of the RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf). The reason is that sometimes RoBERTa fine-tuning could fail and give very bad results. Training with 5 random seeds and reporting the median provides more robust results. ", "role": "user" }, { "content": "I truly enjoyed the above exchange with the author, which has significantly improved the quality of this work.", "role": "assistant" } ]
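A note on the seed-selection protocol discussed at the end of the exchange above: reporting the median over several fine-tuning runs, as in the RoBERTa paper, guards against the occasional diverged run. The sketch below only illustrates that reporting step; the scores shown are made-up placeholders, not numbers from the paper.

```python
# Minimal sketch of "run with several random seeds, report the median".
# Scores are hypothetical; a diverged run (10.3) barely moves the median.
import statistics

def median_over_seeds(dev_scores):
    """Return the median dev-set score across runs with different random seeds."""
    return statistics.median(dev_scores)

print(median_over_seeds([75.2, 75.8, 74.9, 10.3, 75.5]))  # -> 75.2
```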
kmVA04ltlG_
AKBC 2021 Conference
Reviewer_TZXz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_TZXz, and the following are the paper you need to review:\n```<<kmVA04ltlG_>>```", "role": "user" }, { "content": "Title: Useful resources + nice distantly supervised results; Old baseline + overfitted semi-supervised results\nReview Content: **Summary:** The paper considers the task of distantly supervised text classification. The model cannot access the labels in the target tasks. Instead, weak labels constructed from Wikipedia, Stack Exchange, and Reddit are used to train the model. The main contributions are:\n\n- Scraped and formatted weakly labeled texts from Wikipedia (labels = categories), Stack Exchange (labels = subarea names), and Reddit (labels = subreddit names).\n- A compiled benchmark with 11 datasets. The datasets are diverse (topic, sentiment, and emotion labels; single-label and multi-labels).\n- A recipe to train the model on weak labels (apply BERT/RoBERTa on the input concatenated with the label, then predict a score). The paper also shows results on other potential recipes (e.g., cosine similarity to the label embedding, or generative models such as GPT-3) in the appendix.\n\n**Strengths:**\n\n1. The experiments are very thorough.\n - Alternative solutions considered are stated and their results are included in the appendix.\n - The experiments were on a diverse collection of tasks with different label spaces and numbers of labels per input.\n - The ablation and error analysis are also informative. The comparison against human raters also show that the types of errors made are reasonable.\n2. The resources provided would be very useful for other researchers.\n\n**Weaknesses:**\n\n1. If I am not mistaken, the main baseline is quite old (a pre-neural model from 2008).\n - That said, there is a comparison against Yin18 at the end of Section 4.3. I wish this is used as the main baseline. Is these results comparable to the setup in Table 4?\n2. In practical scenarios, a weakly supervised model is often fine-tuned on at least a small amount of target data (so that it's familiar with the target label space). The paper does consider this semi-supervised setup in Appendix D (the \"Half Seen\" setting). 
The result in Table 8 shows that the model greatly degrades on the \"unseen\" subset after fine-tuning on the \"seen\" subset, which is undesirable.\n - However, this overfitting issue might be fixable via regularization, such as a distance-based penalty when the parameters move too far from the initial parameters.\n - The overfitting problem seems to also manifest in Section 5.2, where training on a bigger training dataset hurts.\n\n**Other comments:**\n\n1. Section 3.1: \"The choice of label names can have a large impact on performance.\" --- It would be interesting to do an ablation on the choice of target label names.\n2. Section 4.1: Are the negative labels chosen randomly? Wouldn't such negatives be too easy to solve by the model?\n3. Section 4.1: For the \"encode the document and category using pretrained models\" --- are these pretrained on the NatCat task? If not, how well does the cosine-similarity model tuned on NatCat perform?\n4. Section 4.3: Why is the human performance on Yahoo much higher than the models (including the supervised one)?\n5. It is still strange that the model trained on Wikipedia labels is so good at sentiment (Table 5). Are there Wikipedia categories that resemble sentiment or emotion labels? The Reddit results are more understandable due to sentiment-like subreddit names (like oddlysatisfying or mildlyinfuriating).\n\n**Edit suggestions:**\n- Section 1: \"StackOverflow\" --> \"Stack Exchange\"?\n- Section 4.3: Yin 18 has an incorrect citation. Should be \"Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach\"\n- Section 6: \"Yin et al. [2018] and\" --- wrong Yin18 and a missing second reference.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "user" }, { "content": "Title: Clarifications on some experimental results\n> If I am not mistaken, the main baseline is quite old (a pre-neural model from 2008).\nThat said, there is a comparison against Yin18 at the end of Section 4.3. I wish this is used as the main baseline. Is these results comparable to the setup in Table 4?\n\nResponse: These results are comparable. We will expand on those results in the next version by clarifying that they are comparable and discussing the comparison in more detail.\n\n> Section 3.1: ... --- It would be interesting to do an ablation on the choice of target label names.\n\nThanks for the advice! We can do that and add some results in the main body (if space permits) or appendix. \n\n> Section 4.1: Are the negative labels chosen randomly? Wouldn't such negatives be too easy to solve by the model?\n\nYes, they are randomly sampled. We did not sample harder negative examples in our experiments. \n\n> Section 4.1: For the \"encode the document and category using pretrained models\" --- are these pretrained on the NatCat task? If not, how well does the cosine-similarity model tuned on NatCat perform?\n\nResponse: These models are not fine-tuned on the NatCat resource. With the model setup described in section 4.1 and cosine similarity as the scoring function, we get model prediction accuracies of 30.2% for AGNews, 30.0% for DBPedia, and 19.0% for Yahoo. Compared to 75.6% for AGNews, 82.8% for DBPedia, and 54.9% for Yahoo from Table 4, fine-tuning on NatCat greatly improves model performance.\n\n> Section 4.3: Why is the human performance on Yahoo much higher than the models (including the supervised one)?\n\nGood observation/question. 
The Supervised result for Yahoo is cited from Zhang et al.’s CharCNN text classification paper (https://arxiv.org/pdf/1509.01626.pdf). With BERT we could expect higher accuracy (77.6% in Sun et al. https://arxiv.org/pdf/1905.05583v3.pdf ). Hence Yahoo is not an outlier in this sense, but it is indeed noisier and harder to solve. As discussed in the paper: “In Yahoo, society & culture is often misclassified as education & reference, politics & government, and business & finance. Health is often misclassified into science & mathematics, family relationships as society culture.” These categories are hard to tell apart, and such documents can plausibly be classified into multiple categories. We can update the final version with more recent BERT-based supervised results. \n\n> It is still strange that the model trained on Wikipedia labels is so good at sentiment (Table 5). Are there Wikipedia categories that resemble sentiment or emotion labels? The Reddit results are more understandable due to sentiment-like subreddit names (like oddlysatisfying or mildlyinfuriating).\n\nWe think Wikipedia is likely to contain some knowledge about sentiment words and phrases, such as certain words or phrases belonging to a certain emotion, and that emotion can be associated with positive or negative sentiment. The category of emotion (https://en.wikipedia.org/wiki/Category:Emotion) may provide a lot of such knowledge. ", "role": "user" }, { "content": "I acknowledge the authors' response. The reply answers almost all of my questions. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
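The zero-shot baseline mentioned in the rebuttal above (scoring each document-category pair with a pretrained, not fine-tuned encoder and cosine similarity) can be sketched roughly as follows. The encoder name, the mean pooling, and the label strings are illustrative assumptions, not the authors' exact section 4.1 setup.

```python
# Rough sketch: rank candidate category names for a document by cosine
# similarity between mean-pooled BERT embeddings. Assumes the Hugging Face
# transformers library; the real setup in the paper may differ.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)               # mean-pooled vector

def rank_categories(document, categories):
    doc_vec = embed(document)
    scored = [(c, torch.nn.functional.cosine_similarity(doc_vec, embed(c), dim=0).item())
              for c in categories]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Hypothetical AGNews-style labels for a short news snippet.
print(rank_categories("Stocks rallied after the quarterly earnings report.",
                      ["world", "sports", "business", "science and technology"])[0])
```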
li-3nHhT0xc
AKBC 2021 Conference
Reviewer_UY5T
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_UY5T, and the following are the paper you need to review:\n```<<li-3nHhT0xc>>```", "role": "user" }, { "content": "Title: Intuitive method with positive results.\nReview Content: Strengths:\n- Clear, well explained motivations\n- Positive results compared to baseline RoBERTa methods\n- Ablation study to show the benefits of the proposed model\n\nWeaknesses\n- Gains are not huge for EM and F1, especially for the test set\n\nThis paper presents an intuitive technique with positive results, I am a bit concerned that the observed gains might be due to noise/randomness, but at least the consistency score and low-data results seems to have been improved by a substantial margin.\n\nQuestion:\n- Were the results averaged over multiple trials? It would help substantially to show averaged results for at least the validation set, especially since some of the margins are small\n- Are these results statistically significant given the size of the test/validation sets?\n- Is the MLP from equation 7 used at test time? If not, it seems like that would limit the benefit of the contrastive loss. Is there a way to share parameters between the MLP in the contrastive loss and the logistic regression model in the event detection loss so the contrastive loss more directly effects the model's output?\n\nPresentation:\n- Equation 2: Should there be commas between the h and i in the W subscript?\n- Table 2: 50.9 is bolded even though it is not the highest score in that column, is that intentional?\n- Equation 8: “is a binary event indicator (1 or 0) if” -> “is a binary event indicator (1 or 0) and”\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and thoughtful suggestions!\nWe thank the reviewer for the constructive suggestions regarding the evaluation of our proposed method. We answer these questions in the following.\n\nQ1: I am a bit concerned that the observed gains might be due to noise/randomness, but at least the consistency score and low-data results seems to have been improved by a substantial margin. Were the results averaged over multiple trials? 
Are these results statistically significant given the size of the test/validation sets? \n\nResponse: First we want to point out that our OTR-QA has two advantages: 1) our model learns context-agnostic, free-text-based temporal relation representations, which leads to higher data efficiency. As you mentioned, our model has a significant improvement under the low-source setting, such as 5.3% and 20.4% gains on F1 using 50% and 10% training data in Table 2. 2) OTR-QA models the differences in temporal relations with a contrastive loss function so that the consistency score has been improved by a substantial margin, such as a gain of 9.4% on the test set under the RoBERTa-base setting. We appreciate you noticed these two main improvements in the experiments. All improvements are due to reformulating the temporal QA task to an open temporal relation extraction task rather than noise. \n\nSecondly, the performance gain of our model is comparable to the DEER method, which is proposed in the paper of “DEER: A Data Efficient Language Model for Event Temporal Reasoning. Han et al., 2020” in the footnote of page 8. This approach not only uses TORQUE data but also takes extra 10 million sentence passages for training the model. Even though DEER uses more data, the improvement is still similar to us (our F1 and C scores are slightly better). So the improvement under this challenge task is enough to show the benefit of our model. What’s more, this comparison further proves our OTR-QA is more data-efficient since we haven’t used additional data but achieved the SOTA performance. \n\nThirdly, our validation metrics are averaged over multiple trials. All standard deviations of our OTR-QA are less than 0.01. In addition, we have added a t-test on the validation set by comparing our method to baseline with RoBERTa-base in terms of F1 on multiple trials. The p-value is 0.016, which proves statistically significant. \n\nQ2: Is the MLP from equation 7 used at test time? If not, it seems like that would limit the benefit of the contrastive loss. Is there a way to share parameters between the MLP in the contrastive loss and the logistic regression model in the event detection loss so the contrastive loss more directly affects the model's output?\n\nResponse: This is a great suggestion. It is true that we use the MLP during training instead of testing. We appreciate your suggestion using shared parameters to take the benefit of contrastive loss. One possible way is to share the parameters for scoring answer events in Eq. 4 to do the same computation as MLP in Eq. 7. \n\nText Revisions: \nThanks for checking our paper carefully, we have fixed these in the revised version.\n", "role": "user" }, { "content": "I value the authors' response. The authors have answered almost all of my concerns. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
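Regarding the significance check added in the response above (a t-test on per-run validation F1 against the baseline), a minimal version of that comparison could look like the sketch below. The score lists are invented placeholders, and whether the authors used a paired or an independent test is not stated; an independent two-sample test is shown.

```python
# Illustrative significance check between two systems' per-seed dev F1 scores.
# Numbers are placeholders, not results from the paper.
from scipy import stats

otr_qa_f1   = [75.6, 75.9, 75.3, 75.7, 75.5]
baseline_f1 = [75.0, 74.8, 75.1, 74.9, 75.2]

t_stat, p_value = stats.ttest_ind(otr_qa_f1, baseline_f1)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a significant difference
```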
li-3nHhT0xc
AKBC 2021 Conference
Reviewer_udhY
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_udhY, and the following are the paper you need to review:\n```<<li-3nHhT0xc>>```", "role": "user" }, { "content": "Title: Solid study on open temporal relations for QA, but the empirical result seems marginal\nReview Content: **Summary**\nAuthors introduce a temporal question answering (TQA) model called OTR-QA, which decomposes a given question into a question event and a temporal relation. Their main contribution is to learn open temporal relations that do not require a predefined set of relations. The architecture is simple and trained with multiple hyperplanes. Answer event prediction loss is further assisted by two auxiliary losses (event detection loss and contrastive loss). Problem definition is straightforward (i.e., extract all the answer entities given the question entity and the relation) and the model is evaluated on TORQUE (Ning et al., 2020). Compared to the RoBERTa-large baseline by Ning et al., 2020, OTR-QA achieves +1.1% F1, -0.2% EM, and +3% C score (consistency). Authors also show that their model is more data-efficient and modeling the open temporal relations is indeed the key to achieve strong performance.\n\n**Strengths**\n* The paper is well-structured and easy to follow.\n* The ablation study in Table 3 is helpful to understand why their model works well.\n* Their model shows significant improvement over RoBERTA-base in the low-resource setting.\n\n**Weaknesses**\n* The performance of the RoBERTa-large baseline in the TORQUE leaderboard is (F1=0.7562, EM=0.5076, C=0.3691), which largely closes the gap with OTR-QA, especially in C score. I understand that authors reported the RoBERT-large performance from Ning et al., 2020, but it would be nice if authors can explain why this happens and report the performance of OTR-QA evaluated on the official leaderboard.\n* Their main result (Table 3) might have omitted comparison with previous works on TORQUE (e.g., DEER: A Data Efficient Language Model for Event Temporal Reasoning. Han et al., 2020) but only includes the baselines provided by Ning et al., 2020. 
It would be better to include them to put their numbers in context.\n* In Figure 3, it is not very clear that OTR-QA has better representations of the relations and the color scheme is also confusing.\n\nOverall, I find the paper interesting and clear, but the improvement over previous models seems marginal. It would be nice if authors can provide clarification on these weaknesses.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "user" }, { "content": "Title: Thanks for your review and thoughtful suggestions! \nThank you for the thoughtful suggestions, which will make the paper better. All comments have been carefully taken into consideration. We give the following responses to clarify the proposed method and answer your concerns.\n\nQ1: The performance of the RoBERTa-large baseline in the TORQUE leaderboard is (F1=0.7562, EM=0.5076, C=0.3691), which largely closes the gap with OTR-QA, especially in C score. I understand that authors reported the RoBERT-large performance from Ning et al., 2020, but it would be nice if authors can explain why this happens and report the performance of OTR-QA evaluated on the official leaderboard.\n\nResponse: Thanks for your question. Firstly, we reported the RoBERTa-large numbers from the paper since the other numbers also come from that paper, which keeps the comparison consistent. Secondly, in our own experiments we were only able to reproduce the performance presented in the paper, not the leaderboard performance. Thirdly, another paper, DEER [Han et al., 2020], from one of the authors who proposed the TORQUE challenge, also reported the numbers from the paper rather than from the leaderboard.\n\nQ2: Their main result (Table 3) might have omitted comparison with previous works on TORQUE (e.g., DEER: A Data Efficient Language Model for Event Temporal Reasoning. Han et al., 2020) but only includes the baselines provided by Ning et al., 2020. It would be better to include them to put their numbers in context.\n\nResponse: Thanks for your suggestion. Our paper contains an explanation about this concern in the footnote of page 8: \n“We do not include the model and results from [Han et al., 2020] because it used different training data in addition to TORQUE.” \nDEER [Han et al., 2020] created large-scale masked samples as distant supervision signals. It takes advantage of 20 years of New York Times news articles, with 10 million sentence passages, for training. So it is hard to directly compare our OTR-QA to the DEER method. \nThe performance of DEER [Han et al., 2020] with much more training data is F1 76.1, EM 51.6 and C 36.8, which is similar to ours. Thus these results suggest that our approach is data efficient, since we haven’t used additional data but still achieved SOTA performance. Some comparisons will be added in the final version.\n\nQ3: In Figure 3, it is not very clear that OTR-QA has better representations of the relations and the color scheme is also confusing.\n\nResponse: Our purpose in using Figure 3 is to show that OTR-QA learns better embeddings of open temporal relations. Here a better representation means that similar relations are grouped together tightly and different relations are well separated. The clusters for OTR-QA (right sub-figure) are much tighter and better separated. We will update these figures with a new color scheme with more clearly distinguishable colors. 
More explanations will be added in the final version.\n", "role": "user" }, { "content": "Thanks to the authors for their clarification. The response covers nearly all of the issues I raised. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
li-3nHhT0xc
AKBC 2021 Conference
Reviewer_2dLF
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_2dLF, and the following are the paper you need to review:\n```<<li-3nHhT0xc>>```", "role": "user" }, { "content": "Title: Review\nReview Content: This paper reformulates TORQUE questions (temporal-ordering-focused reading comprehension questions)\nas open temporal relation extraction. Questions are decomposed as (source event, relation, ?)\nqueries, where the ? corresponds to another event in the passage. After this decomposition, events\nfrom the passage and the extracted relation are modeled in a way similar to knowledge base\ncompletion, using multiple hyperplane projections. An additional contrastive loss is added, which\nbasically amounts to normalizing event answer scores across contrasting questions, when it is known\na priori that an event has exactly one answer among the set of questions. Experimental results show\nthat this model outperforms prior work on TORQUE, and good ablations are done to show the impact of\nvarious modeling decisions that were made.\n\nOverall, I think the paper presents a straightforward but novel idea, executes this idea well, and\nevaluates it thoroughly. I think the ideas and experiments here are interesting and timely. The\nresults with a contrastive loss and open relation modeling add to the body of evidence of the\nutility of these methods generally. There are a few things that I would recommend improving in the\npaper, but it's already useful as it is.\n\n\nMore detailed comments:\n\nRelated work on contrastive losses is missing. Asai and Hajishirzi, 2020, \"Logic- guided data\naugmentation and regularization for consistent question answering\", is a good starting place for\nfinding related work in this area. The question-conditional contrastive estimation method from\nDheeru Dua et al. 2021, \"Learning with Instance Bundles for Reading Comprehension\" seems like it is\nbasically identical to the loss presented in this paper (this second one is a recent enough paper\nthat a comparison is not expected, and a missing citation is not surprising, I'm just providing it\nFYI).\n\nDo you know what percentage of the training data is not well captured by an (event, relation, ?)\nquery? 
That is, how often does your decomposition fail?\n\n3.2.1 - it's not clear what exactly is being averaged to compute v_q - all of the words in the\nquestion? Just the event word in the question? The event word in the question and the\ncorresponding event word(s) in the passage?\n\nUsing a fixed set of temporal relations is a nice ablation, I like including that experiment quite a\nbit.\n\nSection 4.4 - t-SNE is generally not very informative, and I disagree with the text in this section\nthat says that the OTR model has a better clustering than the baseline. It does not seem nearly so\nclear cut to me as is stated in the text. I think this section would be better to just be removed\nfrom the paper. It doesn't really add anything that's scientifically useful.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thanks for your review and thoughtful suggestions! \nWe thank the reviewer for the excellent summary of our paper, and will update the paper based on your suggestions. In what follows we respond to the questions one by one.\n\nQ1: Related work on contrastive losses is missing.\n\nResponse: We will add more related papers about contrastive learning including the papers you suggested. These contrastive methods learn representations by contrasting positive and negative examples, which have been used in both computer vision and NLP tasks.\nSome related papers that we will add to our references are:\n\n[1] Logic-Guided Data Augmentation and Regularization for Consistent Question Answering\n\n[2] Learning with Instance Bundles for Reading Comprehension\n\n[3] Contrastive estimation: training log-linear models on unlabeled data\n\n[4] Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering\n\n[5] Representation Learning with Contrastive Predictive Coding\n\nQ2: Do you know what percentage of the training data is not well captured by an (event, relation, ?) query? That is, how often does your decomposition fail?\n\nResponse: Firstly, all warm-up questions, such as “What has already happened”, cannot be well captured by a query. Hence we add a special event “current time” as e_q as shown in section 3.2.1. Secondly, for the rest of the questions, only less than 3% questions cannot be separated. Here the same process used in the warm-up questions is taken to deal with these unseparated questions.\n\nQ3: In section 3.2.1, it's not clear what exactly is being averaged to compute v_q - all of the words in the question? \n\nResponse: Each question event e_q usually contains multiple words. For example, the question event includes three words {'Kilpatrick', 'Robyn', 'said'} for the question “what happened after Robyn Kilpatrick said?” Hence we average the representations of these words as the v_q. \n\nQ4: Section 4.4 - t-SNE is generally not very informative, and I disagree with the text in this section that says that the OTR model has better clustering than the baseline. It does not seem nearly so clear cut to me as is stated in the text. I think this section would be better to just be removed from the paper. It doesn't really add anything that's scientifically useful.\n\nResponse: I couldn't agree with you more. The t-SNE figures are usually used for the visualization without providing much information. Our purpose here is to show the embeddings of open temporal relations could be well separated. We will modify this section based on your suggestion. 
\n", "role": "user" }, { "content": "Thanks for the authors' response. The reply answers almost all of my questions. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
nDe2D8DDXKR
AKBC 2021 Conference
Reviewer_a2gb
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_a2gb, and the following are the paper you need to review:\n```<<nDe2D8DDXKR>>```", "role": "user" }, { "content": "Title: Interesting ideas, not much science\nReview Content: I like the mission of this nearly 2-decade old project and this seems like a reasonable thing to try, an epsilon increment on the idea of populating a class-level graph for the purposes of supporting a textbook. All told, they take a bunch of existing ideas and tack them on to something that is already there, integrating feedback into the tool is a great idea in particular.\n\nHowever the paper reads like a progress report or masters project summary, and comes to no conclusion about the methods other than simply offering some metric numbers. It describes what has been done, but has no hypothesis that was tested. For this to be science, we'd need at the very least a comparison to some baseline, an evaluation of different approaches and comparison of their advantages and disadvantages.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Substantial progress \nWe thank the reviewer for the feedback. As no citation of the mentioned old project was provided, we will assume that the reviewer is referring to Project Halo. If we consider Project Halo as a baseline, the current work is a significant advancement. The developers in Project Halo used knowledge authoring by domain experts which provided a highly detailed and accurate gold standard for knowledge base content that is not available anywhere else. The present paper was framed around the engineering question: How do we bootstrap a comprehensive and precise ontology graph from text? The following specific conclusions answer this question and have been incorporated into the paper upfront in the introduction section.\n\n1) Automated term extraction using language models is a viable technology partially to automate the Term Identification step of knowledge authoring. This conclusion is backed by the F1 score of 0.51 on automated term extraction. In Project Halo, all of this work was done manually. 
A commercial off-the-shelf key phrase extractor provided by AWS produces an F score of 0.32 on this data set, and hence, our ATE offers a substantial improvement over the state of the art.\n\n2) Automatic taxonomy construction using language models is a viable technology to support taxonomy creation. This conclusion is backed by an average F1 value of 0.55 for taxonomy construction which is substantially better than the highest F score of 0.3947 reported by taxonomy learning methods evaluated at the SemEval 2016 shared Task 13.\n\n3) Automated relation extraction, for the relations such as has part (F1 score of 0.22) is currently not a viable technology to support knowledge authoring. The new labeling tool reported here has an accuracy of over 0.96, can be deployed on a large textbook reading platform and will eventually produce training data for free that will help the community in significantly improving relation extraction technology.\n\nThe framework reported here is radically different from Project Halo that was locked into proprietary textbooks and licensed software. All the textbook data and the extraction algorithms will be openly available for the community to freely use and develop further. Eventually a large library of knowledge graphs for all 42 textbooks published by OpenStax will be created. We are hoping that this paper will introduce an invaluable dataset to advance the research in the AKBC research community.\n", "role": "user" }, { "content": "Thanks to the authors for their thoughtful response. It covers most of my concerns, and I'm open to increasing the score.", "role": "assistant" } ]
nDe2D8DDXKR
AKBC 2021 Conference
Reviewer_Y5sz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Y5sz, and the following are the paper you need to review:\n```<<nDe2D8DDXKR>>```", "role": "user" }, { "content": "Title: An interesting experiment on an interesting research topic\nReview Content: Overall, I like this work. It is well written and easy to follow. It feels like there is still a fair amount of work to get to well-rounded conclusions, but this seems like a good stage to already make initial results available for the community. Here and there, the text could be more clear by only discussing what has been done already and relegating all future steps to the discussion at the end. A further major aspect that needs improvement is comparing the obtained results to baselines. My comments on several aspects are below:\n\nOne aspect which is always hard is to differentiate between instances/entities/individuals and classes/types/concepts. I think you are doing a good job at explaining that. But, I wonder whether it is really that important to distinguish the two.\n\nYou mention that 'extracting rule definitions is out of the reach of automated methods'. I think this is not completely true. I have seen presentations on methods for doing that. Perhaps they are not performing well enough for your purpose?\n\nSelecting 20 out of the 100 available relations, based on you thinking they are 'confusing' or their low usage frequency is a weakness of your work. I would have liked to see you using these 100 relations and then gain an insight on whether what you would a priori thought to be confusing and low frequency relations, really lead to issues. This would be an interesting ablation study, rather than something to exclude from the outset.\n\nIn 3.1 you describe your model for ATE. It remains unclear to me whether you still free the weights of the BERT model while training for the prediction. If you only allow the weights of your softmax layer to be trained, you might not be taking the full advantage of the possibilities of your model.\nIn the same section you also mention you use 'standard techniques for optimizing'. Please provide more detail here. Small differences might have large effects. 
Also, I would hope you could share the full implementation of the work done as this would make it possible for others to run comparative studies.\nAlso, there are a lot of existing methods for this step you could compare with. \n\nI section 3.2, you mention you use sentences with two terms identified in the sentence. I do not get why this is limited to two and sentences with more terms are not included for this.\n\nFor your experiments I would have expected more attention paid to the creation of good baselines. For example systems like NELL have been used for relation extraction before and also systems like Yago have extensive implementations for relation extraction from text. Even using a simple baseline based on Hearst patterns (which you are aware about), would be expected. Besides, there are also specific works that use distributional semantics for automatic taxonomy induction. These would be only limited to some of the relations types you consider, but would still be very relevant.\n\nIn section 3.3, I think it is very nice that you include the option of flipping the relationship in the correct direction. Even nicer would be to see whether you can use that information to train a model and do this flipping task automatically.\n\nOne limitation of your work is that (co-)references are not handled. This is in part because you feed your models one sentence at a time. I think there is still an interesting direction where you would give your model longer pieces of text including coreferences and see whether it can figure those out while extracting the relations. \n\nIn 4.3. You mention you employed biologists to check the correctness of the psychology text annotations as well. Did they consider annotating this data harder than annotating the biology text? Were there more inter-annotator disagreements?\n\nIn section 4.3, you describe your use of a 'sophisticated denoising algorithm' (with reference to 37 and 9). Please give a summary or at least an intuition on what that is doing in your paper.\n\nIn 3.2 and 5.2, you reduce your set further to only 6 relations, while earlier you mentioned 20. This becomes very limiting.\n\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Many valid points, and good suggestions!\nThanks so much for your careful and considered review. Our responses are below.\n\n1) We have reworded the overview section to more clearly delineate between the current work and the future work.\n\n2) Whether it is necessary to distinguish between classes and individuals?\n\nWe make this distinction for two reasons. First, the training data at class level is much smaller than the training data for individuals, and this reinforces the difficulty in training automated methods for relation identification. Second, as explained in Section 6, classes capture nuances in relationships through proper use of quantification. For example, a chromosome is inside a cell only when it is a part of that cell. Chromosomes can be removed from a cell, and in that case, the \"is inside\" relationship does not hold true. The individual level relationships cannot correctly capture such information making it necessary to distinguish between a class level and instance level graph.\n\n3) State of the art on rule extraction. \n\nWe can provide a comparison if you give us a specific citation. Most closely related rule extraction work is [5,24]. 
The primary advancements in our work are (a) the use of state-of-the-art pre-trained language models that are rapidly improving, and (b) a more scalable labeling approach that can be deployed in the context of textbook reading by millions of students. Because of this, a comparison to the methods used in [5,24] is not meaningful.\n\n4) Choosing 20 out of 100 relationships. \n\nRecall that the labeling tool is to be deployed in the context of an online textbook reading experience for students, and hence, the training requirement needs to be very low. The use of 100 relationships in previous projects required a training time of 20 hours, which is not feasible for data collection from students. The number of relationships dropped because of their confusing nature was only three (it included agent, object, and base). The feedback from the publishers interested in adopting this technology has been to provide them with something much less complex but which includes maximum automation.\n\n5) We will publicly release all the code and data as open source. As OpenStax textbooks are open source, we are hoping that this dataset will itself be of great interest to this community for future algorithm development. We have noted this in the revised version of the paper.\n\n6) We trained all the weights, not just the softmax layer. (Fine-tuning refers to this process instead of just training the final layer). We have clarified this in the revised version of the paper.\n\n7) We have added Appendix D and E that provide details of the optimization. A summary is included in the main body of the revised paper.\n\n8) We have clarified in Section 3.2 that if a sentence contains more than two terms, we consider all possible combinations of those terms for identifying possible relationships between those terms.\n\n9) Neither YAGO nor NELL is an appropriate comparison for our relation extraction work, for the following reason: both are fact extraction systems and do not extract taxonomies or relations such as has part/has region.\n\n10) In the SemEval 2016 evaluation, Task 13, several taxonomy extraction systems were evaluated with the best F score of 0.3947. The distributional-semantics-based method had a score of 0.24. We have added this discussion to the paper.\n\n11) Regarding comparison to baselines such as Hearst Patterns, we have added a table in Appendix F that shows the performance of each of the extraction patterns we used, and a discussion has been added to the results section.\n\n12) That's a worthy idea to investigate in the future, but it was outside the scope of this project.\n\n13) Coreference resolution. This is another great direction for future work.\n\n14) Section 4.3. The psychology terms were validated by the psychology domain experts.\nThe paper has been corrected to reflect this.\n\n15) We have provided a brief overview of the denoiser in Section 4.3 of the revised version.\n\n16) Set of six relations. \n\nA reduced set of 6 relations was used only for the ARE experiments. This smaller subset was chosen for an initial experiment and for the relations that had the highest amount of training data available. Even for this small subset, the current ARE algorithms are not above a practically useful threshold. Therefore, it did not make sense to test with relations that have even less training data available.\n", "role": "user" }, { "content": "Thank you for the authors' feedback. It adequately addresses the points I raised. 
I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
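As a point of reference for the Hearst-pattern baseline discussed in this exchange, a toy version of such a pattern extractor is sketched below; the two regexes, the single-token noun phrases, and the example sentence are simplifications for illustration only.

```python
# Toy Hearst-pattern extractor: read "X such as Y and Z" / "X, including Y"
# phrases as (hyponym, hypernym) pairs. Single-token NPs only, for simplicity.
import re

PATTERNS = [
    re.compile(r"(?P<hyper>\w+) such as (?P<hypos>\w+(?:, \w+)*(?: and \w+)?)"),
    re.compile(r"(?P<hyper>\w+), including (?P<hypos>\w+(?:, \w+)*(?: and \w+)?)"),
]

def hearst_pairs(sentence):
    pairs = []
    for pattern in PATTERNS:
        for match in pattern.finditer(sentence):
            hypernym = match.group("hyper")
            for hyponym in re.split(r",\s*| and ", match.group("hypos")):
                pairs.append((hyponym, hypernym))
    return pairs

print(hearst_pairs("Organelles such as mitochondria and ribosomes are found in cells."))
# -> [('mitochondria', 'Organelles'), ('ribosomes', 'Organelles')]
```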
nDe2D8DDXKR
AKBC 2021 Conference
Reviewer_49XT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_49XT, and the following are the paper you need to review:\n```<<nDe2D8DDXKR>>```", "role": "user" }, { "content": "Title: This paper proposes to construct ontology graph by adapting BERT and leveraging a novel crowd sourced relationship selection task, which shows that this approach is able to bootstrap the ontology graph creation for further refinement and improvement through human effort.\nReview Content: Strength: \n1. The labeling task is designed for the textbook reading by students across large libraries of textbooks to obtain plenty of data for relation selection.\n2. The framework can handle diverse relationships that include taxonomic, structural, functional and causal relationships. \n3. This paper also considers SOTA BERT-based model for sub-tasks . \n\n\nWeekness:\n\nThe paper mentions active learning will be considered later, which indeed is a good next step for this work.\n\nOne related work can be cited.\n\"Constructing Taxonomies from Pretrained Language Models, Chen et al, NAACL 2021\" \n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Restriction to a tree-structured taxonomy is a big difference\nThanks for pointing out the useful related work: Constructing Taxonomies from Pretrained Language Models, Chen et al, NAACL 2021\". The approach used by Chen et. al. is very similar to our work in that we are leveraging BERT for relationship prediction. Here are some interesting differences:\n\n1) Chen et. al. begin with a list of terms. One challenge in our work was that the list of terms itself had to be extracted from the text.\n\n2) Chen et. al. assume that the target taxonomy is a tree, which is an artificial restriction. The Wordnet hyponym hierarchy is not a tree. Similarly, the taxonomy considered in our paper is not a tree. Therefore, the Tree Reconciliation step used in Chen et. al. 
is not appropriate for our work.\n\n3) As explained in Section 5.1 of the paper, all the terms that are of interest for an ontology are not always mentioned explicitly in the text, making the problem of inferring relationships between such concepts more difficult.\n\n4) The approach used in Chen et al. for generating negative training examples is interesting, and it could be worth exploring in our framework.\n\n5) The approach used in Chen et al. predicts only the taxonomy relation, while we predict multiple relationships using the same model. It is an interesting question whether it might be better to train a single model for each relation vs. training a combined model as we considered.\n", "role": "user" }, { "content": "I appreciate the authors' reply. Most of my concerns have been addressed. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
o7sMlpr9yBW
AKBC 2021 Conference
Reviewer_fFyE
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_fFyE, and the following are the paper you need to review:\n```<<o7sMlpr9yBW>>```", "role": "user" }, { "content": "Title: The paper fine-tunes the model directly (instead of performing prompt tuning) and shows strong results on the LAMA benchmark.\nReview Content: This paper aims to extract relational facts from pre-trained language models. Instead of tuning the prompts at the input layer of the language model, the paper investigates fine-tuning the model on a set of training facts (<subject, relation, object> triples) for each relation. The results show that fine-tuning the pre-trained language model can achieve higher accuracy on the LAMA benchmark compared to prompt-tuning approaches. The paper also studies the transfer learning capabilities of the pre-trained language models for knowledge extraction and provides interesting insights.\n\nStrengths:\n* The paper studies extracting knowledge from pre-trained language models which are trained on unlabeled text. The problem seems promising with the development of modern large-scale language models and it fits well with the scope of AKBC.\n* The paper is well written and easy to understand.\n* The empirical results on LAMA are strong and significantly outperform previous approaches.\n* Transfer learning experiments lead interesting discussions and insights.\n\nWeaknesses:\n* The main weakness of this paper is that the goal of this paper is unclear to me. The approach is only evaluated on the LAMA benchmark, but LAMA is originally to probe how much knowledge is stored in the pre-trained language models, where the parameters of the models are frozen. In this paper, the model parameters are updated. If the paper targets to better estimate how much knowledge is stored in the language models, it lacks discussion on whether the results can reflect how much knowledge the model has stored in parameters (because the parameters have been updated). If the paper aims to build a neural knowledge base, the scope of the LAMA benchmark can be limited -- it only contains single-token objects and 41 relations.\n* The paper lacks qualitative analyses of the experiments. It would be insightful if the paper can include some concrete examples when showing the experimental results. 
For example, in transfer learning experiments’ group (c), the “omitted” performance can be much lower than the original BERT. Some examples might help understand what’s happening there.\n\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Probing vs. Fine-tuning and Qualitative Analysis\nFirst of all, we would like to thank you for your helpful review. We agree that both weaknesses are indeed problematic. We have added additional content, since we got an additional page for the revision.\n\n\n>If the paper targets to better estimate how much knowledge is stored in the language models, it lacks discussion on whether the results can reflect how much knowledge the model has stored in parameters (because the parameters have been updated).\nThis point is similar to what the other reviewers have pointed out. Indeed the paper is not being clear about the differences between probing a pre-trained language model for world knowledge and building a neural knowledge base.\n\nWe are aware that we modify the original model, which might deviate from the original idea of probing a language model for knowledge. However, the goal was to not add additional world knowledge, but just tune the model to better understand the triples.\nWe cannot completely exclude the problem of adding some knowledge to the original model, but similar issues may also exist for pure prompting models like BERTese or AutoPrompt.\nThese models might capture additional knowledge during training in the parameters e.g. of the rewriter which then is used for probing the original pre-trained language model. Thus, BERTese for example may learn certain trigger words which it can use in a prompt to trigger a certain answer behavior in the model. E.g. predicting only countries for the object entity when considering P19 (place of birth). This way a prompting model may also capture additional knowledge which complicates the probing procedure and could be considered as task specific fine-tuning without changing the original model.\n\nWe believe that this discussion may question the idea of probing a language model for knowledge using complex prompting techniques and would need significantly more investigation of the different techniques. We would be happy to continue this discussion.\n\n\n>The paper lacks qualitative analyses of the experiments. It would be insightful if the paper can include some concrete examples when showing the experimental results. For example, in transfer learning experiments’ group (c), the “omitted” performance can be much lower than the original BERT. Some examples might help understand what’s happening there.\n\nWe were able to add an additional qualitative analysis of our experiments in Section 4.4. Here, we particularly focus on properties that have improved significantly from the fine-tuning and added some examples to better understand the effects of BERTriple.\n\n", "role": "user" }, { "content": "I appreciate the authors' reply. It resolves the majority of my concerns. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
o7sMlpr9yBW
AKBC 2021 Conference
Reviewer_a3cA
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_a3cA, and the following are the paper you need to review:\n```<<o7sMlpr9yBW>>```", "role": "user" }, { "content": "Title: Adaptive fine-tuning of pre-trained masked language models for knowledge extraction using masked triple prompts\nReview Content: This paper presents the BERTirple model for the cloze-style fact extraction task. Instead of complex additional prompt tuning methods, they show that an adaptive fine-tuning of the pre-trained language model can outperform the best state-of-the-art baselines on the LAMA and LAMA-UHN probes. They propose to use the triples of a knowledge graph in the form of masked sentences to alleviate the difficulties of manually designing prompts. This new data representation leads to better performance, where the proposed model outperforms the previous manually created, mining-based and learning-based prompts. Finally, the experiments show that by omitting some relations during training, the performance doesn't degrade, which validates the transfer learning capabilities of the proposed model.\n\nStrengths:\n- The paper is mainly well-written, and the empirical results are promising.\n- Their approach doesn’t need to tune the prompts and the idea of using the triples as masked sentences is simple and efficient.\n- They show that the proposed model requires fewer training data to outperform the previous SOTA model.\n- The discussion about transfer learning experiments reported in the paper is compressive and informative.\n\nWeaknesses:\n\n- In Figure (1), it's better to repeat the experiments (for each sample size) at least 3 times and report the mean and variance of each experiment. Since the paper claims that “by using significantly fewer training facts, BERTriple outperforms all baselines”, the mean and variance of P@1 are required to see the stability of the model.\n- Regarding the training speed, authors mention that “existing techniques usually need a significant amount of training data in the form of existing knowledge graph triples and a large amount of training time to optimize prompts using complex additional models” also “ with fewer training data required and without the need for an additional model. 
Consequently, the fine-tuning is very fast.” Since the training speed is another contribution of the paper, it’s much better to quantify the training speed (and/or fine-tuning) and compare the training time between mining-based, learning-based, and BERTriple.\n- It’s a little hard to follow the reported results in Table 2 without the name of each relation. Please add each relation’s name (label) in the final version. \n\n\n\n\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Added insights on robustness and runtime\nThank you for your helpful comments. We have performed some additional experiments and added some additional information to the paper as stated below:\n\n>In Figure (1), it's better to repeat the experiments (for each sample size) at least 3 times and report the mean and variance of each experiment. Since the paper claims that “by using significantly fewer training facts, BERTriple outperforms all baselines”, the mean and variance of P@1 are required to see the stability of the model.\n\nThanks for this hint. You are right that particularly the small sample sizes could influence the overall precision of the model. We have retrained the model 3 times for each sample size and now report the mean precision in Figure 1. Since the variance between these 3 runs was very small and hardly visible in our graphic, we omitted the variance.\n\n>Since the training speed is another contribution of the paper, it’s much better to quantify the training speed (and/or fine-tuning) and compare the training time between mining-based, learning-based, and BERTriple.\n\nWe have not retrained all the baseline models ourselves, but used the numbers from the original papers. However, our runtime observations stem from several experiments over the last months, which have shown us that the other approaches are significantly slower. Our approach, in contrast, only needs around 5 min of training on a high-end consumer GPU for small training samples. For further details, we have added a new paragraph in Section 4.2.\n\n>It’s a little hard to follow the reported results in Table 2 without the name of each relation. Please add each relation’s name (label) in the final version.\n\nWe have added the labels in Table 2. ", "role": "user" }, { "content": "I appreciate the authors' feedback. It resolves the majority of my concerns.", "role": "assistant" } ]
o7sMlpr9yBW
AKBC 2021 Conference
Reviewer_XzDL
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_XzDL, and the following are the paper you need to review:\n```<<o7sMlpr9yBW>>```", "role": "user" }, { "content": "Title: Simple but effective method for the task of \"fact retrieval\" from pre-trained language models, despite modifying the model directly.\nReview Content: This paper presents a simple but effective method for the task of \"fact retrieval\" from pretrained language models. The method requires a training set of (s,r,o ) triples, which must have human-readable labels, such as wkikdata triples. The triples are treated as pseudo-natural language sentences e.g. (Dante, born in ,Florence) -> \"Dante born in Florence\", and the pre-trained language model is given additional MLM finetuning on these training data. \n(they also investigate a setting which uses the LAMA templates rather than just directly using triples). \n\nThe fine-tuned model is then queried for a fact by providing the subject and relation, and replacing the object with a [mask] token, and then extracting the highest scoring predicted token as the object prediction, similar to cloze-question answering, à la LAMA.\n\nThe authors evaluate their approach on LAMA, and show improved P@1 scores, comparing to prompting approaches - both the simple manual approach from the original LAMA probe, and more recent approaches that try to optimize the prompt. \nThe authors then assess how much training data is required for their approach to be effective, and conclude that about 50 examples per relation is sufficient for the approach to work well.\n\nThey also assess to what extent finetuning on triples can improve predictions for unseen relations. They do this by holding out a relation, fine-tuning a model on all the other relations ,and then evaluating the LAMA performance for that relation, and observe that some relations improve relative to the original LAMA probe zero-shot scores, but most are either around the same performance as the original LAMA probe scores or perform very poorly.\n\nThe authors conclude with a thoughtful discussion. 
\n \nThis work resembles recent work in \"soft prompting\", in the sense that explicit prompts are avoided and fine-tuning is used instead, e.g., https://arxiv.org/pdf/2101.00190.pdf, https://arxiv.org/abs/2104.08691 and https://arxiv.org/abs/2103.10385 - the latter paper also experimenting with LAMA and achieving almost the same P@1 for bert-base. Unlike these works, however, this model fine-tunes the whole language model, rather than learning only a \"continuous prompt\". \n\nThe paper feels similar in spirit to COMET, which also trains a transformer essentially directly on facts from a knowledge base, although in the case of COMET, its data come from commonsense KBs.\n\nStrengths\n\n* the paper is very clear, easy to read and well laid out. The ideas are well-motivated and argued. The related work section is a particularly good summary of the important relevant work.\n* A directly-finetuned baseline like this is important for contextualizing and understanding other works on this dataset.\n* The experiments are thoughtful and well executed.\n* I particularly like the discussion and future work section, acknowledging the strengths and limitations of this work.\n* this approach eliminates the need for manually-written prompts for relations, which is a great strength\n* the training process is fast and computationally fairly cheap, and the number of examples needed for the method to work well is quite small\n* the authors use both LAMA and its harder subset LAMA-UHN\n\nWeaknesses\n\n* unlike prompting work, this directly fine-tunes the model, which specializes it, and reduces its ability for zero-shot tasks - this is quite a fundamental difference. Whereas other approaches try to fit prompts/queries to the model, this approach changes the model itself - as such, it is unclear whether this is appropriate for a probing dataset like LAMA. The authors are aware, and touch on this in their discussion. \n* Only one base model (BERT) is used, and only one dataset is experimented on (LAMA, and its subset LAMA-UHN) - the paper could be improved if it assessed how effective the method was for other models, such as GPT-style models or encoder-decoder models like T5 or BART\n* Whilst this approach doesn't require manually written prompt templates for relations, it probably does rely on human-readable relation labels to work well in the low data regime - e.g., \"born in\" rather than a non-human-readable label (e.g. P19) \"Q23243\". An interesting experiment would be to see how important these are, and whether the method would work with unique but non-human-descriptive relation labels. \n \nOverall, I think this is a well-presented, well-written paper giving useful contribution/baseline methods for people interested in fact-retrieval, although I have qualms about treating LAMA as a \"task\", to be trained and optimized for, rather than a \"probe\", used for analyzing knowledge already present in LMs.\n\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Additional Models, Experiments and Prompting vs. Fine-tuning\nWe would like to thank you for your very comprehensive and helpful review. Your review has given us a new perspective on our work and provided us with interesting related works that we were not aware of. 
\n\n>the paper could be improved if it assessed how effective the method was for other models, such as GPT-style models or encoder-decoder models like T5 or BART\n\nIndeed, this would be a valuable addition to the paper. Unfortunately, our lab's GPU computing resources are very limited, so these additional experiments would take a lot of additional time. We will try to finish experiments for some additional models for the final version of the paper.\n\n\n>An interesting experiment would be to see how important these are, and whether the method would work with unique but non-human-descriptive relation labels.\n\n\nThis is an interesting idea, and we have performed a first experiment fine-tuning with the property IDs, but keeping the original human-readable entity labels. \nThis has worked surprisingly well, and we could achieve a performance just 2% below the human-readable templates when using 100 training triples per property. Due to space reasons, we have not added this experiment to the paper itself, since we believe this would need more space for discussion and analysis.\n\n> Whereas other approaches try to fit prompts/queries to the model, this approach changes the model itself - as such, it is unclear whether this is appropriate for a probing dataset like LAMA. \n\nThis seems to be the main criticism of you and another reviewer. We are aware that we modify the original model, which might deviate from the original idea of probing for world knowledge. We try to not add additional world knowledge, but to tune the model to better understand the triples. However, we cannot rule out that we add some knowledge to the model.\nWe added some more detailed insights and thoughts in the answer to *Reviewer fFyE*. We think the answer might therefore also be of interest for your point.\n", "role": "user" }, { "content": "Thank you for the authors' detailed response. It covers most of my concerns.", "role": "assistant" } ]
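The two records above describe the BERTriple recipe: knowledge-graph triples are rendered as pseudo-natural-language sentences, the masked language model is fine-tuned on them, and facts are then queried by masking the object slot. The sketch below illustrates only the cloze-style querying step with an off-the-shelf BERT; it is an assumption-laden illustration rather than the authors' released code, and the relation phrasing ("born in") simply follows the reviewer's Dante example.

```python
# Illustrative sketch (not the BERTriple release): cloze-style querying of a
# masked LM with a triple whose object slot is replaced by [MASK], as in LAMA.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def triple_to_query(subject: str, relation_label: str) -> str:
    # (s, r, [MASK]) rendered as a pseudo-natural-language sentence.
    return f"{subject} {relation_label} [MASK]."

query = triple_to_query("Dante", "born in")      # -> "Dante born in [MASK]."
for pred in fill_mask(query, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
# P@1 counts the fact as retrieved if the top-ranked token matches the gold object.
```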
9_oCNR6R9l2
AKBC 2021 Workshop CSKB
Reviewer_o61a
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_o61a, and the following are the paper you need to review:\n```<<9_oCNR6R9l2>>```", "role": "user" }, { "content": "Title: Review\nReview Content: This paper aims to provide a formalism for NLP models that retrieve knowledge from an external nonparametric memory and how the retrieved knowledge is fused into the model. The hope is that a formal exposition would enable us to uncover combination of mechanisms that have not been tried for a given task (and might have been used for other task) or a combination of mechanism which has not been tried at all. \n\nThe paper starts with a general description of retrieval based model which retrieves knowledge based on a query vector which is obtained by encoding the task input. The retrieved knowledge is then aggregated by an aggregator to produce an artefact which is used as an input to the model to generate an output for the task. An artefact is defined as an intermediate object that depends on the elements retrieved from the knowledge base and can be used to improve task performance during training and inference.\n\nThe paper also lays down a general description of how the knowledge source is constructed. They can be structured knowledge graphs containing knowledge triples, unstructured document collection (e.g. for open-domain QA) and also the training set (e.g. for models which retrieve KNN training examples during inference)\n\nStrengths:\n\n1. The paper lays out a general framework for knowledge retrieval models for various NLP tasks. The abstraction is quite general, and it covers quite a few models for various talks and is an interesting read.\n\nWeakness:\n\n1. In sec 3.2, I think it will be concatenating x and \"a\", instead of \"y\". On another note, I think artefact should be denoted by \\kappa (or k) as denoting it with \"a\" can confuse the reader as \"a\" is often used to denote answer in QA tasks.\n2. Other examples of early fusion models worth mentioning are Das et al 2017 and Sun et al 2019. \n3. I find it a little hard to understand why in figure 2, early and late fusion models are described in terms of \"loss\". 
Loss is something which is usually computed during training, and there is no guarantee that the loss of a model with an artefact would be lower than the loss of the same model without an artefact. It might well be the case that a model with extra knowledge takes more time to train and has a higher loss than a model without an artefact, but actually converges to a point where the model is able to generalize better. I think the explanation should be modified to include task performance at inference (and I think that is the intended interpretation of the distance measure between the output and the projection) rather than training loss. \n4. I like the general description of KB source and key and value type in Sec 3.4 and 3.5. The paper could include another class of work that retrieves other similar question-answer pairs in the training set to answer a given question. For example, in Lewis et al 2021, a model is proposed that retrieves paraphrases of the question from the training set and returns the answer. This model can therefore easily answer a question if the paraphrase is present in the training set. More recently, Das et al 2021 proposed a model based on case-based reasoning, in which they retrieve other similar questions and logical forms (SQL queries) and use them to generate logical forms for a new query. Since they use both queries and logical forms, they can answer questions even if the exact paraphrase is not present in the training set.\n5. Minor: The citation of KILT says Fan et al 2020, but I think it should be Petroni et al 2020. Might be an error in the bib file?\n\nMissing References:\n\nDas et al 2017: Question answering on knowledge bases and text using universal schema and memory networks ACL 2017\n\nSun et al 2019 PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text EMNLP 2019\n\nLewis et al 2021: Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets EACL 2021\n\nDas et al 2021: Case-based Reasoning for Natural Language Queries over Knowledge Bases EMNLP 2021\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Review Feedback\nThank you for your detailed comments and suggestions.\n\n1. We opted for $\\xi$ (xi) for denoting the artefact object because $\\kappa$ (kappa) could conflict with the denotation of a knowledge base (though in our case we use $\\mathcal{KB}$).\n2. Case-based reasoning QA (Das et al. 2021) is highly interesting in this context, particularly because it uses a fusion mechanism different from the other described models (creation of an executable query). For completeness, we also include nearest-neighbour QA (Lewis et al. 2021), though the pipeline is fairly simple and does not benefit much from being explained in the context of artefact retrieval.\n3. The term \"loss\" was used to describe distances between points in intermediate projections. A key remark is that it is the distance to the _closest_ point in the space which results in the correct prediction. We changed the paragraph to make it more explicit that we are talking about computations during inference.\n4. PullNet (Sun et al. 2019) and Universal Schema (Das et al. 2017) make a good addition to the list of examples because of their iterative nature in knowledge base / memory access.", "role": "user" }, { "content": "Thanks to the authors for their clarification. It satisfactorily addresses the major issues I had. 
I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
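The exchange above walks through the retrieve-aggregate-fuse formalism for artefact retrieval. The sketch below is a deliberately simplified illustration of that pipeline under assumed components (a TF-IDF retriever, a toy knowledge base of sentences, and early fusion by concatenating the artefact to the task input); none of these component choices are prescribed by the paper under review.

```python
# Minimal sketch of a retrieve -> aggregate -> fuse pipeline (illustrative only;
# the knowledge base, retriever, and fusion rule are assumptions, not the paper's).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

knowledge_base = [
    "Dante Alighieri was born in Florence.",
    "Florence is a city in Tuscany, Italy.",
    "The Divine Comedy was written by Dante.",
]

vectorizer = TfidfVectorizer().fit(knowledge_base)
kb_matrix = vectorizer.transform(knowledge_base)

def retrieve(query: str, k: int = 2):
    # Query encoder: the same TF-IDF projection; scores are cosine-like similarities.
    q = vectorizer.transform([query])
    scores = (kb_matrix @ q.T).toarray().ravel()
    return [knowledge_base[i] for i in np.argsort(-scores)[:k]]

def fuse(task_input: str, retrieved: list[str]) -> str:
    # Aggregation + early fusion: the retrieved elements are concatenated into an
    # artefact that is appended to the task input before the downstream model runs.
    artefact = " ".join(retrieved)
    return f"{task_input} [SEP] {artefact}"

question = "Where was Dante born?"
print(fuse(question, retrieve(question)))
```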
0yj3LL-0KcX
AutoML 2023 Conference
Reviewer_e426
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_e426, and the following are the paper you need to review:\n```<<0yj3LL-0KcX>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors develop OFA^3, a continuation of OFA^2.\nOFA^2 is first trained to create a large network that during search time, given the accuracy and latency requirement, outputs the best model (subnetwork) it can find for the required latency and accuracy. \nThe authors use all the subnetworks that OFA^2 outputs, and then using evoulutionary algorithm (EA) they output the best ensemble of these subnetworks for the given latency-accuracy requirement. \nTheir EA is based on the NSGA-II algorithm. The DNA is a bit-string whose size matches the number of subnetworks output by the OFA^2 model. \nThey start with all bits set to 1. The mutation flips a bit at random with some probability p=1%, the crossover operator is the uniform crossover so every gene is chosen uniformaly at random from either of the parents, and the selection algorithm is based on non-dominated sorting and crowding distance metrics, as in NSGA-II. \nThe authors then experiment using two options: sum of latencies of the subneworks in the ensemble, and the maximum latency of the subnetworks in the ensemble. In the case of sum of latencies, they find out that it may be better to use a single architecture rather than an ensemble in most of the cases. For the maximum latency, the authors show that their results dominate any other configuration of the available sub-networks, taking accuracy and latency as the conflicting objective.\n\noverall_review: Strengths\n========\nS1. An interesting approach to combine the subnetworks of OFA^2 into an ensemble. \nS2. The approach is simple, the paper is well written and easy to follow, the figures are explanetory.\n\nWeaknesses\n==========\nW1. There is not even a single comparison table in the paper. Please add some comparison table, with numerical values. In my opition, it is important for the evaluation of the paper as well as a way to compare future work to your own work.\n\nW2. The evolutionary algorithm is pretty simple / straightforward, maybe consider several alternatives and show what is the best.\n\nW3. 
Can you add an ablation study?\n\nNot listed as a weakness, but the approach uses the subnetworks computed by OFA^2 and runs an EA on the output of OFA^2. Can you find an approach that is more entangled with the training and search procedure of OFA^2? I would find that interesting as well.\n\nreview_summary: See the above strengths and weaknesses.\nactions_required_to_increase_overall_recommendation: Please add a table of experiments.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer e426\nDear reviewer e426, thank you for your time and your valuable comments addressed to our work. We adjusted our paper taking your comments into consideration. Next, we copied some of your comments and added our reply straight after each:\n\n- [W1. There is not even a single comparison table in the paper. Please add a comparison table with numerical values.]\n\nA table containing comparative results was added. However, it is worth mentioning that OFA performance has already been contrasted, in its original proposal, with the performance of a high number of relevant contenders, with a clear advantage both in terms of accuracy and computational burden. So, making a direct comparison solely with OFA and OFA² is a valid methodology here, because OFA was already properly positioned against relevant contenders in the literature.\n\n- [W2. The evolutionary algorithm is pretty simple / straightforward, maybe consider several alternatives and show what is the best.]\n\nWe included support for other evolutionary algorithms as well. We ran experiments with NSGA-II, SMS-EMOA and SPEA2. These experiments are provided as Jupyter notebooks as follows: [OFA² search](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa2.ipynb), [OFA³ (summed latency)](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-summed-latency-N1000.ipynb) and [OFA³ (maximum latency)](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-max-latency.ipynb). We also ran each experiment with 3 different random seeds.\n\n- [W3. Can you add an ablation study?]\n\nOur ablation study is related to three subjects: voting schemes (hard/soft voting), latency computation (summed/maximum) and now the evolutionary algorithm used during the search (NSGA-II, SMS-EMOA or SPEA2). We plan to extend the results in Table 1 to include each combination of these choices, since they are independent of each other, in our final version of the paper. \n\n- [Not listed as a weakness, but the approach uses the subnetworks computed by OFA^2 and runs an EA on the output of OFA^2. Can you find an approach that is more entangled with the training and search procedure of OFA^2? I would find that interesting as well.]\n\nWe provided several Jupyter notebooks ([GitHub repository](https://anon-github.automl.cc/r/once-for-all-3-89E3) --> jupyter-notebooks) on how to run each of these techniques (OFA, OFA² and OFA³). But your suggestion is valid, and we plan to add this support to the code of OFA³ as well.", "role": "user" }, { "content": "Thank you for the detailed response from the authors. The response covers nearly all of the issues I raised. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
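The review above summarizes the OFA³ ensemble search: a bit-string genome with one gene per OFA² sub-network, an all-ones starting population, a 1% per-gene bit-flip mutation, uniform crossover, and NSGA-II-style selection. The sketch below illustrates only the encoding and variation operators with plain NumPy; it is an assumed, simplified reconstruction from the review's description, not the authors' notebooks, and the actual implementation may differ in details.

```python
# Sketch of the OFA^3-style ensemble encoding and variation operators described
# above (illustrative reconstruction; constants follow the review's description).
import numpy as np

rng = np.random.default_rng(0)
N_SUBNETS = 100          # one gene per sub-network in the OFA^2 Pareto set
P_MUTATION = 0.01        # per-gene bit-flip probability (p = 1%)

def initial_individual() -> np.ndarray:
    # Start with every sub-network included in the ensemble (all genes set to 1).
    return np.ones(N_SUBNETS, dtype=np.int8)

def mutate(individual: np.ndarray) -> np.ndarray:
    flip = rng.random(N_SUBNETS) < P_MUTATION
    child = individual.copy()
    child[flip] ^= 1
    return child

def uniform_crossover(parent_a: np.ndarray, parent_b: np.ndarray) -> np.ndarray:
    take_from_a = rng.random(N_SUBNETS) < 0.5
    return np.where(take_from_a, parent_a, parent_b)

# Selection would then rank candidates by non-dominated sorting and crowding
# distance on the two objectives (accuracy, latency), exactly as in NSGA-II.
child = mutate(uniform_crossover(initial_individual(), mutate(initial_individual())))
print(int(child.sum()), "sub-networks selected in this candidate ensemble")
```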
0yj3LL-0KcX
AutoML 2023 Conference
Reviewer_tDcb
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_tDcb, and the following are the paper you need to review:\n```<<0yj3LL-0KcX>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes OFA³, a strategy to build high-performance ensembles for multi-objective architecture search. This paper is based on OFA², which uses the evolutionary search algorithm NSGA-II for accuracy and latency multi-objective search and returns a set of solutions. The proposed method further improves on the latter search method by selecting the best sub-networks for forming efficient ensembles. The method is evaluated on ImageNet. \noverall_review: Strengths:\nThe experiments show good results in finding architectures with high accuracy and low latency in the proposed setting. \nThis paper tackles an important research question. \n\nWeaknesses:\nThere are some open questions (see Technical Quality And Correctness), which would when answered improve the overall assessment and novelty of the approach in comparison to other multi-objective search strategies. \nA clear description of the strategy is missing. \n\n\nreview_summary: My main concern is that there are no comparisons with other methods except for OFA² and most importantly the contribution is not clear to me. \n\nAdditionally, the submission checklist is filled rather sparsely. \n\nactions_required_to_increase_overall_recommendation: In order to increase the overall recommendation, this paper would need to answer the open questions in Technical Quality And Correctness and clearly state the novelty.\nRating: 2 reject, significant issues present\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer tDcb\nDear reviewer tDcb, thank you for your time and your valuable comments addressed to our work. We adjusted our paper taking your comments into consideration. 
Next, we copied some of your comments and added our reply straight after it:\n\n- [… the novelty in comparison to OFA² is rather limited.]\n\nThe OFA³ proposal involves a cascade of three once-for-all mechanisms during the NAS: a single training step (provided by OFA), a single search step to populate the Pareto frontier (provided by OFA²), and a single selection step to compose the ensemble of efficient learning models, provided by OFA³. This is a remarkable achievement due to two main reasons: (1) OFA has split training and searching the neural architecture, thus making the search stage of negligible cost, when compared to the training of the super-network. Therefore, there is room for more elaborate search strategies, as performed by OFA² and OFA³, both being of a multi-objective nature and cascading. (2) Making a multi-objective selection of efficient components for the ensemble (being the main contribution of OFA³) is motivated by three main factors: (2.1) the guaranteed presence of distinct trade-offs among the candidate components provided by OFA², given that they populate an approximation of the Pareto frontier; (2.2) the assurance that they are independent models and can operate fully in parallel; (2.3) the possibility of automatically choosing just a subset of the efficient models produced by OFA² as components of the best ensemble. Those are the main motivation to support the gain in performance when compared with, for instance, a single model of the same size of the whole ensemble, which, to make matters worse, would not be implementable by resorting to independent fully parallelizable sub-networks. We have included this additional discussion at the concluding remarks of the paper.\n\n- [How does this method behave for other objectives and not only latency?]\n\nThis is a relevant question that will be investigated as a further step of the research (we have included support to FLOPS in our experimental setup already). The OFA³ framework is immediately extensible to deal with additional conflicting objectives (joining latency and accuracy) and/or distinct conflicting objectives (replacing latency and/or accuracy). We have included this statement at the concluding remarks of the paper.\n\n- [A clear description of the strategy is missing.]\n\nGiven that we have one additional page to prepare the final version of the paper, we have enriched the presentation of OFA³ strategy, at Section 1 (Introduction) and at Section 3 (Methodology).\nNext, we copied some of your comments and added our reply straight after it:\n\n- [My main concern is that there are no comparisons with other methods except for OFA²]\n\nOFA performance has already been contrasted, in its original proposal, with the performance of a high number of relevant contenders, with a clear advantage both in terms of accuracy and computational burden. So, making a direct comparison solely with OFA and OFA² is a valid methodology here, because OFA was already properly positioned before relevant contenders in the literature.\n\n- [What is the search space?]\n\nSince we are dealing with an encoding of 100 binary genes, the search space of the OFA³ is 2^{100}. We also included this information on the text (lines 215-216).\n\n- [How does the supernetwork look like? Since OFA is the basis for this work, more information on that is necessary.]\n\nThe OFA supernetwork is formed by 5 convolutional units placed in sequence. 
Each unit has 3 different parameters whose values can be changed: depth (number of layers) chosen from {2, 3, 4}, channel expansion ratio (number of channels) chosen from {3, 4, 6}, and the convolution kernel size chosen from {3, 5, 7}. This gives OFA a search space of approximately 10^{19}. We did not include this information since it was already very well discussed in the papers of OFA and OFA², which are related to the architecture search. Our method uses OFA and OFA² as its basis. However, we are tackling a different problem here, which is the selection of networks to form efficient ensembles, instead of architecture search. For the final version of the paper, we plan to include this information in the appendix, though.\n\n- [Lines 237-238: I don’t understand this sentence: Why are all ensembles equal?]\n\nThe first population of ensembles is started with all genes set to one (meaning that all architectures from the pool of 100 neural networks participate in the ensembles). Therefore, all individuals of the first population have the same encoding and represent in fact the same ensemble. We have now included this information in the text to make it clearer: Sections 4.1 - Encoding (lines 226\\~230), 5.1 - Summed latency (lines 252\\~254) and 5.2 - Maximum latency (lines 281\\~283).\n\n- [Line 247: “typical Pareto-front”: I would be rather careful with this sentence, since this sounds like, the found ensembles are the Pareto front, and I don’t see the guarantee of more information about that.]\n\nWe changed this sentence in the text (lines 264-265).\n\n- [What are the search times to find the architectures/ensembles?]\n- [What are the resulting accuracy/latency numbers in comparison to other baseline methods?]\n\nWe included a table of comparison between OFA³ (our method) and the baselines OFA and OFA². Table 1 contains information on latency, accuracy and search time (GPU hours).\n\n- [Additionally, the submission checklist is filled rather sparsely.]\n\nWe refactored the submission checklist and now it is complete.", "role": "user" }, { "content": "I appreciate the authors' feedback. They have addressed most of the issues I raised, and I am considering increasing the score.", "role": "assistant" } ]
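The rebuttal above quotes the OFA supernetwork design (5 units; depth in {2, 3, 4}; expansion ratio in {3, 4, 6}; kernel size in {3, 5, 7}) and a search space of roughly 10^19. The snippet below is a back-of-the-envelope check of that figure; the counting convention used (9 per-layer configurations, summed over admissible depths, raised to the number of units) is one common derivation and is an assumption here, not something stated in the rebuttal itself.

```python
# Rough check of the quoted OFA search-space size (one common counting
# convention, assumed for illustration; see the original OFA paper for details).
depths = [2, 3, 4]            # layers per unit
kernel_sizes = [3, 5, 7]      # per-layer kernel size choices
expand_ratios = [3, 4, 6]     # per-layer channel expansion choices
n_units = 5

per_layer = len(kernel_sizes) * len(expand_ratios)      # 9 per-layer configurations
per_unit = sum(per_layer ** d for d in depths)           # 81 + 729 + 6561 = 7371
total = per_unit ** n_units

print(f"~{total:.2e} candidate sub-networks")  # on the order of 10^19, as quoted above
```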
0yj3LL-0KcX
AutoML 2023 Conference
Reviewer_z1tG
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_z1tG, and the following are the paper you need to review:\n```<<0yj3LL-0KcX>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors propose an ensemble technique, OFA3, which addresses the problem of automatically selecting the optimal subset of the already obtained non-dominated subnetworks.\nOFA3 is an extension of the OFA and OFA2-NAS frameworks that formulates the selection of neural networks to form ensembles as a multi-objective optimisation problem and automatically finds efficient ensembles from OFA2.\noverall_review: The paper represents an interesting evolution of existing methods.\nThe performance and results presented show potential for future application.\nHowever, the overall impression is marred by occasional formal errors.\nFurthermore, the application raises questions that need to be clarified in order to allow a better assessment of the method and its performance.\n\nreview_summary: Unfortunately, the positive impressions of the paper are marred by the formal errors and criticisms of the application. In particular, small details in the formal presentation, such as errors in chapter titles or careless mistakes in the graphics, detract from the good impression. In addition, the sometimes superficial presentation in the experiments and method section detract from the overall impression.\nactions_required_to_increase_overall_recommendation: - Please improve the presentation of the figures. These are minor issues, but their increased occurrence deteriorates the good impression.\n- Fix the naming issues of chapter 4 and 5. \n- The application needs an overhaul. In particular, a comparison over several runs and/or initialisations.\nRating: 2 reject, significant issues present\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer z1tG\nDear reviewer z1tG, thank you for your time and your valuable comments addressed to our work. We adjusted our paper taking your comments into consideration. 
Next, we copied some of your comments and added our reply straight after it:\n\n- [Although it suggests that it is better than other methods on the core issues of computational cost, it lacks hard facts such as computation time, etc.]\n- [The authors claim that their method is better in terms of computational resources. This argument is also understandable however no clear evidence is given in terms of recorded computation time or respective comparisons.]\n\nWe have included additional information regarding computational cost, thus allowing a direct comparison in Table 1. Notice that the search stage of OFA is of very low computational cost, opening the possibility of investing resources to promote more elaborate and multi-objective search stages for OFA² and OFA³.\n\n- [A final point here is that when doing experiments that are stochastic in nature, please do several runs and give their average results and associated standard deviations rather than just a single run.]\n\nThe experiments were redesigned to consider the average of multiple runs. We now considered three different evolutionary algorithms for the search: NSGA-II, SMS-EMOA and SPEA2. Each of these algorithms ran for 3 times with 3 different random seeds (the Jupyter notebook with these runs is available [here](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-max-latency.ipynb) for the maximum latency approach, and [here](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-summed-latency-N1000.ipynb) for the summed latency approach).\n\n- [… the paper lacks a self-critical assessment and presentation of possible disadvantages of the method.]\n\nWe have improved the paper so that the possible disadvantages of the proposed OFA³ methodology were left clear: (1) Additional computational burden to perform the two multi-objective searches; (2) Necessity of an additional partition of the training dataset to perform the selection of ensemble components; (3) The OFA³ search may be beneficial only for the maximum latency scenario. For the summed latency scenario, it may be better to use single architectures instead of ensembles.\n\n- [Please improve the presentation of the figures. These are minor issues, but their increased occurrence deteriorates the good impression.]\n- [Fix the naming issues of chapter 4 and 5.]\n- [The application needs an overhaul. In particular, a comparison over several runs and/or initialisations.]\n\nAll these three final requirements have been fully incorporated into this new version of the paper.", "role": "user" }, { "content": "Thanks for the authors' clarification. It satisfactorily addresses most of my concerns, and I am considering increasing the score.", "role": "assistant" } ]
0yj3LL-0KcX
AutoML 2023 Conference
Reviewer_6Qzm
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_6Qzm, and the following are the paper you need to review:\n```<<0yj3LL-0KcX>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes the framework OFA^3 to select efficient ensembles out of an already obtained set of single neural networks. It optimizes for accuracy and latency by utilizing a well-established evolutionary multi-objective optimization algorithm. \noverall_review: The paper gives a clear introduction into the topic and a good overview of related work. The methodology primarily explores established concepts, which, while informative, offers limited originality. The results are insightful but suggest that the performance of the final ensembles is reported on the same dataset that they were searched on, thus could be overfitting.\nreview_summary: The primary contribution of this paper is the formulation of a multi-objective problem to form ensembles with pre-existing neural networks. The resulting performance improvements are modest and the overall impact of the framework is limited.\nactions_required_to_increase_overall_recommendation: (already addressed above)\nRating: 2 reject, significant issues present\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 6Qzm\nDear reviewer 6Qzm, thank you for your time and your valuable comments addressed to our work. We adjusted our paper taking your comments into consideration. Next, we copied some of your comments and added our reply straight after it:\n\n- [Additionally, the paper indicates that the performance results have been reported for the dataset on which the ensembles were searched. So, it cannot be ruled out that the ensembles overfit on this validation set for which they were optimized.]\n\nIn fact, the text was inducing the reader to a misinterpretation of the way in which the dataset was split. We have rechecked the experiments and fortunately the issue was restricted to the unclear description at the body of the paper. We are now providing the correct explanation: all ensembles are being evaluated on another held-out dataset for which they have not been searched for. 
The search was done on a subset of the training set with 50k images. The evaluation was done on the 50k images of the validation set that were not used whatsoever during the search.\n\n- [… the overall potential impact on the field of AutoML is considered as small.]\n\nThe OFA³ proposal involves a cascade of three once-for-all mechanisms during the NAS: a single training step (provided by OFA), a single search step to populate the Pareto frontier (provided by OFA²), and a single selection step to compose the ensemble of efficient learning models, provided by OFA³. This is a remarkable achievement due to two main reasons: (1) OFA has split training and searching the neural architecture, thus making the search stage of negligible cost, when compared to the training of the super-network. Therefore, there is room for more elaborate search strategies, as performed by OFA² and OFA³, both being of a multi-objective nature and cascading. (2) Making a multi-objective selection of efficient components for the ensemble (being the main contribution of OFA³) is motivated by three main factors: (2.1) the guaranteed presence of distinct trade-offs among the candidate components provided by OFA², given that they populate an approximation of the Pareto frontier; (2.2) the assurance that they are independent models and can operate fully in parallel; (2.3) the possibility of automatically choosing just a subset of the efficient models produced by OFA² as components of the best ensemble. Those are the main motivation to support the gain in performance when compared with, for instance, a single model of the same size of the whole ensemble, which, to make matters worse, would not be implementable by resorting to independent fully parallelizable sub-networks. We have included this additional discussion at the concluding remarks of the paper.\n\n- [Also, for plotting the performance of the ensembles during the search process, choosing generations 14, 32, 60, and 66 for sum latency and 38, 179, and 212 for maximum latency seems arbitrary (given that there are 1000 generations in total) and has not been further elaborated on.]\n\nThose portraits along the evolutionary process have been chosen aiming at illustrating the behavior of the evolutionary process along the search. We now changed most of the generations to power of 2 values, so we can explicitly illustrate the non-linear characteristic of the evolution. The generations for the summed latency were replaced from {14, 32, 60, 66} to {16, 32, 64, 70}, while the generations for the maximum latency were replaced from {38, 179, 212} to {32, 512, 700}. We have followed the evolutionary behavior and made the decision of which portrait to get based on a visual comparison of scenarios. The arbitrary nature of the choice is now mentioned in the text.\n\n- [Other inconsistencies, such as Figure 9 (a) stating that the results are based on \"hard voting\" in contrast to Line 230 stating that only the soft voting scheme has been considered, should be avoided as well.]\n\nThis and other inconsistencies have been fixed after a careful proofreading.\n", "role": "user" }, { "content": "Title: Response to Authors\nThank you for your response. \nConcerning the most crucial point (3): it is important to note that the paper presents OFA^3, but does not present the cascade of three once-for-all mechanisms. It is explicitly stated that OFA as well as OFA^2 have been proposed for the OFA framework in prior work (lines 121 and 129). 
Consequently, the primary contribution of this paper is confined to the selection step of composing ensembles, which essentially revolves around formulating the multi-objective problem (accounting for accuracy and latency).\nIn light of these persisting concerns, I will maintain my current score.", "role": "assistant" }, { "content": "Title: Response to Reviewer 6Qzm\nDear reviewer 6Qzm, thank you again for your response. The selection step for optimally composing ensembles of efficient learning models, the main contribution of OFA³, is motivated by the previous existence of diverse trade-offs provided by the outcome of OFA². It is not an incremental contribution, given that it: (1) involves a multi-objective search that is capable of automatically determining the number of components (**innovation**); (2) requires an additional computational cost that is only viable due to the extremely low cost involved when manipulating OFA efficient learning models (**opportunity**); (3) leads to state-of-the-art performance in the objective space (**competitiveness**); (4) adds robustness to the performance metrics (variance reduction), a well-known contribution of committee machines (**theoretical guarantee**). We are strictly following the page limit for the new version of the paper. The allowed extra page was devoted to reacting to the reviewers' feedback, according to the \"Formatting Instructions\" section of the [Call for Papers](https://2023.automl.cc/calls/forpapers). We have: (1) added a table of results with a comparison to the baselines; (2) increased the size of the figures; (3) enriched Section 1 (Introduction) and Section 3 (Methodology); (4) provided a better discussion in Section 6 (Concluding Remarks). All these editing initiatives were to improve the paper presentation, following the feedback given by the reviewers.", "role": "user" }, { "content": "It was a pleasure to engage in the above discussion with the author, which has definitely improved the overall quality of the paper.", "role": "assistant" } ]
0yj3LL-0KcX
AutoML 2023 Conference
Reviewer_TFUH
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_TFUH, and the following are the paper you need to review:\n```<<0yj3LL-0KcX>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors have introduced OFA3, which is an extension of the OFA and OFA2 NAS frameworks. OFA3 utilizes a multi-objective optimization approach to select neural networks for forming ensembles, and it can automatically identify efficient ensembles from OFA2. This approach helps in discovering architectures that exhibit better top-1 accuracy and latency. The authors have also conducted experiments related to ensembles, including tests with encodings and operators in the EA. According to the experimental results, OFA3 outperforms OFA2 and OFA in most cases while also reducing computational costs.\noverall_review: Positive aspects: The paper attempts to address the ensemble formation problem as a multi-objective problem, which is a novel approach. The internal ablation study that compares sum latency, max latency, and voting scheme is also a valuable contribution.\n\nNegative aspects: Upon reading the paper, it is difficult to understand why this methodology is necessary. Additionally, the explanation of the multi-objective problem is excessive and should be condensed into a single section. The formulation of the methodology is lacking in detail and could benefit from a more thorough and reader-friendly (or mathematical) presentation. Lastly, the paper lacks experimental results to support the claims made in the conclusion. It would be beneficial to provide metrics such as computational cost or performance improvement in a table format.\nreview_summary: The evidence presented for why the proposed methodology is necessary may not be sufficient to convince readers. To better support the claims, the authors should provide additional results such as direct comparisons with OFA and OFA2, the extent of performance improvement, and the computational cost aspect mentioned in conclusion. 
This would help in providing a more convincing argument for the necessity of the proposed methodology.\nactions_required_to_increase_overall_recommendation: - Fill in a comparison table for the search time.\n- Add experiments comparing with OFA and OFA2.\n- Compare with other multi-objective methods.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer TFUH\nDear reviewer TFUH, thank you for your time and your valuable comments addressed to our work. We adjusted our paper taking your comments into consideration. Next, we copied some of your comments and added our reply straight after it:\n\n- [While using ensembles in MONAS is a promising attempt, it does not seem to be an innovative approach. Additional evidence is needed to support why this approach is considered good.]\n- [… it is difficult to understand why this methodology is necessary.]\n\nThe OFA³ proposal involves a cascade of three once-for-all mechanisms during the NAS: a single training step (provided by OFA), a single search step to populate the Pareto frontier (provided by OFA²), and a single selection step to compose the ensemble of efficient learning models, provided by OFA³. This is a remarkable achievement due to two main reasons: (1) OFA has split training and searching the neural architecture, thus making the search stage of negligible cost, when compared to the training of the super-network. Therefore, there is room for more elaborate search strategies, as performed by OFA² and OFA³, both being of a multi-objective nature and cascading. (2) Making a multi-objective selection of efficient components for the ensemble (being the main contribution of OFA³) is motivated by three main factors: (2.1) the guaranteed presence of distinct trade-offs among the candidate components provided by OFA², given that they populate an approximation of the Pareto frontier; (2.2) the assurance that they are independent models and can operate fully in parallel; (2.3) the possibility of automatically choosing just a subset of the efficient models produced by OFA² as components of the best ensemble. Those are the main motivation to support the gain in performance when compared with, for instance, a single model of the same size of the whole ensemble, which, to make matters worse, would not be implementable by resorting to independent fully parallelizable sub-networks. We have included this additional discussion at the concluding remarks of the paper.\n\n- [… there is also a shortage of comparisons from the perspectives of search efficiency and computational cost.]\n- [Lastly, the paper lacks experimental results to support the claims made in the conclusion. It would be beneficial to provide metrics such as computational cost or performance improvement in a table format.]\n\nWe have included additional information regarding computational cost, thus allowing a direct comparison. 
Notice that the search stage of OFA is of very low computational cost, opening the possibility of investing resources to promote more elaborate and multi-objective search stages for OFA² and OFA³.\n\n- [Additionally, the authors should provide a more detailed and reader-friendly introduction to OFA².]\n\nGiven that we have one additional page to prepare the final version of the paper, we have enriched the presentation of OFA² in Section 1 (Introduction) and in Section 3 (Methodology).\n\n- [Compare with other multi-objective methods.]\n\nWe extended our work and included other evolutionary multi-objective optimization algorithms (EMOA) to perform the search. We now run our experiments with the NSGA-II, SMS-EMOA and SPEA2 algorithms. For each of these algorithms, we ran the search for 3 different random seeds. We also provided Jupyter notebooks (with outputs) for these comparisons:\n 1. notebook to run OFA² search: [ofa2.ipynb](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa2.ipynb)\n 2. notebook to run OFA³ search (summed latency): [ofa3-summed-latency-N1000.ipynb](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-summed-latency-N1000.ipynb)\n 3. notebook to run OFA³ search (maximum latency): [ofa3-max-latency.ipynb](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-max-latency.ipynb)\n 4. notebook to plot figures and results of OFA³: [ofa3-plot.ipynb](https://anon-github.automl.cc/r/once-for-all-3-89E3/jupyter-notebooks/ofa3-plot.ipynb)\n\nAdditionally, the OFA performance has already been contrasted, in its original proposal, with the performance of a high number of relevant contenders, with a clear advantage both in terms of accuracy and computational burden. So, making a direct comparison solely with OFA and OFA² is a valid methodology here, because OFA was already properly positioned against relevant contenders in the literature.", "role": "user" }, { "content": "Thanks for the authors' response. It satisfactorily addresses most of my concerns, and I will likely raise the rating.", "role": "assistant" } ]
14U6uzrh-wr
AutoML 2023 Conference
Reviewer_WcoC
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_WcoC, and the following are the paper you need to review:\n```<<14U6uzrh-wr>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposed Multi-Predict NAS algorithm that can be used for multiple search spaces and multiple hardware devices. Multi-Predict uses the search space agnostic architecture encoding, ZCP. ZCP is an ensemble of multiple zero-cost proxies so that ZCP is independent of the search space but still can encode architecture information. Multi-Predict applies learnable embeddings to represent different devices, named HWL. Thanks to the novel encoding of architectures and hardware, Multi-Predict can train one predictor for multiple search spaces and different hardware devices. Furthermore, the encoding algorithms enable Multi-Predict to transfer knowledge to new search spaces or add new hardware devices. It is demonstrated that Multi-Predict can transfer knowledge from FBNet to NAS-Bench-201. \noverall_review: This paper focuses on an important NAS topic, transferring knowledge between different search spaces. A very recent work [R1] uses meta-learning to solve this problem and transfers knowledge between different datasets. Multi-Predict is able to train a predictor for multiple search spaces and different devices. Multi-Predict proposes a very simple yet effective architecture encoding (ZCP) that is agnostic to search spaces. I really like the idea of ZCP. I suggest the authors discuss [R1] in the literature review because [R1] is the most recent paper. \n\nThis paper definitely should be accepted. \n\n[R1] Transfer NAS with Meta-learned Bayesian Surrogates, ICLR 2023. https://openreview.net/forum?id=paGvsrl4Ntr \nreview_summary: Multi-Predict makes it possible to transfer knowledge between different search spaces. ZCP encoding is simple, smart, and effective. Although ZCP does not have a very strong algorithm contribution, I really like the idea. In my opinion, ZCP is an ensemble of multiple zero-cost proxies. In that case, it can represent the architecture but is not dependent to search space. Furthermore, Multi-Predict makes it easier to adapt predictors to new search spaces or new hardware. 
\nactions_required_to_increase_overall_recommendation: I suggest the conference accept this paper.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer WcoC \nWe would like to thank the reviewer for their insightful comments. We are pleased to find that the reviewer appreciates the potential impact of our findings.\nWe have included [R1] in our related works in the updated version of the paper. We have also incorporated the comments by other reviewers and added NAS search results to demonstrate the effectiveness of our representation. \n Global/general Response:\n```\nDear Reviewers,\n\nAs we are now a few days into the author-reviewer discussion period, we would greatly appreciate it if you could take the time to review our updated draft and responses to your comments at your earliest convenience. Doing so will allow us to address any additional questions or concerns you may have before the discussion period comes to a close.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n\nWe truly value your time and effort in this process and thank you for your valuable review. \n```\n", "role": "user" }, { "content": "Thanks for the authors' response. The reply answers almost all of my questions. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
14U6uzrh-wr
AutoML 2023 Conference
Reviewer_ur9J
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ur9J, and the following are the paper you need to review:\n```<<14U6uzrh-wr>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The paper proposes a method that encodes zero-cost proxies or hardware latencies to make sample-efficient predictions about the considered task, specifically accuracy or latency, of an architecture. Majority of the paper focuses on evaluation using latency prediction, but accuracy prediction is also evaluated. Within latency prediction, both standard and adversarial tasks are considered, with strong improvements shown on the adversarial task.\noverall_review: Positives:\n* The paper introduces a novel search-space independent NN encoding strategy that is the first step towards search-space, task and device agnostic predictors for NAS\n* The method is sample efficient and has performance competitive with current approaches, in some settings (adversarial task) leading to noticeable gains\n* Evaluation is done across various types of predictors: latency and accuracy, variety of benchmarks and settings\n* Extensive analyses are provided that study the method from various points of view, e.g. in terms of pre-training, training from scratch scenarios, etc.\n\nNegatives:\n* Inconsistent evaluations that make it challenging to make sound conclusions about the method - more details in the technical quality section\n* The paper is difficult to read and understand - there are many settings considered, without proper explanations - more details in the clarity section\n* Accuracy predictors can be of particular interest to many researchers, but the evaluation in the paper studies them in significantly less depth than latency predictors - one would likely expect more balanced evaluation for both types of predictors\n\nreview_summary: The paper provides an interesting method that is a step towards more general NAS predictors. 
However, the paper would strongly benefit from more consistent evaluations and clearer writing so that it is easier to see the advantages of the method.\n\nactions_required_to_increase_overall_recommendation: Make the evaluations more consistent and also improve the clarity of the paper.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response to Reviewer ur9J\n\nWe appreciate the reviewer's detailed feedback and constructive criticism. We would like to address the points raised and provide clarification to improve the understanding of our paper.\n\n> there is Table 2 which uses 20 samples for both TL and HELP.\n\nWe have updated the main text to explain this point further [Line 236]. Regarding Table 1, our intention was to compare our method with the baselines reported in the HELP paper. The default device sets exhibit low sensitivity to the change in the number of samples, which is why we introduced an 'adversarial' device set. Table 2 tests both our TL and HELP methods on a difficult task, showcasing the effectiveness of using hardware latency tables in conjunction with the proposed ZCP and HWL representations. Furthermore, we included Figure 10 in the Appendix to demonstrate the sample efficiency of FSP (TL) and HELP on the adversarial device sets. We chose to report only the most important results in the paper to emphasize the effectiveness of our representation.\n\n> ‘... what is an adversarial task in this case? …’\n\nWith the updated page limit, we have explained the adversarial task in the main text [Line 225, 236] with further details in the Appendix. We have also described the search spaces, hardware latency benchmarks, devices, zero cost proxies in the Appendix to aid readers outside of the area to understand the paper.\n\n> ‘... It would be good to clarify that the method is not used for predicting both accuracy and latency at once using one model - assuming that is the case. …’\n\nFigure 3 indeed depicts different predictors and visually conveys the intended information. We have also added a sentence to clarify this in the Method. [Line 108]\n\n> ‘... in significantly less depth than latency predictors - one would likely expect more balanced evaluation for both types of predictors … ‘\n\nTo evaluate the accuracy predictor in more depth, we have also added ‘NAS search’ result [Line 315] specifically focusing on ZCP vs Vec. We employed the same evaluation method for both accuracy and latency predictors. For hardware latency, we used a hardware embedding table. Our experiments are robust, with confusion matrices showcasing the improvement across a large number of search spaces. In addition, Figures 16 and 17 in the Supplemental material demonstrate substantial cross-transfer results.\n\n> ‘...checklist and FLOPs for Table 3…’\n\nWe have updated the checklist. Unfortunately, we are unable to initialize individual networks on the FBNet search space, as the official code-base does not support it. Further, they have not provided the FLOPs for their neural networks. Which is why neither the reference paper (HELP) or ours include FLOPs.\n\nWe acknowledge the reviewer's concerns and have updated the checklist accordingly. We have also included HELP results for Figure 6 in the Appendix to provide a more complete and insightful evaluation. 
We appreciate the reviewer's valuable input and suggestions. We believe that our work is an important step towards more general NAS predictors and will strive to improve the clarity and consistency of our evaluations in the revised version of the paper. In light of our clarification, we would appreciate it if the reviewer could kindly consider revising their score.\n\nDear Reviewer,\n\nAs we approach the end of the author-reviewer discussion period, we kindly request your attention to review our revised manuscript and the responses to your comments as soon as possible. This will enable us to address any further queries or concerns you may have before the discussion period concludes.\n\nWe have made several improvements to our manuscript to incorporate the valuable feedback provided by the reviewers:\n1. Expanded our literature survey with relevant citations.\n2. Made several clarifications regarding the methodology used as well as explained Sample Hardware Embedding in greater depth.\n3. Employed a highly-cited methodology for neural architecture search (BRP-NAS) to evaluate the effectiveness of our ZCP based accuracy predictors (Figure 10), and included a NAS Search Effectiveness sub-section detailing our search methodology and presenting specific NAS results.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n Global/general Response:\n```\nDear Reviewers,\n\nAs we are now a few days into the author-reviewer discussion period, we would greatly appreciate it if you could take the time to review our updated draft and responses to your comments at your earliest convenience. Doing so will allow us to address any additional questions or concerns you may have before the discussion period comes to a close.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n\nWe truly value your time and effort in this process and thank you for your valuable review. \n```\n", "role": "user" }, { "content": "Title: Response\nThanks for the additional clarifications and analyses. I increased my rating by one level to reflect the improvements to the paper. However, I still find that the paper suffers from inconsistent evaluations and relatively confusing explanations, even if there have been improvements to it (the response does not address the inconsistencies). Accuracy predictors are studied in more depth now, but there is still quite significant disparity between how much Hardware latency predictors and Accuracy predictors are studied.", "role": "assistant" } ]
14U6uzrh-wr
AutoML 2023 Conference
Reviewer_H8rP
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_H8rP, and the following are the paper you need to review:\n```<<14U6uzrh-wr>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes a novel NN encoding called Multi-Predict. The encoding is based on the combination between Zero Cost Proxies (ZCP), and Hardware Latencies (HWL) in order to improve Neural Architecture Search (NAS). The main differences between this paper and prior works is that this one uses a combined approach between ZCPs and HWLs to generate a search space agnostic method to predict neural architecture outcomes. The authors did a variety of comparisons against the BRP-NAS and HELP methods, using NASBench-201 as the primary comparison framework.\noverall_review: Strengths:\n\nThe authors introduce a new method for creating neural network encodings, and leveraging it to create better neural architectures. The motivation is clear, where we are looking to improve model latencies and performance simultaneously, while also keeping the usually large NAS costs down. To accomplish this, they encode information from a ZCP and a HWL in order to introduce a more efficient NAS search space.\n\nWeaknesses:\n\nTaking from the definition of this paper, there is a Zero Cost Proxy that should be used in this final method to perform NAS. While there is a basic definition of what a ZCP is, and how they are used, there is no explanation of these ZCPs outside of the appendix. There is also a lack of experimentation for the ZCPs. One such example would be to compare the base ZCP performance against your model. While we could assume that your model would perform better due to being an ensemble method, this is not explicitly mentioned in the paper. \n\nI also feel that even though the Spearman's rank correlation is useful in describing the effectiveness of this method, showcasing test dataset accuracies on NAS tasks would be very useful for future comparison against this work.\n\nreview_summary: Overall, this paper has concise experiments that prove the major points layed out in the abstract. The paper also clearly identifies its own contributions to the field, and how they are useful. 
However, this new NN encoding does not appear to have a high potential within the field of AutoML.\nactions_required_to_increase_overall_recommendation: I would recommend adding further experiments on the ZCPs in order to give a more holistic comparison against the baseline for this algorithm. Also, including accuracy values for some experiments will improve the paper’s ability for future comparisons.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer H8rP\nWe appreciate the reviewer's feedback and acknowledge the concerns raised. We would like to address these points and provide clarification to enhance the understanding of our work.\n\nOur work employs a set of Zero Cost Proxies (ZCPs), and while we understand that a detailed description of each ZCP would be helpful, including these descriptions within the main body of the paper is not feasible due to space constraints. However, we have provided references to the type and citation for each ZCP used in our study in Table 6 of the Supplemental material. This allows interested readers to explore the specific details of each ZCP in their respective sources.\n\n> ‘... compare the base ZCP performance against your model….’\n\nThank you for pointing this out. Generally, our method improves over the best zero cost proxy significantly. Further, we have included all device - ZCP correlations in Figure 12 to serve as a reference point. \n\n> ‘... showcasing test dataset accuracies on NAS tasks would be very useful for future comparison against …’\n\nThe primary focus of our paper is on representation. We agree that including test data-set accuracies on NAS tasks could be beneficial for future comparisons. To this end, we add a ‘NAS Search Effectiveness’ section [Line 315] to our accuracy evaluation to clearly compare between Vec and ZCP. We would like to remark that conducting a robust study on a large set of search algorithms to obtain test accuracies would require a separate investigation to assess the interaction between input representation and search strategy. As such, it is beyond the scope of our current work. However, we choose one of the state of the art iterative selection strategy introduced in BRP-NAS to evaluate our search effectiveness. We believe that our novel neural network encoding method and the experiments presented in the paper make significant contributions to the field of AutoML.\n\nDear Reviewer,\n\nAs we approach the end of the author-reviewer discussion period, we kindly request your attention to review our revised manuscript and the responses to your comments as soon as possible. This will enable us to address any further queries or concerns you may have before the discussion period concludes.\n\nWe have made several improvements to our manuscript to incorporate the valuable feedback provided by the reviewers:\n1. Expanded our literature survey with relevant citations.\n2. Made several clarifications regarding the methodology used as well as explained Sample Hardware Embedding in greater depth.\n3. 
Employed a highly-cited methodology for neural architecture search (BRP-NAS) to evaluate the effectiveness of our ZCP based accuracy predictors (Figure 10), and included a NAS Search Effectiveness sub-section detailing our search methodology and presenting specific NAS results.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n Global/general Response:\n```\nDear Reviewers,\n\nAs we are now a few days into the author-reviewer discussion period, we would greatly appreciate it if you could take the time to review our updated draft and responses to your comments at your earliest convenience. Doing so will allow us to address any additional questions or concerns you may have before the discussion period comes to a close.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n\nWe truly value your time and effort in this process and thank you for your valuable review. \n```\n", "role": "user" }, { "content": "I appreciate the authors' feedback. It satisfactorily addresses most of my concerns, and I am inclined to revise the rating upwards.", "role": "assistant" } ]
14U6uzrh-wr
AutoML 2023 Conference
Reviewer_Updo
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Updo, and the following are the paper you need to review:\n```<<14U6uzrh-wr>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper tackles the problem of search space-dependent performance prediction models used so far in NAS. This paper proposes a search space-independent neural network encoding based on zero-cost proxies and hardware embeddings. This way, the proposed method shows an improved correlation for latency prediction in comparison to SOTA baselines. Furthermore, this method enables transfer to other unseen devices and search spaces as well as tasks for latency and accuracy prediction. The paper evaluates the proposed method on NAS-Bench-201 and FB-NET for different devices.\noverall_review: This paper introduces an important research aspect.\nIt is mainly well-written and motivates the proposed approach in a good way.\n\nHowever, there are some open question, which would need to be answered. \nEspecially, my main concern is, that there is no actual search result reported in this paper, but only correlations, which does not show the ability to find high-performing architectures. \n\nreview_summary: The idea of taking a step forward and using search space-independent predictors is great, I believe more results on actual search results and more common NAS-Benchmarks are needed.\n\n\nactions_required_to_increase_overall_recommendation: In order to increase the overall recommendation, this paper would need to answer the open questions in Technical Quality And Correctness. \n\n**After Rebuttal:** I increase my score from 4 to 6.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Updo\nWe appreciate the reviewer's feedback and acknowledge the concerns raised. We would like to address these points and provide clarification to enhance the understanding of our work.\n\n> ‘... 
Section 5: No accuracy values are shown and associated with this no search results in terms of found architectures and their accuracies….’\n\nIn this paper, we study two ways of representing neural networks, transfer learning of predictors, and hardware embeddings. We take a first step in evaluating the effectiveness of these methods from a prediction-based NAS perspective. We agree that including test data-set accuracies on NAS tasks could be beneficial for future comparisons. To this end, we add a ‘NAS Search Effectiveness’ section [Line 315] to our accuracy evaluation to clearly compare between Vec and ZCP. We choose one of the state of the art iterative selection strategy introduced in BRP-NAS to evaluate our search effectiveness. (Refer to response to reviewer H8rP). We believe that there is an interaction between search algorithms and the input representations, which warrants a separate study.\n\n> ‘... Where is the latency information from? …’\n\nWe detail our latency sources in Appendix A.2.\n\n> ‘...which does not sum to 28?...’\n\nWe say we test transfer of predictors on 28 NAS search spaces AND tasks. TransNASBench101 Micro and Macro have different sets of tasks, and NB201 and NB301 have different datasets, as detailed in Table 8. Our accuracy transfer matrix in the Appendix (Figure 17) studies transfer across all of these search spaces and tasks. There is limited correlation between tasks in the TB101 space, which makes the transfer task challenging.\n\n> ‘... , which are also provided in Meta et al….’\n\nThe paper provides results on 'NAS Methods' focusing on the ranking ability of NAS Algorithms, whereas we look at representations for MLP prediction-based NAS. We believe a study on the effectiveness of different NAS Algorithms (RS/RE, etc.) and Predictors (GP, XGB, etc.) warrants a separate manuscript. For transfer, the Meta et al. paper looks at the generalizability of hyperparameters, and not the models themselves.\n\n> ‘... Line 75: Which 95 domains? …’\n\nFigure 9 Micro -> Macro transfers (10*7) = 70\nNB2CF10, NB2CF100, NB2IM16, NB3CF10, NB3SVHN transfers (5*5) = 25\n70 + 25 = 95\n\n> ‘...How many runs were conducted for all tables?...’ \n\n5 runs were conducted for all the tables. We will include this information in the appendix. Thank you.\n\n> ‘... more than an order of magnitude”: very vague statement. The corresponding Figure 1 only shows improved correlation but…’\n\nWe refer to improving the sample efficiency of the predictors without compromising the effectiveness of the predictors. We improve correlation with fewer samples.\n\n> ‘... Line 72: “as few as 4 samples“: What and where is the according experiment? …’\n\nThis information can be found in Figure 7 (Multi-Search-Space Latency Prediction). It demonstrates how effective a good representation can be for both few-shot learning and few-shot transfer of hardware predictors. \n\n> ‘... it is not clear to me between what training and test device information this correlation is calculate …’\n\nThank you for the note, we have clarified it on Line 225. The latency/accuracy correlation between two devices is the task distance. \n\n> ‘... The focus of this paper is more on the hardware embedding than on the actual contribution of multi-search approaches….’\n\nHardware embedding enables higher sample efficiency for multi-hardware latency prediction tasks. 
The hardware embedding (which actually is represented as an embedding table) enables a good hardware device description initialization, which improves sample efficiency.\n\n> ‘... (1) sample: It is not clear to me what that means….’\n\nThanks for pointing it out, we have attempted to clarify this explicitly in Section 3.1 [Line 140]. To represent a device, an 'n' dimensional vector is used. This 'n' dimensional vector is populated with latency measurements of 'n' fixed neural networks on the device.\n\n> ‘... h^{\\tau}^{1}”: It is not clear what that means. …’\n\nWe search for the device that has the highest correlation with the 'new' device, and use its embedding as initialization for the new device.\n\n> ‘... What are x^{\\tau}^1 and x^{\\tau}^2? …’\n\nThe dimensions of the vector representation of the neural network may not be the same. x^{\\tau}^1 is the 'vec' representation of a sample NN architecture in \\tau^{1} search space. (Line 157: Vec Encoding (X^{\\tau})\n\n> ‘...does not clearly describe on what data the predictors are pretrained and for how long …’\n\nThe predictors are pre-trained on 900/4000 latency samples on a training device set. The size of the NASBench-201 dataset is 15625, and size of the FBNet dataset is 5000. The test data is all the remaining architectures which are not used for training. We have clarified this in Section 4 [Line 219].\n\n> ‘... not that different given their corresponding starting points. …’\n\nWe agree, in general, we remark that pre-training is important.\n\n> ‘... Also in what terms is the improvement calculated Correlation or accuracy? …’\n\nThe diagonal has a non-zero improvement because ZCP predictors are inherently better without transfer learning. Improvement is the difference in spearman rank correlation. We have made this clarification in Figure 9, thank you for pointing it out.\n\n> ‘... which does not show the ability to find high-performing architectures. …’\n\nWe focus on the representation in this paper, test data-set accuracies would require us to do a robust study on a large set of search algorithms, which we believe is a separate study to assess the interaction between input representation and search strategy.\n\nWe have made major revisions to this paper to address the concerns of the reviewer. We believe that our search space agnostic representation can significantly improve the sample efficiency of NAS for both accuracy and latency, and would request the reviewer to kindly reconsider their scores in light of our revision and clarifications. \n\n Global/general Response:\n```\nDear Reviewers,\n\nAs we are now a few days into the author-reviewer discussion period, we would greatly appreciate it if you could take the time to review our updated draft and responses to your comments at your earliest convenience. Doing so will allow us to address any additional questions or concerns you may have before the discussion period comes to a close.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n\nWe truly value your time and effort in this process and thank you for your valuable review. \n```\n", "role": "user" }, { "content": "Title: Response to Authors\nI thank the authors for the responses. However, I disagree that showing the test accuracies of the found architectures from NAS benchmarks (therefore only querying the architectures) is a separate study. 
In order to assess the ability to find well-performing architectures in terms of hardware and accuracy, both pieces of information are needed. I appreciate that the authors included a section *NAS Search Effectiveness*. This section leads to more questions about how a test accuracy of 100% in NAS-Bench-101 is possible when the optimum is 94.32% (the same holds for NAS-Bench-201). \nI will slightly raise my score. Though some of my concerns and questions were answered, I still believe that this paper needs more information about the found architectures in terms of accuracy.", "role": "assistant" }, { "content": "Title: Response To Reviewer\nDear Reviewer,\n\nThank you for your valuable feedback.\n\nWe would like to clarify that in the caption of Figure 10, we had specifically stated that the Accuracy is normalized within the range of 0 to 1. To eliminate any confusion, we will revise the figure to display the un-normalized test accuracy rather than the normalized version. As the rebuttal period has expired, we are unable to update the paper with the new figure. Please find this anonymized link to the figure here: https://ibb.co/MVrSCSy\n\nIt is worth mentioning that we employed a recent, highly-cited methodology for assessing the search performance of predictors (BRP-NAS) based on the ZCP representation. We evaluated the ZCP representation across approximately 423,000 neural networks within the NASBench-101 search space, thus establishing a fair benchmark for future comparisons. The updated figure now includes the precise accuracy of the discovered architectures. We agree that presenting the test accuracies of the identified architectures on NAS Benchmarks is essential, and have amended our manuscript accordingly.\n\nIf there are any additional steps we can take to assist you in reevaluating your score, please let us know.", "role": "user" }, { "content": "Title: Response to Reviewer\nI appreciate the authors' fast response. I am willing to raise my score and encourage the authors to update the submission (in case of a camera-ready version) by including the updated figure and the information about the test accuracies of the identified architectures. ", "role": "assistant" } ]
14U6uzrh-wr
AutoML 2023 Conference
Reviewer_NUxr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NUxr, and the following are the paper you need to review:\n```<<14U6uzrh-wr>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The paper provides a few-shot multi-search-space latency and accuracy predictors that could be used to expedite NAS methods. The paper focuses on generalizing the task of prediction such that it is agnostic to task, search space, and hardware. I feel this is the most interesting contribution from the paper - Instead of using a vector for representing NNs, authors leveraged a search-space independent network embedding vector that is based on either zero-cost proxies or hardware latency measurements.\noverall_review: •\tStrengths:\n\no\tIt is a relevant topic for the conference.\n\no\tThe message of the paper is well supported.\n\no\tThis paper addresses a SoTA problem in the AutoML domain. \n\n\n•\tWeaknesses:\n\no\tComputational cost and carbon footprint of the accuracy and latency predictors have not been reported.\n\no\tThe manuscript contains a few typos.\n\no\tA multi-device latency predictor may not work in practice.\n\nreview_summary: Overall the idea of using a novel network embedding for building design-space and task-agnostic accuracy/latency predictor is quite interesting. I give this paper a weak accept, but I'm willing to change my evaluation if the mentioned concerns are addressed.\nactions_required_to_increase_overall_recommendation: Please consider addressing my concerns mentioned in the previous sections.\nRating: 8 accept, good paper\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer NUxr \nWe would like to thank the reviewer for their insightful comments.\n\n> ‘... I believe it is a huge claim to say we ... even with some learning samples. A multi-device latency predictor may not work in practice….’\n\nWe acknowledge that there are several HW designs that would be sufficiently un-correlated with the training set of devices such that pre-training does not work as effectively. 
In fact, our construction of the ‘Adversarial device set’ is an explicit attempt to model situations where the ‘test’ device is novel and difficult to model using existing reference devices. We do not intend to claim that we have a latency predictor that works for any device, rather claim that we have improved the sample efficiency of existing latency predictors, specifically by using HWL and ZCP representations and learnable HW embeddings. A multi-device latency predictor may not work in practice, but our tests indicate that transfer of knowledge from existing search spaces, even if the correlation is not very high, still benefits end to end sample efficiency of the prediction task. Kindly note that we do not have a ‘multi-device’ latency predictor, rather, we have a baseline predictor that works on a set of training devices, and is adapted to the specific test device. This adaptation process results in a predictor that is not generalized, and works only on the specific target HW . Additionally, we consider a device to be a combination of HW and software, so if the software changes, it would be a different device in our characterization. We completely agree that predicting the latency on FPGAs with different accelerator parametrization, quantization levels and routing objectives are more difficult. In our case, by FPGA, we refer to a specific accelerator on the fpga, not the unconstrained search space of possible accelerators. again, each of those would be considered a separate device in our methodology. We follow the convention of prior work HELP and refer to a single instantiation of the accelerator as an FPGA to keep it easy to compare results.\nWe agree that multi-device prediction is not a trivial task, and an extremely novel HW device may be distinct enough to make transfer learning less effective. Nevertheless, we take steps towards increasing the sample efficiency of existing methods for few-shot latency and accuracy prediction. We believe that as an increasing catalogue of HW latencies is made available to us, in conjunction with our search space agnostic representation of NN architectures, it would be possible to build unified datasets with a very diverse device set, making few-shot latency prediction on HW devices more feasible in the future.\n\n> ‘... Please consider discussing the differences between this paper and ZiCO. …’\n\nWe believe that the performance of accuracy predictors would improve if more ZCPs are added. We adapt prior work NAS-Bench-Suite-Zero to obtain the ZCPs for our studies. We will discuss ZiCO in the related work.\n\n> ‘... ‘TL ZCPVec ET’ provides better results than ‘TL HWLVec ET’. Why? …’\n\nWe find that some of the ZCPs have a higher correlation than the reference training devices with the test devices. This is in agreement with our analysis in Figure 15 (Appendix) where we discuss the effectiveness of input-output correlation for predictors.\n\n> ‘... most suitable regression model? …’\n\nWe adapt predictors from BRP-NAS and HELP for consistency with prior work. A separate study on predictors (XGB, GP etc.) is warranted, but we focus on representations in this paper. We believe that a study on the interaction between predictors, representations and search algorithms warrants a separate, in depth investigation. However, we have added NAS search results to demonstrate the end-to-end effectiveness of our representation. Please see response to reviewer H8rP for more on this.\n\n> ‘... 
The computational cost and carbon footprint of the accuracy and latency predictors have not been reported. …’\n\nIt takes approximately 10 GPU minutes to train a predictor for accuracy and latency. These predictors are extremely sample efficient to train. Our paper focuses on training efficient predictors for NAS, and we consider the carbon footprint of the training process to be negligible when compared to other methods.\n\n> ‘... this paper needs a separate section for the ablation study. I expect the authors to clarify if they represent the impact of each individual contribution. …’\n\nWe investigate several aspects of few-shot regression of latency and accuracy in this paper. To introduce the terminology and methodology in a complete fashion, we conduct some ablation studies and include them in the paper. We further support our claims with more experiments in the Appendix. \n\n> ‘... figure legend which of these lines is proposed by the authors. Plus, I recommend authors cite each method in the figure legend….’\n\nWe have cited the respective methods in Figure 6, and will regenerate all figures to include appropriate citations.\n", "role": "user" }, { "content": "Title: Response\nThanks for addressing my concerns. I increased my score from 7 to 8. ", "role": "assistant" }, { "content": "Title: Author Rebuttal Discussion Reminder\n Global/general Response:\n```\nDear Reviewers,\n\nAs we are now a few days into the author-reviewer discussion period, we would greatly appreciate it if you could take the time to review our updated draft and responses to your comments at your earliest convenience. Doing so will allow us to address any additional questions or concerns you may have before the discussion period comes to a close.\n\nIf our rebuttal satisfactorily addresses your concerns, we kindly request that you consider revising your rating of our submission accordingly.\n\nWe truly value your time and effort in this process and thank you for your valuable review. \n```\n", "role": "user" }, { "content": "I am glad to have had the above discussion with the author, which has made the paper much stronger.", "role": "assistant" } ]
1tZY0La5GFRp
AutoML 2023 Conference
Reviewer_LpEh
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_LpEh, and the following are the paper you need to review:\n```<<1tZY0La5GFRp>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper presents an RL-based Neural Architecture Search method that uses elementary operations to construct novel building blocks for sequence datasets.  The paper focuses on searching for novel building blocks, rather than reusing widely used NN operations, such as the normalization layer. During the search step, the authors used synthetic datasets of increasing complexity to expedite the search procedure. This is a remarkable contribution.\noverall_review: Strengths:\n\no\tThe manuscript is well-written and easy to follow. \n\no\tProposing a NAS method that searches for elementary operations (e.g., add, multiply, etc.) to discover novel building blocks. \n\no\tThe message of the paper is clear and well-supported.\n\nWeaknesses:\n\no\tExperiments of the paper are very limited to one sequential dataset. \n\no\tThe novelty of the paper is marginal, similar problems have been previously addressed in [2].\n\no\tSome parts of the paper need more clarification, such as the procedure of graph splitting. \n\n[2] Real, Esteban, et al. \"Automl-zero: Evolving machine learning algorithms from scratch.\" International Conference on Machine Learning. PMLR, 2020.\n\nreview_summary: This paper emphasizes finding novel building blocks. However, the current version lacks diverse experimental results with deep analysis. Thus, I give this paper a weak rejection, but I will change my evaluation if the mentioned concerns are addressed.\n\n\nactions_required_to_increase_overall_recommendation: As also mentioned in Section 5, I highly recommend authors consider evaluating the non-sequential datasets, such as CIFAR-10. This can significantly improve the paper's quality and impact. 
Please also consider addressing my concerns mentioned in the previous sections.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer LpEh\nWe thank the reviewer for the detailed feedback and for pointing out the significance of our contribution. In the following, we would like to address your comments and concerns:\n\n1. Synthetic Datasets: The three synthetic datasets aim to evaluate the general performance of the building blocks, such as remembering a specific input over long time lags in the domain of sequential data. We assume to see comparable performances during a search process if we select other synthetic datasets that assess similar properties. This implies that we do not need to update/add synthetic datasets for other real-world complex tasks. This approach aims to find building blocks that generalize well to a wide variety of real-world complex tasks.\nIn lines 56 to 59 and the appendix, we have included a more detailed explanation to clarify the motivation behind using these datasets and how they facilitate the search procedure.\n\n2. Further building block evaluation: Although the emphasis of this work is on the novelty of the methodology rather than on the resulting building blocks, we acknowledge the importance of evaluating them on a broader range of real datasets. As per your suggestion, we have evaluated the performance of our building blocks on the Penn Treebank dataset and further demonstrated that ELENA1 and ELENA2 are parameter-efficient and generalize well to various real-world data.\n\n3. Computation Cost: Each search process requires about 100 GPU hours on a single NVIDIA A100 GPU. This amounts to approximately 10 kg of CO2 emissions. Since we conducted four separate search processes, we reported a total of 400 GPU hours in our OpenReview submission.\n\n4. Building Block evaluation: We appreciate your observation regarding the difference in performance between ELENA1 and ELENA2 on synthetic and real data. The reason for this difference might be rooted in the fact that – compared to ELENA1 – ELENA2 follows the concept of a cell state which is beneficial for memorizing the input at the initial time step for a predefined sequence length. Although ELENA2 performs slightly worse than ELENA1 on the real-world dataset, it still is on par with or even outperforms the other architectures with only ⅕ of the size.\n\n5. Clarity: Thank you for pointing out the remaining unclarities in our work. We have addressed your specific concerns:\n- Line 31: In this regard, we want to emphasize that current NAS approaches frequently incorporate well-established ML concepts, such as pooling layers, normalization layers, or attention mechanisms. Particularly in the domain of sequential data, related work utilizes fixed highway connections, or LSTM cell-inspired variables, which are not trivial to design in the first place.\n- Lines 224-226: We have rephrased this part to provide better insight into our choice of search strategy. 
In general, since the big search space of elementary mathematical operations only includes a relatively low fraction of high-performing building blocks, employing an RL agent that acquires knowledge from graph structures to consequently refine the search space, appeared to be a natural choice.\n- Line 162-163: We have revised the sentence in question and provided a detailed explanation of the graph-splitting approach in the appendix. In essence, this approach simply refers to the order in which the edges of a graph are predicted, without altering the graph itself. Through various ablation studies, we have empirically investigated the graph neural network architecture, the graph embedding, and the reinforcement learning algorithm, including the edge prediction order. Our findings indicate that predicting edges based on sub-graphs yields high-performing building blocks earlier in the search process.\n\nIn summary, we have addressed your concerns by expanding our experiments, providing deeper analysis, and improving the clarity and presentation of our manuscript. As suggested and also highlighted in the section ‘Limitations’, we will perform our method on non-sequential datasets (such as CIFAR-10) in subsequent work. We hope that these revisions will convince you of the merit of our work and encourage you to reconsider your evaluation.\n\nThank you once again for your valuable feedback!\n Global/general Response:\n```\nWe once again thank the reviewers for their extensive feedback, through which we could further improve the clarity and technical quality of our work. This can be summarized as follows:\n\n1. Potential Impact On The Field Of AutoML: We appreciate the reviewers' acknowledgment of the novelty and potential impact of our work. Reviewers y13y and NxY5 explicitly recognized the significance of our contributions and considered our method as \"worth being introduced to the AutoML community.\" Other reviewers conveyed their receptiveness to our method, provided that we conduct more extensive evaluation of our building blocks. Following their suggestions, we have performed additional experiments on the widely-used real-world Penn Treebank dataset. It is noteworthy that the neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and that our computational resources have been a constraining factor. Nonetheless, these additional results highlight the parameter-efficiency and generalizability of the building blocks and thus the effectiveness of our method.\n\n2. Technical Quality And Correctness: No major technical issues were identified by the reviewers.\nIn the revised paper, we have addressed the remaining questions, including the choice of search strategy, the use of synthetic datasets, the size of the computation graph, the utilization of positional encoding, and the approach of graph-splitting. Also, we would like to thank reviewer fYxV for the high rating regarding the reproducibility of our method.\n\n3. Clarity: We appreciate that all reviewers found our approach clearly presented and easy to follow. The remaining clarity concerns could be cleared up as part of the comments as well as in the revised paper.\n\nWe hope that our revisions resolve your questions and concerns and that the additional experiments provide compelling evidence of the effectiveness of our method. Thank you for your time and efforts!\n```\n", "role": "user" }, { "content": "Title: Response\nThanks for addressing my comments. 
I just increased the 'Clarity' score to 4, and the 'Technical Quality And Correctness Rating' score to 3 (each increased by one point). Although the idea of the paper is interesting, I still believe more experiments are required to better evaluate ELENAS. I don't know how well the proposed approach will really scale to larger datasets in view of the huge computational overhead (100 GPU hours on a single NVIDIA A100 GPU for each search process).", "role": "assistant" }, { "content": "Title: Author response\nThank you for your response. We consider one of the main novelties of our method to be that scaling to larger datasets is not necessary during the search process, as we solely utilize synthetic datasets to search for parameter-efficient building blocks that can then be scaled to various datasets of any size and complexity. \nAlthough the entire search process takes about 100 GPU hours, ELENA1 was found at 52% and ELENA2 at 41% of the search process. And since our method has already reached its maximum computational complexity, it requires less than 2 GPU days to find high-performing, general-purpose ELENAs. In comparison, Efficient NAS (ENAS) finds recurrent cells in about 0.5 GPU days, DARTS in about 1 GPU day, and NAS in about 10,000 CPU days (Liu et al., 2018). Given these facts, we would highly appreciate it if you considered adapting your overall review rating as well. Thank you!", "role": "user" }, { "content": "I appreciate the fruitful discussion with the author, which has certainly made the work more robust.", "role": "assistant" } ]
1tZY0La5GFRp
AutoML 2023 Conference
Reviewer_v8Xm
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_v8Xm, and the following are the paper you need to review:\n```<<1tZY0La5GFRp>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes Elementary Neural Architecture Search (ELENAS), a method that learns to combine elementary mathematical operations to form new building blocks for deep neural networks. These building blocks are represented as computational graphs, which are processed by graph neural networks as part of a reinforcement learning system. In a set of experiments, the authors demonstrate that their method leads to efficient building blocks that achieve strong generalization and transfer well to real-world data. When stacked together, they approach and even outperform state-of-the-art neural networks at several prediction tasks. Using the proposed method, authors present the building blocks ELENA1 and ELENA2 for the domain of sequential data. \noverall_review: Overall the method proposed in this paper is sound to the reviewer.\n\nstrength:\n- the paper is well-written and the details are discussed properly\n- the motivation and the overall presentation is clear\n\nlimitation:\n- this paper lacks sufficient empirical results to support the proposed method, which is the most concerning part to the reviewer. Without extensive experiments, the audience does not have sufficient confidence in applying the proposed method. The reviewer would suggest authors conduct more experiments.\n\nreview_summary: Overall the method proposed in this paper is sound to the reviewer. However, the experiment evaluation is very limited, which is not convincing enough to tell whether the proposed method, the found building blocks are generally better than others. Given this limited evaluation, the reviewer leans to not support acceptance of this paper. \nactions_required_to_increase_overall_recommendation: I would suggest author conduct more empirical experiments, e.g. 
more datasets, compare with more blocks, etc.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer v8Xm\nThank you for acknowledging the soundness of our proposed method as well as the clear presentation of our method. We appreciate your feedback and understand your concerns regarding the limited experimental evaluation. Thus, we have taken your suggestions into account and have expanded our experiments by another real-world dataset suggested by one of the other reviewers. \n\nWe have additionally tested ELENA1 and ELENA2 on the Penn Treebank dataset to tackle your main concern. Although neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and our computational resources have been a constraining factor, the real-world experiments further demonstrate the generalizability and parameter-efficiency of our building blocks. While it is true that LLMs currently dominate this field, the need for parameter-efficient models remains significant, especially for real-world applications with resource constraints or latency requirements. We believe that fostering research in parameter-efficient building blocks can also contribute to a more sustainable AI landscape. We hope that this perspective on this matter further emphasizes the potential relevance of our work.\n\nWith these additional empirical experiments and analyses, we hope to have addressed your concerns. As hinted at in the Limitations section, we will conduct further experiments in subsequent work, exploring different domains and search spaces.\nThus, we would highly appreciate it if you considered raising the rating of our work. \n\nThank you once again for your valuable feedback!\nDear reviewer v8Xm,\n\nAs the discussion period is coming to an end soon, we would appreciate it if you could evaluate our clarifications and the updated paper. If our response resolves your questions and concerns, we kindly ask that you consider raising the rating of our work.\n\nThank you!\n Global/general Response:\n```\nWe once again thank the reviewers for their extensive feedback, through which we could further improve the clarity and technical quality of our work. This can be summarized as follows:\n\n1. Potential Impact On The Field Of AutoML: We appreciate the reviewers' acknowledgment of the novelty and potential impact of our work. Reviewers y13y and NxY5 explicitly recognized the significance of our contributions and considered our method as \"worth being introduced to the AutoML community.\" Other reviewers conveyed their receptiveness to our method, provided that we conduct more extensive evaluation of our building blocks. Following their suggestions, we have performed additional experiments on the widely-used real-world Penn Treebank dataset. It is noteworthy that the neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and that our computational resources have been a constraining factor. Nonetheless, these additional results highlight the parameter-efficiency and generalizability of the building blocks and thus the effectiveness of our method.\n\n2. 
Technical Quality And Correctness: No major technical issues were identified by the reviewers.\nIn the revised paper, we have addressed the remaining questions, including the choice of search strategy, the use of synthetic datasets, the size of the computation graph, the utilization of positional encoding, and the approach of graph-splitting. Also, we would like to thank reviewer fYxV for the high rating regarding the reproducibility of our method.\n\n3. Clarity: We appreciate that all reviewers found our approach clearly presented and easy to follow. The remaining clarity concerns could be cleared up as part of the comments as well as in the revised paper.\n\nWe hope that our revisions resolve your questions and concerns and that the additional experiments provide compelling evidence of the effectiveness of our method. Thank you for your time and efforts!\n```\n", "role": "user" }, { "content": "I value the authors' response. Their response has resolved the majority of my concerns, and I am considering revising the rating.", "role": "assistant" } ]
1tZY0La5GFRp
AutoML 2023 Conference
Reviewer_y13y
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_y13y, and the following are the paper you need to review:\n```<<1tZY0La5GFRp>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors introduce ELENAS, a search method for constructing neural network building blocks based on RL + GNN search over computation graphs, consisting of nodes representing learnable parameters and edges representing elementary mathematical operations. Constructed computation graphs are then mapped back to architectures for performance estimation on synthetic datasets, serving as the reward function. Empirically, they show that ELENAS can design novel building blocks with strong predictive performance, in this case in the domain of sequential data.\noverall_review: Strengths:\n- Intuitive search process that is grounded in strong algorithmic foundations (PPO and GNN)\n- Searched architectures, ELENA1 and ELENA2, demonstrate empirically strong performance on both synthetic and real-world datasets -- while being much smaller (in terms of # of parameters) than contemporary methods\n\nWeaknesses:\n- Limited experiment scope \n - Only graphs with 32 nodes\n - Only consider sequential data domain\n - Not obvious how to extend synthetic dataset generation to e.g. image classification\n- In results, no comparison with Transformer-based models\n\nreview_summary: The authors introduce ELENAS, a straightforward and reasonable search method to develop novel NN architecture blocks. They demonstrate strong results in the domain of sequential data. While experimental results hint at their method's usefulness, more comprehensive studies are lacking. Overall, an interesting idea with high impact potential, if more sufficiently studied.\nactions_required_to_increase_overall_recommendation: More experiments -- see discussion above.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer y13y\nThank you for recognizing the novelty and potential impact of our work. We appreciate your feedback and agree that broader experimental work is desired. 
In response to your concerns, we have expanded our experimental evaluation and addressed the specific comments:\n\n1. Experiments: We additionally tested ELENA1 and ELENA2 on the real-world Penn Treebank dataset. The results further demonstrate the adaptability and effectiveness of our method.\n\n2. Information on Position Encoding: We have clarified the use of trigonometric functions to encode information position in Section F of the appendix. In essence, the position encoding is inspired by the Transformer model's positional encoding, which uses trigonometric functions (sine and cosine functions) to encode the position of tokens. In this work, positional encoding is just a single additional node feature that is constructed using the sine function.\n\nWe hope that the additional experiments and improvements in the manuscript address your concerns and demonstrate the effectiveness and generalizability of our proposed method. As hinted at in the Limitations section, we will conduct further experiments in subsequent work, exploring different domains and search spaces. Regarding image classification, synthetic datasets could for instance be small images with simple geometric shapes such as circles, squares, or triangles optionally with varying colors, sizes, positions, or noise. \n\nThank you once again for your valuable feedback!\n Global/general Response:\n```\nWe once again thank the reviewers for their extensive feedback, through which we could further improve the clarity and technical quality of our work. This can be summarized as follows:\n\n1. Potential Impact On The Field Of AutoML: We appreciate the reviewers' acknowledgment of the novelty and potential impact of our work. Reviewers y13y and NxY5 explicitly recognized the significance of our contributions and considered our method as \"worth being introduced to the AutoML community.\" Other reviewers conveyed their receptiveness to our method, provided that we conduct more extensive evaluation of our building blocks. Following their suggestions, we have performed additional experiments on the widely-used real-world Penn Treebank dataset. It is noteworthy that the neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and that our computational resources have been a constraining factor. Nonetheless, these additional results highlight the parameter-efficiency and generalizability of the building blocks and thus the effectiveness of our method.\n\n2. Technical Quality And Correctness: No major technical issues were identified by the reviewers.\nIn the revised paper, we have addressed the remaining questions, including the choice of search strategy, the use of synthetic datasets, the size of the computation graph, the utilization of positional encoding, and the approach of graph-splitting. Also, we would like to thank reviewer fYxV for the high rating regarding the reproducibility of our method.\n\n3. Clarity: We appreciate that all reviewers found our approach clearly presented and easy to follow. The remaining clarity concerns could be cleared up as part of the comments as well as in the revised paper.\n\nWe hope that our revisions resolve your questions and concerns and that the additional experiments provide compelling evidence of the effectiveness of our method. Thank you for your time and efforts!\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. It resolves the majority of my concerns. 
I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
1tZY0La5GFRp
AutoML 2023 Conference
Reviewer_NxY5
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NxY5, and the following are the paper you need to review:\n```<<1tZY0La5GFRp>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The paper presents ELENAS, a method for finding novel, performant neural architectural building blocks at the elementary level by combining mathematical operations. The building blocks are represented as computational graphs and processed by GNNs as part of a reinforcement learning system with the Proximal Policy Optimization (PPO) algorithm. Ultimately, ELENAS can design building blocks and is employed in the domain of sequential data. ELENA1 and ELENA2 building blocks are proposed for the sequential data domain with better performance compared to LSTM. The performance metric is based on the proposed reward function based on the average accuracy, a specific step, the factor by which the reward is weighted for a specific dataset, and reward shaping. The resulting building blocks are compared based on the accuracy of the test/validation set.\noverall_review: Positive aspects: \n- Novel approach: The paper introduces an innovative idea in the field of Neural Architecture Search (NAS), focusing on searching for neural architectural building blocks using elementary mathematical operations. \n- Detailed methodology: The authors provide a comprehensive description of their approach, including the process of designing a building block, the utilization of the Proximal Policy Optimization (PPO) algorithm, the search for building blocks using a graph representation, and in the appendix, they have reported the values of all the hyperparameters.\n\nNegative aspects of the paper:\n\n- Reason for using PPO algorithm: The authors do not discuss the advantages of the PPO algorithm over other potential reinforcement learning algorithms.\n- Impact of increasing the number of total nodes 𝑚: The authors mention that it's possible to increase 𝑚 to adapt to the search progress dynamically, but they do not provide discussion on the potential implications of increasing 𝑚.\n- Incomplete results section: The results section seems to be incomplete, as it only reports outcomes related to one of the four datasets mentioned, and it only compares the proposed method to LSTM. 
Including results for all datasets and a more comprehensive set of building blocks would provide a better evaluation of the proposed method.\n- Lack of validation set results: The results in Table 1 only include test set outcomes, and the validation set results are missing. Including validation set results would offer a more comprehensive understanding of the method's performance.\n- Insufficient detail on splitting G into sub-graphs: The authors mention that splitting G into sub-graphs and predicting the edges accordingly improves the search for high-performing building blocks. However, they do not provide enough detail on the process of splitting G into sub-graphs or the criteria used for doing so.\nreview_summary: The paper presents an interesting idea in the field of NAS, proposing an innovative approach for searching neural architectural building blocks using elementary mathematical operations. The proposed idea has the potential to greatly affect how neural networks are designed and optimized. While the idea is intriguing, there are some important concerns that need to be addressed to achieve a more thorough understanding and assessment of the proposed method. The reported results seem to be incomplete, as Table 1 only compares the approaches to LSTM, and among the four datasets, results are only reported for one. Additionally, considering that in Table 2 there is only a small improvement for the fifth dataset compared to all other approaches and some partially good results for the fourth dataset, I decided to weakly reject the current state of the paper.\nactions_required_to_increase_overall_recommendation: Address the minor issues: Ensure that minor issues such as the reference to the correct table and the explanation of the ROC-AUC performance metric are corrected and clarified, which would improve the overall readability and understanding of the paper.\n\nEnhance the results in Table 1: As suggested earlier, either improve Table 1 by including results for all four datasets and comparisons with other building blocks besides LSTM, or provide a clear explanation for focusing on only one dataset in the experiments and the rationale behind comparing the method only with LSTM.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer NxY5 and Follow-Up\nWe appreciate your recognition of the novelty and potential impact of our work in the field of AutoML. We understand your concerns regarding the results section and the clarity of certain aspects of our paper. In response to your feedback, we have made the following revisions:\n\n1. Results on synthetic datasets: The Reber grammar, embedded Reber grammar, and adding datasets were employed to assess the general performance of the building blocks in the domain of sequential data during the search process. On the contrary, the memory dataset functioned as the validation dataset to further evaluate the resulting building blocks on an unseen dataset. This evaluation process identified the two highest-performing building blocks, ELENA1 and ELENA2, which we present in this paper. We appreciate and followed your suggestion to extend the table with all ELENAs discovered during the four search processes (see Table 3 in the revised paper). 
​​Since the primary purpose of evaluating the resulting building blocks on the memory task was to rank them based on their general performance, we decided to present only the test set results and limit our comparisons to the LSTM architecture. The LSTM just serves as a reference to state-of-the-art building blocks, providing context for the performance of the ELENAs. This approach allows us to maintain focus on the main objective without overextending the scope of the comparison on this synthetic dataset. Instead, we have included an additional real-world dataset to our results, which offers more valuable insights into the performance of the discovered building blocks in a broader context.\n\n2. Graph splitting details: We have added more information on the process of splitting G into sub-graphs in Section F of the appendix. In essence, this approach simply refers to the order in which the edges of a graph are predicted, without modifying the graph itself. Through various ablation studies, we have empirically investigated the graph neural network architecture, the graph embedding, and the reinforcement learning algorithm, including the edge prediction order. Our results suggest that predicting edges by utilizing sub-graphs leads to the discovery of more complex building blocks earlier in the search process.\n\n3. Search Strategy: Thank you for pointing out the ambiguity, we have extended the discussion on why we have chosen to employ the PPO algorithm for our search strategy over other potential algorithms.\n\n4. Number of nodes 𝑚: We have provided more insights into the potential implications of the number of nodes 𝑚 in the computation graph. In general, we have opted for 32 nodes, since such a computation graph can potentially represent most state-of-the-art building blocks. Also, it includes 15 intermediate nodes which proves advantageous to be split into 3 subgraphs to determine the order in which edge types are predicted (see appendix Section F for further details). Additionally, since the number of edges increases quadratically with the number of nodes (formula provided in Section 3.1), choosing a total of 32 nodes provides a suitable balance between the complexity of the graph structure and computational expenses.\n\n5. Other minor issues: Thank you for pointing out some minor issues. We addressed all of them to further improve the readability of the paper.\n\nWith these revisions, we believe we have addressed your concerns regarding the results section and the clarity of certain aspects of our paper. We hope that these changes will provide a more comprehensive evaluation of our method and encourage you to reconsider your assessment. Thank you once again for your valuable feedback!\nDear reviewer NxY5,\n\nAs the discussion period is coming to an end soon, we would appreciate it if you could evaluate our clarifications and the updated paper. \nIf our response resolves your questions and concerns, we kindly ask that you consider raising the rating of our work.\n\nThank you!\n Global/general Response:\n```\nWe once again thank the reviewers for their extensive feedback, through which we could further improve the clarity and technical quality of our work. This can be summarized as follows:\n\n1. Potential Impact On The Field Of AutoML: We appreciate the reviewers' acknowledgment of the novelty and potential impact of our work. 
Reviewers y13y and NxY5 explicitly recognized the significance of our contributions and considered our method as \"worth being introduced to the AutoML community.\" Other reviewers conveyed their receptiveness to our method, provided that we conduct more extensive evaluation of our building blocks. Following their suggestions, we have performed additional experiments on the widely-used real-world Penn Treebank dataset. It is noteworthy that the neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and that our computational resources have been a constraining factor. Nonetheless, these additional results highlight the parameter-efficiency and generalizability of the building blocks and thus the effectiveness of our method.\n\n2. Technical Quality And Correctness: No major technical issues were identified by the reviewers.\nIn the revised paper, we have addressed the remaining questions, including the choice of search strategy, the use of synthetic datasets, the size of the computation graph, the utilization of positional encoding, and the approach of graph-splitting. Also, we would like to thank reviewer fYxV for the high rating regarding the reproducibility of our method.\n\n3. Clarity: We appreciate that all reviewers found our approach clearly presented and easy to follow. The remaining clarity concerns could be cleared up as part of the comments as well as in the revised paper.\n\nWe hope that our revisions resolve your questions and concerns and that the additional experiments provide compelling evidence of the effectiveness of our method. Thank you for your time and efforts!\n```\n", "role": "user" }, { "content": "Title: Updated review\nI accepted the paper since the majority of my concerns have been resolved, and I believe it is a valuable addition to the existing literature.", "role": "assistant" } ]
1tZY0La5GFRp
AutoML 2023 Conference
Reviewer_s9Zr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_s9Zr, and the following are the paper you need to review:\n```<<1tZY0La5GFRp>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes ELENAS, which is a method that searches for mathematical operations in a newly defined search space to form building blocks, i.e., combinations of operations applied on input features similar to one operation in a cell-based search space. This method forms these building blocks as computational graphs and utilizes Graph Neural Networks to learn node and graph embeddings. These embeddings are part of the search strategy, here proximal policy optimization. This paper shows that the proposed method is able to find efficient novel building blocks.\noverall_review: Strengths:\nThe experiments are well done and explore an interesting idea of new building blocks, reducing the amount of human-expert knowledge. \nThis paper is well-written and motivates the idea good.\n\nWeaknesses:\nThere are some open questions (see Technical Quality And Correctness), which would when answered improve the overall assessment of the approach of finding novel and efficient building blocks.\nMy main concern is that there isn’t any standard benchmark from the NAS literature used and the stacking is not mentioned clearly.\n\nMissing citation, which also introduces different building blocks.\nNicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar:\nRethinking Neural Operations for Diverse Tasks. NeurIPS 2021\n\nreview_summary: While the idea of searching for novel building blocks using GNNs with RL is novel,\nI believe additional assessments and experiments are needed to improve this paper and clearly show its superiority over other search approaches.\n\nactions_required_to_increase_overall_recommendation: In order to increase the overall recommendation, this paper would need to answer the open questions in Technical Quality And Correctness. 
\n\n**After Rebuttal**: I keep my score of 5.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer s9Zr\nWe appreciate your constructive feedback on our work and recognize the concerns you raised. To address them, we have made the following changes in our manuscript:\n\n1. Search Strategy: We have extended our discussion on the choice of search strategy. In general, we opted for RL learning, since a RL agent can acquire knowledge from graph structures to consequently refine the search space of elementary mathematical operations that only includes a relatively low fraction of high-performing building blocks.\n\n2. Stacking of Building Blocks: We have provided a clearer explanation of the stacking approach in Section F of the appendix. In essence, stacking simply refers to having multiple layers of building blocks of the same type to construct the overall model to increase the model complexity. Thus, the boost can still be attributed to the individual building blocks, since everything else is equal for each of the compared models.\n\n3. Hyperparameter Settings: We have also provided a more detailed explanation of the final hyperparameter setting. Regarding the reward shaping, the values r in Table 2 of the revised paper are the ones used in the respective formula for the final reward function detailed in Section 3.4.\n\n4. Results on additional real-world datasets: As per your suggestion, we have included results on the Penn TreeBank dataset to provide a better assessment of our method's capabilities in finding parameter-efficient and well-generalizable building building blocks.\n\n5. Search Times and Efficiency: In general, each search process requires about 100 GPU hours on a single NVIDIA A100 GPU. Since we conducted four separate search processes, we reported a total of 400 GPU hours in our OpenReview submission. However, ELENA1 was already found at about 52% and ELENA2 was found at about 41% of the search process. Also, the efficiency of the building blocks is measured by the number of learnable parameters. \n\n6. Number of nodes 𝑚 and graph-splitting: We have provided more insights into the potential implications of the number of nodes 𝑚 in the computation graph. In general, we have opted for 32 nodes, since such a computation graph can potentially represent most state-of-the-art building blocks. Additionally, since the number of edges increases quadratically with the number of nodes (formula provided in Section 3.1), it provides a suitable balance between the complexity of the graph structure and computational expenses. Also, it proves advantageous to split the graph into 3 subgraphs to determine the order in which edge types are predicted. We have added more information on this regard in Section F of the appendix. In essence, this approach simply refers to the order in which the edge types are predicted, without modifying the graph itself.\n\n7. Clarity-related Issues: \n- Although there is no formal definition of building blocks, as part of the abstract they are exemplified as components of deep learning models, such as feed-forward, convolution, recurrent, long short-term memory, or attention blocks. Also, especially for the domain of computer vision, building blocks are exemplified as normalization layers, pooling layers, and convolution layers with different filter sizes. 
This should give a clear understanding of what we refer to as building blocks.\n- Line 41-43: “In contrast to well-established search spaces in NAS, the majority of building blocks included in this search space of elementary operations are not valid and thus cannot be used in neural networks.” This statement indicates that our search space encompasses a large number of building blocks that do not represent a proper neural network that can be trained on data, since the elementary mathematical operations do not fit together. In contrast, established search spaces within NAS solely consist of trainable neural networks, where only certain attributes such as the kernel size are altered.\n- Figure 2: As indicated in Section 3.2, combining operations are for edges that connect a node with a different node, while transforming operations are for edges that connect a node with itself. This reduces the search space by a factor of two without compromising its generality. Since the number of edge types is fixed, the number of transforming operations theoretically must be equal to the number of combination operations (practically, in case more combining operations are desired, additional transforming operations can be defined as the identity that has no effect on the node)\n- We have added the missing citation (Roberts et al., NeurIPS 2021) as suggested.\n\nWe hope that these revisions address your concerns and provide a clearer and more comprehensive understanding of our work, encouraging you to reconsider your assessment. Thank you once again for your valuable feedback!\n Global/general Response:\n```\nWe once again thank the reviewers for their extensive feedback, through which we could further improve the clarity and technical quality of our work. This can be summarized as follows:\n\n1. Potential Impact On The Field Of AutoML: We appreciate the reviewers' acknowledgment of the novelty and potential impact of our work. Reviewers y13y and NxY5 explicitly recognized the significance of our contributions and considered our method as \"worth being introduced to the AutoML community.\" Other reviewers conveyed their receptiveness to our method, provided that we conduct more extensive evaluation of our building blocks. Following their suggestions, we have performed additional experiments on the widely-used real-world Penn Treebank dataset. It is noteworthy that the neural architectures found by other NAS approaches, which we compare against, have been specifically searched for this dataset and that our computational resources have been a constraining factor. Nonetheless, these additional results highlight the parameter-efficiency and generalizability of the building blocks and thus the effectiveness of our method.\n\n2. Technical Quality And Correctness: No major technical issues were identified by the reviewers.\nIn the revised paper, we have addressed the remaining questions, including the choice of search strategy, the use of synthetic datasets, the size of the computation graph, the utilization of positional encoding, and the approach of graph-splitting. Also, we would like to thank reviewer fYxV for the high rating regarding the reproducibility of our method.\n\n3. Clarity: We appreciate that all reviewers found our approach clearly presented and easy to follow. 
The remaining clarity concerns could be cleared up as part of the comments as well as in the revised paper.\n\nWe hope that our revisions resolve your questions and concerns and that the additional experiments provide compelling evidence of the effectiveness of our method. Thank you for your time and efforts!\n```\n", "role": "user" }, { "content": "Title: Author Response\nI appreciate the author's response and will update the *Potential Impact On The Field Of AutoML Rating* and the *Technical Quality And Correctness Rating*. The PTB numbers, however, show limited improvements over other search methods, while incurring rather high computational costs. Therefore I agree with Reviewer LpEh about the uncertainty of how the proposed method scales to larger datasets and thus will keep my score.\n", "role": "assistant" }, { "content": "Title: Author response\nThank you very much for the response. As already mentioned to Reviewer LpEh, the novelty of our method lies in its ability to find general-purpose building blocks by solely using synthetic datasets. As demonstrated in the real-world experiments, these building blocks can then be scaled to various datasets of any size and complexity while being very parameter-efficient. Thus, our method has already reached its maximum computational complexity, requiring as little as 2 GPU days to find high-performing ELENAs. In comparison, Efficient NAS (ENAS) finds recurrent cells in about 0.5 GPU days, DARTS in about 1 GPU day, and NAS in about 10,000 CPU days (Liu et al., 2018). In light of this clarification, we would be highly appreciative if you could re-evaluate your overall review rating as well. Thanks!", "role": "user" }, { "content": "It was a pleasure to engage in the above discussion with the author, which has certainly made the work more robust.", "role": "assistant" } ]
3dpOZ2dZcAa
AutoML 2023 Conference
Reviewer_YVAr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_YVAr, and the following are the paper you need to review:\n```<<3dpOZ2dZcAa>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposed a Sparse Activation Function Search (SAFS) method for optimizing the activation functions and training hyperparameters of Sparse Neural Networks (SNNs). The authors show that the accuracy drop in SNNs, especially at high pruning ratios, can be attributed to the use of ReLU as the default activation function and the use of the same hyperparameters as dense counterparts. The authors analyze the impact of activation functions and training hyperparameters on the performance of SNNs and propose a new AutoML approach, SAFS, to optimize the activation functions and training hyperparameters of SNNs. The results demonstrate significant improvements in the accuracy of VGG-16, ResNet-18, and EfficientNet-B0 networks trained on CIFAR-10 and ImageNet-16 datasets, especially at high levels of sparsity.\noverall_review: Positive:\n1) Novel approach: The paper proposes a novel approach that combines hyperparameter optimization (HPO) and activation function search for sparse neural networks (SNNs). This approach is a significant contribution as there is no efficient approach for optimizing activation functions in SNN training.\n\n2) Significant improvement in accuracy: The paper shows that the SAFS approach results in up to 8.88% and 6.33% absolute improvement in the accuracy of VGG-16 and ResNet-18 models, respectively, over the default training protocols, especially at high pruning ratios. This improvement is substantial and demonstrates the effectiveness of the proposed approach.\n\n3) Addresses underexplored factors: The paper identifies and addresses two underexplored factors that impact the accuracy of SNNs, namely the choice of activation functions and training hyperparameters. 
By analyzing these factors and proposing the SAFS approach, the paper makes a significant contribution to the field of SNNs and deep learning.\n\nNegative:\n1) Limited comparison: This paper only compares its proposed approach, Sparse Activation Function Search (SAFS), with the default training protocols and the magnitude pruning method in 2015 for three popular DNN models trained on two datasets. More comparisons with other state-of-the-art methods and datasets would provide a more comprehensive evaluation of SAFS.\n\n2) Lack of thorough analysis: The paper briefly mentions the impact of activation functions and training hyperparameters on the performance of sparse CNN architectures, but it does not provide a thorough analysis of these factors. A more detailed investigation of these factors could provide insights into why SAFS is effective and could lead to further improvements.\nreview_summary: The idea in this paper is novel, but empirical evaluation needs to be improved, including more experiments and more implementation details.\nactions_required_to_increase_overall_recommendation: see Technical Quality And Correctness and Overall Review\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer YVAr\nWe thank you for your constructive feedback and your overall positive evaluation.\n\n\n## Limited Comparison \n\nTo provide further evidence, we have extended our experiments by optimizing the sparse LeNet-5 architecture with a 99% pruning ratio trained on the MNIST dataset. Our results show that SAFS improves the absolute classification accuracy of LeNet-5 by 15.53% (Table 2). \n\n## Thorough Analysis\n\n[Ref3] showed that ReLU might not be the optimal activation function for any sparse neural network as ReLU reduces the network gradient flow. In a similar vein, the main reason for SAFS’s efficiency is the improvement in the network’s gradient flow by learning superior activation functions and corresponding hyper-parameters. We are going to add new results wrt gradient flow to the rebuttal submission of the paper. \n\n* [Ref3] Curci, Selima, Decebal Constantin Mocanu, and Mykola Pechenizkiyi. \"Truly sparse neural networks at scale.\" arXiv preprint arXiv:2102.01732 (2021).\n\n Global/general Response:\n```\n# General Response \n\nWe thank the reviewers for their helpful feedback. We are happy to see that the reviewers consider our approach to be novel (Reviewer YVAr) and adding a new perspective (Reviewer AwGP). We acknowledge the potential impact of applying AutoML to resource-constrained paradigms, which also happens to be our primary motivation to work on sparsity and be recognized for it (Reviewer JFfa). While we agree that this paper tackles the niche of sparsity (Reviewers JFfa, ziJv, uT44), we would like to extend on Reviewer ziJv’s c comment that this is a niche that is receiving an increasing amount of attention, given the recent applications it has seen, not just to image recognition, but also in sequential decision making problems and reinforcement learning scenarios [Ref1]. \n\n* [Ref1] Liu, Vincent, et al. \"The utility of sparse representations for control in reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 
2019.\n\nWe would also like to reiterate that the reason we tackle sparsity is specifically that the resource-constrained nature of the problem requires different ways of mitigating it. As we point out in Section 5.3, the same HPs for dense networks do not translate to sparse regimes. Additionally, given the potential applications of sparse networks to resource-constrained use cases, extracting as much performance as we can from these networks becomes very important. Hence, our motivation to perform an activation search on top of HPO. Through this work, in a similar vein to hardware-aware NAS, we aim to bring the attention of the AutoML Community to the resource-constrained regime, an area where we believe AutoML shines. To this end, we show that even standard techniques such as LAHC and SMAC3, combined with the added touch of SAFS can help us contribute to these applications. We aim to push further in this direction with follow-up work. \n\nWe would like to address general comments here, and address individual points as separate replies.\n\n\n## Additional Experiments\n\nIn line with the reviewers’ requests for additional ablations and experiments, we have run and added the following to the paper:\n\n1. Ablations\n* Table 2: EfficientNet-B0 trained on ImageNet-16 with Swish activation functions and only HPO (only second stage)\n2. Cost comparison\n* Empirical costs of SAFS (Appendix B).\n3. Additional results\n* Table 2: Applying SAFS on the sparse LeNet-5 model trained on the MNIST dataset with a 99% pruning ratio.\n\n\n```\n", "role": "user" }, { "content": "I acknowledge the authors' response. It resolves the majority of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
3dpOZ2dZcAa
AutoML 2023 Conference
Reviewer_AwGP
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_AwGP, and the following are the paper you need to review:\n```<<3dpOZ2dZcAa>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposed to search for activations and tune hyperparameters for sparse neural networks. Searching for layerwise activation functions is a new perspective to improve the performance of sparse networks. It showed the effect of different activation functions. \noverall_review: This paper provides a new perspective to boost the performance of sparse networks. I am not sure if searching for activations really works. I am not sure if we can simply tune parameters to get the similar performance gain as the proposed SAFS. \nreview_summary: My initial decision is borderline accept. I raised some concerns, especially if we can simply use HPO to get the similar performance gain as the proposed method. I am not sure if we really need to search for activations for sparse networks. I hold room to decrease scores if my major concerns are not addressed after rebuttal. \nactions_required_to_increase_overall_recommendation: Please add experimental results to ease my concerns.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer AwGP \nWe thank you for your constructive feedback and your overall positive evaluation.\n\n> Why do you need to formulate the activation function as y=𝜶×f(𝜷×X). The scaling parameters 𝜶 and 𝜷 should be learned by convolutional layers.\n\nAs discussed in Section 4.1, this representation allows efficient implementation. Additionally, this formula provides effective parameterization since tweaking 𝜶 and 𝜷 parameters give the sparse network additional flexibility to be fine-tuned (Figure 1). Our findings demonstrate that the suggested activation function has notably enhanced the test accuracy of sparse neural networks.\n\n> You did not compare your algorithm with a strong baseline. And you search for layerwise activation functions from existing activation functions. [R1] searches for activations from basic primitives. 
Can you compare your algorithm with [R1]?\n\nThank you for the feedback. We considered comparing SAFS to two baselines: (1) vanilla pruning (the most popular method for refining pruned networks), and (2) SReLU (suitable for SNNs suggested by several studies [Ref4][Ref5][Ref6]). [R1] is an evolutionary-based method that jointly designs batch normalization-activation function layers from elementary operators. However, their proposed method is slow with marginal improvements, and the EvoNorm layer is not effective anymore for many networks such as EfficientNet. Thus, [R1] is not a strong baseline from our point of view. Although it is still interesting to see the results of applying [R1] on SNNs, due to the limited time for rebuttal, we cannot compare our work with [R1] at the moment. \n\n* [Ref4] Mocanu, D., Mocanu, E., Stone, P., Nguyen, P., Gibescu, M., and Liotta, A. (2018). Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications.\n* [Ref5] Curci, S., Mocanu, D., and Pechenizkiyi, M. (2021). Truly sparse neural networks at scale. CoRR. \n* [Ref6] Tessera, K., Hooker, S., and Rosman, B. (2021). Keep the gradients flowing: Using gradient flow to study sparse network optimization. CoRR. \n\n> In your ablation study (5.7), stage 2 even outperforms SAFS on VGG-16. Does it mean HPO is more useful than searching for activation functions? In Figure 4, it showed ReLU is inferior to other activation functions. Why did you only pick up ReLU in your ablation study? Can you add more experimental results on other activations, like FLAU, Tanh. From your ablation study, I do not know if searching for activation functions is really working. Furthermore, can you provide an ablation study of tuning hyperparameters of naive pruning methods?\n\nAccording to our new ablation results on MNIST and ImageNet-16 datasets (see Section 5.7), using only HPO does not always provide the best results. For the MNIST and ImageNet-16 datasets, we can achieve 13.51% and 0.74% accuracy improvements by only using HPO. While using efficient activation functions provides us with up to 2.02% and 0.55% higher accuracy than only using HPO. Thus, searching for activation functions adds performance improvement, further motivating the need to do so.\n\nThe aim of our ablation study is to individually evaluate the impact of each optimization step. We selected ReLU as the comparison baseline since ReLU is the default activation function for LeNet-5, VGG-16, and ResNet-18 (Swish is the default activation function for EfficientNet-B0). Finally, as can be seen in Figure 4(a), SAFS always provides the best activation functions for SNNs compared to all other state-of-the-art activation functions. Therefore, we expect that adding other dense activation functions to our ablation study will not change the results.\n\n> I raised some concerns, especially if we can simply use HPO to get the similar performance gain as the proposed method. I am not sure if we really need to search for activations for sparse networks.\n\n We have addressed the concern in response to Question 3. \n\n Global/general Response:\n```\n# General Response \n\nWe thank the reviewers for their helpful feedback. We are happy to see that the reviewers consider our approach to be novel (Reviewer YVAr) and adding a new perspective (Reviewer AwGP). 
We acknowledge the potential impact of applying AutoML to resource-constrained paradigms, which also happens to be our primary motivation to work on sparsity and be recognized for it (Reviewer JFfa). While we agree that this paper tackles the niche of sparsity (Reviewers JFfa, ziJv, uT44), we would like to extend on Reviewer ziJv’s c comment that this is a niche that is receiving an increasing amount of attention, given the recent applications it has seen, not just to image recognition, but also in sequential decision making problems and reinforcement learning scenarios [Ref1]. \n\n* [Ref1] Liu, Vincent, et al. \"The utility of sparse representations for control in reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.\n\nWe would also like to reiterate that the reason we tackle sparsity is specifically that the resource-constrained nature of the problem requires different ways of mitigating it. As we point out in Section 5.3, the same HPs for dense networks do not translate to sparse regimes. Additionally, given the potential applications of sparse networks to resource-constrained use cases, extracting as much performance as we can from these networks becomes very important. Hence, our motivation to perform an activation search on top of HPO. Through this work, in a similar vein to hardware-aware NAS, we aim to bring the attention of the AutoML Community to the resource-constrained regime, an area where we believe AutoML shines. To this end, we show that even standard techniques such as LAHC and SMAC3, combined with the added touch of SAFS can help us contribute to these applications. We aim to push further in this direction with follow-up work. \n\nWe would like to address general comments here, and address individual points as separate replies.\n\n\n## Additional Experiments\n\nIn line with the reviewers’ requests for additional ablations and experiments, we have run and added the following to the paper:\n\n1. Ablations\n* Table 2: EfficientNet-B0 trained on ImageNet-16 with Swish activation functions and only HPO (only second stage)\n2. Cost comparison\n* Empirical costs of SAFS (Appendix B).\n3. Additional results\n* Table 2: Applying SAFS on the sparse LeNet-5 model trained on the MNIST dataset with a 99% pruning ratio.\n\n\n```\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It resolves the majority of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
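The rebuttal above centres on the parameterized activation y = 𝜶 × f(𝜷 × X), where f is a unary function chosen per layer and 𝜶, 𝜷 are trainable scaling factors. A minimal sketch of that parameterization, assuming a PyTorch-style module (the class and variable names are illustrative and are not the authors' released SAFS code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledActivation(nn.Module):
    """Per-layer activation of the form y = alpha * f(beta * x).

    f is a fixed unary function picked by the discrete search stage;
    alpha and beta are learnable scalars, giving the pruned network a
    little extra flexibility during fine-tuning.
    """
    def __init__(self, unary_fn, alpha=1.0, beta=1.0):
        super().__init__()
        self.unary_fn = unary_fn
        self.alpha = nn.Parameter(torch.tensor(float(alpha)))
        self.beta = nn.Parameter(torch.tensor(float(beta)))

    def forward(self, x):
        return self.alpha * self.unary_fn(self.beta * x)

# One candidate activation per layer; a layer-wise search would pick a
# (possibly different) entry of `candidates` for every layer.
candidates = {"relu": torch.relu, "tanh": torch.tanh, "swish": F.silu}
layer_acts = nn.ModuleList([ScaledActivation(candidates["swish"]),
                            ScaledActivation(candidates["relu"])])
```

Because 𝜶 and 𝜷 are ordinary parameters, they can be trained jointly with the remaining unpruned weights, while the choice of f per layer stays a discrete decision for the outer search.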
3dpOZ2dZcAa
AutoML 2023 Conference
Reviewer_JFfa
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_JFfa, and the following are the paper you need to review:\n```<<3dpOZ2dZcAa>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper introduces a technique for improving sparse neural networks by optimizing the hyperparameters and activation function in addition to the sparsity mask, which are often ignored in the sparse neural network literature. This search space over activation functions comprises a set of unitary functions with learned scaling of the inputs and outputs, and a hill-climbing algorithm is used to optimize the activation function, with experiments conducted on CIFAR-10 and a resized version of ImageNet. \noverall_review: Strengths:\n- This combination of research areas appears to be novel and fills an often-overlooked gap in the literature (tuning activations and hyperparameters of sparse neural networks)\n- The current set of experimental results seems promising\n- The ablation study of the different stages of the proposed search method is a useful experiment to include. \n\nWeaknesses:\n- As an empirical work, it would be ideal to experiment with more than two datasets. Currently, only CIFAR-10 and resized ImageNet are evaluated. On the other hand, this is potentially understandable as these experiments are likely rather expensive. \n\nreview_summary: The idea is novel and is well explored on the two datasets that are included, CIFAR-10 and resized ImageNet, however, the evaluation could be improved by including more datasets (particularly since this is an empirical paper). \nactions_required_to_increase_overall_recommendation: I would suggest that the authors include at least one additional dataset in their empirical evaluation.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer JFfa\n> As an empirical work, it would be ideal to experiment with more than two datasets. Currently, only CIFAR-10 and resized ImageNet are evaluated. On the other hand, this is potentially understandable as these experiments are likely rather expensive. 
I would suggest that the authors include at least one additional dataset in their empirical evaluation.\n\nThanks for the careful review and very positive feedback. We have extended our experiments by optimizing the sparse LeNet-5 architecture with a 99% pruning ratio trained on the MNIST dataset. Our results show that SAFS improves the absolute classification accuracy of LeNet-5 by 15.53% (Table 2).\n\n Global/general Response:\n```\n# General Response \n\nWe thank the reviewers for their helpful feedback. We are happy to see that the reviewers consider our approach to be novel (Reviewer YVAr) and adding a new perspective (Reviewer AwGP). We acknowledge the potential impact of applying AutoML to resource-constrained paradigms, which also happens to be our primary motivation to work on sparsity and be recognized for it (Reviewer JFfa). While we agree that this paper tackles the niche of sparsity (Reviewers JFfa, ziJv, uT44), we would like to extend on Reviewer ziJv’s c comment that this is a niche that is receiving an increasing amount of attention, given the recent applications it has seen, not just to image recognition, but also in sequential decision making problems and reinforcement learning scenarios [Ref1]. \n\n* [Ref1] Liu, Vincent, et al. \"The utility of sparse representations for control in reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.\n\nWe would also like to reiterate that the reason we tackle sparsity is specifically that the resource-constrained nature of the problem requires different ways of mitigating it. As we point out in Section 5.3, the same HPs for dense networks do not translate to sparse regimes. Additionally, given the potential applications of sparse networks to resource-constrained use cases, extracting as much performance as we can from these networks becomes very important. Hence, our motivation to perform an activation search on top of HPO. Through this work, in a similar vein to hardware-aware NAS, we aim to bring the attention of the AutoML Community to the resource-constrained regime, an area where we believe AutoML shines. To this end, we show that even standard techniques such as LAHC and SMAC3, combined with the added touch of SAFS can help us contribute to these applications. We aim to push further in this direction with follow-up work. \n\nWe would like to address general comments here, and address individual points as separate replies.\n\n\n## Additional Experiments\n\nIn line with the reviewers’ requests for additional ablations and experiments, we have run and added the following to the paper:\n\n1. Ablations\n* Table 2: EfficientNet-B0 trained on ImageNet-16 with Swish activation functions and only HPO (only second stage)\n2. Cost comparison\n* Empirical costs of SAFS (Appendix B).\n3. Additional results\n* Table 2: Applying SAFS on the sparse LeNet-5 model trained on the MNIST dataset with a 99% pruning ratio.\n\n\n```\n", "role": "user" }, { "content": "Title: Thank you for including an additional dataset\nThank you for including a third dataset. I have updated my score from 5 to a 6 and now lean toward acceptance. ", "role": "assistant" } ]
3dpOZ2dZcAa
AutoML 2023 Conference
Reviewer_ziJv
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ziJv, and the following are the paper you need to review:\n```<<3dpOZ2dZcAa>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes SAFS, an automated search for activation functions for sparse neural networks in conjunction with hyperparameter optimization. This method is a two-step procedure, which uses the already pruned neural network as starting and initialization point; first, the activation function for each layer in the network is searched using Late Acceptance Hill Climbing. Second, the hyperparameter for this identified network is optimized. This paper evaluates SAFS on three different common CNNs (ResNet-18, VGG16, and EfficientNet-B0) for CIFAR-10 and ImageNet16 image classification and shows improved results over pruning baselines, which use the default HP of the dense network. \noverall_review: Strengths:\nThis paper proposes an interesting research question with sound experiments.\nGood motivation and clear assessment of why this is important (given the studies in Sections 5.1 and 5.2)\nThe idea to search for activation functions and HPO in conjunction is novel. \n\nWeaknesses:\nHow does this method scale to larger networks, e.g., ResNet-50, and larger image resolutions (Imagenette, ImageNet 100) with the focus on how the input image resolutions influence this method. \n\nreview_summary: While the idea of combining activation functions and HPO for sparse neural networks is novel and great, I believe some additional assessments and experiments are useful to improve this paper.\n \n\nactions_required_to_increase_overall_recommendation: Additional experiments regarding the scalability of the proposed method or at least a discussion: \n\n-\tLarger networks like ResNet-50.\n-\tLarger Datasets with larger resolutions, Imagenette or ImageNet100 \n\nDifferent activation functions as ablation for the used dense networks. 
\n\n**Post-Rebuttal:**\nI updated my score from 6 to 7.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer ziJv\n## Technical Quality And Correctness:\n> Section 5.2: What is the influence of the different activation functions at ...\n\nThe focus of this paper is to improve the accuracy of SNNs by jointly learning network activation functions and HPs. Several prior studies proposed to optimize activation functions [Ref7][Ref8][Ref9] and/or HPs of dense networks (0% sparsity level) [Ref10]. From our perspective, these are orthogonal to our work since we specifically focus on the sparse setting.\n* [Ref7] Apicella, Andrea, et al. \"A survey on modern trainable activation functions.\" Neural Networks 138 (2021): 14-32.\n* [Ref8] Dubey, Shiv Ram, Satish Kumar Singh, and Bidyut Baran Chaudhuri. \"Activation functions in deep learning: A comprehensive survey and benchmark.\" Neurocomputing (2022).\n* [Ref9] Karlik, Bekir, and A. Vehbi Olgac. \"Performance analysis of various activation functions in generalized MLP architectures of neural networks.\" International Journal of Artificial Intelligence and Expert Systems 1.4 (2011): 111-122.\n* [Ref10] Yu, Tong, and Hong Zhu. \"Hyper-parameter optimization: A review of algorithms and applications.\" arXiv preprint arXiv:2003.05689 (2020).\n\n> Table 2: For VGG-16 it seems to be the case that either the different activations at each layer improve the accuracy, or ...\n \nWe have updated the ablation results (Table 2) by adding the results of EfficientNet-B0 trained on ImageNet-16 and LeNet-5 trained on MNIST. Results show that using only HPO does not always provide the best results. For the MNIST and ImageNet-16 datasets, we can achieve 13.51% and 0.74% accuracy improvements by using HPO. While using efficient activation functions provides us with up to 2.02% and 0.55% higher accuracy than only using HPO. \n\n> The overall search times are missing.\n\nAppendix B reports the cost of using SAFS for optimizing different networks. \n\n## Clarity:\n> Line 12: “learning novel activation functions for sparse networks”: This sentence can be misleading, since ...\n\nWe have changed this to ‘learning a novel way to tune activation functions for sparse networks’ for better clarity.\n\n> Figure 2 is not referenced.\n\nWe have referenced Figure 2 in the text. \n\n> LAHC is not clearly described. Therefore, more background on that is helpful, since lines 190-195 are not ...\n\nWe have added a smaller explanation of LAHC to make this more self-contained.\n\n> Lines 217-220: scaling factors of activation functions are trainable parameters, and ...\n\nWe have updated the sentence to better reflect your point.\n\n> Section 5.6 is not mentioned in the introductory part of Section 5.\n\nWe have added a sentence for this.\n\n> The overall used methods could be mentioned, SMAC3 was only mentioned ...\n\nYes, SMAC3 is used for HPO in all experiments. We have added a small explanation to Section 5.\n\n> Mentioning the default activation functions for the CNNs would be helpful since ...\n\nSorry for being unclear. The default activation function for EfficientNet-B0 is Swish. We have mentioned it in Table 2. \n\n> A comparison for each used CNN with different activation functions ...\n\nThe default activation functions of each dense network are the best choice that is designed by experts. 
In addition, regarding dense networks, many related studies [Ref7][Ref8][Ref9] well-studied the impact of tweaking activation functions. And the focus of this paper is on sparsity cases. \n\n> Table 1: How many runs were conducted?\n\nWe used three runs. We have added an explanation to Section 5.4.\n\n> Section 5.5 rather repeats the information from Section 5.2.\n\nEven though a small part of the results is redundant, we aim to point to a specific conclusion in each section. \n\n> Figure 5: better alignment of the x-axis labels for better readability.\n\nWe have fixed the problem. \n\n\n## Actions Required To Increase Overall Recommendation:\n\n> Additional experiments regarding the scalability of the proposed method or at least a discussion:\n\nSAFS has been evaluated on diverse datasets, including MNIST, CIFAR-10, and ImageNet-16, and various network architectures such as LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0. While the current results demonstrate the general applicability of our method and signs of scalability, we believe further experiments on larger datasets and more scalable networks would be an interesting avenue for future work. \n\n> Different activation functions as ablation for the used dense networks.\n\nThe focus of this paper is to improve the accuracy of sparse neural networks by jointly learning network activation functions and HPs. Several prior studies proposed to optimize activation functions [Ref7][Ref8][Ref9] of dense networks. In view of sufficient studies on this topic, we see this as out of scope for our paper and focus on the sparse case.\n\n Global/general Response:\n```\n# General Response \n\nWe thank the reviewers for their helpful feedback. We are happy to see that the reviewers consider our approach to be novel (Reviewer YVAr) and adding a new perspective (Reviewer AwGP). We acknowledge the potential impact of applying AutoML to resource-constrained paradigms, which also happens to be our primary motivation to work on sparsity and be recognized for it (Reviewer JFfa). While we agree that this paper tackles the niche of sparsity (Reviewers JFfa, ziJv, uT44), we would like to extend on Reviewer ziJv’s c comment that this is a niche that is receiving an increasing amount of attention, given the recent applications it has seen, not just to image recognition, but also in sequential decision making problems and reinforcement learning scenarios [Ref1]. \n\n* [Ref1] Liu, Vincent, et al. \"The utility of sparse representations for control in reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.\n\nWe would also like to reiterate that the reason we tackle sparsity is specifically that the resource-constrained nature of the problem requires different ways of mitigating it. As we point out in Section 5.3, the same HPs for dense networks do not translate to sparse regimes. Additionally, given the potential applications of sparse networks to resource-constrained use cases, extracting as much performance as we can from these networks becomes very important. Hence, our motivation to perform an activation search on top of HPO. Through this work, in a similar vein to hardware-aware NAS, we aim to bring the attention of the AutoML Community to the resource-constrained regime, an area where we believe AutoML shines. To this end, we show that even standard techniques such as LAHC and SMAC3, combined with the added touch of SAFS can help us contribute to these applications. 
We aim to push further in this direction with follow-up work. \n\nWe would like to address general comments here, and address individual points as separate replies.\n\n\n## Additional Experiments\n\nIn line with the reviewers’ requests for additional ablations and experiments, we have run and added the following to the paper:\n\n1. Ablations\n* Table 2: EfficientNet-B0 trained on ImageNet-16 with Swish activation functions and only HPO (only second stage)\n2. Cost comparison\n* Empirical costs of SAFS (Appendix B).\n3. Additional results\n* Table 2: Applying SAFS on the sparse LeNet-5 model trained on the MNIST dataset with a 99% pruning ratio.\n\n\n```\n", "role": "user" }, { "content": "Title: Response to Authors\nI thank the authors for their response and update my score from 6 to 7. ", "role": "assistant" } ]
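Since LAHC comes up repeatedly in this exchange, a compact sketch of Late Acceptance Hill Climbing for a minimisation objective may help. This follows the textbook algorithm (Burke & Bykov), not necessarily the paper's exact implementation, and the function arguments are placeholders:

```python
def lahc(initial, neighbor, cost, history_length=5, n_iters=200):
    """Late Acceptance Hill Climbing (Burke & Bykov) for minimisation.

    initial  -- starting solution, e.g. one activation choice per layer
    neighbor -- function returning a slightly perturbed copy of a solution
    cost     -- function mapping a solution to a scalar to minimise,
                e.g. validation error of the pruned network
    """
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    history = [current_cost] * history_length  # the "late acceptance" list

    for i in range(n_iters):
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        v = i % history_length
        # Accept if the candidate beats the cost recorded L steps ago
        # or the current cost (the plain hill-climbing condition).
        if candidate_cost <= history[v] or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        history[v] = current_cost
    return best, best_cost
```

The only state beyond plain hill climbing is the length-L list of past costs: accepting a candidate that beats the cost from L steps earlier lets the search escape shallow local optima at very little extra bookkeeping cost.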
3dpOZ2dZcAa
AutoML 2023 Conference
Reviewer_uT44
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_uT44, and the following are the paper you need to review:\n```<<3dpOZ2dZcAa>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: In the space of sparse neural networks (SNNs), this paper identifies the lack of SNN-specific development of activation functions among other design choices and proposes Sparse Activation Function Search (SAFS), a method for optimizing layer-wise SNN activation functions in a mixed-integer search space. \noverall_review: **Positive aspects:**\n\n* The problem is motivated clearly.\n\n* Aside from missing experiments below, the authors cover a good range of experiments and results, from explorations to define the problem in Figure 4, main results in Table 1, insightful analysis in Figure 5, and some ablations in Table 2.\n\n**Negative aspects:**\n\n* There is minimal mention (Table 6) nor any comparison of algorithmic cost of SAFS versus baselines and other approaches. This is important information for readers and practitioners to consider the trade-off of computational cost versus performance gain. This is also crucial for Figure 4(a): were all runs given equal computational resources? If not, you need to rerun these experiments but use the equivalent computational cost of SAFS for each baseline model with a single activation function. Computational cost information should also be included at least for all experiments in Table 2. \n\n* The choice of LAHC over other discrete search algorithms is only supported by Figure 6. Figure 6(a) doesn't seem very conclusive due to noise I don't completely understand Figure 6(b): is this the same data but with some sort of smoothed/windowed average? This choice could be further supported by including it in the ablation studies.\n\n* The idea of Section 5.6 seems interesting but not yet completely developed. Do the patterns you noted extend to other backbones and tasks? With only 3 runs represented in Figure 5 (and only their averages, no measures of variance), it's not clear how much signal there is compared to noise. 
Also, I don't understand what is being plotted: what are the units of the colorbar?\n\n* In Section 5.2, it's not clear what you are using \"three-fold validation\" here for, since you are comparing networks with different activation functions (with no inner optimizations) and Figure 4(a) shows test accuracy, not validation accuracy. \n\n* The breadth of backbones and tasks is relatively narrow. How does this method fair in other modalities, network types, etc.? Can the same unary operator search space be used, or does it need to be adapted to the backbone and input modality?\n\n**Further Experiments:** (could be included in follow-up work)\n\n* Your ablation studies are a good start, but since Figure 4(b) shows significant gain from just HPO, I would like to also see an ablation that only removes HPO (as well as an ablation that only includes HPO). \n\n* Your search space is well divided into selecting a unary operator and it's two continuous parameters. Your algorithm approaches these decisions in series, but I think tackling them in parallel could be more effective. Instead of having two stages, instead initialize $\\alpha$ and $\\beta$ for each activation function and each layer. Then, the matrices containing these parameters could be optimized by SGD simultaneously to the discrete optimization of $\\Psi$.\n\n\n**Minor comments:**\n\n* Line 69: \"A number of studies have proposed the necessity of pruning...\" - the word \"necessity\" doesn't feel right here. Perhaps \"...have proposed to prune...\" instead?\n\n* Figures 1 and 2 appear out of order.\n\n* For Figure 5, right-align the bottom labels and rotate them by the right side, rather than the center, using something like this: \n```plt.setp(axs[i].get_xticklabels(), rotation=45, ha=\"right\", rotation_mode=\"anchor\")```\nreview_summary: This paper is a clear and impactful study on activation function optimization for SNNs. It could be further supported by additional experiments detailed above. I think it is worth accepting, especially if the recommended actions are taken.\nactions_required_to_increase_overall_recommendation: * Adding experiments and results requested (in order of importance):\n * Comparison of computational costs for Table 1 and Figure 4(a)\n * Ablation of LAHC vs. other algorithms and of HPO for Table 2\n * Experiments outside of tiny natural image classification\n * Compare to parallel optimization of the unary operator and its parameters\n* Expand and clarify analysis in Section 5.6\n\nIn response to the additions discussed below and in the general response, I have raised my score from 7 to 8.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer uT44 \n## Overall Review\n\n> There is minimal mention (Table 6) nor any comparison of algorithmic cost of SAFS versus baselines and other approaches.\n\nIn Appendix B, we have added a discussion of the cost of using SAFS in terms of GPU hours. We also reported the GPU cost for optimizing activation functions of VGG-16 (Figure 1) at the end of Section 5.2. We admit that SAFS adds a substantial overhead, but we believe that this will be compensated by the downstream application of these models.\n\n> The choice of LAHC over other discrete search algorithms is ...\n\nFigure 6(a): Although we have small noise, LAHC showed promising results across all search iterations compared to other search methods. 
Figure 6(b): Sorry for being unclear. It shows the same data, but we used a smoothing average (logarithmic function) to show the trend in the data more clearly. We have added a statement regarding this in the paper.\n\n> The idea of Section 5.6 seems interesting but not yet completely developed. Do the patterns you noted ...\n\nThe unit of the color bar is the number of times a specific activation function was selected across all search iterations (stage 1). Overall, activation functions with higher values are potentially more promising. Note that we have added this explanation to the paper. The idea of Figure 5 is to give some insights to the community and researchers who want to extend our work. Our results show that, for example, SRS, GELU, and Symexp activation functions are less likely to be selected. \n\n> In Section 5.2, it's not clear what you are using \"three-fold validation\" here for, since you are comparing networks with different activation functions (with no inner optimizations) and Figure 4(a) shows test accuracy, not validation accuracy.\n\nWe wanted a statistically robust evaluation with multiple train/validation splits via 3-fold cross-validation instead of evaluating the performance on a single train/validation set. In addition, during the search time, we use the validation set to evaluate candidates. However, in Figure 4(a), we have reported the final test accuracy results to be comparable with other baselines. We have revised the manuscript to make it clear. \n\n> The breadth of backbones and tasks is relatively narrow. How does this method fair in other modalities, network types, etc.? Can the same unary operator search space be used, or does it need to be adapted to the backbone and input modality?\n\nOur proposed method is agnostic to the network architecture. For example, we could in principle apply SAFS to optimize the activation parts of vision transformers, such as encoder networks. There is no further adaptation needed to the backbone or other modalities. If further activation function classes are needed, they can easily be added to the search space of SAFS.\n\n> Your ablation studies are a good start, but since Figure 4(b) shows significant gain from just HPO, I would like to also see an ablation that only removes HPO (as well as an ablation that only includes HPO).\n\nWe have updated the ablation results (Table 2) by adding the results of EfficientNet-B0 trained on ImageNet-16 and LeNet-5 trained on MNIST. The Stage 1 column in the table shows the results of the activation function search by removing HPO (dense HPs). The Stage 2 column includes the HPO results without optimizing activation functions (using the default activation function for the network). Finally, we report the results of our proposed method, SAFS=Stage 1 + Stage 2, in the last column.\n\n> Your search space is well divided into selecting a unary operator and it's two continuous parameters. Your algorithm approaches these decisions in series, but I think tackling them in parallel could be more effective. Instead of having two stages, instead initialize 𝜶 and 𝜷 for each activation function and each layer. Then, the matrices containing these parameters could be optimized by SGD simultaneously to the discrete optimization of Ψ.\n\nThank you very much for the great suggestion. We will consider working on this idea in future work. \n\nWe addressed all the minor comments in the new revision of our paper.\n\n## Actions Required To Increase Overall Recommendation:\n\n> 1. 

Adding experiments and results requested (in order of importance):\n> * Comparison of computational costs for Table 1 and Figure 4(a). \n\nAddressed by Appendix B.\n\n> * Ablation of LAHC vs. other algorithms and of HPO for Table 2\n\nWe have updated Table 2. \n\n> * Experiments outside of tiny natural image classification\n\nRemain to future work. \n\n> * Compare to parallel optimization of the unary operator and its parameters\n\nRemain to future work. \n\n> 2. Expand and clarify analysis in Section 5.6\nWe have added a small explanation to clarify Section 5.6. \n\n Global/general Response:\n```\n# General Response \n\nWe thank the reviewers for their helpful feedback. We are happy to see that the reviewers consider our approach to be novel (Reviewer YVAr) and adding a new perspective (Reviewer AwGP). We acknowledge the potential impact of applying AutoML to resource-constrained paradigms, which also happens to be our primary motivation to work on sparsity and be recognized for it (Reviewer JFfa). While we agree that this paper tackles the niche of sparsity (Reviewers JFfa, ziJv, uT44), we would like to extend on Reviewer ziJv’s c comment that this is a niche that is receiving an increasing amount of attention, given the recent applications it has seen, not just to image recognition, but also in sequential decision making problems and reinforcement learning scenarios [Ref1]. \n\n* [Ref1] Liu, Vincent, et al. \"The utility of sparse representations for control in reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.\n\nWe would also like to reiterate that the reason we tackle sparsity is specifically that the resource-constrained nature of the problem requires different ways of mitigating it. As we point out in Section 5.3, the same HPs for dense networks do not translate to sparse regimes. Additionally, given the potential applications of sparse networks to resource-constrained use cases, extracting as much performance as we can from these networks becomes very important. Hence, our motivation to perform an activation search on top of HPO. Through this work, in a similar vein to hardware-aware NAS, we aim to bring the attention of the AutoML Community to the resource-constrained regime, an area where we believe AutoML shines. To this end, we show that even standard techniques such as LAHC and SMAC3, combined with the added touch of SAFS can help us contribute to these applications. We aim to push further in this direction with follow-up work. \n\nWe would like to address general comments here, and address individual points as separate replies.\n\n\n## Additional Experiments\n\nIn line with the reviewers’ requests for additional ablations and experiments, we have run and added the following to the paper:\n\n1. Ablations\n* Table 2: EfficientNet-B0 trained on ImageNet-16 with Swish activation functions and only HPO (only second stage)\n2. Cost comparison\n* Empirical costs of SAFS (Appendix B).\n3. Additional results\n* Table 2: Applying SAFS on the sparse LeNet-5 model trained on the MNIST dataset with a 99% pruning ratio.\n\n\n```\n", "role": "user" }, { "content": "Title: Response to authors\nI appreciate the updates. I will raise my score from a 7 to an 8.", "role": "assistant" } ]
7Nbd1Ru1M_t
AutoML 2023 Conference
Reviewer_4CRV
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_4CRV, and the following are the paper you need to review:\n```<<7Nbd1Ru1M_t>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The paper combines bias mitigators and ensembling methods, and shows that ensembling reduces the volatility of the mitigators. They present a flow chart for choosing a bias mitigator and ensemble combination to use.\noverall_review: The paper provides multiple contributions: a flow chart for choosing ensemble/mitigator combinations, empirical evidence for the ensembles helping, a library for combining ensembles and mitigators, and some analysis of the quality of the flow chart. The results could be made more interpretable by presenting them as plots, but overall I think it is a useful paper.\nreview_summary: The paper compares a number of bias mitigators and ensembling methods, and probably most usefully provides a flow chart with suggestions for what combination to use. This could be very useful, though the results could be presented in a more easy-to-digest manner.\nactions_required_to_increase_overall_recommendation: Show results presented as tables as figures, to make them easier to interpret, and show that the results compared to random guessing and the best-performing combination.\nRating: 8 accept, good paper\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer 4CRV\nThank you for your positive review! Based on your recommendation, we have replaced Table 6 by a new Figure 5, which we hope is easier to read and more accessible. We have also made other changes throughput the paper based on the feedback in your review.", "role": "user" }, { "content": "Title: Thank you for your response\nThank you for your changes. \n\nI keep my postitive score.", "role": "assistant" } ]
7Nbd1Ru1M_t
AutoML 2023 Conference
Reviewer_y2RT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_y2RT, and the following are the paper you need to review:\n```<<7Nbd1Ru1M_t>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The paper explores the combination of multiple bias mitigators and ensembles to improve algorithmic fairness and predictive performance. The authors developed an open-source library and guidance diagram for the composition of mitigators, ensembles, and hyperparameters, and conducted experiments with the configuration space on various datasets.\noverall_review: The paper demonstrates technical rigor and holds the potential to have impact for \"fairness in machine learning\" while incorporating some elements of AutoML. For readers unfamiliar with the terminology, the paper may be challenging to comprehend. It is recommended to introduce key terms early in the paper, improve the formatting of tables to enhance readability, and compare with relevant existing research.\nreview_summary: To me it is uncertain whether the AutoML Conference is a suitable venue for publishing this work.\nactions_required_to_increase_overall_recommendation: highlight the importance and potential impact of this work for the field of AutoML\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer y2RT\nUnder \"Actions Required To Increase Overall Recommendation\", you requested that we \"highlight the importance and potential impact of this work for the field of AutoML\". It is widely accepted that ensembling (often post-hoc, but not always) is a critical part of AutoML (see for example AutoSklearn, AutoGluon). The motivation behind this wide use of ensembling is that it often improves the predictive performance over the base estimators. In our paper, we systematically try to understand the influence of ensembling on both predictive performance and fairness. Our elaborate empirical study allows us to tease out of the effects of the different ensembling and mitigation choices. This is critical if we wish to systematically incorporate bias mitigation in AutoML systems. 
Furthermore, before our paper, it was unknown where bias mitigation should be introduced in an ensemble. The search space we use pushes the boundaries of what is possible with current AutoML search tools, since it involves multiple layers of nested meta-estimators, while requiring the search to consider multiple options both inside and outside of these layers. Our paper presents novel search spaces and our implementation allows us to search over these novel search spaces. We have updated the related-work section with citations for ensembles in AutoML.\n\nThank you for your feedback towards improving the clarity of the paper. We have expanded Section 3 (library and datasets) to define disparate impact and provide more background on ensembles. Furthermore, we have reformatted the tables to make them easier to read.\n", "role": "user" }, { "content": "Thanks for the authors' clarification. Their response has resolved the majority of my concerns, and I am inclined to revise the rating upwards.", "role": "assistant" } ]
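To make the "where does the mitigator go" question concrete, here is a rough scikit-learn-style sketch of the two placements discussed above: mitigation inside every base estimator of a bagging ensemble versus around the whole ensemble. `MitigatedClassifier` and its `mitigator` contract (assumed to return a rebalanced X, y) are hypothetical stand-ins, not the API of the authors' library, and the `estimator=` keyword assumes scikit-learn ≥ 1.2:

```python
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

class MitigatedClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical wrapper: applies a pre-processing bias mitigator
    (assumed to return a rebalanced (X, y)) before fitting the wrapped
    estimator. A real mitigator would also need the protected attribute."""
    def __init__(self, estimator=None, mitigator=None):
        self.estimator = estimator
        self.mitigator = mitigator

    def fit(self, X, y):
        Xm, ym = self.mitigator.fit_resample(X, y) if self.mitigator else (X, y)
        self.estimator_ = clone(self.estimator).fit(Xm, ym)
        return self

    def predict(self, X):
        return self.estimator_.predict(X)

# Mitigator *inside* the ensemble: every bagged base model is mitigated.
inside = BaggingClassifier(
    estimator=MitigatedClassifier(estimator=DecisionTreeClassifier()),
    n_estimators=10)

# Mitigator *outside* the ensemble: the whole ensemble is wrapped once.
outside = MitigatedClassifier(
    estimator=BaggingClassifier(estimator=DecisionTreeClassifier(),
                                n_estimators=10))
```

Which placement balances fairness and predictive performance better is dataset-dependent, which is what the guidance diagram discussed in this exchange tries to capture.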
7Nbd1Ru1M_t
AutoML 2023 Conference
Reviewer_cvkx
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_cvkx, and the following are the paper you need to review:\n```<<7Nbd1Ru1M_t>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The article discusses the challenge of maintaining algorithmic fairness in machine learning models when using bias mitigators, as their effectiveness can vary across different data splits. The authors explore the use of ensemble learning as a potential solution but note that it is unclear how to best combine ensembles with mitigators to balance fairness and predictive performance. They present an open-source library that allows for the modular composition of mitigators and ensembles along with hyperparameters, which they use to explore the configurations of 13 datasets. The authors provide their insights in the form of a guidance diagram that can serve as a starting point for practitioners. 
They also demonstrate that their solutions perform similarly to those generated by automatic combined algorithm selection and hyperparameter tuning (CASH) on many datasets.\noverall_review: Addressing an important challenge: The paper addresses an important challenge in maintaining algorithmic fairness in machine learning models, which is a crucial issue in the field of AI.\n\nOpen-source library: The paper presents an open-source library that enables the modular composition of mitigators and ensembles along with hyperparameters, which can be useful for practitioners in the field.\n\nEmpirical exploration: The authors explore the configurations of 13 datasets empirically, which can provide insights into the effectiveness of their approach.\n\nGuidance diagram: The authors provide their insights in the form of a guidance diagram that can serve as a starting point for practitioners, which can be helpful for those who are new to the field of AutoML.\n\nthis work adopts grid search to find the best mitigation configs, which would be inefficient when the search space is large\nreview_summary: The open-source library and guidance diagram provided by the authors may be useful for practitioners in the field of AutoML who are interested in improving the fairness and accuracy of their machine learning models.\nactions_required_to_increase_overall_recommendation: Please provide the time and memory usage when fitting models and discuss more about the extra overhead\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer cvkx\nUnder \"Actions Required To Increase Overall Recommendation\", you wrote \"Please provide the time and memory usage when fitting models and discuss more about the extra overhead\". In response, we have added a new Figure 3, along with a new paragraph entitled \"How do ensembles affect resource consumption?\", in the results section.", "role": "user" }, { "content": "I acknowledge the authors' response. The reply answers almost all of my questions. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
7Nbd1Ru1M_t
AutoML 2023 Conference
Reviewer_MqGH
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_MqGH, and the following are the paper you need to review:\n```<<7Nbd1Ru1M_t>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: In this study, authors explore the trade-off between the fairness and performance of different models through using 13 data-sets, eight mitigators and four ensembles. They also, suggest a guidance diagram to address this trade-off based on insights stemmed from this exploration.\n\n\noverall_review: This study used a large number of data-sets and configurations to explore the trade-offs between fairness and performance which is an important topic in the community. \n\nHowever, I have some minor issues that I did not quite figure out. \n\n1. What was the reasoning behind 8000 rows/samples in guidance diagram? How would that be helpful or beneficial to take such split?\n\n2. For Table 6, authors mention that \"while the guidance diagram rarely recommends the exact same con- figuration as that found by Hyperopt, it recommends one with similar performance\". However, roughly in half of cases(across all metrics) they have <0.1 differences and others are higher than that. Therefore, that claim is not valid. Also, is there any explanation why almost all cases of guidance F1's DI has >0.1 difference with Hyperopt suggestion(in some of them the difference is significant)?\nreview_summary: I believe this paper explores one of the most important trade-offs across classical machine learning that would be beneficial for AutoML community as well. 
Once the formatting problem mentioned in the clarity section is addressed, I believe it has a chance to be accepted.\nactions_required_to_increase_overall_recommendation: Change the captions of tables and also try to fit them properly.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author response to Reviewer MqGH\nUnder \"Actions Required To Increase Overall Recommendation\", you requested that we \"Change the captions of tables and also try to fit them properly.\" We have done both, and furthermore, we have replaced Table 6 by a new Figure 5 as suggested by Reviewer 4CRV.\n\nUnder \"Clarity\", you asked \"What was the reasoning behind 8000 rows/samples in guidance diagram?\" This was motivated by the observation that volatility is highest in small datasets, and hence, the guidance for small datasets likely differs. You also remarked that the claim that the guidance diagram recommends configurations with similar performance is not valid. We have weakened that claim accordingly.\n", "role": "user" }, { "content": "Thanks for the authors' response. It resolves the majority of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
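Since the discussion above compares disparate-impact (DI) differences of about 0.1, a small worked example of the metric may be useful. This follows the common definition (ratio of favourable-outcome rates between the unprivileged and privileged groups) and is illustrative, not code from the paper:

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Disparate impact: P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged).

    A value of 1 means parity; the common "four-fifths rule" flags
    values below 0.8 as potentially unfair."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()  # group coded 0 = unprivileged
    rate_priv = y_pred[protected == 1].mean()    # group coded 1 = privileged
    return rate_unpriv / rate_priv

# Toy example: a 40% favourable rate for the unprivileged group versus 80%
# for the privileged group gives DI = 0.5, a gap far larger than the
# 0.1 differences discussed above.
print(disparate_impact([0, 1, 0, 0, 1, 1, 1, 0, 1, 1],
                       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]))
```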
7cUV9K3ns9Q
AutoML 2023 Conference
Reviewer_aCMM
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_aCMM, and the following are the paper you need to review:\n```<<7cUV9K3ns9Q>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors propose a 2 level meta-learning approach for recommendation of algorithm configurations in multivariate time series anomaly detection. At the first level, the decision is if a new time series is similar to the ones used for training. Depending on the result, a different meta-model is used to make the recommendation.\n\nThe evaluation separates two scenarios: new time series from known domain and new domain. Both scenarios make sense but the second is not usually considered.\n\nThe results show that the approach competes with the state-of-the-art and that the two-level approach is better than the two base-level recommendation models.\noverall_review: The idea of separately handling similar and different time series in multivariate scenarios, both in modelling and evaluation, is quite interesting and should be followed.\n\nHowever, even though the method is interesting, the motivation for the solutions incorporated into it are not entirely clear to me.\n\nOn the other hand, the evaluation methodology is quite interesting and should be followed in empirical studies concerning recommendation of algorithms multivariate time series.\n\nMore detailed comments are provided above.\n\nOther comments:\n-couldn't UReg optimize for ranking rather than for error?\n-fig 2 not references in text\n\nreview_summary: Although I'm not entirely convinced by the method, I find the general approach interesting, particularly in what concerns evaluation.\n\nactions_required_to_increase_overall_recommendation: Explain and motivate the method more clearly.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Answer to comments by reviewer\nWe thank the reviewer for their insightful comments. We proceed to answer to it below, per comment:\n\n**The use of benchmark datasets for anomaly detection in time series as recently been questioned. 
This should be discussed by the authors.**\n\nWe agree that the quality of benchmarks is, indeed, a recently discussed problem. While the available data for our scenario (multivariate industrial time series) is limited, we tried to ensure the potential effects of bad labelling was limited, by only including one of the discussed benchmarks (SMD), which represents less than 30% of our datasets. Nevertheless, we've included its potential effect as a limitation in the discussion, as well as another reference that cites the paper the reviewer suggested regarding potential issues with the metric we used, which opens a future work exploring the effect of different metrics on our task. We thank the reviewer for the pointer.\n\n**1. Although the method is presented as unsupervised, the offline training phase is supervised.**\n\nIndeed, this is a limitation of the meta-learning approaches that we had to adopt in order to ensure our recommendations can be done in the order of seconds. Nevertheless, the recommendation itself does not require labels on new data, which makes it unsupervised.\n\n**2. Although the method is presented as suitable for multivariate time series, it essentially depends on the distance between time series, which is not necessarily related to the domain. The relation between distance and domain is never really analyzed.**\n\nWe'd like to clarify the method depends on distance in the meta-feature space, not in the original time series space, which makes it more flexible than just comparing in the original space. Regarding the relationship between domain and distance in the meta-feature space, we display it on Figure 3, where we show that data from the same sources do tend to be clustered together.\n\n**3. Many data characterisation approaches use label information, which, in the current approach, is available in the offline training but not at online application. This issue should be discussed and it should be clear that the metafeatures cannot use labels.**\n\nWe agree it's a needed clarification. We have added \"for which no labels are needed\" in 3.1 when describing the concept of meta-features to emphasize their unsupervised nature.\n\n**4. The multivariate metafeatures only describe very basic characteristics of the distribution of basic metafeatures across the different time series of a dataset. Given that the problem is anomaly detection, I would expect metafeatures that characterise deviations from the distribution.**\n\nWe agree other summarization features could be extracted from the meta-feature set. The current ones include measures of centrality (average, Q1 and Q3) and outliers (minimum and maximum). As the regression method we use is a Random Forest, interactions between these two families should be captured by the decision trees, which would represent the behavior the reviewer is expressing. Nevertheless, this is still an open area on which we only used one possible option and so we acknowledge it in section 6 when regarding future work about meta-features.\n\n**Motivation for the use of distances to choose the base-level method should be further investigated / The reason why the methods used at the base-level were chosen is also not clear.**\n\nWe agree that some of the explanations and reasonings about the methods weren't all clear: we've revamped their sections, including diagrams for both URegression and CFact (Figures 1 and 2), as well as improved the justification and definition of CFact. 
\n\n**It is not clear how the PRAUC values are transformed**\n\nWe agree; we have included an example of the transformation and reworded the explanation to make it clearer.\n\n**It is not clear why standard statistical comparison tests were not used for significance of differences**\n\nThis is something we discussed internally at length. We opted to use meta-analysis techniques for three reasons: 1. We had to perform thousands of pairwise comparisons between recommenders (20 per fold), which would increase our Type I error and force us to use an extremely strict Bonferroni correction of the p-values; 2. p-values don't reflect a measure of the effect size, in this case the difference in performance between algorithms, which we considered key to display; and 3. For space and clarity, summarizing hundreds of p-values across several dimensions was not something we could do given the available page count. Nevertheless, we have added a table of these p-values in appendix G for completeness.\n\n**Couldn't UReg optimize for ranking rather than for error?**\n\nTotally true, and it is in fact what MetaOD (the state-of-the-art method we compare against) does. We didn't do it for two reasons: 1. in our tests, URegression's performance was lower when using it, and 2. if it optimizes per rank, it is impossible to use it in CFact, as the scores from independent UReg recommenders wouldn't be comparable.\n\nWe hope the changes we've detailed and our explanations in this comment address the reviewer's requests, and we thank them again for their deep insights.", "role": "user" }, { "content": "I appreciate the authors' reply. It satisfactorily addresses the major issues I had. I continue to recommend this work positively.", "role": "assistant" } ]
7cUV9K3ns9Q
AutoML 2023 Conference
Reviewer_attc
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_attc, and the following are the paper you need to review:\n```<<7cUV9K3ns9Q>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors propose Hydra, a meta-learning recommendation framework for novelty detection. Hydra uses two inner recommender systems to deal with: known and novelty data. The tool has an inner mechanism that automatically actives which recommender should be used.\noverall_review: The paper proposes a new meta-recommender and presents a nice comparison to some related works in the specific area (meta-learning recommenders for novelty detection and time series). The methodology is also well described, and results are organized according to the research questions and present statistical evaluations. Results also present good implications for discussing Hydra's performance against simpler baselines and MetaOD.\n\nOn the other hand, it is missing comparisons versus cited baselines (LOTUS, ELECT). The authors of these studies could be contacted by email and asked about their tools.\nreview_summary: Based on the previous comments, I think the paper presents a good study, still in its initial, but with many questions that can contribute to the AutoML area in time series. Thus, my recommendation is to accept the paper.\nactions_required_to_increase_overall_recommendation: ...\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Answer to reviewer's comments\nWe thank the reviewer for their comments and insights. We have uploaded a new version of the article with the changes highlighted in blue to facilitate the review task.\n\n**The single question is regarding results when comparing Hydra x MetaODS. Figure 3 shows a considerable difference in performance between them in Scenario II. Why does this occur? 
Is it conditioned to the experimental scenario (datasets, folds) or something else?**\n\nOur hypothesis is that Hydra is able to get a better result thanks to the dynamic selection of recommenders: CFact creates hyper-specialized recommenders, but URegression with 2 factors is forced to keep only the broadest and most relevant patterns across datasets, so they're able to be exported across data sources. MetaOD is similar to URegression but done with a larger number of singular vectors, which forces it to focus on finer details and patterns that don't translate across data sources. As such, the performance of MetaOD and CFact is very close in Scenario I, while UReg(2) is much better than MetaOD at generalization in Scenario II: overall, meta-recommendation by Hydra is able to select the best among CFact and UReg across all scenarios.\n\n**Which features are in the C22 set? How are they applied to multivariate datasets?**\n\nRegarding Catch22, while we cannot include full details of the contained features in the main paper due to space constraints, we include them in appendix E. We have also expanded Section 3.4 to clarify how they're used in a multivariate setting, and expanded considerations about alternatives or extensions to Catch22 and related summarization in Section 6.\n\n**Figure 2 is not a figure. It is a table. Change it, please.**\n\nWe have corrected this and that element is now a table.\n\n**Suggestion: Add a figure/diagram with the Hydra flow (online/offline tasks) and other components in the Methods section's beginning. It would help to understand its flows better.**\n\nWe added diagrams for both URegression and CFact (Figures 1 and 2), as these are the basic building blocks of Hydra, and the core of our contribution. We have also reworded CFact's explanation to ensure the recommenders' design is clear.", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The authors have answered almost all of my concerns. I maintain my positive recommendation for this work.", "role": "assistant" } ]
7cUV9K3ns9Q
AutoML 2023 Conference
Reviewer_6xX9
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_6xX9, and the following are the paper you need to review:\n```<<7cUV9K3ns9Q>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes Hydra, a meta-learning-based model recommender for anomaly detection in multivariate time series data. The authors highlight the real-world restrictions that make existing methods unusable, and address the need for generalization to previously known or unseen data sources. The proposed approach outperforms the current state of the art, achieving a higher performance even in difficult scenarios where data similarity is minimal. The experiments are conducted on a large number of public datasets from different data sources, and the ablation study confirms the efficacy of Hydra in recommending models in the top 10% for known sources of data and the top 13% for unseen sources. 
This work is an important contribution to the field of unsupervised model recommendation for anomaly detection and provides a promising direction for future research in this area.\noverall_review: Positive aspects:\n\nThe paper proposes Hydra, the first meta-recommender for anomaly detection in multivariate time series data, which is a novel contribution to the field of unsupervised model recommendation.\nThe paper addresses real-world restrictions, such as limited time to issue a recommendation and the need for generalization to previously unseen data sources, which are important considerations in practical applications.\nThe paper conducts a thorough experimental evaluation using a large number of public datasets from different data sources, and provides extensive analysis of the results.\nThe paper clearly presents the proposed approach, including the use of meta-learning and the design of Hydra, making it easy for readers to understand the methodology.\n\nNegative aspects:\n\nThe paper could benefit from providing a more detailed discussion of the limitations and challenges of existing methods for unsupervised model recommendation for anomaly detection in multivariate time series data, to help readers better understand the motivation for the proposed approach.\nThe paper could provide more information on the specific design of Hydra, such as how the meta-learning framework is applied to the problem of model recommendation for anomaly detection, and how the proposed approach addresses the real-world restrictions and need for generalization to previously unseen data sources.\nThe paper could benefit from providing details of dataset split (train/val/test)\nreview_summary: Based on my assessment, I recommend accepting the paper with minor revisions. The paper proposes Hydra, a meta-recommender for anomaly detection in multivariate time series data, and conducts a thorough experimental evaluation to demonstrate its effectiveness. The proposed approach addresses real-world restrictions and provides a novel contribution to the field of unsupervised model recommendation. While there are some areas where the clarity could be improved, including a more detailed discussion of existing limitations and challenges, providing more information on the specific design of Hydra, and more detailed analysis of the experimental results, these issues can be addressed with minor revisions. Overall, the paper provides a valuable contribution to the field and warrants acceptance with minor revisions.\nactions_required_to_increase_overall_recommendation: since the work claim that their work can be applied for real-world time series data, they should evaluate Hydra in a larger dataset and more complicated data distributions\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Answer to reviewer's comments\nWe express our gratitude to the reviewer, their comments have allowed us to clarify and improve our work. We have uploaded a new version of the article with the changes highlighted in blue to facilitate the review task. We proceed to answer to some of the comments issued:\n\n**There is still a third case in which the data is composed of a few known data and a lot of unseen data. This work does not seem to solve this case.**\n\nWe may not have been clear in our explanations, but the recommendation is performed per dataset. 
So when queries for new datasets are received, in which some of the datasets are close to known datasets (i.e., datasets known at training time) and others are novel (akin to unseen data), each of the query would be resolved by the best corresponding recommender independently (i.e., Hydra would select CFact for the former and UReg(2) for queries of the latter kind). Our recommender does not inspect internal patterns of the data to characterize those as known or unknown, only the distance of the whole dataset's meta-features to existing ones.\n\nOf course, when queries are mostly for unseen data, then the value of the training is reduced (i.e., whether CFact had previous experience on similar data), but nevertheless Hydra in this case relies on UReg(2) which as proven by our leave-one-provider-out (Scenario II) experiments improve with respect to the state of the art. As part of our future work, we plan to further extend the experimental campaign to even more datasets, which could allow us to explore more deeply the interesting issue raised by the reviewer -- which is however a knowingly difficult task owing to the lack of public available multi-variate time-series data with ground-truth. We added a comment in Section 6 concerning this aspect.\n\n**The paper could benefit from providing a more detailed discussion of the limitations and challenges of existing methods for unsupervised model recommendation for anomaly detection in multivariate time series data, to help readers better understand the motivation for the proposed approach.**\n\nWe agree we were not clear enough. We have added a summary at the end of section 2 clarifying the main differences between our work and previous proposals, namely, the complete lack of focus on time series in either the methodological design (including meta-feature selection) or algorithmic space tested, as well as a lack of attention to novelty detection, which we consider crucial, as existing methods may overfit on local patterns that may not generalize well.\n\n**The paper could provide more information on the specific design of Hydra, such as how the meta-learning framework is applied to the problem of model recommendation for anomaly detection, and how the proposed approach addresses the real-world restrictions and need for generalization to previously unseen data sources.**\n\nWe agree with the reviewer that details about several important aspects of Hydra design were omitted in the previous explanation. 
To improve the description, we have included diagrams for both base methods, URegression and CFact (Figures 1 and 2) and reworded the intuition and explanation of CFact.\n\nAdditionally, to clarify that we are measuring generalization capabilities, we have expanded the end of Section 4, highlighting that the Scenario II addresses real-world restrictions, since the Hydra recommender has been trained on totally different data, and no single dataset in the training data comes from the same distribution or provider as the test data.\n\n**The paper could benefit from providing details of dataset split (train/val/test)**\n\nWhile ultimately, for repeatability purposes,we believe that the code we provide is the ultimate source for these details, we have added some clarification in Section 4.4 for Scenario I (10-fold cross validation with 90%/10% train/test split) and Scenario II (4-fold leave-one-provider-out validation, with different train/test splits in this case since the size of folds in this case depends on the heterogeneous size of each provider data).\n\n**Since the work claim that their work can be applied for real-world time series data, they should evaluate Hydra in a larger dataset and more complicated data distributions.**\n\nWe definitely agree with the reviewer and this is indeed part of our plans: further extending the experimental campaign to even more datasets, would allow us to explore more deeply the generalization ability of Hydra. At the same time, we hope the review concur with us that this is knowingly difficult task owing to the lack of public available multi-variate time-series data with ground-truth: as far as we know, we already use every publicly available multi-variate time-series data including labeled anomalies. This is a known problem, which requires a community-wide effort.  We added a comment in Section 6 concerning this aspect.", "role": "user" }, { "content": "Thanks for the authors' response. It resolves the majority of my concerns. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
7cUV9K3ns9Q
AutoML 2023 Conference
Reviewer_fynp
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_fynp, and the following are the paper you need to review:\n```<<7cUV9K3ns9Q>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors presenta meta-learning based approach for AutoML on unsupervised anomaly detection in multivariate time series. The system `Hydra` is performs favorably copmpared to SoTA approaches, especially for novel or unseen data sources.\nThe architecture consists of (i) novelty detector (ii) model recommenders based on novelty in the meta-feature space. The recommender for novel data relies on SVD and Random Forest to rank models. \nA large number of datasets are utilized for the meta-training phase, and catch22 is used for computing meta-features. A wide set of anomaly detection approaches are covered.Performance is demonstrated to be better than the current SoTA. The authors present a fairly comprehensive evaluation and ablation study as well.\noverall_review: The paper has many strong points.\n1. The core idea is robust, based on meta learning. The whole architecture with the novelty detection (of data) component and the recommender heads are well founded and make intuitive sense.\n2. A wide ranger of anomaly detection algorithms are covered in the meta learning process. Also, many datasets are used, and hyperparameter tuning is done.\n3. Empirical evaluation and analysis is performed adequately. The ablation study (Sec 5.3.1, Sec 5.3.2) is useful in providing the user understanding. \n\nNegative aspects:\n1. Some statements are not very clear. For instance Sec 5.3.1,Line 274-Line 276.\n2. Sec 5.4 provides computational requirements in an anecdotal fashion. A clear figure or performance numbers would be effective in this regard. \n3. Some design choices could be better explained. e.g. use of catch22. \n4. How are hyperparameters tuned for the recommended algorithm is not clear.\n5. Meta learning based approaches (such as AutoSklearn 1, 2) have been successful. The authors concede that their system requires more data for better performance. How does that limit the utility of the current iteration is not clear. 
The concern that there is inadequate data is not addressed.\nreview_summary: The paper presents an effective solution for a interesting and practical problem. The paper presents some good insights , and is well written and the formulation is sound. However a few points that need to be addressed remain as noted.\nactions_required_to_increase_overall_recommendation: Provide computational overhead figures in a more comprehensive manner.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Answer to reviewer's comments\nWe are very thankful for the reviewer's comments, we proceed to answer to some of them below. We have uploaded a new version of the article with the changes highlighted in blue to facilitate the review task.\n\n**One point that the authors mention is that catch22 is efficient is not well supported by the authors (maybe cite something like : doi:10.1109/ICDMW53433.2021.00134). Why is catch22 chosen vs. hctsa, Kats or TSFEL needs more explanation. Is it due to presence of 22 important meta features? Also catch22 is for univariate time series, how is it adopted to multivariate case is not well explained (apart from 110 features are taken).**\n\nWe thank the reviewer for the reference and we have included it, as well as three reasons to use Catch22 (their performance in ML tasks, their feature independence and their computation time) in section 3.4. We have also added a clarification on how the 110 meta-features are extracted, in the same section. Finally, we added a discussion on future work concerning on meta-features in Section 6.\n\n**Pseudocode or Algorithm figure would have helped in clearly communicating ideas to the reader.**\n\nWe agree with the reviewer that details about several important aspects of Hydra design were omitted in the previous explanation. To improve the description, we have included diagrams for both base methods, URegression and CFact (Figures 1 and 2) and reworded the intuition and explanation of CFact. We hope this will make understanding them easier.\n\n**Some statements are not very clear. For instance Sec 5.3.1,Line 274-Line 276.**\n\nWe agree and have reformulated it to be clearer.\n\n**How are hyperparameters tuned for the recommended algorithm is not clear.**\n\nThe recommender's output is a configuration itself, being an algorithm + specific hyperparameter values, so no further tuning is needed after recommendation. When choosing the different values to generate our configuration library, we tried to use large enough ranges to cover potential different behaviors in the algorithms.\n\n**Meta learning based approaches (such as AutoSklearn 1, 2) have been successful. The authors concede that their system requires more data for better performance. How does that limit the utility of the current iteration is not clear. The concern that there is inadequate data is not addressed.**\n\nThe reviewer is right in that we had not touched on this. We highlighted it in the discussion, including two references: one highlighting problems with benchmarks and another one referring to improved metrics for time series anomaly detection.", "role": "user" }, { "content": "Title: Addressing of Concerns\n1. [+] The authors have adequately addressed the concern regarding catch-22. \n2. [-] However, clarity is still lacking. Diagrams only do so much, a clear algorithmic structure is needed to concisely convey the idea. 
Also, descriptions are still somewhat vague. For instance, in Figure 2 the caption \".. 𝑢𝑚𝑎𝑝 a dimensionality reduction of the transposed performance matrix...\" --- UMAP is a widely known method, and as for notation, using a single letter is more standard practice. The rewording and additional clarity help, but are not sufficient. \n\nAlso, Hydra shares its name with a very popular configuration handler, and should not be used.\n", "role": "assistant" } ]
7cUV9K3ns9Q
AutoML 2023 Conference
Reviewer_dJd2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_dJd2, and the following are the paper you need to review:\n```<<7cUV9K3ns9Q>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors provide a \"recommender system\" called hydra to predict which anomaly detector to use on a new dataset, given the past performance data of the set of detectors on other datasets. \nTheir main contributions lie in \na. applying an anomaly-specific set of meta-features\nb. identifying clusters in the (projected) meta-feature space and selecting a recommender that best suits this cluster.\nc. identifying if a new dataset is not in the vicinity of the training distribution. In that case \n\n\noverall_review: Strengths:\n1. The approach can distinguish in the meta-feature space whether it has encountered similar datasets (based on the distance in the projected meta-feature space). This way, the model can exploit more specialized recommenders for particular sets of the meta-feature space. \n\nWeaknesses\n1. I would like to start by pointing out that the name of the method Hydra is not well motivated and that it easily is confused with a widely used python package [hydra.cc](hydra.cc) which is used for configurable experimentation. I recommend changing the name.\n\n2. The definition in L.44 of what meta-learning is appears to be wrong. Instead, this seems to be a description of algorithm selection.\nThis is also one of my concerns regarding how the paper is situated; while technically correct, that a recommendation is made and similarities exist in this line of thought, the entire outline rather strikes me as an algorithm selection problem. This phrasing is more easily referable in this research area. Crucial here is that the configuration (algorithm and its hyperparameters) is a discrete set for which we want to predict a performance given some meta-experience (performance data) with that set. Also, the section headers and paragraphs 3., 3.1, 3.2 simply describe an algorithm selector. This description also facilitates the unfortunately complex title \"Unsupervised Model Recommendation through Meta-Learning\"\n\n3. 
As with most algorithm selectors, the dependency on meta-features eventually limits the performance of a selector by the quality of these features and how well they are \"aligned\"/indicative of the performance. The time-series aspect of this paper can mostly be reduced to a selection of time-series targeted meta-features. In effect, this opens up a lot of baselines from the AS literature to compare against. \n\n4. I acknowledge, that in AS, datasets are hard to come by, but I wouldn't go as far as claiming that 4 benchmarks are an unseen extent.\n\n5. A potentially important hyperparameter regarding the complexity in both functional expressivity and cost of this method lies in the number of singular vectors. Another set of important hyperparameters are those regarding a) the UMAP projection and b) the decision on how to cluster the meta-feature. Effectively, this will change the model's performance.\nreview_summary: I would think that this paper should be placed in the algorithm selection setting and may require considerable rewriting (that using this perspective will simplify the paper) and also require more baselines from the AS field. Any potential empirical gain should be demonstrated against those baselines.\nactions_required_to_increase_overall_recommendation: Look at the weakness section.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Answer to reviewer's comments\nWe are deeply thankful to the reviewer for their comments. We have uploaded a new version of the article with the changes highlighted in blue to facilitate the review task. We proceed to answer several of the comments below:\n\n**Clarification on DBSCAN when using UMAP**\n\nThe reviewer is right in that UMAP doesn't preserve distances and this can be an issue when performing clustering in its space. We believe in our case it is not harming the model (and we include explanations in sections 3.3.1 and 3.3.3): for Hydra's novelty detection step, the only decision to make is whether the new point is close to any existing point or not. Since neighbor distances are relatively well preserved, this is still possible. For CFact's internal clustering, it may produce smaller clusters than needed ([https://umap-learn.readthedocs.io/en/latest/clustering.html](https://umap-learn.readthedocs.io/en/latest/clustering.html)), but it will not  join points that are not close in the original space, so the intended effect of CFact (separating configurations with different performance profiles in order to improve URegression's performance) will still be there.\n\n**How do you configure DBSCAN - as the number of clusters and their circumferences are dependent on its hyper parameterization?**\n\nIt is definitely true that the clustering step will impact the recommender's performance. In our tests, Hydra was quite robust against different values of DBSCAN; nevertheless, we will include an ablation study for different clustering hyper parameters in the appendix to make this explicit.\n\n**How do you deal with very small clusters and the little data used to train the respective recommender (3.3.3 L131-133)? 
Please report the found cluster sizes.**\n\nWe do apologize but we cannot report the found cluster sizes in a comprehensive manner; due to the cross-validation process, 150 clustering processes have been run just for Scenario I, which is not possible to report fully, and pointless to report cursorily (as a compromise, we will report a statistical summary in the appendix). Regarding the problem of small clusters, they are not necessarily an issue: a small cluster in CFact means a partition of a small number of configurations of the performance matrix, but every dataset was evaluated on each of them, so we have full performance profiles for each algorithm inside the cluster.\n\nThere are still two potential pitfalls we see regarding small clusters:\n\n1. the noise cluster produced by DBSCAN will not be associated to meaningful patterns (i.e., which correspond to configurations that behave dissimilar to others and from which URegression would not be able to generalize); at the same time, as per our overall results, the incidence of noise cluster does not seem to affect results in a significantly negative way.\n2. Small cluster could have a small technical impact for SVD computation, as SVD cannot be performed for the requested number of singular vectors: however, as URegression employes 2 (or 5) singular vectors, only clusters smaller than 2 (or 5) could constitute a problem, which could be easily circumvented by setting minPoints in DBSCAN to 2 (or 5).\n\nIncidentally, we use default DBSCAN hyperparameter, for which minPoints equals 5 and as such we do not experience this technical problem.\n\n**3.3.2. Motivate, how in case of radically differing meta-features from the training distribution, we can at all make a sensible prediction (given that they by definition have little variability we can confidently exploit for prediction)**\n\nWe have included a sentence at the end of 3.3.2 detailing that a low number of singular vectors extracted from SVD forces the method to focus on only the broadest, most general patterns, which we expect to be applicable across novel data too.\n\n**L133-136: when you say you are having n tuples, you refer to the number of configurations right?**\n\nThis was a typo, it should have been the number of clusters (instead of the number of datasets), as the number of produced tuples at the output of CFact is equal to the number of clusters found / URegression recommenders trained. Each output is of the form {recommended configuration, estimated performance}. Thanks for spotting the nit, we have fixed it.\n\n**On Hydra's name**\n\nWe thank the reviewer for the pointer, as we were not aware of the existence of the aforementioned package. If the reviewer thinks it's convenient, we could change the recommender's name, in the same vein, to another multi-headed monster of mythology: Orthos. We have not changed the figures and text for this rebuttal period due to time constrains, but we are totally willing to do so for the final version.\n\n**Regarding meta-learning, algorithm selection and other potential methods**\n\nThe reviewer is right in the sense that this paper could be cast in the broader area of Algorithm Selection, that we refer to as Model Recommendation. In this area, we decided to align with what the literature refers to as meta-learning (see e.g., Joaquin Vanschoren. Meta-Learning: A Survey. [http://arxiv.org/abs/1810.03548](http://arxiv.org/abs/1810.03548), Yue Zhao et al. 
Automatic Unsupervised Outlier Model Selection, [https://proceedings.neurips.cc/paper/2021/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf](https://proceedings.neurips.cc/paper/2021/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf)), to differentiate from other methods that do not require previous labelled data but do require the application of every possible configuration considered in the new dataset (see [https://www.andrew.cmu.edu/user/yuezhao2/papers/23-kdd-explorations-ipm.pdf](https://www.andrew.cmu.edu/user/yuezhao2/papers/23-kdd-explorations-ipm.pdf)). The meta-learning settings are more fit to our scenario, as fitting >500 anomaly detectors is not feasible every time a new dataset is received.\n\nThe semantic difference between algorithm selection and meta-learning is even acknowledged in the wikipedia page for Algorithm Selection (AS), written by the Coseal group: [https://en.wikipedia.org/wiki/Algorithm_selection](https://en.wikipedia.org/wiki/Algorithm_selection)). The reviewer's remark is completely right, though: while we compared against the state of the art in the meta-learning part, other Algorithm Selection methods could be compared against ours. We have included so in the Conclusions and it opens a broader research and comparison area. We thank the reviewer for this remark.", "role": "user" }, { "content": "Title: Final Comment\nThank you for your efforts in revising the paper. \n\nThanks for reporting the DBSCAN HP ablation. I found this helpful and also supports my intuition, That DBSCAN's HP can indeed affect your method, albeit it is mostly the dynamic range in the performance that changes along the Hyperparameters. More seeds than one would make this a more reliable ablation, though.\n\nA few pointers, \n\nI am still not entirely convinced of the UMAP-smaller-cluster argument. As you and its maintainers point out, it is cumbersome to use and requires considerable validation on the cluster level. As much as I appreciate UMAP for its elegance, I myself have been stumbling over this at some point and instead used UMAP as a visual validation process of another clustering method instead, as it was less prone to concerns that I now myself am issuing. \nConsidering, that smaller clusters are more likely to produce unstable recommendations based on the little data that is available for this subspace - which would be \"smoothed over\" for larger clusters, this surely can affect the recommender's performance and should be dealt with carefully.\n\nI find the Orthos name a better choice. But maybe there is something more tangible, more related to the method or its components. \n\nOverall, I would lean towards weak acceptance.", "role": "assistant" } ]
8-8k3okjpY
AutoML 2023 Conference
Reviewer_xdDU
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_xdDU, and the following are the paper you need to review:\n```<<8-8k3okjpY>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper focuses on downsampling layers, such as pooling and strided convolutions, in convolutional neural networks (CNNs). The authors analyze the performance of independently trained models with different pooling configurations on CIFAR10 using a ResNet20 network. They find that the position of downsampling layers significantly impacts network performance and that predefined downsampling configurations are not optimal. They also show that traditional one-shot Network Architecture Search (NAS) based on a single SuperNet does not work for optimizing pooling configurations due to parameter sharing issues. To address this, they propose a balanced mixture of SuperNets that associate pooling configurations with different weight models, reducing weight-sharing and inter-influence of pooling configurations on the SuperNet parameters. Their proposed approach outperforms other methods and improves over default pooling configurations on CIFAR10, CIFAR100, and Food101 datasets.\noverall_review: Paper strength: \n1.\tThe motivation of the paper is interesting and clear. \n\nPaper weakness: \n1.\tThe presentation of this paper is not good and some details are not explained clearly. \n2.\tThe experiments are not convincing. \n3.\tMore in-depth analysis is expected. It is not enough to show comparable / better results than existing methods.\n\nreview_summary: Overall, this research contributes to the understanding of the impact of downsampling layers in CNNs and presents a novel approach for optimizing pooling configurations, which could have practical implications in the field of deep learning.\nactions_required_to_increase_overall_recommendation: 1.\tThe work has studied the search for the optimal network under a fixed number of parameters. How about when you fixed the FLOPs?\n2.\tThe transferability of the found network should be studied. 
At least, networks found over CIFAR10 should be evaluated on other datasets.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer xdDU\nWe thank the reviewer for their valuable comments and suggestions. We address the concerns and answer the questions below:\n\n### Experimental Results\n\n> \"whether the claims or findings can scale beyond small-scale datasets (CIFARs) and other types of architectures are yet to be verified.\"\n\nAfter submission, we evaluated our approach also on ImageNet (table 7 in appendix). We could not find better pooling configurations than the default. We believe that the pooling configurations for ResNet models are optimised on ImageNet and therefore our identified pooling configurations are near optimal, and more importantly, our method is able to recover the default configuration (for M=2 SuperNets).\n\n> \"2. The experiments are not convincing. 3. More in-depth analysis is expected. It is not enough to show comparable / better results than existing methods.\"\n\nWe show that finding the pooling configuration is hard even with a limited set of pooling configurations, and existing methods fail. We present a method that works and compare it with NAS and not NAS approaches, and in all cases results are comparable or better.\n\n\n### Clarity and Presentation\n\n> \"some symbols should be defined first. And the description of the search space should be elaborated.\"\n\n> \"The presentation of this paper is not good and some details are not explained clearly. \"\n\nWe added the missing definitions in several places in text in page 3-4 of the revised version of the paper and we also improved the description of the search space and added table 5 in the supplementary material.\n\n### Recommended Actions\n\n> \"The work has studied the search for the optimal network under a fixed number of parameters. How about when you fixed the FLOPs?\"\n\nThe aim of this work is to find pooling configurations that improve the accuracy of a classification model, which we show is an unsolved and hard task by itself. At the moment we do not have a simple way to optimise pooling while keeping fixed the FLOPs. Instead, we could optimise a multi-cost objective that takes into account also the computational cost of the found models. However, this is complex and goes beyond the scope of our current work.\n\n>\"The transferability of the found network should be studied. At least, networks found over CIFAR10 should be evaluated on other datasets.\"\n\nWe do not claim that the found models are transferable, instead we claim that in order to obtain the best performance of a model, the pooling configuration should be optimised for the specific dataset; we show that common approaches do not work and propose an efficient way (compared to brute force) to do that.\n\n\n Global/general Response:\n```\nWe thank the reviewers for their detailed reading of the manuscript and valuable comments. We are pleased that reviewers found our paper innovative, well-written with illustrative figures, interesting and with clear motivation and new directions in the field of NAS. We are also glad that our experiments are shown to be promising and exciting to the reviewers.\nThe main criticism from one reviewer is the possible previous work already providing results on the topic and the lack of solid experiments. 
We clearly explain that no previous work provided analysis/results on our specific topic and our results are solid, in line with baselines, with the correct hyper-parameters, and run multiple times. \nOverall in the paper, we added results on ImageNet and we included time for training our model as asked by reviewers.\n\nDear Reviewers,\n\nWe would greatly appreciate it if you could review the updated version of the paper and responses to your comments and provide us with your questions and suggestions. Doing so will allow us to address additional questions you may have before the discussion phase ends.\nWe again thank you for your time and effort reviewing our paper and look forward to your feedback.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
8-8k3okjpY
AutoML 2023 Conference
Reviewer_Zu75
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Zu75, and the following are the paper you need to review:\n```<<8-8k3okjpY>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper found the existing one-shot NAS algorithms cannot be natively applied to find optimal pooling configuration. This paper hypothesized that the existing weight sharing results in this problem. So, a mixture of supernets is proposed. They use multiple supernets rather than a single supernet to estimate the accuracy of a pooling configuration. The proposed algorithm is demonstrated on CIFAR datasets and Food101. \noverall_review: This paper noted the configuration of pooling layers will affect performance. A new one-shot search algorithm is proposed to search for pooling layer configuration. Multiple supernets are used to estimate the performance of sub-net. \n\nAs I mentioned, this paper has problems with experiments and hypotheses. I agree the configuration of pooling will affect the final performance to some extend. I am not sure 1) the proposed algorithm can effectively outperform baselines and 2) that similar experimental result will be observed on large-scale datasets and harder tasks. \nreview_summary: The hypothesis and experimental results are not solid and convincing. The algorithm contribution is weak. \nactions_required_to_increase_overall_recommendation: Please try to address my major concerns about experiments.\nRating: 2 reject, significant issues present\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Zu75\nWe thank the reviewer for their comments and suggestions. We address the concerns and answer the questions below:\n\n### Question 1\n\n> \" 1. This paper claimed, \"In this work we tested both differentiable and sample based approaches, but both failed to provide good results for finding the optimal pooling configuration of a network. 
We hypothesize that the underlying reason is two fold: inappropriate search space design, and strong weight sharing in SuperNet.\" \"\n\nWe agree that maxpooling operations are included in many search spaces, however, they do not contribute to the feature map size of the layer, for example in popular DARTS [1] search space, the downsampling of the feature map is performed by manually setting the strides of operations adjacent to the input nodes equal to two, the stride of all other operations (including the maxpoolings are set to one). \n\nSome previous work such as DenseNAS [1][add more] do not use downsampling directly into the search space, but they consider configurations with skip operations which can induce a reduction of the number of layers at a given resolution by reducing the total length of the network. Thus, in those search spaces it is not really possible to define configurations with similar length but different pooling configurations, which is what we consider. \nAdditionally, those papers mixed skip operations with other layers’ operations such that it is not possible to discern if an improvement in performance is due to a different global pooling configuration or different operations used also because the optimal configuration is in most of the cases not available. \n\nFinally, most work in recent years focuses on the micro-structure of CNN. Commonly used search spaces such as DARTS-based search space [2] and Mobilenet-based search space [3], and benchmarks such as NAS-Bench-201 [4] and NAS-Bench-Macro [5] all fix the outer-skeleton of the network.\n\n*[1] Liu, Hanxiao, Karen Simonyan, and Yiming Yang. \"Darts: Differentiable architecture search.\" arXiv preprint arXiv:1806.09055 (2018). \\\n[2] Fang, Jiemin, et al. \"Densely connected search space for more flexible neural architecture search.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. \\\n[3] Sandler, Mark, et al. \"Mobilenetv2: Inverted residuals and linear bottlenecks.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. \\\n[4] Dong, Xuanyi, and Yi Yang. \"Nas-bench-201: Extending the scope of reproducible neural architecture search.\" arXiv preprint arXiv:2001.00326 (2020). \\\n[5] Su, Xiu, et al. \"Prioritized architecture sampling with monto-carlo tree search.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.*\n\n### Question 2\n\n> \" Why did this toy example motivate you to propose a mixture of models?\"\n\nAs studied previously [1,2] the coupling of architectures in a supernet training can have a negative impact on the evaluation performance of the architecture. This simple example shows that when training multiple (in this case two) pooling configurations with the same weights produces interference and leads to lower performance than training models independently. The performance when mixing the two pooling configurations during training goes from 91.83 to 89.84 for the late pooling and from 87.45 to 86.81 for early pooling). Thus, it motivates our approach of reducing this interference by using multiple specialised models (SuperNets) in the same training. \n\n*[1] Bender, Gabriel, et al. \"Understanding and simplifying one-shot architecture search.\" International conference on machine learning. PMLR, 2018. \\\n[2] Zhang, Yuge, et al. 
\"Deeper insights into weight sharing in neural architecture search.\" arXiv preprint arXiv:2001.01431 (2020).*\n\n> \"Furthermore, the searched result shows accuracy 91.55 in Table 2 that is still inferior than late pooling.\"\n\nThe late pooling configuration is better than the found configuration, but it is not found automatically, without brute force. Among all tested methods, only our approach managed to get close to the optimal pooling configuration, which shows the difficulty of the task.\n\n### Question 3\n\n> \"The experiments are not solid.... Did you carefully set hyperparameters or optimizers? To my knowledge, ResNet20 usually reports around 91.x% on CIFAR-10. In my opinion, baselines are not well trained.\"\n\nThe hyperparameters of the proposed baselines are carefully tuned. As explained in Appendix B.2, the architecture we use is a modified ResNet20, with fewer parameters, the last fully connected layer is removed and the channels in the last layer are lower. This produces lower baselines than those mentioned by the reviewer, but allows us to conduct the brute force experiment for the optimal pooling configuration more efficiently.\n> \"On the other hand, Late Pooling (91.83%) is still better than the Balanced Mixture (91.55%)\"\n\nThe late pooling configuration is better than the found configuration, but it is not found automatically, only via a bruteforce baseline search. However, among all tested methods, only our approach managed to get close to the optimal pooling configuration, which shows the difficulty of the task. Late pooling configuration in Figure 1, is chosen as an example to show the effect of weight sharing on architecture performances. Since the architecture is not found by NAS, the performance of this specific configuration is available from bruteforcefully testing all possible configurations, in table 5 of appendix D.1. Not all late pooling architectures are performing well, therefore before performing bruteforce training it’s not possible to choose an appropriate pooling configuration. \n\n### Question 4\n\n> \" Because the experimental result is not solid on small scale CIFAR datasets. I cannot expect the proposed algorithm works on large-scale datasets, like ImageNet.\"\n\nWe hope that the previous answers have clarified and demonstrated that our evaluation is solid and fair and that our approach outperforms several well-known and common NAS and non-NAS algorithms for this problem.\nFurthermore, we also included results on ImageNet in table 7 in appendix. In that case, the algorithm does not improve over the initial pooling configuration, however it seems that neural network configurations may have been heavily optimized manually for the ImageNet data. Nevertheless, we are also able to show the advantage of using a Mixture of Supernets instead of just one. In fact, Mixture of Supernets manages to find the optimal configuration when only using two models, while one model is unable to find it.\n\n> \"I guess 1) we cannot observe difference between different pooling configuration \"\n\nWe showed with our benchmark that pooling can highly affect the final performance of a CNN model. 
For each pooling configuration we also provided standard deviation over 3 runs, which shows that the provided results are significant.\n\n> \"2) it is very hard to outperform the stander configuration by changing pooling layers.\"\n\nWe agree with the reviewer that outperforming the standard pooling configurations is a significant challenge, particularly for datasets and architectures that have been heavily optimized manually. This is why we consider our work and contribution to be important.\n\n### Response to Overall Review and Summary\n\n> \" I am not sure 1) the proposed algorithm can effectively outperform baselines \"\n\nWe hope that our modifications have clarified the motivation for investigating pooling architectures and results on datasets including ImageNet. We show that for datasets different from ImageNet our algorithm can find configurations that are better than the common baseline.\n\n> \"2) that similar experimental result will be observed on large-scale datasets and harder tasks.\"\n\nWe have demonstrated that our algorithm can recover the baseline configuration for ImageNet, however with no further improvement. We believe that the ResNet pooling configuration is already near-optimal for this ubiquitous dataset, due to heavy manual optimization.\n\n> \"The hypothesis and experimental results are not solid and convincing. The algorithm contribution is weak.\"\n\nWe hope our responses have clarified our hypotheses, motivation and experimental results. Finding a way to deal with the problems of weight sharing and the exponential number of models is an important issue in AutoML and here we propose and verify a novel approach based on Mixtures of Supernets.\n Global/general Response:\n```\nWe thank the reviewers for their detailed reading of the manuscript and valuable comments. We are pleased that reviewers found our paper innovative, well-written with illustrative figures, interesting and with clear motivation and new directions in the field of NAS. We are also glad that our experiments are shown to be promising and exciting to the reviewers.\nThe main criticism from one reviewer is the possible previous work already providing results on the topic and the lack of solid experiments. We clearly explain that no previous work provided analysis/results on our specific topic and our results are solid, in line with baselines, with the correct hyper-parameters, and run multiple times. \nOverall in the paper, we added results on ImageNet and we included time for training our model as asked by reviewers.\n\nDear Reviewers,\n\nWe would greatly appreciate it if you could review the updated version of the paper and responses to your comments and provide us with your questions and suggestions. Doing so will allow us to address additional questions you may have before the discussion phase ends.\nWe again thank you for your time and effort reviewing our paper and look forward to your feedback.\n\n```\n", "role": "user" }, { "content": "Thanks for the authors' clarification. They have addressed most of the issues I raised, and I am inclined to revise the rating upwards.", "role": "assistant" } ]
8-8k3okjpY
AutoML 2023 Conference
Reviewer_qZsK
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_qZsK, and the following are the paper you need to review:\n```<<8-8k3okjpY>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: The authors explore the impact of downsampling layers on the performance of CNNs and highlight the limitations of current NAS methods for optimizing downsampling configurations. The authors propose a balanced mixture of SuperNets to address these limitations and show that this approach outperforms other approaches and default pooling configurations on CIFAR10, CIFAR100, and Food101 datasets. The authors emphasize the importance of appropriate search space design and present a solution for learning the optimal scale of CNN feature maps through pooling/stride configurations. The proposed method reduces weight-sharing and inter-influence issues associated with pooling configurations, which leads to better ranking correlations. Finally, the authors also compare their proposed method with non-NAS-based approaches.\noverall_review: The authors should clarify if this is the first kind of work to use NAS for the position of downsampling layers. The authors should probably take a look at the following paper and consider explaining the difference between their work: https://ieeexplore.ieee.org/document/9506777 \n\nIt may be a good idea to clarify in the search space subsection how the authors obtained 36 configurations for Resnet20 instead of placing it in Section 3.1. \n\nThe GPU search time is not reported, and hence, the authors should also compare the computation time. The time required to train 36 architectures vs the search time of the proposed algorithm would be good to present in the paper. This can signify the importance of the search method. \n\nThe experiments are limited. The results on large-scale datasets and resolution are missing. At least, the authors should have evaluated the mixture of supernets on the Imgenet-16-120 dataset. The authors experimented only on the Image Classification task and CNN-based network. Currently, Vision transformers are being utilized on various tasks and slowly replacing CNNs. Hence, the proposed search method should be tested on different kinds of architecture and tasks to truly show its effectiveness. 
\n\n\nreview_summary: The presentation of the proposed approach is good. However, more evaluation is required to justify the correctness of the paper. The reviewer is fairly confident of the review after having gone through the search space and search algorithm. The review covered essential aspects of the paper, including the novelty and evaluation of the proposed approach. \nactions_required_to_increase_overall_recommendation: The authors should enhance the paper by adding more experimental results in terms of datasets and architectures. \n\nThe accuracy aspect is fine, but the authors should also include FLOPs or the number of multiplication and accumulation (MAC) operations. Late pooling has better accuracy compared to early pooling, as shown in Figure 1. However, networks with early pooling have lower FLOPs and MACs due to a reduction in spatial resolution early in the network. The subsequent layers operate on lower spatial resolution feature maps. From Table 3, the baseline methods seem to work better than the proposed method in this paper. Therefore, FLOPs and MACs can be good for enhancing the paper. \n\nMinor:\nLine 135: r^L, L (number of layers) is not defined before. It should be clarified. \n\nLine 136-137: therefore it is reasonable to exclude upsampling operations from the search space.: This statement seems to be obvious and redundant. The authors established already that they are searching for optimal downsampling operations. \n\nThe reference on line 656 is missing.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer qZsK\nWe thank the reviewer for their valuable comments and suggestions. We address the concerns and answer the questions below:\n\n### Search Space Design\n\n> \"It may be a good idea to clarify in the search space subsection how the authors obtained 36 configurations for Resnet20 instead of placing it in Section 3.1.\"\n\nWe agree. We also added table 5 in supplementary material to clarify the design of search space in all experiments.\n\n### Search Time\n\n> \"The GPU search time is not reported.\"\n\nWe included the search time in table 2.\n\n### Additional Experiments\n\n> \"The experiments are limited. The results on large-scale datasets and resolution are missing.\"\n\nAfter submission, we evaluated our approach also on ImageNet. We included the results of our tests on table 7. In that case the algorithm does not improve over the initial pooling configuration, we believe that neural network configurations may have been heavily optimized manually for the ImageNet data. Nevertheless, we are also able to show the advantage of using a Mixture of Supernets instead of just one. In fact, MoS manages to find the optimal configuration when only using two models, while one model is unable to.\n\n### Other Concerns\n\n> \"the authors should probably take a look at the following paper and consider explaining the difference between their work: https://ieeexplore.ieee.org/document/9506777\n\"\n\nIn this paper authors try to optimize the pooling of a U-Net model by choosing among three different operations that change the position of the pooling layer. While this is an interesting way to optimize the pooling layers, it is a local optimization that greatly reduces the search space. 
For instance, using that trick and assuming that it works well (in our extensive experiments DARTS based approaches do not work for pooling), we believe we would not be able to find the optimal configuration on FOOD-101 because the pooling configurations are further than 3 steps from the default setting. \n\n> \"Currently, Vision transformers are being utilized on various tasks and slowly replacing CNNs. Hence, the proposed search method should be tested on different kinds of architecture and tasks to truly show its effectiveness.\" \n\nAs ViT does not use downsampling layers, it is not possible to use our method on it. Swin [1] instead proposes a kind of analysis at multiple resolutions, and there our method could be used. However, we do not have the time and the computational resources to obtain results for the rebuttal.\n\n*[1] Liu, Ze, et al. \"Swin transformer: Hierarchical vision transformer using shifted windows.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.*\n\n### Corrections\n\n>\"Minor: Line 135: r^L, L (number of layers) is not defined before. It should be clarified.\" \n\n> \"Line 136-137\"\n\n> \"The reference on line 656 is missing.\"\n\nCorrected.\n\n\n\n\n\n\n\n\n Global/general Response:\n```\nWe thank the reviewers for their detailed reading of the manuscript and valuable comments. We are pleased that reviewers found our paper innovative, well-written with illustrative figures, interesting and with clear motivation and new directions in the field of NAS. We are also glad that our experiments are shown to be promising and exciting to the reviewers.\nThe main criticism from one reviewer is the possible previous work already providing results on the topic and the lack of solid experiments. We clearly explain that no previous work provided analysis/results on our specific topic and our results are solid, in line with baselines, with the correct hyper-parameters, and run multiple times. \nOverall in the paper, we added results on ImageNet and we included time for training our model as asked by reviewers.\n\nDear Reviewers,\n\nWe would greatly appreciate it if you could review the updated version of the paper and responses to your comments and provide us with your questions and suggestions. Doing so will allow us to address additional questions you may have before the discussion phase ends.\nWe again thank you for your time and effort reviewing our paper and look forward to your feedback.\n\n```\n", "role": "user" }, { "content": "Title: Reply to Authors\nThe reviewer would like to thank the authors for their response and incorporating the changes in the paper based on the feedback provided. However, for a few points raised by the reviewer, the authors did not write a rebuttal. The ImageNet experiments are required to strengthen this work and the authors did provide the results in Table 7. However, the results show accuracy compared to the baseline. However, as suggested before, the authors should have used FLOPs or MACs as a metric to compare which can boost the significance of this work. The authors are yet to provide results in this direction. The reviewer agrees with the authors claim “we believe that neural network configurations may have been heavily optimized manually for the ImageNet data.” The reviewer’s rating is still 7 (weak accept) as the comparison of networks using other metrics are still missing in the updated version. 
", "role": "assistant" }, { "content": "Title: Response to Reviewer \nThank you very much for feedback on our response. \n\nTo investigate the computational cost of our method compared to other works, we measured the relative FLOPs with respect to the baselines for Diffstride as 1.8 compared to our method with 2.22. While the FLOPS are lower in their case, the training time is similar with 1.43 (Diffstride) vs. 1.5 (ours) with respect to baseline training time. However, our method works with discrete and square filters which remains compatible with basic building blocks with minimal change to the backbone architecture while Diffstride search space is continuous, which allows for fractional pooling allowing for rectangular kernels. \n\nFurthermore, the aim of this work is to find pooling configurations that improve the accuracy of a classification model (which we show is a hard task by itself) and our method does not apply any constraints on computational cost, and as mentioned in the review, the resulting late pooling architectures have higher FLOPs. At the moment we do not have a simple way to optimize pooling while constraining the FLOPs. Instead, we could optimize a multi-cost objective that takes into account also the computational cost of the found models. However, this is complex and goes beyond the scope of our current work.", "role": "user" }, { "content": "I truly enjoyed the above exchange with the author, which has certainly made the work more robust.", "role": "assistant" } ]
8-8k3okjpY
AutoML 2023 Conference
Reviewer_v6iq
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_v6iq, and the following are the paper you need to review:\n```<<8-8k3okjpY>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper studies the placement of layers that reduce spatial dimensionality in convolution-based neural architecture search (NAS) search spaces for the vision domain. These decisions are often treated as fixed in such search spaces, often with a predefined number of blocks before each spatial pooling location. The authors first observe that standard weight-sharing-based NAS techniques fail to recover this optimal placement, and use a ResNet-20 backbone on three vision datasets as a testbed for this problem – this setup admits a small search space of only 36 architectures and allows for convenient evaluation of the de facto choices made by standard vision backbones, which turn out to be suboptimal. The authors propose a new weight-sharing NAS method that targets this issue–balanced mixtures of supernets, which reduces the amount of weight-sharing between the different supernets. The authors evaluate their method on three vision datasets and show that their method improves performance by searching over different pooling placement choices alone. \noverall_review: Strengths:\n- The authors have identified an important problem in the NAS literature that appears to have been mostly understudied by prior work. \n- The balanced mixture of supernets idea appears to work for this problem – it is also interesting that prior weight-sharing methods fail. \n- A ResNet-based search space for studying this problem is extremely simple and seems to be exactly the right place to start for this problem. \n\nWeaknesses:\n- It would be great to include an evaluation of other search spaces besides the ResNet-based search space in order to show that the results hold up when using other blocks. However, I do not view this as a deal breaker because the current results are interesting on their own. \n- One thing that seems missing is that since the proposed method is no longer single-shot in the usual weight-sharing sense, it could make sense to report training times and parameter counts compared to weight-sharing methods. 
While this is understandably not the point of the work, this is useful information. \n- Additional datasets would be nice to have, since the evaluation comprises only three well-known vision datasets. \n\nOther comments:\n- Another interesting point that this work points out is the limitations of weight-sharing-based NAS. It would be interesting to study these limitations further. \n- Lines 340-341: “One could argue that NAS in general are a waste of computation, however, they help to avoid an even more expensive validation search for the optimal hyper-parameters of the model” – I’m not sure I agree with this, can the authors comment on this view? \n\nreview_summary: The problem that the authors have identified is important, and their experimental design seems very appropriate. The resulting method also seems to work, while prior work fails on this simple search space, which is interesting. The evaluation could be strengthened by extending to additional backbones beyond ResNet, a comparison of search timings to standard weight-sharing methods, and additional datasets. \nactions_required_to_increase_overall_recommendation: - Additional backbone evaluations\n- Timing comparisons\n- Additional datasets (lower priority)\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer v6iq \nWe thank the reviewer for their valuable comments and suggestions. We address the concerns and answer the questions below:\n\n### Additional Experiments\n\n> \"Additional datasets would be nice to have, since the evaluation comprises only three well-known vision datasets.\"\n\n> \"Actions Required To Increase Overall Recommendation: Additional datasets (lower priority)\"\n\nAfter submission, we evaluated our approach also on ImageNet and report results on ImageNet in table 7 in the appendix. In that case, the algorithm does not improve over the initial pooling configuration, we believe that neural network configurations may have been heavily optimized manually for the ImageNet data. Nevertheless, we are also able to show the advantage of using a Mixture of Supernets instead of just one. In fact, MoS manages to find the optimal configuration when only using two models, while one model is unable to.\n\n\n\n> \"It would be great to include an evaluation of other search spaces besides the ResNet-based search space in order to show that the results hold up when using other blocks. However, I do not view this as a deal breaker because the current results are interesting on their own.\"\n\nUnfortunately, the limited time for rebuttal did not allow us to perform experiments on additional backbone architectures, however it is possible to potentially add these results in the camera ready version if the paper is accepted.\n\n\n### Training Time and Parameter Count\n\n> \"it could make sense to report training times and parameter counts compared to weight-sharing methods.\"\n\n> \"Actions Required To Increase Overall Recommendation: Timing comparisons\"\n\nWe updated table 2 to include search times for each method. 
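\n\nFor intuition on where the extra search time and memory go, a minimal, purely illustrative sketch of the training loop is given below (the interface `supernets[m](x, cfg)` and the fixed `assign` function are simplifications; in our method the assignment of pooling configurations to SuperNets is learned and kept balanced during the search):\n\n```python\nimport random\nimport torch.nn.functional as F\n\ndef train_balanced_mixture(supernets, optimizers, configs, assign, loader, epochs=1):\n    # M specialised SuperNets; each pooling configuration cfg is handled by exactly\n    # one of them (index assign(cfg)), so weights, and hence interference, are\n    # shared only among configurations mapped to the same SuperNet.\n    for _ in range(epochs):\n        for x, y in loader:\n            cfg = random.choice(configs)     # sample a pooling configuration\n            m = assign(cfg)                  # its specialised SuperNet\n            logits = supernets[m](x, cfg)    # forward pass with this pooling placement\n            loss = F.cross_entropy(logits, y)\n            optimizers[m].zero_grad()\n            loss.backward()\n            optimizers[m].step()\n```\n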
The number of parameters in the experiments presented in Table 2 is the same as for the default ResNet, except for our method, which has M times the number of parameters of the default configuration during the search; the found model, however, still has the same number of parameters as a default ResNet.\n\n### Other Comments\n\n> \"Lines 340-341: “One could argue that NAS in general are a waste of computation, however, they help to avoid an even more expensive validation search for the optimal hyper-parameters of the model” – I’m not sure I agree with this, can the authors comment on this view?\"\n\nWe consider weight sharing as a trade-off between the amount of computation needed to find the optimal solution and the quality of the approximate solution. One extreme is brute force, in which there is no weight sharing and each model is trained independently, but the found solution is optimal. The other extreme is to use a single SuperNet model, in which the training is fast but the found solution is biased by the interference between models. Our Mixture of SuperNets is a compromise in which we reduce the interference by using multiple SuperNets, at the cost of increased training time. \n\n Global/general Response:\n```\nWe thank the reviewers for their detailed reading of the manuscript and valuable comments. We are pleased that reviewers found our paper innovative, well-written with illustrative figures, interesting and with clear motivation and new directions in the field of NAS. We are also glad that our experiments are shown to be promising and exciting to the reviewers.\nThe main criticism from one reviewer is the possible previous work already providing results on the topic and the lack of solid experiments. We clearly explain that no previous work provided analysis/results on our specific topic and our results are solid, in line with baselines, with the correct hyper-parameters, and run multiple times. \nOverall in the paper, we added results on ImageNet and we included time for training our model as asked by reviewers.\n\nDear Reviewers,\n\nWe would greatly appreciate it if you could review the updated version of the paper and responses to your comments and provide us with your questions and suggestions. Doing so will allow us to address additional questions you may have before the discussion phase ends.\nWe again thank you for your time and effort reviewing our paper and look forward to your feedback.\n\n```\n", "role": "user" }, { "content": "Title: Thank you for your response\nThank you for your response -- I have updated my score from a 7 to an 8 for including an additional dataset and timing results. ", "role": "assistant" } ]
8-8k3okjpY
AutoML 2023 Conference
Reviewer_cD7Q
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_cD7Q, and the following are the paper you need to review:\n```<<8-8k3okjpY>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper discusses the importance of downsampling layers in CNNs and their impact on network performance. The study shows that predefined downsampling configurations are not optimal and proposes a new approach of a balanced mixture of SuperNets to optimize pooling configurations. The proposed method outperforms other methods and improves the default pooling configurations in CIFAR10, CIFAR100, and Food101 datasets.\noverall_review: 1. The method only achieves marginal improvement compared to the previous method, which is a very important baseline, DiffStride. \n2. The paper lacks a report on the efficiency of the proposed method, such as the number of GPU hours required for the experiments.\n3. The experiments in the paper were limited to CIFAR-level and food-101 datasets, which are relatively small. It is recommended that the author conduct experiments on larger datasets, such as ImageNet, to demonstrate the potential of the proposed method.\nreview_summary: Please refer to the comments above. Considering the objective limitations, I am inclined to recommend a borderline leaning reject.\n\n\n\n\nactions_required_to_increase_overall_recommendation: Please see the comments above.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer cD7Q \nWe thank the reviewer for their valuable comments and suggestions. We noted from the response that, whilst the review recommended strong accept, the reviewer may have intended to recommend borderline rejection. We hope our clarifications will help to clarify the review concerns. We address the concerns and answer the questions below:\n\n### Clarity and Presentation\n\n> \"It would be beneficial if the author included a table to clarify the design of the search space in the paper. 
The reason for having N positions for pooling in the search space is not entirely clear in different sections of the paper.\"\n\nIn the supplementary material, we added Table 5, which clarifies the search space design by providing details about the number of pooling operations and the layers whose placements are searched. We fixed the number of pooling operations to that of the default backbone architectures and applied a few restrictions (no manipulation of the input image; smooth and monotonic changes in feature map size across layers) to where pooling is allowed to be placed.\n\n\n### Comparison with baselines\n\n>\"The method only achieves marginal improvement compared to the previous method, which is a very important baseline, DiffStride.\"\n\nThe DiffStride search space is continuous, which allows for fractional pooling and kernels of different sizes and shapes (not necessarily square). However, it requires replacing pooling layers with a downsampling module that involves complex number computation. Our method achieves similar results within a discrete search space, with minimal change to the overall backbone architecture. \n\n### Training Time\n\n>\"The paper lacks a report on the efficiency of the proposed method, such as the number of GPU hours required for the experiments.\"\n\nWe updated Table 2 to include GPU hours for each method.\n\n\n### Additional Experiments\n\n>\"The experiments in the paper were limited to CIFAR-level and food-101 datasets, which are relatively small. It is recommended that the author conduct experiments on larger datasets, such as ImageNet, to demonstrate the potential of the proposed method.\"\n\nAfter submission, we evaluated our approach also on ImageNet. We included the results of our tests in Table 7. We believe that the pooling configurations of ResNet models are already optimised for ImageNet, so the configurations identified by our method are near optimal; more importantly, our method is able to recover the default configuration (for M=2 SuperNets).\n Global/general Response:\n```\nWe thank the reviewers for their detailed reading of the manuscript and valuable comments. We are pleased that reviewers found our paper innovative, well-written with illustrative figures, interesting and with clear motivation and new directions in the field of NAS. We are also glad that our experiments are shown to be promising and exciting to the reviewers.\nThe main criticism from one reviewer is the possible previous work already providing results on the topic and the lack of solid experiments. We clearly explain that no previous work provided analysis/results on our specific topic and our results are solid, in line with baselines, with the correct hyper-parameters, and run multiple times. \nOverall in the paper, we added results on ImageNet and we included time for training our model as asked by reviewers.\n\nDear Reviewers,\n\nWe would greatly appreciate it if you could review the updated version of the paper and responses to your comments and provide us with your questions and suggestions. Doing so will allow us to address additional questions you may have before the discussion phase ends.\nWe again thank you for your time and effort reviewing our paper and look forward to your feedback.\n\n```\n", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
9ZE9L4tzO7D
AutoML 2023 Conference
Reviewer_Nafb
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Nafb, and the following are the paper you need to review:\n```<<9ZE9L4tzO7D>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: This paper proposes a meta-learning method to choose different strategies to deal with imbalance for different classes in problems with many classes.\n\noverall_review: The problem is important and the approach is very interesting and obtains good results.\n\nHowever, the paper is hard to read, which reduces it's usefulness and potential impact.\n\nreview_summary: In spite of the potential value, the effort required to improve clarity is significant.\n\nactions_required_to_increase_overall_recommendation: 1. Improve clarity.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Nafb\nWe thank you for your insightful comments and suggestions. We have revised the paper accordingly with the changes marked in red color.\n\n(1) **Re: Clarity of p5/L193-L196**. We included 2 figures and revised the paper to further increase the clarity of our approach. A detailed explanation is presented in \"Response about clarity and algorithm details\".\n\n(2) **Re: meta-feature is limited to \"heterogeneity across individual classes\".**\nThank you for your insightful comment on the potential limitation of CAP in relating different classes. We appreciate the suggestion to incorporate an error cost or class-wise attributes to capture more kinds of data heterogeneity. The comprehensive definition of data heterogeneity encompasses a multitude of problem characteristics, including error costs. We utilize class-conditional error and label noise ratio as attributes, both of which are a type of error cost. Additionally, our method displays notable adaptability, allowing all feasible features to be employed so long as they can be quantified as class attributes. For relating classes, We would like to highlight that CAP has the potential to be adapted to incorporate a pairwise similarity matrix or noise transition matrix. 
By mapping pairwise class similarities through the proper A2H functions, the CAP framework can effectively utilize the class relationships and their respective similarities. This modification allows the algorithm to account for inter-class relationships.\n\n(3) **Re: p4/l136: is the last term correct?** This term is aligned with prior studies [25,38] and we also provide theoretical justification in Appendix E (Appendix D in the old version).\n\n[25] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. arXiv preprint arXiv:2007.07314, 2020\n\n[38] Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and compensating for feature deviation in imbalanced deep learning. arXiv preprint arXiv:2001.01385, 2020. \n\n(4) **Re: notation, typo, and wording issues.** Thank you for your valuable feedback. Based on your suggestions, we have revised the paper and indicated the changes in our new submission.\n Global/general Response:\n```\nWe sincerely appreciate all reviewers for taking the time to review our paper and providing valuable reviews. Firstly, we would like to address the concerns regarding the clarity of the paper and the algorithm details. We have made significant revisions to section 3 and added Figure 2 and Figure 4 to provide further clarification.\n\nAs shown in Figure 2, **CAP** is the overall framework proposed in our paper, with **A2H** being the core algorithm. \\textbf{A2H} is a meta-strategy that transforms the class-attribute prior knowledge into hyper-parameter ${\\mathbfcal{S}}$ for each class through a trainable matrix $\\mathbf{W}$, forming a training strategy that satisfies the desired fairness objective. The left half of Figure 2 specifically illustrates how our algorithm calculates and trains the weights. In the first stage, we collect class-related information and construct an attribute table of $n \\times K$ dimension. This is a general prior, which is related to the distribution of training data, the training difficulty of each class, and other factors.\n\nSubsequently, by utilizing the constructed attribute table $\\mathbfcal{A}$, the first step of \\textbf{A2H} is to compute a $K\\times M$ Feature Dictionary $\\mathbfcal{D}=\\mathbfcal{F}(\\mathbfcal{A})$ by applying a set of functions $\\mathbfcal{F}$. It is important to emphasize that $M << K$, and in AutoBalance[1], the number of parameters to be trained is directly related to the number of classes $K$. Even with the group aggregation, the scale remains at the $\\mathcal{O}(K)$. However, in \\textbf{CAP}, $M$ is only related to the number of attributes $n$ and $|\\mathbfcal{F}|$, making it a constant. Therefore, we state in the introduction that its order of complexity belongs to $\\mathcal{O}(1)$. Then, in the second step, the weight matrix $\\mathbf{W}$ is trained through bilevel or post-hoc methods to construct the hyperparameter $\\mathbfcal{S}$. We further demonstrate our entire training process in Figure 4.\n\n[1] Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, and Samet Oymak. Autobalance: Optimized loss functions for imbalanced data. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. \n\n```\n", "role": "user" }, { "content": "I value the authors' response. It resolves the majority of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
9ZE9L4tzO7D
AutoML 2023 Conference
Reviewer_6kuv
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_6kuv, and the following are the paper you need to review:\n```<<9ZE9L4tzO7D>>```", "role": "user" }, { "content": "Review Content: summary_of_contributions: Real-world datasets may have imbalanced classes (e.g., certain classes may have many examples, while others have few, leading to a long-tail in the class distribution) or may have noisy labels. These data imbalance issues pose challenges for both standard classification accuracy and fairness objectives. To address such imbalances, one may wish to treat each class differently, for example upweighting the contribution of examples from rare (e.g., under-represented) classes.\n\nThis paper proposes an approach called Class-Attribute Priors (CAP) for meta-learning a mapping from class attributes (such as the frequency of a class in the dataset, the level of label noise, similarity to other classes, and training difficulty) to an embedding vector that can be passed to a downstream algorithm to modulate the training algorithm on a per-class basis.\n\nThe authors use CAP to meta-learn the hyperparameters of the following loss function (which was used in prior work, such as AutoBalance),\n$$\n\\ell(y, f(x)) = \\omega_k \\log \\left( 1 + \\sum_{k \\neq y} e^{\\ell_k - \\ell_y} \\cdot e^{\\Delta_k f_k(x) - \\Delta_y f_y(x)} \\right)\n$$\nwhere $\\{ \\omega_k, \\ell_k, \\Delta_k\\}$ are class-specific hyperparameters representing the overall loss weighting factor $\\omega_k$, the additive logit adjustment $\\ell_k$, and the multiplicative adjustment $\\Delta_k$.\n\nThe $n$ attributes for class $k$, denoted $\\mathcal{A}_k$, are mapped to an embedding space by a fixed, nonlinear mapping $\\mathcal{F}(\\mathcal{A}_k) : \\mathbb{R}^n \\to \\mathbb{R}^M$ , and then this embedding is mapped to a vector of hyperparameters $\\mathcal{S}_k = W \\mathcal{F}(\\mathcal{A}_k)$ via a weight matrix $W$ that is learned. These hyperparameters are the $(\\omega_k, \\ell_k, \\Delta_k)$ that define the loss function for the given class $k$. Conceptually, the embedding serves as a bottleneck that can map classes with similar attributes close to each other, such that the resulting class-specific losses are similar. 
The idea is that the relatively small number of class attributes can reduce the number of meta-parameters that need to be learned, e.g., $W$ has dimension $s \\times M$ rather than $s \\times K$ as would be the case if learning separate hyperparameters for each of the $K$ classes directly.\n\nThe authors propose two variants of the approach: in the first, they formulate the learning of $W$ as a bilevel optimization problem, where the inner problem optimizes model parameters using the current parametric loss function, and the outer problem evaluates the trained parameters on a validation set, targeting a fairness objective. The second approach performs post-hoc optimization of a function $g$ that modifies the logits output by $f(x)$ to minimize the loss on a test dataset (without performing bilevel optimization).\n\nThe paper evaluates CAP on long-tail variants of CIFAR and Imagenet. They use CAP to optimize for different fairness objectives, including quantile class performance and conditional value at risk (CVaR), and they compare to a few prior approaches for dealing with class imbalance. They also measure balanced error, and check which attributes are most useful on different tasks in an ablation study.\noverall_review: **Pros**\n\n* The CAP method is simple, and yields few hyperparameters.\n\n* The empirical results are good; CAP outperforms logit adjustment (LA), CDT, and AutoBalance on balanced error in Table 2, and outperforms LA when optimizing for two different fairness objectives in Figure 2.\n\n* The authors implement two instantiations of CAP: 1) using CAP to learn a parametric loss function via bilevel optimization; and 2) using CAP for post-hoc logit adjustment.\n\n\n**Cons**\n\n* Overall, the paper is not well-written, and this is a major weakness. The method is very simple, but it is not clearly explained; the writing is very wordy, and obfuscates simple ideas using overly abstract and complex terminology. I believe that the paper would benefit from a significant rewrite.\n\n* The paper does not provide an algorithm box detailing CAP.\n\n* I think that the method name \"class-attribute priors\" is not well-explained; in what way is this a prior? It does not seem to have a Bayesian interpretation. A clearer method name could be \"class-attribute embeddings.\" Also, with regard to method naming, both A2H and CAP are used at different points in the paper, while it would be clearer to use one consistently.\n\n* The introduction---especially the last paragraph from L64-L67---overstates the contribution of this work. To a large extent, this paper can be seen as a minor extension of AutoBalance leveraging a mapping from class attributes to embedding vectors. I believe that the authors should make this clear, rather than stating that \"This work makes key contributions to fairness and heterogeneous learning problems in terms of methodology, as well as practical impact.\"\n\n* The paper downplays its similarities to AutoBalance [1]. Section 3.2 of this paper (\"CAP for loss function design\") is nearly identical to AutoBalance, the only difference being the use of learned class embeddings. The loss function that is learned is identical to that in [1], and the bilevel optimization setup is the same. 
Also, part of this paper implies that the hyperparameter dimensionality for AutoBalance scales as $O(K)$ where $K$ is the number of classes; however, this scaling issue was already addressed in Section 2.2 in [1], where they use $K' < K$ dimensional hyperparameters, clustering classes together when they occur with similar frequencies in the dataset. While this is briefly mentioned in Lines 171-177 of this paper, the majority of the paper is still written in a way that suggests that reducing the hyperparameter dimensionality is a novel contribution, which is not the case.\n\n* Overall, the novelty of the paper is limited (and overstated), as it can be interpreted as a simple extension of AutoBalance that learns embedding vectors representing classes, based on attributes of those classes.\n\n* What meta-parameters were used to train the AutoBalance baseline? In particular, what was the dimensionality of the hyperparameter space (how many groups $g$ were used), and how does it compare with CAP?\n\n* L150: Why is this parameterization used for $\\Delta$? $\\Delta = \\text{sigmoid}\\left( \\sqrt{K} \\frac{\\mathcal{D} w_{\\Delta}}{\\| \\mathcal{D} w_{\\Delta} \\|} \\right)$\n\n* Because the loss function parameterization is the same as AutoBalance (among other related works), did the authors of CAP try adding the personalized data augmentation (PDA) from Eq. 2.2 in [1]? This would be interesting to see with CAP, as PDA was justified theoretically in [1]. It would be useful to evaluate the impact of using class attributes for data augmentation compared to learning separate parameters for each class.\n\n* No experimental results are provided for the iNaturalist-2018 dataset, which was used in [1].\n\n* None of the components of Figure 1 are described in the main text or figure caption. Figure 1 is not even referenced in the main text at all. The caption should give the necessary context to parse the figure; however, the caption is vague and does not ground any of its statements in the figure. Just some of the questions that arise: What does $S$ correspond to? What does the \"Function\" box around the attributes box mean? Why is CAP written on the arrow connecting the function to $S$? What does the dictionary inside the attributes box refer to? What does the dashed line from the \"full dataset\" to the \"sample size and distribution shift\" plot denote? Why is \"heterogeneous\" color-coded the same green as the dashed arrow? How should the green and blue points be interpreted (are they two clusters of examples with labels shown by the colors, to illustrate label noise)?\n\n - As with the rest of the paper, the caption of Figure 1 contains overly abstract and vague language. What is the \"global dataset\"? What is a \"composition of heterogeneous sub-datasets induced by classes\"? Why is the term \"strategy\" used interchangeably with \"hyperparameters\"?\n\n* The right-hand side of Figure 1 does not seem to be related to the LHS. The caption does not draw any connection between the RHS and LHS; I think that the information in the RHS would be better suited for a table, or simply discussed in the main text without a figure. Otherwise, the reader is left wondering which parts of the LHS are connected to each statement in the RHS.\n\n* The term \"meta-strategy\" is not clearly defined.\n\n* I think that it would be good to include more ablation studies over CAP. For example, what if only a subset of the parameters of the loss function are tuned (similarly to ablations in [1])? 
What happens as more features (class attributes) are passed into the embedding layer, increasing the hyperparameter dimensionality of CAP? What if the number of attributes is artificially inflated, either by adding new, random attributes, or by replicating existing attributes several times? Such investigations may be useful to check whether performance improvements are due to implicit regularization from a small number of attributes (and therefore few hyperparameters). If there are many hyperparameters, does CAP overfit to the validation set?\n\n* In some experiments, CAP is only slightly better than AutoBalance, and it is not clear where the improvement comes from (thus the need for more ablations).\n\n* The appendix section dedicated to experimental details is sparse.\n\n\n**Minor**\n\n* In the abstract, L12-L13, it should be clarified what type of improvement this refers to, e.g., compute/memory requirements, or performance on accuracy or fairness?\n\n* L30: \"that tailored\" --> \"that are tailored\"\n\n* L40: \"treating hyperparameters as free variables\" --> This terminology is unnecessarily confusing; \"free variables\" are not defined anywhere.\n\n* L47: \"as it requires $\\mathcal{O}(1)$ --> Shouldn't this be $\\mathcal{O}(M)$ where $M < K$?\n\n* L259: \"stand deviation\" --> \"standard deviation\"\n\n* L295: What does \"in the context of dataset heterogeneity adoption\" mean?\n\nreview_summary: While the problem is worth studying, and the results are decent, this paper needs a significant re-write to improve clarity. The clarity is a major issue.\nactions_required_to_increase_overall_recommendation: In order for this paper to meet the bar for acceptance, I believe it would have to be rewritten to improve clarity, and remove obfuscation. In addition, please see the comments regarding ablations and the comparison to AutoBalance listed in the Overall Review box.\nRating: 2 reject, significant issues present\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 6kuv\nWe thank you for your insightful comments and suggestions. We have revised the paper accordingly with the changes marked in red color.\n\n(1) **Re: Not well written.**\nWe express our gratitude for your valuable feedback and suggestions. We have made the revisions to the paper. The modifications have been highlighted in red for ease of identification.\n\n\n(2) **Re: Algorithm box.** We have revised the method introduction and added two new figures to further explain the algorithm. A detailed explanation is presented in \"Response about clarity and algorithm details\".\n\n(3) **Re: Method name and definition of \"prior\".** We appreciate the feedback regarding the term \"prior\" and would like to clarify that in our technique, \"prior\" has a broader definition that encompasses Bayesian priors. We refer to the priors as Class-attribute Priors because the attributes are computed prior to training, thereby constituting prior information.\n\n(4) **Re: Difference between CAP and A2H.** To avoid confusion, we would like to emphasize that A2H and CAP are not identical. CAP refers to the complete framework, which comprises the class attributes design, A2H, and training via either Bilevel optimization or post-hoc optimization. On the other hand, A2H is responsible for mapping the attributes to a class-specific strategy. 
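\n\nFor concreteness, the A2H step can be sketched as follows (illustrative pseudocode only; we show one weight vector per hyperparameter and an exponential to keep the loss weights positive purely for illustration, whereas in the paper a single trainable matrix $\\mathbf{W}$ maps the feature dictionary to the whole strategy ${\\mathbfcal{S}}$):\n\n```python\nimport numpy as np\n\ndef a2h(A, feature_fns, W_w, W_l, W_d):\n    # A: (K, n) table of class attributes; feature_fns: the fixed nonlinearities in F.\n    K = A.shape[0]\n    D = np.concatenate([f(A) for f in feature_fns], axis=1)  # feature dictionary, (K, M)\n    w = np.exp(D @ W_w)                                       # per-class loss weights\n    l = D @ W_l                                               # additive logit adjustments\n    z = D @ W_d\n    delta = 1.0 / (1.0 + np.exp(-np.sqrt(K) * z / np.linalg.norm(z)))  # multiplicative adjustments\n    return w, l, delta                                        # the per-class strategy\n```\n\nOnly the weight matrix is trained (via the bilevel or the post-hoc objective); the attribute table and the functions in $\\mathbfcal{F}$ are fixed, which is why the number of trainable meta-parameters does not grow with the number of classes $K$.\n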
We have added Fig.~2 to provide a more detailed explanation of the approach.\n\n(5) **Re: Novelty, contribution and relation to AutoBalance.** First, our work is not merely an additive extension of AutoBalance but instead proposes several different contributions. AutoBalance focuses on bilevel optimization and aims to optimize balanced accuracy. Their method directly optimizes the final weights $\\mathbf{\\omega}$ and $\\mathbf{l}$, resulting in a large search space, and they have to group tail classes for better stability when the number of validation examples is extremely small. In contrast, CAP tackles multiple fairness objectives, incorporates post-hoc optimization, and utilizes class attributes to develop a more comprehensive solution. These aspects go beyond the scope of AutoBalance and demonstrate the novelty of our approach.\n\nThen, regarding the relationship with AutoBalance, we do not intend to downplay the similarities with AutoBalance, and we acknowledge its importance in our implementation. However, it is crucial to note that our work also incorporates post-hoc optimization, and multiple fairness objectives, which distinguishes our approach from AutoBalance. \n\n(6) **Re: meta-parameter of AutoBalance baseline.** We employed the same parameters as AutoBalance when reproducing their results. In the case of CIFAR10-LT, CIFAR100-LT, and ImageNetLT datasets, we utilized 1,10, and 20 classes per cluster, respectively.\n\n(7) **Re: Why choose $\\Delta={sigmoid}(\\sqrt{K}\\frac{\\mathcal{D}\\omega_{\\Delta}}{\\|\\mathcal{D}\\omega_{\\Delta}\\|})$**\nFirst, we normalize $\\mathcal{D}\\omega_{\\Delta}$ to make it unit length and scale it back to $\\sqrt{K}$ length to maintain the magnitude of the vector. That is a common practice in machine learning and data processing to ensure that the features of the data are on the same scale. We also use a sigmoid function to normalize the delta value following the AutoBalance formular. As detailed in Appendix C, without proper early stopping or regularization, the delta value can increase continuously, leading to a stretched logits distribution where the logits become progressively larger.\n\n(8) **Re: Experiment with Personalized Data Augmentation (PDA) and iNaturalist.** In our paper, our primary focus is on fairness objectives and post-hoc adjustment. Our post-hoc adjustment method offers a flexible and efficient solution that enables rapid adaptation to various objectives. In contrast, PDA demands higher complexity and computational time, which is why we did not incorporate it in our current experiments. However, if deemed necessary, we are more than willing to include additional experiments concerning PDA to further strengthen our work and address the reviewer's concern.\n\n(9) **Re: Fig 1, discussion and caption**\nWe have modified the caption and added more discussions in the main context in the new manuscript. \n\n(10) **Re: Term \"meta-strategy\" is not clearly defined.** As our final strategy, the adjustment weights, are generated by the A2H algorithm, we refer to A2H as a meta-strategy. This is because it determines the appropriate training or adjustment strategy to use for a given dataset and fairness objective. This usage is consistent with the common understanding of the \"meta-\" prefix, indicating a higher level of abstraction that guides the selection of the underlying strategy. We will provide a more detailed explanation in the main context.\n\n\n(11) **Re: More ablation studies over CAP.** Thanks for your suggestion. 
Though we didn’t add a separate section for ablation experiments, we provided some experiments to support our claim. \n\n1) **What if only a subset of the parameters of the loss function are tuned?** It is important to note that our contribution does not pertain to the proposal of the loss function or the comparison of the parameters' roles. The family of loss functions has already been proposed, with theoretical insights provided in previous studies, particularly for bilevel optimization [25, 38, 19]. However, the relationship between loss function design and post-hoc adjustment is not entirely clear, so we have elaborated on this in Appendix C and conducted ablation experiments to provide a more comprehensive analysis. The results are shown in Table 6. And we further analyzed the performance in Appendix D. \n\n2) **What happens as more features (class attributes) are passed into the embedding layer, increasing the hyperparameter dimensionality of CAP?** The results are shown in Table.4 and discussed in Section 4.3. \n\n3) **What if the number of attributes is artificially inflated, either by adding new, random attributes or by replicating existing attributes several times?** We appreciate your suggestion, and we would be willing to include the experiments if deemed beneficial. However, we would like to clarify that the training may automatically down-weight random and harmful attributes which we also observed in previous experiments.\n\n(12) **Re: Ablation experiments on top of AutoBalance.** Our approach yields benefits through a reduction in the number of hyperparameters and the connection of all classes during training via weight-sharing, and the effectiveness of this approach is demonstrated in the experiments. We compared our technique with Autobalance, which employs a clustering strategy to reduce the number of hyperparameters, in Tables 2 and 4. This comparison highlights that simply reducing the number of hyperparameters may not be sufficient for stabilizing the training for tail classes. Our approach (CAP) is capable of providing additional assistance with tail classes while stabilizing the training process, as demonstrated in Table 4. Additionally, in Appendix B, we show that CAP enhances training stability and reduces the necessity for warm-up in contrast to Autobalance.\n\n(13) **Re: The appendix section dedicated to experimental details is sparse.** We add more details in the appendix. Furthermore, we provide code and reproduce configuration for this paper.\n\n[25] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. arXiv preprint arXiv:2007.07314, 2020\n\n[38] Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and compensating for feature deviation in imbalanced deep learning. arXiv preprint arXiv:2001.01385, 2020. \n\n[19] Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thrampoulidis. Label-imbalanced and group-sensitive classification under overparameterization. accepted to the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS), 2021.\n\n Global/general Response:\n```\nWe sincerely appreciate all reviewers for taking the time to review our paper and providing valuable reviews. Firstly, we would like to address the concerns regarding the clarity of the paper and the algorithm details. 
We have made significant revisions to section 3 and added Figure 2 and Figure 4 to provide further clarification.\n\nAs shown in Figure 2, **CAP** is the overall framework proposed in our paper, with **A2H** being the core algorithm. \\textbf{A2H} is a meta-strategy that transforms the class-attribute prior knowledge into hyper-parameter ${\\mathbfcal{S}}$ for each class through a trainable matrix $\\mathbf{W}$, forming a training strategy that satisfies the desired fairness objective. The left half of Figure 2 specifically illustrates how our algorithm calculates and trains the weights. In the first stage, we collect class-related information and construct an attribute table of $n \\times K$ dimension. This is a general prior, which is related to the distribution of training data, the training difficulty of each class, and other factors.\n\nSubsequently, by utilizing the constructed attribute table $\\mathbfcal{A}$, the first step of \\textbf{A2H} is to compute a $K\\times M$ Feature Dictionary $\\mathbfcal{D}=\\mathbfcal{F}(\\mathbfcal{A})$ by applying a set of functions $\\mathbfcal{F}$. It is important to emphasize that $M << K$, and in AutoBalance[1], the number of parameters to be trained is directly related to the number of classes $K$. Even with the group aggregation, the scale remains at the $\\mathcal{O}(K)$. However, in \\textbf{CAP}, $M$ is only related to the number of attributes $n$ and $|\\mathbfcal{F}|$, making it a constant. Therefore, we state in the introduction that its order of complexity belongs to $\\mathcal{O}(1)$. Then, in the second step, the weight matrix $\\mathbf{W}$ is trained through bilevel or post-hoc methods to construct the hyperparameter $\\mathbfcal{S}$. We further demonstrate our entire training process in Figure 4.\n\n[1] Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, and Samet Oymak. Autobalance: Optimized loss functions for imbalanced data. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. \n\n```\n", "role": "user" }, { "content": "Title: Thank you\nI have read the other reviews and the authors' rebuttal, and I will maintain my score. My concerns regarding writing clarity were not sufficiently addressed by the rebuttal or the updated PDF. I think more substantial re-writing is needed. I appreciate that the authors added a figure attempting to explain the method, but overall, I agree with reviewers Nafb and MKuD that the paper is unclear and hard to read in its current form. I think it could be improved in a future submission cycle.", "role": "assistant" } ]