Dataset Viewer

paper_id | paper_title | review_id | forum | reviewer | paper_topic_and_main_contributions | reasons_to_accept | reasons_to_reject | questions_for_the_authors | soundness | excitement | reproducibility | ethical_concerns | reviewer_confidence |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
zwqDROxClj | {'value': 'IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions'} | LKMFvBsLIp | zwqDROxClj | EMNLP/2023/Conference/Submission1595/Reviewer_HrRJ | The authors propose an induction-augmented framework that utilizes inductive knowledge derived from LLMs and the retrieved documents for better implicit reasoning. Specifically, they enhance the conventional RAG with an inductor that generates inductive knowledge.
The authors propose an IAG-GPT model which directly utilizes GPT-3 and IAG-Student which is first trained via knowledge distillation with GPT-3 pseudo labels, and then optimized through a differentiable beam search algorithm.
The experiments show that IAG-GPT has significant advantages over ChatGPT and performs extremely well on CSQA2.0 and StrategyQA. IAG-Student outperforms RAG baselines.
| This paper addresses a non-trivial problem for implicit reasoning by introducing an inductor with the assistance of LLMs.
The authors conduct extensive experiments and the experiment results are reasonable.
| 1. The generalization ability of the model is a major concern.
- The model assigns inductive information to every question, even when some questions do not require it. A potential solution could be implementing a question classifier to identify the type of question and determine whether inductive information is necessary for a particular query.
- The strict structure/formulation of the prompt, especially the Knowledge part, is another issue (lines 245-247).
2. Another minor issue is that there is a huge gap between the performance of IAG-GPT and IAG-Student, which makes the distilled model and its corresponding algorithm less convincing. More experiments on larger models are expected.
| 1. In the IAG-Student algorithm, the generator is trained first, followed by the inductor; would finetuning the generator together with the inductor help?
2. In line 228, how are the mu and sigma for the distribution calculated?
| 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zwqDROxClj | {'value': 'IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions'} | ttwwsOhAIT | zwqDROxClj | EMNLP/2023/Conference/Submission1595/Reviewer_i5uE | This paper introduces the Induction-Augmented Generation framework, which integrates inductive knowledge statements for Open-Domain QA tasks to enhance implicit reasoning. The framework proposes two models: IAG-GPT and IAG-student. IAG-GPT generates inductive knowledge statements using GPT-3 and utilizes both generated knowledge prompts and retrieved documents for QA tasks. Since IAG-GPT is dependent on the GPT-3 API, IAG-student is trained to distill GPT-3. | The proposed approach achieved state-of-the-art performance by integrating inductive knowledge into the prompt on the CSQA and Strategy QA datasets. The presentation of quality examples (such as Table 6 and 7) further supports the validity of the work. | * The consistency in reporting the results needs to be done. Figure 3-4 and Table 2-4 uses StrategyQA dev which makes it hard to compare with Table 1 StrategyQA test baselines.
* Table 2 shows that knowledge statements generated with inductive prompting support QA performance. However, to fully verify the effectiveness of inductive knowledge statements, additional comparisons need to be made on CSQA dev with 15 retrieved documents versus CSQA dev with 10 retrieved documents and 5 inductive knowledge statements. On StrategyQA, a comparison between 10 retrieved documents and 5 retrieved documents with 5 inductive knowledge statements needs to be conducted. | * Is the concatenation style of Retrieval Only in IAG the same as FiD [1]? If it is FiD-style, then there will be N number of passages to encode with the query. If not, as shown in concept figure 2, there will be a single long input which is a concatenation of the query and all N passages for IAG-GPT and IAG-student. It needs more clarification because in the knowledge fusion experiment, it seems like all of the question, knowledge statements, and retrieved documents are concatenated into a single long input.
* On the CSQA dev dataset, it is stated that top-5 snippets are used (Line 423). However, in Line 465, on CSQA, 10 retrieved documents are used. Are the top-5 snippets transformed into 10 retrieved documents? Or are top-10 snippets used?
* Inductive prompting generates implicit knowledge through in-context learning of 5 demonstrations. How are trivial and CoT prompting done? The examples are shown in Table 7, but it is not clear how the prompts are formed.
* Are there cases where inductive knowledge statements are hallucinated?
[1] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zwqDROxClj | {'value': 'IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions'} | LIuV4fZqAO | zwqDROxClj | EMNLP/2023/Conference/Submission1595/Reviewer_bY1q | The paper's key contributions are as follows:
1. The paper proposes an inductive prompting method inspired by cognitive functions of inductive reasoning. This method guides language models (LLMs), specifically GPT-3, to generate knowledge statements that establish reasoning paths.
2. The paper introduces the Induction-Augmented Generation (IAG) framework, which enhances the traditional Retrieval-Augmented Generation (RAG) architecture.
3. The paper presents two variants of the IAG framework. IAG-GPT leverages the inductive knowledge statements sampled from GPT-3 as evidence for the generator. | This paper offers several strengths and benefits:
1. The paper introduces a novel approach, Induction-Augmented Generation (IAG), which effectively combines inductive reasoning with language generation for answering implicit reasoning questions.
2. Implicit reasoning questions pose a significant challenge for open-domain question answering systems. By focusing on this challenge, the paper contributes to solving an important problem in the field of NLP, advancing the state of the art in understanding and generating reasoned answers.
3. The paper presents a well-defined framework (IAG) and a detailed methodology for integrating inductive knowledge into the answer generation process. | There are also some potential weaknesses:
1. The proposed Induction-Augmented Generation (IAG) framework involves multiple components, including retrieval, induction, and generation, which might make it challenging for researchers to reproduce and implement the approach.
2. The paper heavily relies on external language models, such as GPT-3, for generating inductive knowledge and improving performance. This reliance raises concerns about the availability, cost, and access to these models, which could limit the adoption of the proposed approach by researchers with limited resources or access.
3. While the paper highlights successful cases where inductive knowledge enhances answer prediction, it does not thoroughly analyze or discuss cases where the approach might fail or provide incorrect answers. Understanding the limitations and potential pitfalls of the IAG framework is crucial for its safe and reliable application. | null | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 5: Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work. |
zwqDROxClj | {'value': 'IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions'} | gJTMXmoGmD | zwqDROxClj | EMNLP/2023/Conference/Submission1595/Reviewer_1Ny9 | This paper is based on Induction Augmented Generation Framework that uses LLMs for the implicit reasoning approach. The framework outperforms baselines for Retrieval Augmented Generation and ChatGPT on two Open domain tasks.
Overall Contributions :
1) Novel inductive prompting method which improves the factuality of knowledge elicited from LLMs.
2) A GPT implementation of the framework that improves over strong baseline models and ChatGPT.
3) A TAILBACK optimization algorithm that trains the inductor, which allows IAG-Student to outperform Retrieval Augmented Generation baselines. | The methodology and the construction of the framework were well explained.
Evaluation conducted on two large Open-domain QA benchmark datasets.
| 1) Although the Student Inductor model is shown to surpass the benchmark, the explanation and the underlying working principle were a bit hard to follow.
2) For TAILBACK, differentiable beam search is used, but it is hard to follow what the steps pertaining to it are. | 1) Can you explain the TAILBACK workflow using pseudo code with detailed steps? Currently, it's a bit difficult to follow.
2) Are there any plan to switch to later GPT versions from GPT-3? What would be the implications in terms of evaluation outcomes ? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zrBrl2iQUr | {'value': 'Crossing the Aisle: Unveiling Partisan and Counter-Partisan Events in News Reporting'} | 9nIcwKQ0aO | zrBrl2iQUr | EMNLP/2023/Conference/Submission5006/Reviewer_5vXQ | This paper proposes a news article dataset consisting of 206 news articles annotated for partisan and counter-partisan events in the articles. Partisan events are defined as events reporting of which benefit the stance of the news article and counter-partisan events are events that are reported with partisan events to make the news article seem neutral.
The authors proposed a human annotation framework for annotating such events by focusing on reported entities and sentiment toward them. The authors identified the ideologies of the news articles and the partisanship of the events using a few off-the-shelf classifiers and ChatGPT. The authors also presented some qualitative analysis of the partisan events and reported how different polarities use such events. For example, in the right-biased media, counter-partisan events appear towards the end of the news articles. | [A1] This work proposes a novel dataset for analyzing partisan news media reporting of events.
[A2] The human annotation framework is well-described and it seems easy to replicate. | [R1] The concept of the partisan event as it is reported in the paper seems conceptually very similar to previously studied information bias (Fan et al., 2019) (discussed by the authors in the related works section) and subframes (Roy and Goldwasser, 2020) (not discussed by the authors in the paper). Hence, I am not totally convinced about the novelty of the work; rather, it seems like an incremental extension of the previous studies.
[R2] The number of news articles annotated is very few (only 206). Hence, it is hard to say that any trends found in this small amount of data are generalizable to the whole news media. | [Q1] What are the high-level discussion topics in the news articles studied in this paper? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zrBrl2iQUr | {'value': 'Crossing the Aisle: Unveiling Partisan and Counter-Partisan Events in News Reporting'} | 4WJ8SxwMk5 | zrBrl2iQUr | EMNLP/2023/Conference/Submission5006/Reviewer_wiNC | This work focuses on the task of detecting partisan vs counter-partisan events in news reporting. The authors aggregate 103 news stories, each with 2 news articles from opposing ideologies, and extract events from them using a RoBERTa-based model. They then annotate (a) each article as ideologically leaning towards left/right and on a Likert scale (1-5, left-right) and (b) each identified event in a news article as partisan/counter-partisan/neutral. After a qualitative analysis of their annotations, they test two RoBERTa-based models on two tasks: (A) article-level ideology prediction and (B) partisan event detection -- for (B) they also try ChatGPT.
The paper is well-written, albeit with a few clarification points needed primarily on the annotation process (§3). The major contribution is the introduction of a novel dataset, which can be useful for future research in this area. My major concern though is the lack of error and qualitative analysis of the results, which is quite important for this type of work: which are the most challenging cases for the best-performing model? Which cases are more easily captured? Is there a correlation between qualitative characteristics (e.g., sentiment) and the accuracy of the models? Incorporating a section on this type of analysis would greatly benefit the paper, as it can guide future work on this task/dataset. | - The introduction of a novel dataset. | - Lack of error and qualitative analysis of the results, which can set up the future research directions on this task.
- A few missing details/clarification points are needed, primarily in section 3 (annotation process) which is the most important. | - L134: "We manually inspect each story [...] differ": Please provide more details. Who inspected each story, which were the annotation guidelines/the inspectors' background, IAA, etc.
- L136: What is the definition of "events from TimeML"?
- L138: How was the 89.31 F1 score calculated? On which test set? Is there a performance drop expected when applied on the PARTISAN EVENTS dataset?
- L174: "we held individual weekly meetings [...] if there was ambiguity": could you provide an example use case in an appendix possibly? Was this part of the training or part of the actual annotation task? If the latter is the case, could this introduce bias in the annotations?
- §3.2: Clarify early on this section that each annotation was performed by two students and not by all of them.
- §3.2: You mention that the IAA on "*stories'* relative ideology" is 91%; but on §3.1, you mention that "At the *article* level, the annotator determines the relative ideological ordering". Do these two quoted pieces of text refer to the same annotation? If so, please be consistent on your terminology (i.e., "story" vs "article level" annotation).
- L186: "a significant difference in their absolute ideologies": please provide the exact absolute value used.
- §5: Is Task A a two- or a five-class prediction task?
- Results section: it is unclear to me why you have omitted the neutral class. Surely the other two classes are more interesting to your task, but it is very important for the reader to see the overall picture of the results. | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zrBrl2iQUr | {'value': 'Crossing the Aisle: Unveiling Partisan and Counter-Partisan Events in News Reporting'} | lopURGm6td | zrBrl2iQUr | EMNLP/2023/Conference/Submission5006/Reviewer_ZkKA | The paper studied the effects of partisan and counter-partisan events in news reporting across different media outlets. A newly annotated dataset (PARTISAN EVENTS) is provided. Experiments on partisan event detection with a variety of models demonstrate the difficulty of the proposed task. | (1) The proposed partisan and counter-partisan event detection task is new and interesting
(2) A task-corresponding dataset is built for research purposes
(3) The paper is well written and easy to follow
| (1) The novelty of the paper is limited.
(2) It is mainly a case-study paper. No new methods or techniques are proposed. The experiments only show the difficulty of the proposed task.
| null | 2: Borderline: Some of the main claims/arguments are not sufficiently supported, there are major technical/methodological problems | 2: Mediocre: This paper makes marginal contributions (vs non-contemporaneous work), so I would rather not see it in the conference. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zpayaLaUhL | {'value': 'Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position'} | 1oyU2ZDkEj | zpayaLaUhL | EMNLP/2023/Conference/Submission1988/Reviewer_8B3x | This paper mainly discusses the behavior of attention mechanisms in pre-trained language models, specifically focusing on the impact of position embeddings on attention. The authors conduct a series of experiments to analyze the relationship between attention and position embeddings, and also investigate the effect of different model architectures and languages on attention behavior.
For example, the authors find that 1) learnable absolute position embedding contains sinusoid-like waves, 2) attention heads are able to extract periodic components from the position embedding in the hidden states, and 3) the self-attention mechanism is responsible for adjusting the phase of the periodic components in both the query and the key, thus influencing the direction of attention.
These findings provide a better understanding of the mechanisms underlying position encoding in attention, which can inform future model design and development. | 1. Thorough analysis of attention mechanisms. The paper's findings on the impact of position embeddings on attention are particularly valuable.
2. Clear presentation of experimental results and findings.
3. The authors' use of theoretical interpretations and visualization techniques to illustrate attention behavior is particularly helpful in conveying their findings to readers. | 1. The authors' findings are related to the models and training objectives used in their study, and may not be entirely generalizable. However, the detection and analysis methods proposed by the authors can be applied to other models. | null | 5: Excellent: This study is one of the most thorough I have seen, given its type. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zpayaLaUhL | {'value': 'Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position'} | aQtWnkoUAX | zpayaLaUhL | EMNLP/2023/Conference/Submission1988/Reviewer_5Jmf | This paper analyzes the mechanism of relative positional embeddings and shows that the relative positional dependence of attention emerges due to some factors. Besides, the word embedding is also a factor that enables inference based on relative position for the attention strongly concentrated on the adjacent tokens. | This paper introduces an interesting work on the relationship between self-attention and position embeddings. The factors proposed in the paper are based on several experiments and the results are persuasive. | There are some problems in this paper
1. The relationships between different sections are not clear, and the authors should give a clear description of the relationships in the introduction. In the Introduction, these factors are put in Section 4. However, Section 3 also shows the factors of RPE. Besides, Section 4 focuses on nearby tokens and Section 5 focuses on adjacent tokens, while adjacent tokens can be viewed as one part of nearby tokens. It would be better to combine Sections 4 and 5.
2. Authors should make the equation more prescriptive. Line 244, $X$a->$X_a$, $Y$b->$Y_b$. Line 329, $q$s->$q=[q_n]$, $k$s->$k=[k_n]$ | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
|
zpayaLaUhL | {'value': 'Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position'} | n1Zz9dJ9U6 | zpayaLaUhL | EMNLP/2023/Conference/Submission1988/Reviewer_fPAJ | Positional encoding is important in Transformer architecture, and until a few years ago, learnable absolute position embedding (APE) was often used (e.g., BERT, RoBERTa, GPT-2).
Clark+'19 reported that some attention heads in BERT attend to context words according to their relative positions; Ravishankar&Søgaard+'21 reported that some columns of absolute position embeddings are periodic; Chang+'22 reported that the position information is encoded in a hidden representation while remaining periodic.
However, it is not clear how periodicity is used in models, which is what this paper attacks.
Specifically, this paper showed that several attention heads in RoBERTa realize an attention pattern that depends on relative position by extracting the periodic components derived from APE from the hidden representation with shifting the phase in query and key transformation. | - Analyses use convincing and solid mathematical tools.
- Mechanism for realizing relative position-dependent attention patterns from absolute position embedding is really interesting. | - Limited Experiments
- Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2.
- The input for the analysis is limited to only 100 or 200 samples from wikitext-2. It would be desirable to experiment with a larger number of samples or with datasets from various domains.
- Findings are interesting, but there is no statement of what the contribution is or what practical impact or use it has for the community. (Question A).
- Results contradicting those reported in existing studies (Clark+'19) are observed but not discussed (Question B).
- I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as feeding only position embeddings, are quite large at layer 8. The behavior under such a deviation is unlikely to explain the behavior under normal conditions. Word embeddings may merely be a prerequisite for the model to work properly rather than playing an important role in certain attention patterns.
- Introduction says to analyze "why attention depends on relative position," but I cannot find content that adequately answers this question.
- There is no connection or discussion of relative position embedding, which is typically employed in recent Transformer models in place of learnable APE (Question C). | - A. What is the substantial contribution to the community from these findings? For example, could they lead to any ideas to improve the Transformer architecture?
- B. Clark+'19 reported that most heads pay little attention to themselves and that there are heads that focus heavily on adjacent tokens, especially in the earlier layers. However, this paper shows that there are several heads that focus on themselves and that heads focusing heavily on adjacent token heads were found in various layers. What is the reason for this discrepancy?
- C. Nowadays, relative position embedding is often used instead of APE. Can this paper provide insights into the reasons for the superiority of relative position embedding? For example, can we interpret that relative position embedding can capture other language information richer than APE because there is no need to extract relative positions in the query or key transformation matrices? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zeGXjQYhXz | {'value': 'Video-Text Retrieval by Supervised Sparse Multi-Grained Learning'} | bjd85sbyKN | zeGXjQYhXz | EMNLP/2023/Conference/Submission4649/Reviewer_UH8H | This paper proposes a shared space to alleviate the problem of representation mismatches among the modalities (i.e., video and text). | The best performances on three benchmarks in video-text retrieval
Sufficient experiments on the retrieval dataset | [-] Poor writing. To use the word 'Sparse', it is necessary to first specify sparsity with respect to what (e.g., quantities, distribution?). When reading the Introduction, I did not know what was sparse, and also did not know why the concept of sparsity is needed or what it does.
[-] If the contributions of this paper are based on the representation enhancement of shared spaces about heterogeneous modalities, it is more convincing to validate the approach in several multi-modal video language tasks such as video question answering, video-grounded reasoning/dialogue. Why only experiments on text-video retrieval? Haven't you verified the effect on several other multi modal tasks?
[-] Can the authors visualize the effectiveness of the proposed method? | see above | 1: Poor: This study is not yet sufficiently thorough to warrant publication or is not relevant to EMNLP. | 2: Mediocre: This paper makes marginal contributions (vs non-contemporaneous work), so I would rather not see it in the conference. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zeGXjQYhXz | {'value': 'Video-Text Retrieval by Supervised Sparse Multi-Grained Learning'} | zV1gFrvzZ5 | zeGXjQYhXz | EMNLP/2023/Conference/Submission4649/Reviewer_VnYa | The authors have developed an efficient, sparse method for correlating textual and visual data within a unified conceptual framework. The most important novelty of the paper, in my opinion, is to cluster the textual tokens into concepts, and align the visual and textual representations in that space. | Interesting idea.
Important problem.
The solution can be applied to different types of visual-textual applications, for example phishing detection on social media, recommendation systems, etc.
Superior results.
Well written. | KNN clustering to find concepts given words makes the approach biased towards the word embeddings of the upstream models. It can be claimed that the success of the networks is mostly achieved by the initial word embeddings, before clustering. | 1- How can this method handle rare and frequent concepts?
2- Is the cluster center selected as the representative?
3- Is there a way to take advantage of the cooccurrence of cluster words?
4- Why is the Dense space similarity needed? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zeGXjQYhXz | {'value': 'Video-Text Retrieval by Supervised Sparse Multi-Grained Learning'} | LohakslYF7 | zeGXjQYhXz | EMNLP/2023/Conference/Submission4649/Reviewer_Vvmb | This paper introduces a multi-grained sparse learning framework designed to acquire a shared, aligned sparse space for the purpose of video-text retrieval tasks.
The authors adopt a supervised approach to learning and continuously updating the shared sparse space for text and video representations. This is achieved through the incorporation of the proposed similarity and alignment losses. Furthermore, the paper suggests incorporating a multi-grained similarity approach in the context of video retrieval tasks. | The conducted experiments aptly showcase the effectiveness of the proposed method. Notably, this paper stands out for its comprehensive ablation study and meticulous analysis of each individual module.
| In essence, this paper presents an integration of global video-text and local frame-text elements within both the introduced sparse space and the original dense space, all aimed at enhancing video retrieval. However, it's worth noting that the impact of the introduced sparse space alone is not thoroughly elucidated in the analysis. The method itself should be a multi-space, multi-grained learning framework for video retrieval.
| A. Table 6 showcases that the most optimal performance is attained by employing the multi-space multi-grained similarity computation. Nonetheless, it is important to underscore that the analysis regarding the influence of the introduced sparse space does not encompass its individual performance outcomes.
B. Referring to your assertion of achieving the state-of-the-art (SOTA) status, it might be appropriate to reconsider. To the best of my knowledge, several other published methods have surpassed your results, including CLIP-ViP [1], Cap4Video [2], DRL [3], and TemPVL [4].
[1] Xue, Hongwei, et al. "Clip-vip: Adapting pre-trained image-text model to video-language representation alignment."
[2] Wu, Wenhao, et al. "Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval?"
[3] Wang, Qiang, et al. "Disentangled representation learning for text-video retrieval."
[4] Ma, Fan, et al. "Temporal perceiving video-language pre-training." | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zdMislOLTv | {'value': 'Zero-Shot-BERT-Adapters: a Zero-Shot Pipeline for Unknown Intent Detection'} | jYwtGQBcSp | zdMislOLTv | EMNLP/2023/Conference/Submission117/Reviewer_wrS3 | This paper tackles the zero-shot intent detection tasks and proposes a two-stage zero-shot bert adapters (Z-BERT-A), which first leverages a dependency parser to extract a set of potential intents, then uses NLI methods relying on Bert models to classify on the candidate classes. Experimental results show this method can outperform a wide variety of baselines in both known intents zero-shot classification and unseen intent discovery. | 1. This paper focus on important tasks of both known intents zero-shot classification and unseen intent discovery, and can leverages dependency parsers to enhance the intent generation process.
2. Experimental results show the proposed methods are effective in zero-shot intent detection.
| 1. This work is better suited as a demo track paper, rather than a regular long paper.
2. The idea of using NLI to handle zero-shot learning tasks is quite common.
| In Section 4, the intent generation process generates the candidate novel classes for intent classification, but I wonder, for a set of unseen classes, how to normalize the same intention expressed with different utterances, and how to determine the total number of new intent classes? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zdMislOLTv | {'value': 'Zero-Shot-BERT-Adapters: a Zero-Shot Pipeline for Unknown Intent Detection'} | LzJ6Vg1zuU | zdMislOLTv | EMNLP/2023/Conference/Submission117/Reviewer_ihi6 | The paper presents a technique for zero-shot intent classification. The authors make use of a BERT model finetuned on the NLI task and a dependency parser to discover new intents not seen before. | Creativity: The authors present a creative pipeline that combines several components to predict new intents in a zero-shot setting.
Experiments: For many experiments, the authors show results for several different methods, comparing to a variety of LLMs. | Simplistic approach: The method presented in Algorithm 1 just extracts words from the sentence. If the intent word is not explicitly expressed in the sentence, this method will be incapable of generating the correct intent.
Lack of baseline in Table 4: The authors only present various settings for their model. I'm not familiar with this research area, so I have no idea if there are approaches in previously published work that outperform this method that the authors have left out.
Marginal improvement in Table 4: The difference in results for each approach are very small, so the benefit of the proposed method does not seem large.
Interpretability of remaining results: It's hard to compare the performance to the LLMs because they only use cosine distance. It's clear the model outperforms in semantic similarity (according to the semantic encoder models used), but for more trustworthy results, a small sample of human evaluations should be used as well to be sure that this method outperforms the LLMs in the zero-shot setting. Another option would be to modify the LLM experiment such that label F1 scores could be produced (use a verbalizer to map LLM output to intent classes). | 1. What is the frequency of examples in the dataset where the intent is explicitly mentioned in the sentence? If this is almost all of the cases, then my first reason to reject is not important. If there are a lot of examples without the intent mentioned, this method is fundamentally limited compared to LLMs which can generalize better than this approach (generate an intent without the intent being mentioned explicitly).
2. Are there any baselines that you could compare to for zero-shot intent classification? If so, why didn't you include them in Table 4?
3. What is the test set for Table 4? | 2: Borderline: Some of the main claims/arguments are not sufficiently supported, there are major technical/methodological problems | 2: Mediocre: This paper makes marginal contributions (vs non-contemporaneous work), so I would rather not see it in the conference. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 2: Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work. |
zdMislOLTv | {'value': 'Zero-Shot-BERT-Adapters: a Zero-Shot Pipeline for Unknown Intent Detection'} | POnuqpw0mh | zdMislOLTv | EMNLP/2023/Conference/Submission117/Reviewer_cqfr | This paper proposed a method to do zero-shot intent classification, it can be applied to BERT-based transformer models. The method contains two stages, where for stage-1, the dependency parser is used to get potential intents and in stage-2 the zero-shot classification is performed for final output. Experiments are done on public datasets to verify the effectiveness of the proposed method. | The paper designed a method as a BERT adapter to handle the zero-shot intent discovery task. The model has been evaluated on two datasets and achieved state-of-the-art performance. | The contribution of the paper is not very clear, how does this method compare with other existing language model adapters.
More ablation study could be done to prove the effectiveness of components in the model architecture.
| 1. How does this method compare with other existing language model adapters.
2. What are the tunable parameters and what are the frozen parameters in the model?
3. What is the size of the trainable parameters in the proposed method?
4. The model has been used in English and Italian, can experiments be added to one more language to better prove the multilingual ability?
| 2: Borderline: Some of the main claims/arguments are not sufficiently supported, there are major technical/methodological problems | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 5: Could easily reproduce the results. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zaBPb6Pu21 | {'value': 'Chinese Lexical Substitution: Dataset and Method'} | CHrUCXf8aU | zaBPb6Pu21 | EMNLP/2023/Conference/Submission3500/Reviewer_24p3 | The paper proposes a novel large-scale Chinese lexical substitution (LS) dataset created by human-machine collaboration. The key contributions are:
- Construction of a large-scale Chinese LS dataset, CHNLS, containing 33,695 instances across 3 text genres. Significantly larger than prior English datasets.
- Presentation of 4 LS methods, including dictionary, embedding, BERT and paraphraser-based approaches.
- An ensemble method combining the 4 approaches that outperforms individual methods on CHNLS evaluation.
- Quantitative and qualitative analysis showing the high coverage and quality of CHNLS. | - Addresses the lack of Chinese LS datasets and enables future research for this under-studied language.
- The collaborative annotation approach is creative, efficient and results in higher coverage compared to solely human annotation. Could inform future dataset creation.
- Comprehensive experiments demonstrate the utility of the dataset and the effectiveness of the proposed ensemble method. Thorough quantitative and qualitative analysis. | - While larger than prior datasets, CHNLS still only covers 3 genres of Chinese text. More diversity could be beneficial.
- Some subjectivity and noise are inevitable during human evaluation of machine-generated substitutes. Inter-annotator agreement is unclear.
- Ensemble approach is relatively simple. More sophisticated methods could be explored for combining multiple LS techniques.
- Limited analysis of how well methods generalize across the 3 genres. More cross-genre evaluation would be informative.
- The dataset quality should be improved. For example, in wiki_test_gold.txt, line 1, the second substitute "既" is incorrect. The authors should double-check to make sure the substitutes are correct. | - Were any quality assurance measures taken during the data annotation process? How is the consistency of the annotators' work ensured?
- It's recommended to increase the diversity of data to make this dataset more effective and compelling | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 5: Could easily reproduce the results. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zaBPb6Pu21 | {'value': 'Chinese Lexical Substitution: Dataset and Method'} | 7XKdnhSQ5p | zaBPb6Pu21 | EMNLP/2023/Conference/Submission3500/Reviewer_o7zy | The paper “Chinese Lexical Substitution: Dataset and Method” presents a novel approach for the creation of annotated Lexical Substitution datasets, focusing on the Chinese language to address the lack of an existing Chinese data source for LS. In addition, the authors propose a new ensemble method, combining four classes of LS methods. In doing so, the authors evaluate both the dataset they have created, as well as the effectiveness of their ensemble approach. | The contributions presented by the authors in this paper are many.
Firstly, the authors highlight shortcomings of existing LS datasets, particularly pointing to their relatively small scale and lack of coverage. While this has been pointed out in the literature before (e.g., SwordS), the authors justify that the issues still persist. Moreover, the lack of LS resources for the Chinese language is highlighted.
The new annotation scheme presented is highly original. Motivated by the issue of small-scale datasets, the authors propose a method that allows for larger-scale corpus creation via the automation of existing methods. The quality of these outputs is ensured by the placement of human-made decisions at the end of the pipeline. The creation and execution of this process is well-described and transparent.
The introduction of the CHNLS dataset is likewise a great contribution. This is supplemented by descriptive statistics of the dataset, as well as a discussion of the evaluation that was run to measure the quality and coverage.
The experiments run using the novel ensemble method are demonstrated by using well-known LS metrics from LS07, as well as the presentation of illustrative examples.
The paper is concluded by a meaningful discussion of the results, particularly in comparison to existing datasets and methods.
| Besides the occasional grammatical or stylistic errors (some highlighted below), there are only a few points of weakness exhibited by the paper.
For example, the details of the ensemble method are a bit lacking. The exact construction of the ensemble, as well as the scheme used to weigh the individual methods’ scores, is not fully elucidated.
In addition, the proposed ensemble is only evaluated on the newly created dataset, making it hard to compare against other existing methods, such as those that have been evaluated on English data.
Finally, while the top 10 substitutes tables are interesting, they are quite cluttered and slightly unclear. Who determined the substitutes in red? Moreover, it is difficult to interpret the results where no translation is provided. | Question A: can you explain why some translations are left out of the tables?
Question B: were any inter-annotator agreement statistics calculated? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 5: Could easily reproduce the results. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zaBPb6Pu21 | {'value': 'Chinese Lexical Substitution: Dataset and Method'} | QfmhxfSjSl | zaBPb6Pu21 | EMNLP/2023/Conference/Submission3500/Reviewer_tGuY | 1. This paper introduces CHNLS, a benchmark for Chinese lexical substitution (LS).
2. CHNLS contains 33,695 instances and 144,708 substitutes, encompassing various domains such as News, Novel, and Wikipedia.
3. The dataset exhibits both high quality and extensive coverage.
4. CHNLS may promote the development of lexical substitution in Chinese.
| 1.The first benchmark for Chinese lexical substitution (LS).
2. The mainstream models have undergone comprehensive evaluation.
3. The paper is clear and well-written.
| 1. Lacks elucidation of certain pertinent indicators, such as "best-m," "oot," and "oot-m."
2. Using a Chinese translator to translate English sentences into Chinese may introduce noise.
3. Lack baseline test for LLMs. | Question A: Have you tried using chatGPT or other LLMs to produce data?
| 2: Borderline: Some of the main claims/arguments are not sufficiently supported, there are major technical/methodological problems | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zWGDn1AmRH | {'value': 'ReFSQL: A Retrieval-Augmentation Framework for Text-to-SQL Generation'} | STv0V5gHPA | zWGDn1AmRH | EMNLP/2023/Conference/Submission1338/Reviewer_wb9q | This paper proposed a framework that consists of a retriever and a generator. Within, the retriever aims to obtain similar samples according to the similarity score calculated by questions, SQLs and graphs built from schemas. The constructed similar samples as positive samples combined with the negative samples are employed for representation learning for the generator via contrastive learning. The experiments indicate that equipping the proposed ReFSQL can bring an obvious improvement to the backbone methods | 1. This paper proposes a retriever-generator framework to improve the representation via contrastive learning.
2. Good experimental results. The baselines equipped with the proposed ReFSQL achieve obvious improvements, especially on Spider (which seems to achieve SOTA compared with the leaderboard). | 1. Poor writing, bad typesetting, and the figures are not vector diagrams.
2. How can the motivation "To further bridge the gap between specific and general knowledge" be implemented according to the proposed contrastive learning whose optimization is only minimizing the margin of the representation between similar samples?
3. Most of the recent baselines on Spider build interaction graphs to jointly model the representation of the questions and schemas (RATSQL, LGESQL, etc.). What are the advantages of the margin methods in the retriever part that split Query+SQL and Schema into two stages, and of the self-designed "Interaction Graph Construction" methods? There is no analysis in the Methodology and no comparison in the Experiments.
4. Fine-grained ablation tests need to be provided. | null | 2: Borderline: Some of the main claims/arguments are not sufficiently supported, there are major technical/methodological problems | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zWGDn1AmRH | {'value': 'ReFSQL: A Retrieval-Augmentation Framework for Text-to-SQL Generation'} | 3qYhAb8JA2 | zWGDn1AmRH | EMNLP/2023/Conference/Submission1338/Reviewer_sEnG | This paper addresses the task of text-to-SQL generation and introduces a retrieval-augmented model to enhance SQL generation. The proposed method utilizes a structure-enhanced retriever to retrieve examples, which are then employed to improve the SQL generation process. To further enhance the model's performance, the author also incorporates a Mahalanobis contrastive learning method to maximize the representation of both retrieved and current examples. | 1.The idea of using retrieval methods to enhance the process is reasonable.
2.This paper demonstrates significant improvements over existing methods. | 1.This paper is challenging to follow, and the proposed method is highly complex, making it difficult to reproduce.
2.The proposed method comprises several complicated modules and has more parameters than the baselines. It remains unclear whether the main performance gain originates from a particular module or if the improvement is merely due to having more parameters. The current version of the ablation study does not provide definitive answers to these questions.
3.The authors claim that one of their main contributions is the use of a Mahalanobis contrastive learning method to narrow the distribution gap between retrieved examples and current samples. However, there are no experiments to verify whether Mahalanobis yields better results than standard contrastive learning.
4.The proposed method involves multiple modules, which could impact training and inference speed. There should be experiments conducted to study and analyze these effects. | null | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 2: Would be hard pressed to reproduce the results. The contribution depends on data that are simply not available outside the author's institution or consortium; not enough details are provided. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zWGDn1AmRH | {'value': 'ReFSQL: A Retrieval-Augmentation Framework for Text-to-SQL Generation'} | 9RdWQHdP0d | zWGDn1AmRH | EMNLP/2023/Conference/Submission1338/Reviewer_fMpS | This work proposed a framework called ReFSQL for the task of Text-to-SQL semantic parsing. This framework contains two parts, structure-enhanced retriever and the generator. More specifically, a structure-enhanced retriever that incorporates question semantics and schema structure is proposed. This retriever is used to obtain samples with similar SQL grammar. Two-stage retrieval is used: use question semantics to retrieve a rank list and then use schema structure for reranking. Furthermore, contrastive learning with Mahalanbis distance is used to improve the decoding process, facilitating the transfer of the sample toward the specific knowledge distribution. Experiments on Spider dataset and its variants show the effectiveness of the proposed method. | 1. Structure-enhanced retriever is designed to improve similar sample retrieval.
2. The methods generalize well on different sizes of models such as T5-small and Flan-T5.
3. Extensive experiments on different variants of Spider datasets to test the robustness of the model.
| Besides applying existing techniques (Li et al., 2022) to Text-to-SQL, there are no significant weaknesses in this work if the authors can answer the questions properly (see Questions). | 1. Why do the authors fix the parameters of BERT (line 213)? If they are not updated, then what is the purpose of contrastive learning in that section?
2. What’s the model performance of keeping linking-structure-based schema retriever and Mahalanobis contrastive learning while removing the SQL prompting? Maybe this is helpful to rationalize the input design.
| 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
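An illustrative sketch of the two-stage retrieval summarised in this review: rank the corpus by question-semantics similarity, then rerank the candidates by schema-structure overlap. The data layout, the Jaccard reranking score, and the cut-offs are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def retrieve_then_rerank(query_emb, query_schema_links, corpus, top_n=50, top_k=5):
    """corpus: list of dicts with 'emb' (L2-normalised question embedding, np.ndarray)
    and 'schema_links' (set of schema items mentioned in the question)."""
    # stage 1: rank the corpus by question-semantics similarity
    sims = np.array([float(query_emb @ ex["emb"]) for ex in corpus])
    candidates = sims.argsort()[::-1][:top_n]
    # stage 2: rerank candidates by schema-structure overlap (Jaccard here)
    def schema_score(i):
        links = corpus[i]["schema_links"]
        union = query_schema_links | links
        return len(query_schema_links & links) / len(union) if union else 0.0
    reranked = sorted(candidates, key=schema_score, reverse=True)
    return [corpus[i] for i in reranked[:top_k]]
```

The reranked examples would then be fed to the generator as prompts, which is the role the review attributes to the retrieved samples.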
zVi11zjaPe | {'value': 'EIT: Enhanced Interactive Transformer'} | t6kdAdzFSP | zVi11zjaPe | EMNLP/2023/Conference/Submission2581/Reviewer_gRpq | This paper takes a closer look at the multi-head self-attention mechanism in transformer architectures and seeks to improve it according to the notions of complementarity and consensus. The core idea with complementarity is to encourage a model to capture different aspects of the data (say, for example, syntax, semantics, etc.); at the same time the model should seek to find consensus with these different views, so as to minimize noise and disagreement. Based on these two principles the authors propose some changes to the transformer. Specifically, to promote complementarity, they introduce a mechanism that allows all key and query parameter matrices to interact with one another (as opposed to having a 1-to-1 mapping): this effectively blows up the latent attention space from M to M^2 (thereby, theoretically, capturing more views of the data). In order to build consensus between these M^2 attention maps, they introduce a set of convolutional transformations (both within sets of attention maps from a query - i.e. what they call inner-subspace interaction - and across sets of attention maps - i.e. what they call cross-subspace interaction) to eventually end up with M final attention maps.
On a set of experiments the authors demonstrate that these tweaks to the transformer architecture lead to improved performance over the vanilla variant, as well as other modifications to transformers in the literature. The authors also conduct a number of ablations and analyses to inspect the properties of their enhanced transformer. | I really enjoyed reading the first half of the paper. It starts from first principles (i.e. complementarity and consensus), and clearly motivates the explorations proposed in this paper. While transformers have become foundational building blocks for language modeling, and even AI more generally, there is no reason to believe that they cannot be improved, and the authors make a good case for how they are approaching the problem. The proposed modeling changes to multi-head self-attention are novel (to the best of my knowledge) and interesting modifications, but more importantly are closely tied to the first principles the authors begin with.
The experimental results generally seem to support the authors claims that the enhanced transformer they propose outperforms the vanilla version in multiple downstream tasks and in various settings, even though this does come at a computational premium for the added operations they need to perform. | Unfortunately, the second half of the paper dealing with the experiments are not particularly well written, even if they generally convey results that are impressive.
Section 4 simply has insufficient detail to fully understand the following sections. The authors use the Appendix as free additional pages to the 8-page limit and ask the reader to simply refer to it for the full experimental setup. The Appendix is meant for supplementary material only, not for things that should be core to the paper. In other words, a reader should be able to fully understand and make a judgement about the paper *without* having to refer to the Appendix at all. In this case, I could not.
Section 5 consists of the core experimental results, and while this is generally well structured and presented, I would have appreciated a little more detail and discussion. An important consideration here is the added computational complexity for EIT; none of the main tables of results contain any mention of this as compared to the vanilla transformer. The authors also only present the highlight of each result, without discussing things in more detail. For example, EIT appears to trade precision for recall on grammar correction. Why is this?
Section 6 unfortunately is somewhat of a mess. It consists of a ton of experiments and findings (which I commend the author for), but presented in such scant detail that it is difficult and bordering on impossible to sometimes gather what is being done. For example, how does the ablation in 6.1 even work? If you remove the M2M mapping, you don't have M^2 attention maps, so how do you even apply ISI and CSI after that? Similarly if you ablate only ISI, how can you go from M^2 attention maps directly to applying CSI? For other experiments in this section, I'm not even sure why the authors even included them in the main paper. For example, 6.2 seems to be a very minor finding that would really fit better in the Appendix (because it is Supplementary).
My advice to the authors would be to carefully reconsider what is the core set of experiments that are really important to gain insight into their work and focus on presenting them well in the 8-page limit, rather than throwing the experimental kitchen-sink at the reader while relegating important context and details to the Appendix. | null | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 2: Would be hard pressed to reproduce the results. The contribution depends on data that are simply not available outside the author's institution or consortium; not enough details are provided. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
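As a concrete reading of the architecture this review outlines, here is a toy sketch of an M-to-M attention block: M query heads attend against M key heads, producing M^2 maps that grouped (inner-subspace) and full (cross-subspace) 1x1 convolutions fuse back to M maps. The class, the convolution shapes, and the lack of re-normalisation after fusion are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MToMAttentionSketch(nn.Module):
    """Toy sketch of the M-to-M attention idea described in the review."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.m = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # inner-subspace interaction: mixes the M maps produced by one query head
        self.isi = nn.Conv2d(n_heads * n_heads, n_heads * n_heads, 1, groups=n_heads)
        # cross-subspace interaction: mixes across query heads, back down to M maps
        self.csi = nn.Conv2d(n_heads * n_heads, n_heads, 1)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.m, self.d_head)
        k = self.k_proj(x).view(b, t, self.m, self.d_head)
        v = self.v_proj(x).view(b, t, self.m, self.d_head)
        # M*M raw attention maps: query head i against key head j
        logits = torch.einsum("bqid,bkjd->bijqk", q, k) / self.d_head ** 0.5
        maps = logits.reshape(b, self.m * self.m, t, t).softmax(-1)
        maps = self.csi(self.isi(maps))  # fused back to M maps (used as-is, a simplification)
        out = torch.einsum("bhqk,bkhd->bqhd", maps, v)
        return out.reshape(b, t, -1)

attn = MToMAttentionSketch(d_model=64, n_heads=4)
print(attn(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```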
zVi11zjaPe | {'value': 'EIT: Enhanced Interactive Transformer'} | gEKnuxZr9T | zVi11zjaPe | EMNLP/2023/Conference/Submission2581/Reviewer_Exgp | The paper points out that the current design of multi-head attention, which is an instance of multi-view learning, prioritizes complementarity but ignores consensus. This problem motivates the authors to propose enhanced multi-head self-attention (EMHA). EMHA removes the one-to-one mapping constraint and enables the queries to interact with multiple keys. Experimental results show that EMHA consistently produces impressive results across tasks. | Starting from the multi-view learning principles, the paper points out the problem of MHSA, i.e., ignoring the consensus.
The proposed Inner-Subspace Interaction and Cross-Subspace Interaction address the problem, which is empirically demonstrated by the experiments.
Solid experiments with strong baselines. Good analysis on training perplexity and the selection of hyperparameters. | As mentioned in Limitations, the computational efficiency is the main drawback of the proposed method. As shown in Table 6, the training requires 1.45x time compared to Transformer, which makes the proposed architecture is not a good option for practical usage, especially with the current trend of scaling Transformers to large capacities. | null | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zVi11zjaPe | {'value': 'EIT: Enhanced Interactive Transformer'} | 5tnBCdfzLI | zVi11zjaPe | EMNLP/2023/Conference/Submission2581/Reviewer_S8VV | The paper presents a novel multi-headed attention formulation based on the complementarity and consensus principles. Specifically:
1. The authors present a novel many-to-many mapping between queries and keys, where a query set is allowed to interact with M key sets.
2. The authors then present two modules for aggregating information from the M^{2} attention maps previously generated:
2.1 An inner subspace interaction module for aggregating information from maps generated by the same query set. This is implemented in practice using grouped convolutions.
2.2. A cross subspace interaction module that takes as input the previous inner subspace module outputs, and generates M attention maps combining information across different head interactions.
3. The authors present an efficient formulation of the previous approach using a single layer operation (a cascade of a group convolution and a full convolution, with a reduced head number as an intermediate representation, indicated as M^{H}).
4. The authors present results on Machine Translation, Grammatical Error Correction, Abstractive Summarization and Language Modelling, demonstrating the benefits of the EIT architecture.
5. Different ablations demonstrate the utility of each module, improved consensus across heads, impact of layer sharing, robustness to pruning and quality of representations for the proposed method. | 1. The consistent performance improvement of the proposed method across a diverse number of tasks demonstrates the utility of the EIT formulation.
2. The hierarchical interaction modules (Inner Subspace Interaction and Cross Subspace Interaction), in conjunction with the many-to-many mapping formulation, nicely capture the complementarity and consensus principles.
3. The ablation experiments are quite insightful, and help understand the contributions of the different modules. The robustness to pruning is especially of strong interest from an inference-cost point of view. | 1. One of the most relevant baselines for this paper is the Talking Heads Transformer [1]. This should be, in my opinion, the de-facto baseline for all tasks. While there is a comparison to it in Table 1, it would be good to have this comparison for all the tasks presented in the paper, rather than comparing against the vanilla transformer. Otherwise, it is hard to gauge how much benefit comes from the proposed interaction modules + many-to-many formulation, compared to a simple linear-transform way of achieving consensus between the different attention maps.
2. From the ablation studies presented (especially 6.3, effect of number of EIT layers), it seems that the primary benefactors of the proposed approach are the lower layers of the encoder model in the transformer, and that including it across all layers does not particularly improve the performance. This somewhat raises the question about the efficacy of the approach: given the computational complexity, the gains are modest, especially if it is incorporated across all layers of the model. | 1. In section 3.1.1, the discussion signifies that the proposed approach avoids generating similar attention maps. However, in section 6.4.1, the ablation study demonstrates that the proposed method shows higher consensus among heads. Given that this is primarily brought about by the cross subspace interaction module (Fig 6), and that the CSI module does not contribute heavily towards performance (Table 6), how important is it to have consensus among heads: is this a necessity to achieve strong performance?
2. The paper presents an efficient version of the proposed method (dubbed E-EIT). It would be good to numerically quantify the increase in training and inference times compared to a vanilla transformer (additional details are mentioned in Appendix C.1. It would be good to bring them into the main paper)
3. In section 6.3 (Effect of M), what does varying M mean for the vanilla transformer? Isn't it fixed to be M=1 for the standard transformer?
4. [Minor] Instead of calling M^{H_{csi}} as the head size in the CSI sub-module, it might be better to refer to it by some other name (eg: intermediate number of heads for example). Head size usually refers to the hidden size of the Q,K,V vectors, so this terminology causes some confusion. | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zVi11zjaPe | {'value': 'EIT: Enhanced Interactive Transformer'} | G4ny1HecKG | zVi11zjaPe | EMNLP/2023/Conference/Submission2581/Reviewer_gNnp | The authors of the paper claim that multi-head self-attention in Transformer architectures emphasises the discrepancy of subspaces and fails to maximise the agreement among the subspaces. To address this problem, the authors propose some enhancements to the existing interactions in multi-head self-attention by introducing two things: (1) an M2M mapping scheme, which enhances the query-key pair interactions and generates multiple attention maps, thereby maximising information capacity; (2) to address the agreement among the subspaces (attention maps), two relationships, which they call dual-enhanced interactions: Inner-Subspace Interaction Modelling and Cross-Subspace Modelling. | 1. The paper is very clear and well-written.
2. Though the performance improvement is minor, the paper addresses previously ignored downsides of Transformers from a different and novel perspective.
3. The paper conducts thorough experiments with varied inclusion and exclusion of the proposed interactions. | 1. The paper does not mention the computational budget anywhere.
2. The results do not seem to be robust; rather, they appear to be the best of an unknown number of runs, with unknown standard deviation between runs. The paper also misses baseline comparisons on tasks like Model Variations, English Constituency Parsing, etc.
3. Though the authors acknowledge that the architecture is computationally inefficient under external frameworks (viz., PyTorch, Keras), they do not release an optimised code-base anywhere.
4. The paper could provide a better analysis of previous work and a more comprehensive view of related work. | 1. Equation (2) is not defined anywhere. It does not seem to be referenced anywhere either, except for a similar one on lines 549-550, which makes it appear redundant.
2. While the abstract says “modest increase in model size”, the result tables (Tables 1, 2 and 5) show the same number of parameters, which is confusing. | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zSUOfRVl28 | {'value': 'Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting'} | FEVYxD1GuU | zSUOfRVl28 | EMNLP/2023/Conference/Submission1221/Reviewer_9BLt | This paper aims to predict the personalized user response to a specific news item, where a comprehensive user representation plays a vital role. Though existing works have employed user profiles, history posts, and social networks to enhance the social contexts of users, the authors claim that they are ineffective in dealing with lurkers. To this end, the authors propose to incorporate user beliefs and offer the SOCIALSENSE framework. The framework utilizes LLMs to infer the social values of user beliefs and augment the social network with a belief-centered graph. A heterogeneous graph transformer is then adopted to learn user and news representations from the augmented graph and infer the personalized user response. Experimental results demonstrate the effectiveness of the proposed framework, especially for lurkers and unseen users. | A1. The idea of incorporating user beliefs for response forecasting is reasonable and can enhance the explainability of the predictions.
A2. The performance gain is significant in both zero-shot and supervised settings. The analysis experiment supports the claim that involving user beliefs facilitates the inference of lurkers.
A3. Utilizing LLMs to infer human values sounds interesting and worth further exploration. The proposed social prompt is also noteworthy, which mimics the feature aggregation in GNNs in a prompting way. | R1. Though LLMs show some merits in serving as data annotation tools, the outcomes need further verification, especially in such an unsupervised way. Analysis of the annotation results should be included.
R2. It is unclear why LLMs can provide convincing predictions of user beliefs for lurkers. If a big error happens at this stage, it will propagate to the following processing stages. This is also why R1 is important.
R3: The introduced value nodes might bring noise and increase the complexity. As I understand, almost all the users are 2-hop away via the "user-value-user" path. By adopting a 3-layer GNN, the users will gather information from a large number of 2-hop neighbors, which might bring noise. And the graph is much denser, resulting in increased complexity.
R4: The model details are missing, e.g., the choice of meta-paths when applying HGT. | Q1: Given that LLMs cannot make accurate predictions of user responses, how can they make accurate predictions of user beliefs for lurkers? How have you analyzed the prediction quality of user beliefs?
Q2: Compared to the baselines, what's the number of parameters of your proposed framework? How much time does it cost to fine-tune your model?
| 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zSUOfRVl28 | {'value': 'Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting'} | K8oJFEc2Aj | zSUOfRVl28 | EMNLP/2023/Conference/Submission1221/Reviewer_EsDk | This paper proposes a new framework for predicting the sentiment intensity and polarity of twitter users. It creates a graph of users, posts, and beliefs, and includes user-user edges, user-post edges, and user-belief edges. The belief vertices are composed of moral and human values derived from latent personas of users. The latent personas are extracted using ChatGPT. Information is propagated through the constructed graph using a Heterogeneous Graph Transformer. The results are evaluated using correlation coefficients and F1 scores.
This paper proposes a framework using a large language model to generate a belief-centered graph to augment the existing social network. However, insights about the belief networks and clusters of users associated with certain beliefs are not discussed. | 1. The problem statement is well-defined and the method is clearly described.
2. The results for the Lurker and Unseen User scenarios are quite strong, with big improvements shown over the baselines.
3. The empirical evaluation includes several baselines and ablation studies, which helps to indicate how well the model performs. | 1. More insights about the interaction between users and beliefs or the distribution of identified beliefs are not included. As this is one of the main contributions of this paper, it would be beneficial to show some analysis of what the beliefs data looks like in this network.
2. The results of the proposed model are incrementally better than the baselines for response forecasting.
3. The framework is only evaluated on the task of sentiment intensity and polarity prediction. It would be helpful to include other evaluation tasks as well. | 1. Are certain beliefs harder to detect than others? Is there any performance gap associated with groups of different beliefs?
2. How is the ground truth sentiment intensity and polarity calculated based on the user and message? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
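As a concrete illustration of the graph this review describes (users, posts, and LLM-derived belief nodes connected by user-user, user-post, and user-belief edges), a toy construction is sketched below; the input formats, edge types, and belief labels are assumptions for illustration only, not the paper's data schema.

```python
import networkx as nx

def build_belief_graph(follows, posts, user_beliefs):
    """Toy construction of a belief-augmented heterogeneous graph."""
    g = nx.Graph()
    for u, v in follows:                      # user-user (social) edges
        g.add_edge(("user", u), ("user", v), type="follows")
    for u, p in posts:                        # user-post (authorship) edges
        g.add_edge(("user", u), ("post", p), type="wrote")
    for u, beliefs in user_beliefs.items():   # user-belief edges from LLM-inferred values
        for b in beliefs:
            g.add_edge(("user", u), ("belief", b), type="holds")
    return g

g = build_belief_graph(
    follows=[("u1", "u2")],
    posts=[("u1", "p1"), ("u2", "p2")],
    user_beliefs={"u1": ["care", "fairness"], "u2": ["authority"]},
)
print(g.number_of_nodes(), g.number_of_edges())
```

A heterogeneous graph encoder such as the Heterogeneous Graph Transformer mentioned in the review would then propagate information over this structure to produce the user and news representations used for the sentiment prediction.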
zSUOfRVl28 | {'value': 'Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting'} | qi9rn6rU8f | zSUOfRVl28 | EMNLP/2023/Conference/Submission1221/Reviewer_HaTs | The paper introduces a novel approach to predict user responses to news media messages by leveraging ChatGPT to extract user beliefs. The primary contribution lies in the construction of a belief-centered social network through a GNN.
This network captures the patterns of how similar neighbors respond to similar news.
The experiments validate its improved generalization, accuracy, and robustness. | The paper's novelty is commendable, particularly in its utilization of LLMs for practical applications. The combination of a supervised GNN and LLMs demonstrates an effective solution for complex tasks.
The ablation study strongly supports the argument that supervised learning along with LLM knowledge is more effective than pure supervised learning or zero-shot approaches. | The writing style could benefit from being more concise and focused. | Were there any challenges in dealing with label inconsistencies in ChatGPT? For example, generating novel labels beyond the predefined set. If so, how were these challenges addressed?
Is there a possibility of label leakage or user overlap between the training data (collected by you) and the testing data (Sun et al., 2023)? If such overlap exists, how might it impact the evaluation results and the generalizability of the proposed approach?
Could you please provide insight into the rationale behind your choice of belief features and latent personas derived from MORAL VALUES and HUMAN VALUES? Additionally, do you have plans or ideas for extending this list of belief sources to further enhance the effectiveness of your approach? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zSUOfRVl28 | {'value': 'Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting'} | X1bDFTEkuZ | zSUOfRVl28 | EMNLP/2023/Conference/Submission1221/Reviewer_2MJW | In this paper, the authors propose a framework called SocialSense. This framework uses ChatGPT to create a belief-centered graph leveraging profile information of users, their social content, and their historical posts. The objective of this framework is to predict responses to news articles for a given persona of users in terms of the intensity and polarity of sentiments. The paper is more oriented towards real-life applications of Large Language Models.
| The authors propose a novel framework which leverages ChatGPT to predict responses to news articles for a given persona of users in terms of the intensity and polarity of sentiments. This application of ChatGPT is innovative. It outperforms existing state-of-the-art approaches like InfoVGAE.
| The authors haven't shared how they will maintain consistency in the responses generated by ChatGPT.
Effectiveness of ChatGPT for unmasking latent personas needs to be validated.
| The responses from Large Language Models like ChatGPT can vary. Please share your plans to maintain consistency in the responses generated by ChatGPT.
Was any kind of human evaluation or validation done to understand if ChatGPT is suitable for unmasking latent personas?
For Table 2, could you please highlight how significant the improvements are?
| 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zM3mlyflTt | {'value': 'Approximating Two-Layer Feedforward Networks for Efficient Transformers'} | qaw1JaOPaF | zM3mlyflTt | EMNLP/2023/Conference/Submission3542/Reviewer_FXRA | The paper presents a comprehensive investigation of the use of sparse Mixtures of Experts (MoEs) and product-key memories (PKMs) to improve the efficiency of LMs at any scale in terms of computational and memory requirements. Evaluated under parameter-equivalent conditions, the empirical results underscore the competitive performance of MoEs compared to dense baselines. | Clear comparisons to existing methods and a clear evaluation direction. Upon analysis, the steps of the proposed method are straightforward. The effectiveness of the method is validated, and ablation studies on different hyperparameter settings and components are provided. | Extending and combining existing techniques lacks novelty and, as pointed out in the limitations, involves considerable sensitive engineering tricks. The lack of FLOPs information in the tables for other approaches leads to unclear comparisons of how attractive this approach is in terms of FLOPs and bits/character. | The authors emphasise that the \sigma-MoE is applied to each MLP block of the model. If the method is instead applied in the usual way (e.g., once in every nth layer or even only in a single layer), how does the effectiveness change? Furthermore, beyond the number of layers it is applied to, what is the main controllable factor in your approach that reduces FLOPs? Is it the \sigma-MoE initialisation? Why is the specific initialisation of W_3 required? Does this effectiveness still hold on several downstream benchmarks? What if the given FLOPs reduction is more than 50% (e.g., 75%) so that the Top-K activation function can be compared: can the proposed method outperform the Top-K approach? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 2: Mediocre: This paper makes marginal contributions (vs non-contemporaneous work), so I would rather not see it in the conference. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 2: Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work.
zM3mlyflTt | {'value': 'Approximating Two-Layer Feedforward Networks for Efficient Transformers'} | O45fV309cW | zM3mlyflTt | EMNLP/2023/Conference/Submission3542/Reviewer_QrbU | The paper presents a novel Mixture-of-Experts (MoE) for Transformer models.
The paper starts with a review of existing MoE approaches, characterizing them as approximations of the 2-layer MLP of the transformer block, comparing it with other approximation schemes (top-k and Product-Key memories). After reviewing the different characteristics of these approaches, the paper proposes a novel MoE which uses a non-competitive expert selection function (sigmoid) followed by top-k selection, with a specific initialization scheme, expert dropout and a load-balancing regularization objective.
They evaluate the method on language modelling tasks, on standard datasets, and report that their approach provides comparable quality of parameter-equivalent dense models while being more compute-efficient, and also outperform other MoEs and approximation schemes. Ablations studies are also reported. The authors conclude that their MoE is beneficial even for small models. | - Interesting approach
- Compelling results
- The authors promise to release the code | - Some architectural choices are not entirely motivated
- Like all MoE approaches, this method is relatively complicated to implement compared to vanilla Transformers, requiring a custom CUDA kernel for efficient implementation. | null | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
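A rough sketch of the kind of layer this review describes, i.e. non-competitive sigmoid gating followed by top-k expert selection. The expert parameterisation, the per-slot loop, and the hyperparameters are illustrative assumptions; the paper's custom CUDA kernel, initialisation scheme, expert dropout, and load-balancing regularisation are not reproduced here.

```python
import torch
import torch.nn as nn

class SigmoidTopKMoESketch(nn.Module):
    """Sketch of an MoE layer with sigmoid (non-competitive) gating plus top-k selection."""
    def __init__(self, d_model, d_ff, n_experts, k):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.w1 = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * d_model ** -0.5)
        self.w2 = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * d_ff ** -0.5)

    def forward(self, x):                      # x: (tokens, d_model)
        scores = torch.sigmoid(self.gate(x))   # non-competitive: no softmax over experts
        topv, topi = scores.topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):             # loop form for clarity, not efficiency
            idx = topi[:, slot]                # chosen expert per token for this slot
            h = torch.relu(torch.einsum("td,tdf->tf", x, self.w1[idx]))
            out = out + topv[:, slot, None] * torch.einsum("tf,tfd->td", h, self.w2[idx])
        return out

layer = SigmoidTopKMoESketch(d_model=64, d_ff=128, n_experts=8, k=2)
print(layer(torch.randn(5, 64)).shape)        # torch.Size([5, 64])
```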
zM3mlyflTt | {'value': 'Approximating Two-Layer Feedforward Networks for Efficient Transformers'} | UkdVUPycze | zM3mlyflTt | EMNLP/2023/Conference/Submission3542/Reviewer_JndA | The paper studies an important problem, which is about reducing the compute and memory requirements of neural networks (NNs) without sacrificing performance. Specifically, the paper proposes using sparse Mixtures of Experts (MoEs) to create resource-efficient large language models (LMs). The paper presents a general framework that unifies various methods to approximate two-layer neural networks, including feedforward blocks of Transformers, and proposes methods to improve both MoEs and product-key memories (PKMs). | 1. This paper proposes a new perspective on MoEs and a general framework that unifies various methods to approximate two-layer neural networks, which demonstrates the effectiveness of Mixtures of Experts (MoEs) in creating resource-efficient large language models (LMs).
2. This paper studies an important problem of reducing the computing and memory requirements of neural networks while maintaining performance. The paper proposes a novel method of using sparse Mixtures of Experts (MoEs) to create resource-efficient LMs and presents a general framework that unifies various methods to approximate two-layer neural networks. The paper's proposed methods to improve both MoEs and PKMs could also be useful for researchers and practitioners working on creating resource-efficient LMs.
| 1. The paper lacks a clear motivation for why approximating two-layer feedforward networks is important for efficient transformers. The authors should provide a more compelling argument for why this is a necessary step toward creating more efficient transformers.
2. The paper does not provide a thorough comparison of their proposed method with existing methods for reducing compute and memory requirements of neural networks. The authors should include a more comprehensive analysis of how their approach compares to other methods in terms of performance and efficiency.
3. The proposed method has not been subject to a comprehensive complexity analysis or a comparative analysis of training time. Figure 2 shows the execution time and memory usage of a forward-backward pass of a single MLP and MoE layer, however, the authors should provide more information on the computational complexity of their approach and how it compares to other MoE methods.
4. While the algorithmic design of the proposed method appears intuitive, the authors would benefit from a more detailed theoretical or analytical analysis of the existing content of the paper, which is currently lacking in detail. The authors should provide more information on the theoretical underpinnings of their approach and how it relates to existing research in the field. | 1. How does the approximation of the two-layer feedforward network affect the overall performance of the transformer model?
2. Have you conducted any experiments to evaluate the trade-off between efficiency and performance? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 2: Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work. |
zM3mlyflTt | {'value': 'Approximating Two-Layer Feedforward Networks for Efficient Transformers'} | tS0GnXilHq | zM3mlyflTt | EMNLP/2023/Conference/Submission3542/Reviewer_d9Y9 | This paper explores diverse techniques for approximating two-layer neural networks (NNs). To begin with, the authors introduce a comprehensive framework that unifies the Top-K activation function, Mixtures of Experts (MoEs), and product-key memories (PKMs). By thoroughly analyzing their approach, they subsequently present enhancements for both MoEs and PKMs. The empirical investigations reveal that the proposed MoEs perform competitively when compared to the dense Transformer-XL, underscoring the applicability of MoEs to language models of varying scales. | 1. The exploration of approximating two-layer feedforward networks (FFNs) is both novel and captivating. Notably, the authors provide a comprehensive perspective that encompasses various methods, and they also conduct a comprehensive comparison of the most notable MoE variants.
2. The motivation behind the paper is distinctly articulated and substantiated. The seamless progression from motivation to theoretical analysis, and ultimately to the proposal of the method, is presented in a coherent and natural manner.
3. The proposed approach is effectively supported by empirical experiments, demonstrating its ability to attain performance on par with dense networks. | 1. The proposed $\sigma$-MoE framework requires further justification. The specific design approach for $W_3$ and the uniform initialization of weight matrices are notable contributions. The paper should delve into the rationale behind these design choices and their benefits, particularly in terms of their impact on the performance of the MoE model.
2. The empirical investigations provided appear somewhat limited in scope. Given that all experiments are exclusively conducted on WikiText-103 and Enwik8, which share similar data distributions, it would be prudent to expand the experimental scope to include other datasets. This would provide additional support for the performance claims of the $\sigma$-MoE model.
3. While the paper introduces novel elements through the MoE variant design, the novelty level might be constrained. To enhance the clarity of the novelty introduced beyond the MoE variant design, it's advisable to provide further elaboration and illustration.
| 1. It would be valuable to include additional experiments on diverse text datasets such as PTB and C4. Expanding the experimental evaluation beyond Enwik8 and WikiText-103 can provide a more comprehensive understanding of the proposed approach's performance across various text domains and scales.
2. Could you provide further clarification and elaboration on the design principles underlying the $\sigma$-MoE model? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
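For reference, a minimal sketch of the Top-K view of approximating a two-layer feedforward block, one of the methods the reviews say the framework unifies: only the k largest hidden activations are kept per token. This is a generic illustration, not code from the paper.

```python
import torch

def topk_ffn(x, w1, b1, w2, b2, k):
    """Two-layer MLP where only the k largest hidden activations per token are kept.
    Shapes: x (t, d_model), w1 (d_model, d_ff), w2 (d_ff, d_model)."""
    h = torch.relu(x @ w1 + b1)                               # (t, d_ff)
    topv, topi = h.topk(k, dim=-1)
    h_sparse = torch.zeros_like(h).scatter_(-1, topi, topv)   # zero all but the top-k units
    return h_sparse @ w2 + b2

# toy check: with k = d_ff the approximation reduces to the dense MLP
t, d_model, d_ff = 4, 8, 32
x = torch.randn(t, d_model)
w1, b1 = torch.randn(d_model, d_ff), torch.zeros(d_ff)
w2, b2 = torch.randn(d_ff, d_model), torch.zeros(d_model)
dense = torch.relu(x @ w1 + b1) @ w2 + b2
print(torch.allclose(topk_ffn(x, w1, b1, w2, b2, k=d_ff), dense))  # True
```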
zLAHDHhgLa | {'value': 'Fine-grained Conversational Decoding via Isotropic and Proximal Search'} | fopGnA3yjT | zLAHDHhgLa | EMNLP/2023/Conference/Submission1736/Reviewer_T3Fd | This paper introduces a new decoding method that is argued to be more suitable for dialogue response generation. The authors based their method on the finding that a good dialogue feature space should follow the rules of locality and isotropy. To achieve this, the authors designed a decoding objective that optimizes for locality (by maximizing the average cosine similarity between the representation of the newly generated word and the previously generated words) and isotropy (by minimizing the cosine similarity between the representation of the words generated so far and that of the past utterances). The paper then uses two datasets to compare the method against other decoding methods (e.g. contrastive search), and shows that it outperforms others under both automatic evaluations (BERTScore, MAUVE, G-EVAL) and human evaluation. | The paper proposes a new decoding method for dialogue responses. This can be used in parallel with other methods (e.g. better modeling, better prompts) and could be helpful in building better dialogue systems in the future. The strengths of this paper include:
1. This paper defines a new decoding algorithm that promotes proximity and isotropy, which are found to be important for conversational responses.
2. Both automatic evaluations (2 datasets with 5 competitive baselines) and human evaluation show that IPS-generated responses are better. | 1. The setup/purpose of the ablation study is confusing. Why is Figure 1a,b comparing G-EVAL but Figure 1c comparing MAUVE? Why are we even looking at G-EVAL/MAUVE as opposed to other metrics? Why are we comparing SimDRC+IPS against *SIMCTG*+contrastive search, but not SimDRC+IPS against SimDRC+contrastive search?
2. Lack of direct analysis of how IPS improves proximity and isotropy. The automatic metrics used (in both main experiment and ablation), such as "BERTScore, MAUVE, G-EVAL" are all very generic. While IPS does show improvement under these metrics, it is unclear if it is due to the utterances being more proximal and isotropic, or other reasons.
3. Only five annotators are involved in human evaluation, which may be too few as no statistical significance is measured. | Question A: How does/can IPS avoid degradation of the generation, if generating repetitive rephrases seems to be favored by $p_\text{value}$ and not discouraged by $i_\text{value}$?
Question B: Can you provide a more direct/concrete way of showing how IPS-generated responses are more proximal and isotropic? Most experiments shown in this paper are too "end-to-end". | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | Yes | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty.
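A minimal sketch of how a candidate score combining model probability with proximal and isotropic terms could be computed, following the description in the reviews above. The weighting scheme, the mean-pooled response representation, and the variable names are assumptions; the exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ips_style_score(logp, cand_vec, gen_vecs, context_vecs, alpha=0.6):
    """logp: model log-probability of the candidate token
    cand_vec: hidden state of the candidate token, shape (d,)
    gen_vecs: hidden states of tokens already generated in the response, shape (n, d)
    context_vecs: hidden states of the previous (context) utterances, shape (m, d)."""
    # proximity: candidate should be close to the tokens generated so far
    proximal = F.cosine_similarity(cand_vec.unsqueeze(0), gen_vecs).mean() if len(gen_vecs) else 0.0
    # isotropy: the response as a whole should stay distinct from the past utterances
    response = torch.cat([gen_vecs, cand_vec.unsqueeze(0)]).mean(0, keepdim=True)
    isotropy = -F.cosine_similarity(response, context_vecs).mean()
    return alpha * logp + (1 - alpha) * (proximal + isotropy)
```

In a full decoder, such a score would be evaluated for each of the top candidate tokens at every step and the highest-scoring token selected; with alpha = 1 the rule reduces to greedy decoding.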
zLAHDHhgLa | {'value': 'Fine-grained Conversational Decoding via Isotropic and Proximal Search'} | J6m2HNshFt | zLAHDHhgLa | EMNLP/2023/Conference/Submission1736/Reviewer_nWpT | This paper is motivated by locality and isotropy in modeling dialogue, and proposes isotropic and proximal search (IPS). Specifically, there are additional terms in the decoding process: a proximal value (avg. distance between the candidate token and the already generated tokens) and an isotropic value (avg. similarity between the ongoing response and all utterances). The authors evaluated on various metrics including human evaluation, and the proposed decoding method shows prominent performance. | This work nicely borrows the locality and isotropy concepts and adopts them in the decoding process. It is sufficiently intuitive, and the detailed experimental settings and ablations are evidential. Furthermore, although IPS is slower than traditional methods (e.g., beam search, top-k sampling), as the authors mention in the Limitations section, it is still faster than contrastive search. This work would enlighten future possibilities of conversational modeling. | Ablation studies exploring the respective effects of the proximal and isotropic values need to be conducted. Other aspects are reasonable to understand, so please refer to the questions in the next section. | - Question A: Do the authors analyze the score distribution according to the length of the previous context? In Table 3, I think the generated samples for the third context look quite similar, thus I wonder if the proposed decoding strategy works better on long previous contexts.
- Question B: Could the authors write English-translated version together in Table 4? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 5: Could easily reproduce the results. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zLAHDHhgLa | {'value': 'Fine-grained Conversational Decoding via Isotropic and Proximal Search'} | 8sof3Zrtvg | zLAHDHhgLa | EMNLP/2023/Conference/Submission1736/Reviewer_ybjD | The paper presents a conversational decoding strategy named isotropic and proximal search (IPS). As the name itself suggests, the decoding strategy is based on the concepts of locality and isotropy, two key features that are shown to be essential for generating high-quality dialogue responses. IPS has a parameter alpha that controls for the weight assigned to these two components; when alpha=1, the IPS behaves as greedy decoding. The authors assess the performance of IPS on two datasets (English and Chinese) and different variants of BART and decoding strategies. | The paper is well-written and easy to follow. It presents an original conversational decoding strategy and the authors evaluate its effectiveness against several other decoding strategies. The experimental setup is clear and well-described. The authors also conducted a human evaluation on a subset of the generated utterances and perform interesting ablation studies. Overall, the results seem to suggest that IPS outperforms other decoding strategies in most of the settings and the evaluation metrics employed. | Although the paper has the potential to represent an original and significant contribution to the field, a few issues deserve a closer look. The differences reported in Tables 1 and 2 are sometimes minor, and statistical significance tests are needed to validate the claim that IPS outperforms other decoding strategies. In case it turns out that some differences are not significant, IPS still represents a novel and original contribution; however, it is important to know which differences are significant to adjust the overall claim of the paper.
In the conclusions, the authors mention: “Experiments show that our method achieves impressive performance on both automatic and human evaluation”, and in the abstract they say: “Experiments show that our approach significantly outperforms existing decoding strategies”. These are clearly too strong claims that have to be adjusted depending on the results of significance tests.
More qualitative examples and surface-level statistics (utterance length, vocabulary coverage, etc.) would be helpful to assess the effectiveness of IPS. The examples reported in Table 3 do not show a clear qualitative advantage of IPS over other decoding strategies, while Table 1 and Table 2 illustrate that IPS (almost) always outperforms other methods against both automatic and human-based metrics. The examples reported in Table 3 suggest that it is advisable to have a closer look at possible limitations in the evaluation metrics used and/or biases during the human annotation procedure. | - Question A: Did the authors find any surface-level pattern in the utterances generated by IPS (utterance length, unique tokens, etc.)?
- Question B: Do the authors have any intuition about why human annotators disagree way more when evaluating the informativeness of generated utterances in the DailyDialog task compared to LCCC (0.56 vs. 0.78)? All the other metrics instead (fluency, coherence, semantic coverage) have similar values in the two settings.
- Question C: Did the authors inspect more examples compared to the ones reported in Table 3? Did they get any qualitative insight into the main advantages of IPS-generated utterances? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
zIgc1Qeceh | {'value': 'Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign'} | gN2femrVDO | zIgc1Qeceh | EMNLP/2023/Conference/Submission4608/Reviewer_Jcj8 | The paper introduces a new inter-annotator agreement metric, HolisticIAA, based on sentence embedding similarity. The metric is compared against two traditional IAA metrics, Cohen's kappa and Krippendorff's alpha, for the annotation process of one dataset on persuasion classification of text snippets. | The proposed metric makes it possible to compute annotator agreement even if the annotators did not label the same documents, i.e., by pooling from the corpus similar sentences to those annotated by the annotators (using sentence embeddings) and computing label agreement on that set instead. | Inter-annotator agreement statistics are useful to measure the reliability of an annotation scheme (i.e., that the coders (aka annotators) have internalized the coding instructions s.t. a sufficient level of agreement can be observed), but are not informative about the quality of a dataset. Agreement is flawed for many reasons, e.g., agreement in mistakes, agreement due to label biases, large chance agreement in datasets with skewed classes, headline measurements with no information about the quality of the individual labels, and many more.
Also, I am not convinced about using as a form of evaluation the correlation with an existing IAA metric. Mainly because these metrics are already biased in some form, e.g., the kappa paradox.
Lastly, the quality of the sentence embeddings and the similarity thresholds seem central to the success of the proposed metric, however, their selection is treated rather lightly in the paper. | I would like to see addressed the concerns raised under 'Reasons to reject' | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
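A toy version of an embedding-based agreement in the spirit these reviews describe: pair each sentence labelled by annotator A with the most similar sentence labelled by annotator B and report label agreement over the pairs that clear a similarity threshold. The matching rule and the threshold are assumptions, not the paper's exact procedure.

```python
import numpy as np

def holistic_agreement(items_a, items_b, sim_threshold=0.8):
    """items_*: list of (sentence_embedding: np.ndarray, label: str) for one annotator."""
    emb_b = np.stack([e for e, _ in items_b])
    emb_b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    hits, agree = 0, 0
    for emb_a, label_a in items_a:
        sims = emb_b @ (emb_a / np.linalg.norm(emb_a))
        j = int(sims.argmax())                 # most similar sentence annotated by B
        if sims[j] >= sim_threshold:
            hits += 1
            agree += int(items_b[j][1] == label_a)
    return agree / hits if hits else float("nan")
```

Averaging such pairwise scores over all annotator pairs would give the kind of annotator-similarity ranking that the reviews say is compared against rankings from Cohen's kappa.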
zIgc1Qeceh | {'value': 'Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign'} | mb7JHUgvGM | zIgc1Qeceh | EMNLP/2023/Conference/Submission4608/Reviewer_Wmuj | This paper introduces a method called 'Holistic Inter-Annotator Agreement' to compute the (percentage) agreement between each pair of annotators on the semantically most similar sentences they have both annotated. This allows the authors to compute which annotators are most similar to a particular annotator. This ranking is shown to correlate well with the ranking obtained using standard IAA metrics such as Cohen's K. | 1. The paper is clearly built on a great deal of expertise about annotation, in particular in large-scale annotation projects. It raises some very good questions about the limitations of coefficients of agreement in such large-scale annotation projects, and illustrates some very good annotation practices - it could be useful to others aiming to run such a project.
2. The method itself makes a lot of sense.
3. the results obtained with the proposed method are analysed to a great depth. | 1. The main problem I have with this paper is that it's not completely clear to me what is the problem that the authors are trying to solve. My understanding is that they are trying to come up with a more useful way of measuring the actual agreement between annotators not just on the few examples they both annotate, but I am not completely sure this is right.
2. A more general problem is that others have realized that coefficients of agreement are really a limited metric - especially in case of subjective judgments - and have tried to devise more informative approaches, although nobody as far as I know has addressed the specific issue tackled in this paper. I am especially thinking of the work on probabilistic models of annotation by Passonneau and Carpenter, and their TACL 2014 paper. In that paper, the authors argue convincingly that a single number cannot be sufficient for the in-depth analysis of annotation that the authors of this paper have in mind. The type of analysis they propose involve building models of each annotator, and of each item, that do allow for more insightful comparisons. I would encourage the authors to have a look at these methods and perhaps think about generalizing them. | 1. Do I understand your objective correctly? Is your objective in effect to compute agreement between annotators on a larger scale, i.e., not only on the sentences they all annotate?
2. If the answer to the above is 'yes', how are you proposing to assess whether the obtained Holistic IAA value is sufficient to your purposes? | 5: Excellent: This study is one of the most thorough I have seen, given its type. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zIgc1Qeceh | {'value': 'Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign'} | FcbVbnNr1K | zIgc1Qeceh | EMNLP/2023/Conference/Submission4608/Reviewer_9ysf | This paper is about the complexity of persuasion technique annotation in a large annotation campaign involving 6 languages and approximately 40 annotators. Its main contribution is introducing a new word embedding-based annotator agreement metric called HolisticIAA. | The annotation campaign conducted for their experiments is massive, and the article is well written. | Annotating persuasion techniques is a very subjective task. Moreover, the paper introduces the Holistic IAA metric but fails to explain how this is actually computed. Also, they conclude that Holistic IAA highly correlates with rankings computed using Cohen's Kappa in some settings, so it is not clear what the usefulness of this metric is. | - | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings.
zIb2DlqBxm | {'value': 'PHD: Pixel-Based Language Modeling of Historical Documents'} | xg3MAC1hDT | zIb2DlqBxm | EMNLP/2023/Conference/Submission1679/Reviewer_NET7 | This paper introduces PHD, a pixel-based language model for analyzing historical documents without using OCR. The main contributions are:
1. Proposes a novel method to generate synthetic scans that resemble historical documents for pretraining. This helps address the scarcity of large historical scan datasets.
2. Pretrains PHD on a combination of synthetic scans and real historical newspapers from the 18th-19th centuries.
3. Evaluates PHD on image reconstruction, clustering, language understanding tasks like GLUE, and question answering on both SQuAD and a real historical QA dataset.
4. Provides evidence that PHD can effectively understand language and has potential for assisting with NLP tasks involving historical documents.
5. Releases the datasets, models, and code to facilitate future research.
Overall, this paper explores using recent advances in pixel-based language modeling to process historical scans directly at the pixel level. This allows bypassing the OCR stage which can introduce noise when applied to historical documents. The proposed pretraining methodology and evaluations demonstrate the promise of this approach for historical document analysis. | This paper explores an interesting new direction and provides a thorough evaluation of the proposed techniques. Releasing the datasets and models could catalyze more work in this area.
1. Well-written paper that clearly explains the motivation, proposed approach, experiments, results, and limitations.
2. Novel application of pixel-based language models to historical document analysis, bypassing the need for OCR. This is an interesting new direction for processing historical texts.
3. Releases new datasets, models, and code to facilitate research in this area. The historical QA dataset created from real newspaper ads could be valuable for the community. | 1. Most of the pretraining data is modern text, not historical. More diverse historical data could help the model better adapt to that domain. Previous work, such as Donut, DiT, Dessurt, and LayoutLM (v1, v2, v3), pre-trained their models on IIT-CDIP. IIT-CDIP is a large-scale scanned document corpus used for pre-training language models.
2. The evaluation tasks, apart from one historical QA dataset, predominantly involve modern text. More historical evaluation data could better assess performance.
3. There are also some document understanding benchmarks, such as DocVQA (also used in Donut, DiT, Dessurt, and LayoutLM v1, v2, v3), which can be used to evaluate the question-answering performance of models.
4. As the paper mentions, evaluating the pixel-based completions is challenging. More robust quantitative evaluation methods are needed.
5. OCR techniques continue to improve over time. At some point, OCR quality may be sufficient to apply standard NLP pipelines to historical texts without needing to bypass OCR. | The previous studies utilized datasets such as IIT-CDIP and DocVQA for pre-training and evaluation. Can you discuss why you did not consider them? | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zIb2DlqBxm | {'value': 'PHD: Pixel-Based Language Modeling of Historical Documents'} | GTc9SFJ8ep | zIb2DlqBxm | EMNLP/2023/Conference/Submission1679/Reviewer_Uoyc | The paper proposes a method for historical document reconstruction based on PIXEL, a language model that’s unique in that it deals in visual, rather than token-based, representations of language. The main contribution of this paper is a language model that follows the general design of PIXEL but is trained on a synthetically generated corpus of historical documents. | The method proposed is well-argued and the limitations are clearly discussed. The evaluation is robust, evaluating both the model’s ability to resolve corruption of both synthetic and actual historical documents, as well as measuring the model’s language understanding against standard benchmarks. | This reviewer sees no core or overwhelming reason to reject this study; the contribution of the authors’ can be characterized as adapting an existing method to a novel domain, which may not be compelling to some, but it is an interesting and thorough study of the applicability of those ideas to said domain.
The cluster analysis may be superfluous — it’s not clear what the authors hoped to understand by performing that study if not the effectiveness of the encoder at providing a deep semantic representation of historical documents, however the authors noted that they only evaluated the visual similarity of similarly encoded documents.
One limitation perhaps not adequately discussed is that much of the synthetic training corpus consists of contemporary English that’s rendered in the same font and style as a historical document, and also much of the language understanding evaluation is based on contemporary English as well; depending on the time period in which the documents of interest are written, shifts in style, the meaning of words, etc., could limit the applicability of the model’s language understanding — however, evaluation on actual historical documents is performed, so this limitation is to a degree quantified. | null | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zIb2DlqBxm | {'value': 'PHD: Pixel-Based Language Modeling of Historical Documents'} | 0e20iCHSdA | zIb2DlqBxm | EMNLP/2023/Conference/Submission1679/Reviewer_nvph | This paper proposes a new modeling approach called PHD with pixel-based language modeling for scanned (historical) documents. Instead of modeling with text tokens (typically generated by OCR engines), the method takes in patches of document images (pixels) and trains a language model on top of it. A similar masked language modeling objective is used to pre-train such models on a combination of synthetic and real data; the objective is to “black out” specific image patches and the goal is to recover the original text image. Empirical results on both general NLU tasks like GLUE and historical document QA tasks show that PHD can achieve good and sometimes even matching performances compared to text-based language models like BERT.
| This is a pretty novel work and has the potential to inspire many follow up work for understanding scanned (historical) documents. It has the following strengths:
- Simplicity: the image-based modeling approach is a generalization of the PIXEL model from previous work. No special technique is involved in pre-processing the images – any input image is simply divided into patches, and PHD can treat each individual patch as a token and model them like a “language”.
- PHD has a clever design that uses the same representation space for both the input and the output. Compared to previous work like TrOCR that tries to convert input images to text output, in PHD the input and output are both image patches. This eliminates some difficulties like aligning image and text (which often involves much post-hoc processing and needs a special loss like CTC to handle it).
- Another interesting aspect of PHD is how it can be used for “text-only” classification or QA tasks like GLUE or SQuAD. The authors first render the text as images, and the PHD model generates a prediction based on the image.
| I don't see any reason why this paper should be rejected, though the paper could benefit from a discussion of the following:
- For almost all the examples shown in the paper, the cropping seems to be perfect – that is, there is no cropping across text lines. It would be helpful to include examples (e.g., in Fig. 13) showing what happens if the answer patch lies at the boundary of an image patch and is half cropped.
- It would be helpful to include a discussion of the efficiency comparison between PHD and BERT. For example, given the same 512-token width, it is unclear whether PHD can encode the same amount of actual text as BERT.
- The choice of a 16-pixel height for each patch seems to be specifically tuned to the given documents. However, I can imagine images with different scanning resolutions (e.g., 200 dpi), for which sticking to 16 as the patch height might not lead to optimal results. It would be better to show some results testing the generalizability of the trained models.
| See above. | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
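The three PHD reviews above describe the model's pixel-level pretraining: a rendered document image is cut into fixed-height patches, a subset of patches is "blacked out", and the model reconstructs them. Below is a toy sketch of that patch-masking step, assuming NumPy; the image size and mask ratio are illustrative guesses, with only the 16-pixel patch height taken from the reviewers' discussion.

```python
# Toy sketch of the patch-masking idea described in the reviews above.
# Shapes and the mask ratio are illustrative assumptions, not the paper's
# exact configuration; only the 16-pixel patch height comes from the review.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 8464), dtype=np.uint8)  # one rendered text line

patch_w = 16
patches = image.reshape(16, -1, patch_w).transpose(1, 0, 2)    # (num_patches, 16, 16)

mask_ratio = 0.25                        # fraction of patches to black out (assumed)
num_masked = int(len(patches) * mask_ratio)
masked_idx = rng.choice(len(patches), size=num_masked, replace=False)

corrupted = patches.copy()
corrupted[masked_idx] = 0                # "black out" the selected patches
# A PHD-style model would be trained to reconstruct patches[masked_idx]
# from the surrounding, unmasked patches.
```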
zEJFYWWmbG | {'value': 'Primacy Effect of ChatGPT'} | zmRMVWqfKt | zEJFYWWmbG | EMNLP/2023/Conference/Submission649/Reviewer_4BGH | This work studies the problem of whether ChatGPT inherits humans' cognitive bias by selecting the labels at earlier positions as the answer. To begin with, this work introduces the definition of the primacy effect as the tendency to recall information presented at the start of a list better than the information at the middle or end. Then, this work takes the natural language understanding tasks as the testbed and analyzes the phenomenon by shuffling labels listed in a prompted input before every prediction. Finally, this work compares the predictions on the same instance with two different label orders and counts the predicted label indices on many instances with label shuffling.
This paper finds that ChatGPT’s prediction is sensitive to the order of labels in the prompt and ChatGPT tends to select labels in earlier positions in the prompt. This work would contribute to the research line of investigating how prompts affect model performance. As recent studies have revealed the sensitivity of the order of in-context learning examples and the influence of label correctness, this work may provide another interesting perspective with regard to the order of candidate labels for an individual test instance. | This work presents an interesting perspective by investigating whether the order of candidate labels affects model performance and finds that ChatGPT tends to select labels in earlier positions in the prompt. The finding may help understand how prompts work and facilitate studies of more powerful prompt techniques. | The experiment is preliminary as there are still unresolved concerns that may result in different conclusions.
Firstly, it is reasonable that ChatGPT may provide different predictions even with the same input (with different temperature settings during decoding) because of the nature of a generation model. The comparison with BERT might therefore be unfair for judging whether the phenomenon happens only in ChatGPT or also in other generation models.
Secondly, it is unclear whether the phenomenon happens just because ChatGPT has low confidence when generating the labels for the input questions, in which case ChatGPT just gives random predictions. | null | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
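The summary above describes the probing protocol: shuffle the candidate labels in the prompt before every prediction and count which positions the predicted labels came from. Here is a rough sketch of that loop; `query_chatgpt` is a hypothetical placeholder for a real API call, and the NLI labels and prompt template are illustrative assumptions.

```python
# Rough sketch of the label-shuffling probe described in the summary above.
# `query_chatgpt`, the task, labels, and prompt template are all placeholders.
import random
from collections import Counter

labels = ["entailment", "neutral", "contradiction"]

def build_prompt(premise, hypothesis, label_order):
    options = ", ".join(label_order)
    return (f"Premise: {premise}\nHypothesis: {hypothesis}\n"
            f"Choose one label from: {options}\nAnswer:")

def query_chatgpt(prompt):
    raise NotImplementedError("placeholder for a real model call")

def primacy_probe(instances, runs_per_instance=2):
    position_counts = Counter()
    for premise, hypothesis in instances:
        for _ in range(runs_per_instance):
            order = labels[:]
            random.shuffle(order)            # reshuffle labels before every prediction
            prediction = query_chatgpt(build_prompt(premise, hypothesis, order))
            if prediction in order:
                position_counts[order.index(prediction)] += 1
    return position_counts  # a skew toward index 0 would indicate a primacy effect
```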
zEJFYWWmbG | {'value': 'Primacy Effect of ChatGPT'} | d32eaI8jSF | zEJFYWWmbG | EMNLP/2023/Conference/Submission649/Reviewer_Zv7g | This paper reports on a study of the primacy effect in zero-shot ChatGPT, which finds that the order of labels in the prediction task has an influence on the prediction result. | The paper is interesting and self-contained - it contributes to the current effort of understanding the latest technological developments. | The paper does not contribute any new technological development in NLP; instead, it reflects on the current SOTA.
l. 156: While I see why the temperature was set to zero, it would still be good to compare the results with other temperatures (medium, high), to confirm the findings. | l28: citation for such a well-established concept as the primacy effect should not be some post --> remove and/or insert proper citation
Section 3.1. - how do these datasets range on the complexity scale?
How were the actual prompts formatted? I would exchange Figure 2 with an actual example. | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zEJFYWWmbG | {'value': 'Primacy Effect of ChatGPT'} | ECM8Hkmxdl | zEJFYWWmbG | EMNLP/2023/Conference/Submission649/Reviewer_SvBZ | This paper studies one cognitive bias, primacy effect, in ChatGPT, which tends to select labels that appear earlier in the context. And they find that ChatGPT's performance is sensitive to the order of labels in the prompt and they tend to select labels in earlier positions. | 1. Their evaluation of primacy effect in ChatGPT is interesting. | 1. Performing experiments on more tasks might make the claim stronger, for example, other classification tasks or even generation tasks like QA/summarization.
2. It would be better to shuffle the labels more times to demonstrate the primacy effect. | Does chain-of-thought prompting, which increases the number of words in the answer, alleviate such cognitive bias? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 5: Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work.
zByqDt16qZ | {'value': 'Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension'} | cDOjUQcPxx | zByqDt16qZ | EMNLP/2023/Conference/Submission266/Reviewer_X3Ra | 1. The authors introduce the RULE dataset, which consists of reading comprehension questions from the ReClor dataset and new follow-up questions for understanding the rationale behind a selected reading comprehension answer
2. Show limitations of existing language models in understanding why incorrect answers should be eliminated in reading comprehension question answering | 1. I think that the RULE dataset would be useful for future research, particularly for improving language models’ reasoning abilities
2. The examination of language model limitations show areas where language models can potentially be improved | 1. The RULE dataset is built on the ReClor dataset. However, ReClor is not released under an open source/data license. It is unclear if the authors obtained permission to use and redistribute this data. Licensing information is also not provided for the RULE dataset, so it is unclear how and if this new dataset can be used by other researchers.
Edit: The authors have clarified in their rebuttal that they have obtained a license for ReClor. However, it is still unclear if the RULE dataset will be made available and how it will be licensed.
Edit 2: The RULE dataset will be made publicly available under the CC BY-NC 4.0 license | 1. I found the sub-question provided in Figure 1 a little difficult to parse. Did you try alternative ways of phrasing the sub-questions?
For example:
Why is “deriving implications of a generalization that it assumes to be true” not the correct answer for the question “The argument proceeds by doing which one of the following”?
(I think this phrasing might be easier for both humans and machines to parse) | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zByqDt16qZ | {'value': 'Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension'} | smD28y6BcO | zByqDt16qZ | EMNLP/2023/Conference/Submission266/Reviewer_i432 | This work evaluates models' logical reasoning in the MCQ setting by probing it with additional questions about the reasoning behind selecting or eliminating individual choices. To this end, the authors build a new dataset, RULE. They start with questions from the ReClor dataset and annotate rationales for selecting and eliminating each of its answer choices. They then generate questions for each choice which have one of the generated rationales as the answer.
These additional (sub-)questions have 2 key features:
1. They are in the same MCQ format as the original question. So one can probe the models' reasoning on the original question by these additional questions.
2. They are contrastive (minimally different), i.e., all subquestions share the same passage and answer choices, but they have different answers depending on the subquestion. This feature prevents models from taking simple shortcuts.
The authors have ensured the annotations are of high quality by human validation, and have also established a very high human score on the task.
Finally, the authors benchmark a large number of recent (few-shot, fine-tuned, with/without CoT) models on ReClor and RULE and demonstrate that all the models struggle and are behind humans. In particular, they find that models are extremely bad at selecting why a given choice is incorrect.
Finally, they explore (i) model-generated rationales and find humans are better at this task. (ii) using human-written rationales to improve models and find that selective subquestions help, eliminative ones hurt. | - This is a very well-done study and evaluation design. I particularly liked the two features highlighted in the summary above.
- This is a nice high-quality resource, and should be helpful for people exploring logical reasoning models.
- I was impressed by the number and diversity of the models that they have benchmarked for this new task (both fine-tuned and few-shot). | I don't see any reason to reject this work.
| Suggestion:
A. I have a different hypothesis about why the score for eliminative subquestions is so much worse than selective subquestions: It is because the models ignore the word "not". It would be interesting to do an experiment to test this: Compare model predictions for eliminative subquestions with and without the word "not" and see if the scores are close and individual responses correlate. Although not necessary, it'd be a good addition to the paper (at least the appendix, if not the main paper). | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 5: Could easily reproduce the results. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
zByqDt16qZ | {'value': 'Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension'} | jjaoLTgwaa | zByqDt16qZ | EMNLP/2023/Conference/Submission266/Reviewer_ivgz | This paper proposes a dataset composed of auxiliary questions and rationales to test the model’s consistent ability for critical reasoning. The paper presents experiments that compare various models with different conditions on this task. Additional analysis on difficulties of eliminative subquestions and comparison between rationale writing ability also provides insights into the model’s behavior on reasoning tasks. | 1. The problem is well-motivated by grounding on the prior work.
2. The paper contributes a valuable dataset of human-written free-form rationales. The choice of the dataset is well-motivated, and the annotation and qualification process is described thoroughly. In particular, the qualification and essential criteria for writing rationales are concrete. (It would be good to have examples comparing specific vs. unspecific samples and consistent vs. inconsistent samples.)
3. The experiment setting is concrete and detailed enough to reproduce and findings are well-organized in that there are also comparisons between findings from prior works.
| Annotation of rationales: Annotators can write rationales that are semantically the same as the option but phrased differently. Can this kind of rationale be filtered out by the current verification? Or is this kind of rationale added to the dataset?
| Question A. (Lines 70-72) What are examples of the selection/elimination process of relevant alternatives in logical reasoning?
Question B. (Line 182-184) Why is faithfully testing the model’s performance on the main questions important?
| 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
z9l6nHpTyT | {'value': 'Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer'} | tvbDgCH8Ns | z9l6nHpTyT | EMNLP/2023/Conference/Submission1404/Reviewer_f2rF | The paper proposes an adapter-based multi-attribute text style transfer model with parallel and stacked connection configurations. The authors focus on an issue with current PLM fine-tuning, where training examples are limited while the PLM contains a huge number of parameters, which may lead to overfitting. | The paper reads very smoothly and is an enjoyable read. The model is explained clearly and the paper is well-structured.
The proposed method extends the Adapter framework to a new application scenario. | The experiments only utilized one backbone model, potentially leading to an evaluation bias concerning the model's effectiveness. To address this concern, the authors should investigate and assess the impact of employing various backbones on the performance of the adapters.
The human evaluation lacks specific quality details, such as the number of examples used for evaluation and the number of workers hired to conduct the evaluations. Including these details is essential to ensure transparency and replicability of the evaluation process.
The comparison with prior works is insufficient, raising concerns about the effectiveness of the proposed model.
Missing some related references, e.g., FGIM, Wang et al., 2019. | null | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
z9l6nHpTyT | {'value': 'Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer'} | LTvE66R8hh | z9l6nHpTyT | EMNLP/2023/Conference/Submission1404/Reviewer_Jb8X | The paper introduces an adapter-based approach for the multiple-attribute text style transfer task. In short, the proposed method, Adapter-TST, utilizes a series of adapters to model different types of attribute information. As a parameter-efficient method, Adapter-TST achieves better performance with a very low number of training parameters compared to the previous method. | 1. The paper was well-written and easy to follow.
2. How to use parameter-efficient methods in style transfer tasks is an important field worth exploring. | 1. The contribution of the whole paper is limited. Using adapter-based PLMs is a common strategy for text generation tasks, even those involving multi-attribute-based generation tasks, such as [2][3]. At the same time, the adapter used in this paper is not significantly different from previous common methods, nor is it optimized for the task.
2. This paper designed the method only with BART as the backbone. For this reason, I think it might be difficult to call this particular approach a "parameter-efficient framework" because its generality has yet to be proven.
3. There are serious deficiencies and possible unfair comparisons in the experimental part. The main reasons are as follows:
3.1 The selection of baselines in the experiments has serious omissions. The experiments only compare the proposed method with the non-parameter-efficient method StyleTransformer, while other parameter-efficient methods are ignored, especially prompt-learning-based methods such as [1][2][3]. Although these methods are not applied to style transfer, it is clear that they are general-purpose and can be easily migrated to this task, and [2][3] also perform a similar multi-attribute text generation task while using adapter-based PLMs as baselines.
3.2 Unfair comparison with baselines. The backbone of Adapter-TST is BART-Large while the backbone of the baseline is a Transformer, so it is difficult to determine whether the performance gains are due to the use of PLMs or to the proposed approach.
References:
[1] Prefix-Tuning: Optimizing Continuous Prompts for Generation.
[2] Controllable Natural Language Generation with Contrastive Prefixes.
[3] Tailor: A Soft-Prompt-Based Approach to Attribute-Based Controlled Text Generation. | See Reasons To Reject | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 2: Mediocre: This paper makes marginal contributions (vs non-contemporaneous work), so I would rather not see it in the conference. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 5: Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work. |
z9l6nHpTyT | {'value': 'Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer'} | y5Ecijt5fr | z9l6nHpTyT | EMNLP/2023/Conference/Submission1404/Reviewer_2k94 | This paper presents Adapter-TST, a straightforward yet effective and light-weight method for addressing the challenging task of multiple-attribute textual style transfer (TST). In contrast to single-attribute TST, which focuses on changing one stylistic property at a time, Adapter-TST deals with simultaneously altering multiple stylistic properties—such as sentiment, tense, voice, formality, and politeness. The Adapter-TST approach involves inserting simple neural adapters (each consisting of a feed-forward down-projection layer, followed by a nonlinear activation function, followed by a feed-forward up-projection layer, along with a skip connection between the two projection layers) into a pre-trained network, like a pre-trained BART model, to capture and change diverse attribute information. The weights (parameters) of the original network are kept frozen during training, and only the additional adapter layers are trained. Adapter-TST offers two configurations, "parallel" and "stack," enabling models to perform compositional style transformation (text editing).
In order to validate the effectiveness of Adapter-TST, the authors employ BART-Large as their backbone model and conduct experiments on the Yelp and StylePTB datasets. Their results indicate that Adapter-TST often performs on par with, but does not outperform, several baseline methods, including BackTrans, CrossAlign, DualRL, and StyleTransformer, on certain evaluation metrics. Although human evaluation shows positive outcomes for Adapter-TST, the improvements do not appear to be statistically significant. | * Textual style transfer with multiple attributes remains a relatively unexplored and challenging task within the field of NLP. This paper focuses on this intriguing problem and introduces an intuitive, effective, and light-weight approach to tackle it.
* The paper demonstrates a clear motivation for studying the problem of multiple-attribute textual style transfer and effectively outlines their research objectives. Moreover, the contributions of their work are well-defined, and the paper is written in a coherent manner overall, making it easy to understand both the arguments and results presented.
* This work has the potential to be of interest to not only the TST community but also the broader NLG community, as it proposes a parameter-efficient method for compositional text editing. (That said, the paper lacks clarity regarding the amount of fine-tuning data required to achieve satisfactory task performance and the overall generalizability of the proposed Adapter-TST approach. Further investigation on these aspects would be rather beneficial to better understand the practical implications of this method.)
* On the whole, the experimental setup seems to be sound and thorough. | * While this limitation is not a significant reason for rejection, I believe that the authors could enhance the credibility of their proposed approach by demonstrating its generalizability and robustness. Currently, the focus is solely on one true multi-attribute TST dataset (StylePTB) and one simple model type (BART-Large). The inclusion of an additional dataset would strengthen their claims and offer a clearer demonstration of Adapter-TST’s efficacy. Moreover, a more comprehensive analysis of their findings is needed in my opinion, as the paper, in its current form, offers only a surface-level discussion of the results.
* This is a relatively minor concern, but I fear that the paper does not sufficiently reference recent studies on textual style transfer, nor does it consider stronger baselines for the Yelp dataset.
* In my opinion, the authors missed an opportunity to include two other simple yet potentially strong baselines in their evaluation: an Alpaca (or LLaMa) model with and without adapters. Considering these instruction-tuned models could have provided valuable insights even under a zero-shot setting. Comparing their proposed approach against such baselines could have further enriched the analysis and strengthened the overall study. | * Question A. Could you please provide more details about the training of the adapter layers? For instance, how many epochs did you train your models? How important is the classification loss?
* Question B. How crucial are the parallel connections? Do you think that stack connections might be enough to perform compositional text editing?
* Question C. Have you considered and conducted experiments using other models, in addition to BART-Large?
* Question D. Would your proposed method demonstrate favorable performance if trained on a combination of multiple textual style transfer datasets, such as sentiment, formality, grammar correction, and others? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 5: Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work. |
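The summary in the review above describes the adapter block used by Adapter-TST: a feed-forward down-projection, a nonlinearity, a feed-forward up-projection, and a skip connection, inserted into a frozen BART backbone. Below is a minimal PyTorch sketch of such a bottleneck adapter; the hidden and bottleneck sizes are illustrative assumptions rather than the paper's settings, and the attribute-specific "parallel"/"stack" wiring is omitted.

```python
# Minimal sketch of the bottleneck adapter described in the summary above
# (down-projection, nonlinearity, up-projection, skip connection). Sizes are
# illustrative assumptions, not Adapter-TST's actual configuration.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=1024, bottleneck_size=64):
        super().__init__()
        self.down_proj = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.ReLU()
        self.up_proj = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states):
        residual = hidden_states
        x = self.down_proj(hidden_states)
        x = self.activation(x)
        x = self.up_proj(x)
        return x + residual  # skip connection around the bottleneck

# During training, the backbone (e.g., BART) stays frozen and only adapter
# parameters such as these receive gradient updates; one such adapter would be
# instantiated per stylistic attribute and wired in "parallel" or "stack" mode.
adapter = Adapter()
out = adapter(torch.randn(2, 10, 1024))  # (batch, sequence, hidden)
```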
z9CqYTwOiO | {'value': 'Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion'} | X0FuG6Z7pa | z9CqYTwOiO | EMNLP/2023/Conference/Submission3586/Reviewer_F1a9 | The focus of this work is better integration of realistic scenarios in the task of UMLS vocabulary insertion. The authors introduce the UVI task of how a new UMLS atom can be inserted - if it is related to an existing concept or if it is a new concept altogether. The authors contribute five datasets of UMLS updates over a period of time for the task. They integrate domain knowledge by using biomedical LMs. They also propose a model for the task and show through comparisons with baselines and across the datasets the improvement brought by the model.
This is actually a nicely written paper and reads well. I think with some revisions this paper should be accepted | 1. The authors motivated the work nicely. They discuss the research gap in detail and how their work attempts to address this gap. The section on problem formulation defines the problem of UVI clearly.
2. Thorough experiments - comparison with baselines and across datasets for generalization.
3. Qualitative error analysis was done by experts | 1. The proposed approach is shown to have higher accuracy/f1, but no statistical significance is provided.
2. Findings from comparison across subdomains (Lines 475-492) are provided in the appendix. This looks like an important result since the authors mentioned significant performance variations across semantic categories. Also in the appendix data is provided only for the 2020AB dataset. | 1. Lines 067-068: How do they know about this number 300,000 related to UVI (unless of course, they work at NIH)? Providing a citation will significantly strengthen the claim and the motivation for the work. Also, Table 1 shows the statistics of the 5 datasets used with the Insertion sets which likely average to 300,000, however, can this be generalized beyond these 5?
2. Lines 243-244: Was there any rationale for choosing the 2020AB dataset for training?
3. Lines 248-252: The authors mention significant performance variability across concept categories. How different are these semantic categories across the five datasets?
4. Lines 390-393: How is the higher chance of being the most appropriate concept for the new atom determined? Any citation for this?
5. Lines 475-492: Can these findings be generalized for the other datasets? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 3: Could reproduce the results with some difficulty. The settings of parameters are underspecified or subjectively determined; the training/evaluation data are not widely available. | No | 3: Pretty sure, but there's a chance I missed something. Although I have a good feel for this area in general, I did not carefully check the paper's details, e.g., the math, experimental design, or novelty. |
z9CqYTwOiO | {'value': 'Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion'} | BpSd8T2hop | z9CqYTwOiO | EMNLP/2023/Conference/Submission3586/Reviewer_SKdh |
This paper deals with a practical problem, namely the regular updates of the UMLS, which involve the integration of new terminological vocabularies. The paper proposes a methodology which could be used to speed up the updates, by automatically finding whether a new term is a potential synonym of a term already in the UMLS, or whether it should be attributed to a novel concept not yet present in the UMLS.
The paper proposes a different conceptualization of the problem compared to previous approaches. Rather than simply evaluating the similarity of a new term to existing terms, the paper proposes an approach where each new term is assessed in relation to the entire UMLS, and either a concept is found to which the term can be assigned, or the term is marked as novel and a new concept will have to be created. I am not completely convinced that this different conceptualization can be considered innovative, as it seems to me to be entirely derivable from the original one.
|
Well developed study of the problem of UMLS update, which is framed as a problem similar but not identical to (biomedical) entity linking.
Well designed experimental setup, with one particular UMLS update used as a reference for training/development/testing, and other updates used for further testing of the results. Interesting combination of rule-based and BERT-based methods. Good evaluation.
|
The problem tackled by the authors is extremely specific, and has a very narrow application, although the methods could probably be generalized to similar problems of knowledge base update.
The evaluation metrics are not sufficiently clearly described, but it might be just a question of providing a more formal definition. |
Please provide a more accurate description of the evaluation metrics in Section 4.2; I suggest using formulas.
In particular, the "ranking accuracy", which is a central metric in the paper because it is the one where the most improvements are seen, is very superficially defined. It is not clear if it is a proper ranking metric, as the name suggests, or a measure of accuracy. If it is a ranking metric, please explain in which sense it provides a measure of the quality of the ranking. If it is not, please name it differently.
I also struggled a bit to understand the definition of "New Concept Precision", because it is not immediately obvious what the difference is between "correct new concept predictions" and "true new concept atoms", but I believe I understood it.
In any case, formulas would help understanding and remove some ambiguity.
Figure 2: it is not clear what the "New Concept" input to the Re-Ranker represents.
| 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 5: Positive that my evaluation is correct. I read the paper very carefully and I am very familiar with related work. |
z9CqYTwOiO | {'value': 'Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion'} | Nb8hoZHvrD | z9CqYTwOiO | EMNLP/2023/Conference/Submission3586/Reviewer_wXDg | The authors present a new SOTA model for solving UMLS vocabulary insertion (UVI) and helping expert editors in their tasks. The paper starts by introducing the UVI task and presenting its multiple contributions (language models, entity linking, candidate re-ranking, augmented RBA, NULL-injection and RBA enhancement). Then, they give context about previous contributions to the subject, explaining why most of them treat the problem in a manner very different from real-world usage, along with the pros and cons of each. Finally, they conclude with the results, a deep error analysis, and a study of model generalization over time and subjects. | The paper is well written and presents the subject of study extremely well despite it being a difficult task. The previous works are introduced well, and the authors consistently share their motivations behind this work without denigrating any previous contributions.
The different experiments are interesting and give extraordinary results compared to the selected baselines. The best proposed system gives consistent and reliable performance across a large set of datasets curated from a large time frame (5 versions). The expert analysis gives good hints about the reasons why the system sometimes performs badly, and these point to interesting improvements for the future.
Evaluating both accuracy and ranking accuracy is a very good way of combining standard academic metrics with real-world application metrics, since the tool can be used as an assistance tool for expert editors.
| null | Do you have any idea of the latency (inference time in ms) of your proposed systems? And how does it compare to other systems?
A question about L. 238: have you tried your systems on other languages to know whether the approach is replicable on them? Or do the specificities of other languages make it more complicated to transfer performance? For example, Chinese, Hebrew, French or Turkish?
| 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
z8gM4ZfK8l | {'value': 'Improving Cross-lingual Transfer through Subtree-aware Word Reordering'} | bPH9oGRRXy | z8gM4ZfK8l | EMNLP/2023/Conference/Submission4095/Reviewer_gucq | This paper presents a source-sentence reordering method to narrow the gap between typologically-distant languages in cross-lingual transfer. They learn reordering rules based on Universal Dependencies and apply them at all levels of the syntactic tree. Extensive experiments show the effectiveness of the proposed approach in enhancing cross-lingual transfer, particularly in the context of low-resource languages. | 1. Extensive experiments on different tasks (UD paring, relation classification, and semantic parsing) and settings (zero-shot and few-shot), which show the necessity of reordering.
2. Detailed analyses on different architectures.
| 1. Insufficient comparison with related works.
First, the differences with the most related work (Rasooli and Collins, 2019) should be clearer. Their approach is reported to leverage rich syntactic information, as stated on line 190, which is controversial with the expression “superficial statistics” in the line 192. Second, it is insufficient to solely compare the proposed methodology with that of Rasooli and Collins (2019). It would be prudent to also consider similar work, such as that of Liu et al. (2020a).
2. The effectiveness of these two settings (STANDARD and ENSEMBLE) varies depending on the tasks and languages. As shown in table 5, ENSEMBLE outperforms STANDARD for the majority of languages, with the notable exceptions of Japanese and Italian. A similar phenomenon is observed for Thai and Irish, as indicated in Table 2, warranting further investigation.
| null | 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
z8gM4ZfK8l | {'value': 'Improving Cross-lingual Transfer through Subtree-aware Word Reordering'} | iNf3nsHt5m | z8gM4ZfK8l | EMNLP/2023/Conference/Submission4095/Reviewer_YA2U | The paper presents an interesting approach to reordering for multilingual NLP which, instead of reordering the words of a sentence, reorders the subtrees of its Universal Dependencies parse tree. The proposed method is shown to outperform previous approaches in both zero- and few-shot settings. | [1] The proposed method is simple, effective, and particularly suited to post-hoc interpretability.
[2] The paper considers both few-shot and zero-shot experiments.
[3] The "Related work" section is very informative and clear. Personally, I do not work on machine translation, and I really enjoyed the quality of the overview.
[4] Interestingly, the proposed method is associated with a higher performance gain in the seq2seq setting, highlighting a difficulty of encoder-decoder architectures in dealing with variable word order patterns. | [1] I think that the study could greatly benefit from an objective quantification of the typological / word-order distance between the languages considered. One of the points I found most interesting about this work is that there seems to be an increase in performance for languages that are distant from English (e.g., Japanese, Hindi), and a decrease in performance (sometimes) for languages that are close to English (e.g., Spanish, German, especially with mT5). It would be great to assess this trend more formally, with a metric of typological similarity (e.g., using WALS).
[2] The authors state that their approach is suited for interpretability; however, the way POCs can be interpreted is never truly addressed in the body of the paper. It would have been a very nice follow-up analysis/appendix. | [1] l. 477 What about Irish? There seems to be an advantage of your method there.
[2] ll. 478-479 "No noticeable effect is observed for structurally closer languages." However, there is once again a decrease in performance in Arabic, and a terrible drop in Persian!
[3] ll. 512-513 What do you mean by significant? Did you test it with some statistical test? One option could be the McNemar test on paired nominal data. It is true that the difference is numerically small, but statistical significance (p-values) depends on the variance of the samples. Very small differences can be statistically significant with large sample sizes.
[4] ll. 528-529 How do you explain the drop in performance in Hindi and Telugu? It's in contrast with your previous observation, where more typologically distant languages benefited the most from your approach. | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 5: Could easily reproduce the results. | No | 2: Willing to defend my evaluation, but it is fairly likely that I missed some details, didn't understand some central points, or can't be sure about the novelty of the work. |
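Question [3] above suggests a McNemar test on paired nominal data for judging whether a small accuracy difference is significant. Here is a short illustration with statsmodels, using an invented 2x2 table of per-example correctness for two systems; the counts are placeholders, not results from the paper.

```python
# Illustration of the McNemar test mentioned in question [3] above, applied to
# paired per-example correctness of two systems. The counts are invented.
from statsmodels.stats.contingency_tables import mcnemar

#                  system B correct   system B wrong
table = [[650,                40],    # system A correct
         [15,                295]]    # system A wrong

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(result.statistic, result.pvalue)
```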
z8gM4ZfK8l | {'value': 'Improving Cross-lingual Transfer through Subtree-aware Word Reordering'} | VpCXP2olFf | z8gM4ZfK8l | EMNLP/2023/Conference/Submission4095/Reviewer_W6Rf | This paper presents a new method to improve cross-lingual transfer based on source-side word reordering (operated at the level of dependency subtrees). The authors do so by estimating word order preferences from data, then casting them as constraints to an SMT solver to identify the optimal reordering. The method is experimented on 3 tasks and on typologically diverse languages. Results are ambivalent (usually beneficial but detrimental for some language pairs) but the authors propose a mitigation measure (concatenating original and reordered data) that is successful. Experiments are extended to the few-shot scenario and to comparison among architectures, which yields new insights on the generalizability of the method as well as on properties of existing algorithms. | [Ac-A] The proposed method is thought-enriching and opens new research avenues.
[Ac-B] The authors experiment with a large variety of languages (typologically diverse), tasks, settings (zero-shot, few-shot, different architectures, etc.), which yields more comprehensive and generalizable results.
[Ac-C] Much appreciated the initiative to include a longer discussion on the method's shortcomings in the annex (beyond the "Limitations" paragraph).
[Ac-D] Annexes provide comprehensive experimental details and results, including standard deviations, which is very good methodology. | **[Re-A]** The idea underlying this contribution is quite interesting (using a SMT solver to improve upon prior similar work), but it also seems that the authors missed an important related work (Aufrant et al. 2016, see Missing references), and that if accounting for it they would possibly have made some choices differently, and interpreted some of their results differently.
In particular, Equation (2) shows that they discretize the statistics measured empirically (turning a slight preference into a deterministic hard constraint). Why? Aufrant et al. (2016) have precisely discussed (and implemented accordingly) how it is beneficial to have "*smooth transformations (with mean preference rate objectives and error margins)*", in other words to avoid deterministic word order, because it is a linguistic fact that for some languages the word order preference is not necessarily deterministic for a given label pair. And they also show how deterministic reordering can be detrimental in practice, by losing useful (balanced) information in the source treebank.
Impact:
- [Re-A1] The argument made on line 331 (that ENSEMBLE works better because statistical estimation creates imperfection) appears dubious in light of that prior work. On the contrary: it is because the proposed method overlooks the statistical nature of word order preference, that ENSEMBLE works better. Indeed, when source has mostly prenominal adjectives and the target has a very slight preference for postnominal, then STANDARD contains 100% postnominal, whereas ENSEMBLE contains half prenominal / half postnominal (= much closer to the target)… which is exactly what Aufrant et al. advocated for. This sheds a completely new light on ENSEMBLE, and significantly changes the interpretation of the results.
- [Re-A2] Same for the case of performance decrease in case of close languages: a simple interpretation is that close languages have already similar word order ratios for a given label pair, and the discretized constraints move them further. This is exactly the "French to Italian" case analyzed by Aufrant et al. So these observations may just be an artefact of the choice to discretize the constraints, not an evidence of the applicability of reordering in general depending on the language pair.
- [Re-A3] And for the remark line 506 on MTOP results opposite to Multilingual-TOP ones: since language sets are different, that again may just be an artefact of the fact that target languages on one side or the other have more deterministic or more balanced word order preferences.
The authors acknowledge on line 1039 that there may be an issue with this discretization, but the analysis does not go as far as observing how that impacts (and possibly invalidates) some of their own conclusions and analyses in the paper. Actually, the mention of "statistical validity" (line 1041) looks as if the authors consider that either "nmod < amod" or "amod < nmod" is the appropriate constraint and when measuring P=0.51 it only prevents to identify which one is, NOT that the appropriate one would indeed be "half of each".
**[Re-B]** There is also an issue with the realism of the targeted scenarios. Often when working on low-resourced scenarios, there are irremediable issues that prevent from being fully realistic (e.g. evaluation can only be done when sufficient data exists, so not for actually low-resourced languages). So it is fully OK to assume that such work is done in best-effort mode. But this does not mean discarding the corresponding issues as being irrelevant, but rather to acknowledge them and acknowledge they are not solvable. Plus, care must be taken that the proposed methods would actually be meaningful in a real low-resourced scenario.
More precisely:
- [Re-B1] Regarding the pre-requisite for an UD treebank to estimate the POCs, line 359 accurately observes that UD is available for many languages anyway. However, treebank availability does not mean availability of a treebank with size comparable (hence the same reliability of POC estimation) to the large treebanks used in Table 5 for French, Spanish, or German. So this presumably overestimates a lot the quality of the POCs estimated in actually low-resourced settings. In particular, the treatment made of footnote 6 is questionable: precisely this is a real-world scenario of low-resourced language, so the conclusion "impossible to extract" raises major questions on the applicability of the method to actual low-resourced cases.
- [Re-B2] The scenario for estimating POCs without a treebank also does not seem very convincing. If using annotation projection to produce a treebank in the target language, why only using it for estimating POCs (and then training on a treebank from a different language), rather than directly using the projected trees as training treebank? And same for the other tasks, if there is parallel data to do annotation projection, then isn't it appropriate to project the available TOP & RC annotations through that parallel corpus, instead of resorting to a convoluted approach through another corpus and through data transforms? Or has this issue already been thought of, and were there specific reasons to prefer that approach?
**[Re-C]** Finally there is a number of places where the analysis is too shallow, or sometimes completely missing:
- [Re-C1] For the relation-classification task, lines 524-534 only describe results, without any comment on what can be concluded from those results.
- [Re-C2] The remark line 502 on English vs French overlooks that English and French are indeed related, but have nevertheless a number of important typological differences (pre/postnominal adjectives for instance), so it is unsurprising to observe gains. What is the concept of "typologically distant" used here?
- [Re-C3] Only English is used as the source language, which is clearly not a neutral choice with respect to typology (English being rather atypical among the world's 7000 languages in that regard). Admittedly, these experiments are computationally expensive, so experimenting with many source languages was probably not doable. But the motivation and impact of that choice would have deserved at least a comment.
- [Re-C4] In Table 5, a number of s2s results are so low that it is hard to believe the model is doing anything meaningful (or at least, it does not outperform a crude heuristic such as "dependency to the left neighbour, with the majority-class as relation label"). This raises many questions about how to interpret the score increase, so it would have deserved at least a comment (more than just calling it "subpar"). | [Question A] Lines 75-80: I don't understand the point made here. Isn't "use pre-nominal adjectival modification" the same as "high probability for adjectival modifiers to precede the headword"? What distinction did the authors intend to make here?
[Question B] Line 280 states "assuming that the target language does not have determiners", but it is unclear how this information has been used in the given example (has it?). How is the absence of determiners accounted for in the method? (Side remark related to [Re-A] above: Aufrant et al. also considered the case of absent determiners)
[Question C] Line 300 mentions an approach to estimate POCs without a treebank, but it is not clear where in the paper this is experimented with. For which languages has this method been used, and in which Table are the corresponding results?
[Question D] In the §4.2.1 experiment, what has been done exactly for the Thai data? Because line 390 states "never use the same dataset", but both line 381 and Table 6 mention Thai-PUD. Which one is correct?
[Question E] Line 447 mentions using TAC English for training and Trans-TAC for testing, but Trans-TAC is itself a translation of TAC English, so there may be leakage. Is the Trans-TAC data used solely translated from the **test** split of TAC English? Can you clarify and justify why this is a sound evaluation?
[Question F] For the few-shot scenario (line 518 and Table 4), is the experiment conducted with the models based on XLM or mT5?
[Question G] Line 1025: how does discarding the irrelevant constraints help mitigate conflicts? If those constraints are not applicable to that subtree (because root labels do not match), then they do not take part in the computation of the solver, right? So why would they be the ones preventing the solver from finding an ordering? | 4: Strong: This study provides sufficient support for all of its claims/arguments. | 4: Strong: This paper deepens the understanding of some phenomenon or lowers the barriers to an existing research direction. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings.
z69tlSxAwf | {'value': 'Novel Slot Detection With an Incremental Setting'} | MQwLC7aBqC | z69tlSxAwf | EMNLP/2023/Conference/Submission3931/Reviewer_Mmoy | This paper proposes a task called incremental novel slot detection that continually detects new slots, labels new slots, and retrains the slot detection model for the next round of detection. The authors adapt SNIPS and ATIS, two classical slot-filling datasets, to this setting. They propose to combine contrastive learning and noise-induced adversarial learning for novel-type prediction, and propose Query Enhanced Knowledge Distillation to alleviate the catastrophic forgetting problem during incremental learning. Experimental results show the efficacy of the proposed framework when compared with a novel-type prediction baseline and two continual learning baselines.
2. For slot detection, the paper utilizes learnable query vectors for feature matching and sequence prediction and uses the Hungarian algorithm to optimize the matching against the labeled triples. The same queries can also be used to retrieve representative data from the training set for incremental learning to avoid the catastrophic forgetting problem. Knowledge distillation is also used for fast model adaptation. This framework could potentially be useful and efficient in real dialog systems. | 1. Lacking strong baselines. Incremental learning for dialog systems is a classical topic; many different approaches, such as architecture-based, memory-based, and data-retrieval-based ones, have been proposed in previous work [1] and compared against a strong multi-task learning baseline. Since this paper focuses on incremental learning, it needs more comparison with different baselines.
2. Lacking more complex benchmarks to verify the effectiveness of the method. The current benchmarks are small and toy-like. More realistic dialog datasets such as MultiWOZ, SGD, and the datasets in DialogStudio should be considered. In addition, ChatGPT or GPT-4 results on this benchmark should be included for further comparison. This may not be required, but it is crucial for demonstrating how hard and challenging the new task is.
[1] Madotto, A., Lin, Z., Zhou, Z., et al. Continual Learning in Task-Oriented Dialogue Systems. arXiv preprint arXiv:2012.15504, 2020.
 | 1. Table 2: for the SNIPS dataset, why do the NSD results decrease from 5% to 10% but increase from 10% to 15%?
2. Line 374, Step 2: why are the text tokens replaced with MASK while slot values belonging to (T_p<i) are labeled with O? Should it be T_p>i?
3. Line 190: after a new slot value is detected at the current stage, the novel slots are viewed as in-domain types. Does that mean that only the set of new slot names becomes available, or that all the labeled data for those new slots is available?
| 3: Good: This study provides sufficient support for its major claims/arguments, some minor points may need extra support or details. | 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., it describes incremental work), and it can significantly benefit from another round of revision. However, I won't object to accepting it if my co-reviewers champion it. | 4: Could mostly reproduce the results, but there may be some variation because of sample variance or minor variations in their interpretation of the protocol or method. | No | 4: Quite sure. I tried to check the important points carefully. It's unlikely, though conceivable, that I missed something that should affect my ratings. |
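Below is a minimal, hedged sketch of how a dataset like this one can be loaded and inspected with the Hugging Face `datasets` library. The repository id and the `train` split name are placeholders and assumptions, not values taken from this page; substitute the actual dataset path and split shown on the Hub.

```python
from datasets import load_dataset

# NOTE: "<user>/<dataset-name>" is a placeholder repository id; replace it with
# the actual path of this dataset on the Hub. The "train" split name is also an
# assumption -- check the dataset page for the real split names.
ds = load_dataset("<user>/<dataset-name>", split="train")

print(ds.column_names)  # list the review fields available in each record
print(ds[0])            # first record as a dict mapping field names to values
```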