paper_id (string, 10-19 chars) | venue (15 classes) | focused_review (string, 7-9.67k chars) | point (string, 55-634 chars)
---|---|---|---
NIPS_2020_595 | NIPS_2020 | - Why should the real data example follow a Laplacian-structured graphical model? This is currently not a convincing example, because it is not clear that the framework is suited to tackle this data set. The final sentence in section 4.2 is not sufficient. - It would really make the paper great to provide some more discussion and insight into why the l_1 norm causes arbitrarily dense graphs. - I found section 3.2 hard to follow. Some separation between steps would be helpful. - The broader impact section is somewhat weak and could be expanded. Is this framework actually trustworthy enough to base health care decisions on it? | - It would really make the paper great to provide some more discussion and insight into why the l_1 norm causes arbitrarily dense graphs. |
NIPS_2018_430 | NIPS_2018 | - The authors' approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whether the method could lead to progress in the direction of cleaner machine learning methods for standard machine learning tasks (e.g. MNIST). Questions: - How were the time limits in the inner and outer problem chosen? Did larger timeouts lead to better solutions? - It would be helpful to have an algorithmic writeup of the solution of the pricing problem. - SVM often gave good results on the datasets. Did you use a standard SVM that produced a linear classifier or a Kernel method? If the former is true, this would mean that the machine learning tasks were rather easy and it would be necessary to see results on more complicated problems where no good linear separator exists. Conclusion: I very much like the paper and strongly recommend its publication. The authors propose a theoretically well grounded approach to supervised classifier learning. While the number of problems that one can attack with the method is not so large, the theoretical (problem formulation) and practical (Dantzig-Wolfe solver) contribution can possibly serve as a starting point for further progress in this area of machine learning. | - How were the time limits in the inner and outer problem chosen? Did larger timeouts lead to better solutions? |
ICLR_2022_1276 | ICLR_2022 | 1 The paper is not clear enough. The formulation is a little complicated, although the authors describe the well-known methods. On page 5, the authors use the term tf-head before defining them; It would be better if the authors describe the two experimental tasks in detail;
2 This paper could do more practical experiments to show the usage of the bound or the finding of sub-linear scaling law.
3 The Related Work subsection has discussed many previous studies, but the paper does not compare or discuss previous methods in the theoretical analysis or in the empirical comparison.
Typos: " (for eg, linear functions"-> e.g. linear functions, page 5 | 2 This paper could do more practical experiments to show the usage of the bound or the finding of sub-linear scaling law. |
sl4hOq9wm9 | ICLR_2025 | 1. As shown in Figure 3, knowledge can be effectively represented and understood through either natural language or parameters. So, which kind of knowledge should be integrated through parameters? More explorations and investigations are recommended here.
2. The method demands additional pre-training or post-training costs. I suggest incorporating an additional baseline method where a copy of the LLM is adopted to represent the knowledge. The baseline will also demonstrate the contribution of knowledge encoding phrase.
3. Will the In-Parameter Knowledge Injection method generalize to LLMs with a larger scale (e.g., Llama 3.1 70B Instruct, Llama 3.2 11B instruct, etc.)? It’s unclear about the exact contribution of the method, since the model scale is relatively small. | 2. The method demands additional pre-training or post-training costs. I suggest incorporating an additional baseline method where a copy of the LLM is adopted to represent the knowledge. The baseline will also demonstrate the contribution of knowledge encoding phrase. |
NIPS_2022_2770 | NIPS_2022 | 1.) Highly Incremental work. Very closely related to ChebNet and related method. 2.) Homophily definition is not up to date and experiments are not adequate. | 1.) Highly Incremental work. Very closely related to ChebNet and related method. |
ACL_2017_557_review | ACL_2017 | - The approach is incremental and seems like just a combination of existing methods. - The improvements on the performance (1.2 percent points on dev) are relatively small, and no significance test results are provided.
- General Discussion: - Major comments: - The model employed a recent parser and glove word embeddings. How did they affect the relation extraction performance?
- In prediction, how did the authors deal with illegal predictions?
- Minor comments: - Local optimization is not completely "local". It "considers structural correspondences between incremental decisions," so this explanation in the introduction is misleading.
- Points in Figures 6 and 7 should be connected with straight lines, not curves.
- How are entities represented in "-segment"?
- Some citations are incomplete. Kingma et al. (2014) is accepted to ICLR, and Li et al. (2014) misses pages. | -General Discussion:- Major comments:- The model employed a recent parser and glove word embeddings. How did they affect the relation extraction performance? |
Q00XEQxA45 | ICLR_2025 | 1. Weak Motivation: The paper’s focus on JPEG compression as the main adversarial scenario for steganography is narrow. JPEG is not the sole or the most challenging attack vector in steganography, limiting the practical significance of this work.
2. The claim of "joint optimization of image compression and steganography" is not convincingly supported, as the paper mainly demonstrates standard compression techniques applied to a latent code without introducing novel optimization strategies. Integrating a GAN-based discriminator to improve image quality is not a new concept and has been widely used in image steganography. This reduces the paper's innovative contribution.
3. Confusing Experimental Comparisons: The experiments are difficult to follow, with unclear captions and a lack of structure in presenting results.
4. Lack of Diverse Baseline and Attack Comparisons: The evaluation would be more robust with additional baseline methods and attack scenarios beyond JPEG compression. Including comparisons with a broader range of steganography techniques and different attack types (e.g., scaling, noise addition) would provide a better understanding of the model’s resilience and comparative performance.
5. Parameter Choices and Absence of Ablation Study: The model relies on multiple parameters (λ1 to λ5) without any ablation study to show their individual effects. A detailed ablation study would strengthen the paper by showing the impact of each parameter and justifying their choices.
6. The security claims would be stronger with evaluations using neural network-based steganalysis tools, as these are increasingly relevant in steganography. Including such tests would provide a more robust validation of the method's security. | 3. Confusing Experimental Comparisons: The experiments are difficult to follow, with unclear captions and a lack of structure in presenting results. |
NIPS_2019_463 | NIPS_2019 | 1. The central contribution of modeling weight evolution using ODEs hinges on the mentioned problem of neural ODEs exhibiting inaccuracy while recomputing activations. It appears a previous paper first reported this issue. The reviewer is not convinced about this problem. The current paper doesn't provide a convincing analytical argument or empirical evidence about this issue. 2. Leaving aside the claimed weakness of neuralODE, the idea of modeling weight evolution as ODE is itself very intellectually interesting and worthy of pursuit. But the empirical improvement reported in Table 1 over AlexNet, ResNet-4 and ResNet-10 is <= 1.75 % for both configurations. The improvement of decoupling weight evolution is in fact even small and not consistent - the improvement in ResNet for configuration 2 is smaller than keeping the evolution of parameters and activations aligned. The improvement for ablation study over neuralODE is also minimal. So, the empirical case for the proposed approach is not convincing. 3. The derivation of optimality conditions for the coupled formulation is interesting because of connections to a machine learning application (backpropagation) but a pretty standard textbook derivation from dynamical systems / controls point of view. | 2. Leaving aside the claimed weakness of neuralODE, the idea of modeling weight evolution as ODE is itself very intellectually interesting and worthy of pursuit. But the empirical improvement reported in Table 1 over AlexNet, ResNet-4 and ResNet-10 is <= 1.75 % for both configurations. The improvement of decoupling weight evolution is in fact even small and not consistent - the improvement in ResNet for configuration 2 is smaller than keeping the evolution of parameters and activations aligned. The improvement for ablation study over neuralODE is also minimal. So, the empirical case for the proposed approach is not convincing. |
NIPS_2021_1907 | NIPS_2021 | There is little improvement empirically. Furthermore, it is unclear if the gains in this paper are due solely to the confidence widths or if the design of the algorithm is important too. For the empirical study, it is unclear how the other experiments would perform if they had access to the same confidence widths presented in this work. This may make the algorithmic comparison fairer since the differences in performance would be solely due to the sampling procedures. Also, (and I am torn on this since the setup is nice and clear) it is worth noting that the authors are most of the way through page 5 before any results are presented.
Other comments and questions: - Does theorem 1 hold for an adaptive sequence of x_n’s or a fixed sequence? The theorem just seems to specify a set of (x,y)’s that have been collected. Ie, is this a truly anytime result or for a fixed sequence? In the case of a linear kernel, the gap in the confidence widths between an anytime and fixed confidence bound is O(\sqrt(d)) which behaves like O(sqrt(\gamma_n)) in that setting. I guess that the algorithm is using these as an adaptive sequence which is maybe okay from a Bayesian perspective. - Same question for Thm 2 - For the result in remark 2, do other works get the same factor of d since log(N^d) = dlog(N)? This work is tighter in terms of \sqrt(\gamma) but is the d dependence the same? - Why is MVR the right sampling objective? - Regarding the statement in Section 6 about simple and cumulative regret bounds, it is somewhat expected that the cumulative regret is linear if you do this well on simple regret as your objective is largely one of exploration. Take for example the SE kernel as the variance \sigma -> 0. In this setting, we recover standard multiarmed bandits where http://sbubeck.com/ALT09_BMS.pdf for instance show that there cannot be an algorithm that is simultaneously optimal in both simple and cumulative regret.
Minor comments: - Make sure that the colors chosen for the plots are colorblind friendly. There are a variety of palettes in python for this. - Some of the axes in the plots in the main body and especially Appendix G are hard to read.
The authors do a good job discussing the limitations of their work, though more consideration should be given to potential negative societal impacts than simply saying “our work is theoretical, therefore we can do no wrong.” | - Regarding the statement in Section 6 about simple and cumulative regret bounds, it is somewhat expected that the cumulative regret is linear if you do this well on simple regret as your objective is largely one of exploration. Take for example the SE kernel as the variance \sigma -> 0. In this setting, we recover standard multiarmed bandits where http://sbubeck.com/ALT09_BMS.pdf for instance show that there cannot be an algorithm that is simultaneously optimal in both simple and cumulative regret. Minor comments: |
NIPS_2022_2677 | NIPS_2022 | Weakness: despite the rigorous mathematical derivation, I am not sure how the presented PAC analysis result helps the community. One can directly apply the sample complexity analysis on the adversarial loss (say via Rademacher complexity). Does the presented result provide more insights or a better rate?
Weakness: the paper doesn't actually provide an executable algorithm, but only existence results, i.e., the existence of the measure mu and the existence of a sample compression scheme. Whether the rate results pair with a computationally feasible algorithm is unclear, and there is no simulation to demonstrate empirical performance/comparison. 4 Weakness: in conclusion, the paper neither provides a theoretical information limit result nor an executable adversarial training algorithm. | 4 Weakness: in conclusion, the paper neither provides a theoretical information limit result nor an executable adversarial training algorithm. |
ICLR_2022_1992 | ICLR_2022 | Weakness
The authors revealed their identity (affiliation) in the code snippets of the appendix. It might violate the double-blind rule of ICLR.
The novelty of this paper is limited. Packing is not a new idea and it has been widely used in the official tensor2tensor library and achieved good results: https://github.com/tensorflow/tensor2tensor/blob/3f12173b19c1bad2a7c37eb390f3ad46baee0c19/tensor2tensor/data_generators/ops/pack_sequences_ops.cc. So this can be a useful trick but the contribution might be not significant enough to publish at ICLR.
The scenario discussed in this paper is too restricted. For example, 1) it only discusses the Wikipedia dataset (which is where the 50% number comes from), but there are a lot more datasets; 2) it only works for BERT training, but there are quite a few other important tasks, such as language modeling (GPT-3). All the numbers reported in this paper are based on this setting, making its generalization capability questionable.
The experiment section only shows the training loss of pretraining, but never talks about the downstream fine-tuning. Then how do you conclude that the performance is barely affected? After all, the accuracies of downstream applications are the final metric.
The paper is a bit hard to read for the following reasons: 1) The contents are not self-contained in the main text — quite a few important contents are deferred to the appendix, so that one cannot easily follow the ideas in the main text; 2) The paragraphs are usually lengthy and verbose — they can be as long as 30 lines! 3) There are quite a few typos, e.g. "For achieve this", "<CLS>" etc. | 1) The contents are not self-contained in the main text — quite a few important contents are deferred to the appendix, so that one cannot easily follow the ideas in the main text; |
3nwlXtQESj | ICLR_2025 | 1. **Unsubstantiated Claim on Force Field Mimicking**: A major claim of the paper is that PCMP mimics the molecular mechanics (MM) force field, yet no benchmarks or empirical results using MM force field datasets (e.g., MD17, MD22) are provided to substantiate this claim. Without benchmarking against MM force fields, the claim appears unsupported, and this oversight detracts from the paper's validity in this area.
2. **Computational Complexity**: The inclusion of path complexes, especially higher-order ones, is likely computationally demanding. However, the authors do not provide insights into potential trade-offs, such as runtime or scalability on larger datasets.
3. **Lack of Equivariance in Model Design**: Given the model’s target application in molecular property prediction, its architecture does not incorporate rotational or translational equivariance, which would enhance its ability to handle spatial molecular data more robustly. Adding equivariant layers could make the model better suited to capturing geometry-sensitive properties.
4. **Interpretability**: The model’s complex hierarchical structure might hinder interpretability, as it’s not clear which paths contribute most significantly to predictions or whether high-order interactions have consistent relevance across datasets. A comprehensive interpretability study is recommended. | 1. **Unsubstantiated Claim on Force Field Mimicking**: A major claim of the paper is that PCMP mimics the molecular mechanics (MM) force field, yet no benchmarks or empirical results using MM force field datasets (e.g., MD17, MD22) are provided to substantiate this claim. Without benchmarking against MM force fields, the claim appears unsupported, and this oversight detracts from the paper's validity in this area. |
NIPS_2021_1941 | NIPS_2021 | - Task-related head looks the same as the cross-attention structure in Transformer Decoder[61], and a similar structure is mentioned in Cait[57], so this contribution is optional. - Spatial-filling curve does not seem to improve the performance, and the SIS mode stated in the paper is equivalent to the current token embedding in ViT. - EAT increases by 0.5 points on average compared to the baseline in Table 1, but the reported highest Top-1 is 82.4 that is lower than current methods, e.g., SwinTransformer[36] and Cait[57]. Authors should compare with these methods or explain why it is not being compared. - Why do authors claim that "... so that only 1D operators are required" in Figure 3? Adding 2D operations, such as 2D-convolution in CPVT[16], CeiT[70], and PVTv2[a], has been shown to improve the performance of Transformer in image classification tasks. - Grammatical errors and typo issues lower the paper quality that can be seen in the following section.
[a] Wang W, Xie E, Li X, et al. PVTv2:
Moreover, Here are more suggestions.
- Some quantitative results in Appendix E should be presented in Table 1 - Why does DCN dramatically damage performance in Table 4? - Figure 6 should be rearranged - How to integrate CV and NLP features is unclear in multi-modal experiments, and more details should be displayed. - Some grammar/typo issues. - L12, L51: multi-modal - L102: their ways - L116: the definition of "h" - L161: the weight - L174: concepts - L314: remove the comma after "format" | - EAT increases by 0.5 points on average compared to the baseline in Table 1, but the reported highest Top-1 is 82.4 that is lower than current methods, e.g., SwinTransformer[36] and Cait[57]. Authors should compare with these methods or explain why it is not being compared. |
NIPS_2021_1343 | NIPS_2021 | Weakness - I am not convinced that transformer free of locality-bias is indeed the best option. In fact, due to limited speed of information propagation, the neighborhood agents should naturally have more impacts on each other, compared to far away nodes. I hope the authors to explain more why transformer’s no-locality won’t make a concern here. - Due to the above, I feel graph networks seem to capture this better than the too-free transformer, and their lack of global context/ the “over-squashing” might be mitigated by adding non-local blocks (e.g., check “Non-Local Graph Neural Networks” or several other works proposing “global attention” for GNNs). - The authors also claimed “traditional GNNs” cannot handle direction-feature coupling: that is not true. See a latest work “MagNet: A Neural Network for Directed Graphs” and I am sure there were more prior arts. Authors are asked to consider whether those directional GNNs can possibly suit their task well too. - Transform is introduced as a centralized agent. Its computational overhead can become formidable when the network gets larger. Authors shall discuss how they prepare to address the scalability bottleneck. | - The authors also claimed “traditional GNNs” cannot handle direction-feature coupling: that is not true. See a latest work “MagNet: A Neural Network for Directed Graphs” and I am sure there were more prior arts. Authors are asked to consider whether those directional GNNs can possibly suit their task well too. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | + Non-parametric emission distributions add flexibility to the general HMM framework and reduce bias due to wrong modeling assumptions. Progress in this area should have theoretical and practical impact. |
ICLR_2023_752 | ICLR_2023 | The experiment tasks, while common in this type of paper, are rather simple. The locomotion problems are not complex enough to show improvement, and for the robotic hand experiments, the optmimal length L of the latent trajectory segments seems rather small (3). This makes it a bit unclear how much the learned latent representation helps with planning compared to just 1) reducing the time discretization of the TT by a factor of 1/L (i.e. executing the same action for L=3 steps in a row), and 2) manually discretizing the action space (e.g. clustering it using k-means), although for a fair comparison one would need to sample from a learned conditional sequence prior (or ignore it for both) 3) both of these in combination. I do not think this was covered by the existing baselines? Would 1) be feasible to add?
It would have been interesting with some introspection and qualitative examples of the learned latent action representation (trajectory segments). This approach should be more suitable for tasks that have a natural discrete structure. It would have been interesting to see how well it could recover that. | 1) be feasible to add? It would have been interesting with some introspection and qualitative examples of the learned latent action representation (trajectory segments). This approach should be more suitable for tasks that have a natural discrete structure. It would have been interesting to see how well it could recover that. |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to, such as "Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019". - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention that "resulting in a correspondence of our HPFN to an even deeper ConAC", it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn "flexible and higher-order local and global intercorrelations"? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well, but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs, including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative Figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful.
- More experimental details such as neural networks and hyperparameters used should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact" -> "Despite being compact" 2. line 56: "We refer multiway arrays" -> "We refer to multiway arrays" 3. line 158: "HPFN to a even deeper ConAC" -> "HPFN to an even deeper ConAC" 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on... ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization" which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models and experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - There are a few baselines that could also be worth comparing to, such as "Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019". - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention that "resulting in a correspondence of our HPFN to an even deeper ConAC", it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn "flexible and higher-order local and global intercorrelations"? |
NIPS_2017_320 | NIPS_2017 | #ERROR! | - Related to the point above, I do believe it is important to point out that interpretability of attention weights is problematic as the attentive RTE and MT (and maybe the summarization model too?) are not solely relying on the context vector that is obtained from the attention mechanism. Hence, any conclusions drawn from looking at the attention weights should be taken with a big grain of salt. |
B3rTZovgaA | EMNLP_2023 | - I'm not sure about the novelty of the task characteristics. The statistics shown in Section 4.3 is informative, but I want to see the comparison with the existing language refinement benchmarks, including, e.g., JFLEG [Napoles+, 17]. I'm also curious about how the refinement is context-dependent, given that paragraph-level editing would be one important aspect of this task, and the paragraph length is somewhat short (1--2 sentences).
- Evaluation with GPT-4 was used without validating its appropriateness. At a minimum, shouldn't you show that source and target sentences have an appropriate difference in GPT-4 ratings?
- Why don't you use reference-based metrics such as GLUE? If you have some concerns about them, you should have some discussion or evaluation of the metrics on this task. | - Evaluation with GPT-4 was used without validating its appropriateness. At a minimum, shouldn't you show that source and target sentences have an appropriate difference in GPT-4 ratings? |
bePaRx0otZ | ICLR_2025 | 1. Writing can be further improved, especially the experiment section. The difference between URI and GR methods w/ URI index is unclear in Table 1. The analysis of token consistency is hard to understand and it will be better to demonstrate their differences to original URI through notations.
2. Insufficient analysis and comparison on other generative methods (e.g., ASI [1] and GenRet [2] ) that learns both retrieval models and indexing jointly.
3. The assumption of Theorem 1 seems very strong and there lacks empirical analysis on real-world dataset to verify the rationality.
4. The comparison in the experiments is unfair. The candidate size of URI is larger than that of other GR methods, which require the mapping between the token representation and the item to be bijective, while URI allows each leaf node to contain more than one item. Besides, URI is equipped with an additional ranker.
[1] Yang T, Song M, Zhang Z, et al. Auto Search Indexer for End-to-End Document Retrieval[C]//Findings of the Association for Computational Linguistics: EMNLP 2023. 2023: 6955-6970.
[2] Sun W, Yan L, Chen Z, et al. Learning to tokenize for generative retrieval[J]. Advances in Neural Information Processing Systems, 2024, 36. | 2. Insufficient analysis and comparison on other generative methods (e.g., ASI [1] and GenRet [2] ) that learns both retrieval models and indexing jointly. |
NIPS_2020_1433 | NIPS_2020 | 1. No theoretical insights about the speed-up M-HMC can achieve are provided. 2. It seems when the number of discrete variables becomes large, M-HMC will have to introduce much more continuous variables than DHMC. More continuous variables may make M-HMC slower in computation. Is it possible to provide a plot for both the actual computation time and sample size as a function of total number of discrete variables, for both DHMC and M-HMC? 3. Figure 3 shows that the performance of M-HMC is very dependent of the T choices. Any suggestions for the fix? 4. It is not clear at all whether the discrete part, after transforming it to be continuous variables, needs to use HMC. Neither is it clear why use HMC in the discrete part improves performance, because there is no gradient for the discrete part. Would a random walk type implementation enough for the discrete part? | 4. It is not clear at all whether the discrete part, after transforming it to be continuous variables, needs to use HMC. Neither is it clear why use HMC in the discrete part improves performance, because there is no gradient for the discrete part. Would a random walk type implementation enough for the discrete part? |
ARR_2022_314_review | ARR_2022 | 1. Although the work is important and detailed, from the novelty perspective, it is an extension of norm-based and rollout aggregation methods to another set of residual connections and norm layer in the encoder block. Not a strong weakness, as the work makes a detailed qualitative and quantitative analysis, roles of each component, which is a novelty in its own right.
2. The impact of the work would be more strengthened with the proposed approach's (local and global) applicability to tasks other than classification like question answering, textual similarity, etc. ( Like in the previous work, Kobayashi et al. (2020))
1. For equations 12 and 13, authors assume equal contribution from the residual connection and multi-head attention. However, in previous work by Kobayashi et al. (2021), it is observed and revealed that residual connections have a huge impact compared to mixing (attention). This assumption seems to be the opposite of the observations made previously. What exactly is the reason for that, for simplicity (like assumptions made by Abnar and Zuidema (2020))?
2. At the beginning of the paper, including the abstract and list of contributions, the claim about the components involved is slightly inconsistent with the rest. For instance, the 8th line in abstract is "incorporates all components", line 73 also says the "whole encoder", but on further reading, FFN (Feed forward layers) is omitted from the framework. This needs to be framed (rephrased) better in the beginning to provide a clearer picture.
3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (hard) problem. It improves the readability and gives a clearer overall picture to the reader.
4. Will the code be made publicly available with an inference script? It's better to state it in the submission, as it helps in making an accurate judgement that the code will be useful for further research. | 1. Although the work is important and detailed, from the novelty perspective, it is an extension of norm-based and rollout aggregation methods to another set of residual connections and norm layer in the encoder block. Not a strong weakness, as the work makes a detailed qualitative and quantitative analysis, roles of each component, which is a novelty in its own right. |
NIPS_2021_2074 | NIPS_2021 | The main concerns are the following: 1. The explanation of using soft assignment instead of hard assignment is discussed in the description of the methodology, but it is unclear to me whether the authors have reported an experiment allowing them to prove their conjecture. 2. The authors should discuss the fact that CP CROCS seem to be underperforming the other clustering approaches based on the age attribute. This result is quite strange and an outlier, but it would be interesting to have a discussion on a possible explanation of this phenomenon. 3. Could the authors discuss the main differences between the two datasets, and especially, as depicted in Figure 4, why the separability of the classes is so much lower on PTB-XL compared to Chapman? 4. In terms of the quality of the information retrieval, could the authors comment on why evaluating soft assignment (and performance if only one attribute is correct) is clinically significant? It seems to me quite important that pathology should be well retrieved; extracting an example of AF when asking for normal rhythm would have a higher impact than a mistake on age or sex. 5. Can the authors discuss whether age and sex are good attributes for ECG signals? Heart rate variability is affected by age, but I am not aware of morphological changes in the ECG signals due to age (or even sex)? Figure 3 seems to indicate that the representation of the ECG signal is not that influenced by either age or sex, although there seems to be a trend or evolution with age, and separation of sex on the proposed full framework. 6. The order in Section 2 (related work) and Section 3 (background) should be consistent: clustering, then IR. 7. The acronym HER is not introduced properly, that is at the first appearance of the term | 7. The acronym HER is not introduced properly, that is at the first appearance of the term |
ARR_2022_138_review | ARR_2022 | 1. The paper needs further polish to make it easier for readers to follow.
2. In Table 6, the improvement of method is marginal and unstable.
3. The motivation of this new task is not strong enough to convince the reader. Is it a necessary intermediate task for document summarization and text mining (as stated in L261)?
4. It directly reverses the table-to-text setting and then conducts the experiments on four existing table-to-text datasets. More analysis of the involved datasets is required, such as the number of output tables and the size/schema of the output tables.
Questions: 1. L68-L70, Is there any further explanation of the statement "the schemas for extraction are implicitly included in the training data"?
2. How to generate the table content that is not shown in the text?
3. Why not merge Table 1 and Table 2? They are both about the statistics of datasets used in experiments.
4. What’s the relation between text-to-table task and vanilla summarization task?
5. How to determine the number of output table(s)? Appendix C doesn't provide an answer to this.
6. What’s the version of BART in Table 3 and Table 4?
Suggestions: 1. The font size of Figure 2 and Figure 3 is too small.
Typos: 1. L237: Text-to-table -> text-to-table 2. L432: "No baseline can be applies to all four datasets" is confusing.
3. Table 3: lOur method -> Our method | 2. How to generate the table content that is not shown in the text? |
ICLR_2022_1895 | ICLR_2022 | 1.It is obvious that this paper applies CVAE to the OOD data detection. The question is why to select CVAE as the efficient model to generate the OOD data. What is the motivation? 2.This paper claims that we can already produce comparable results to existing SOTA contrastive learning models but much more efficient. Why? The detailed explanation is necessary. 3.The contribution is mainly the metrics. | 2.This paper claims that we can already produce comparable results to existing SOTA contrastive learning models but much more efficient. Why? The detailed explanation is necessary. |
ARR_2022_103_review | ARR_2022 | 1. The compositionality and transitivity tests require labeled data to train the probes, which defeats some of the utility of the method.
2. Compositionality typically means something like “the meaning of a full expression can be computed recursively as a function of its parts, where the structure of the recursion is modulated by the syntactic structure.” I don’t think that’s exactly what your “compositionality” test is formalizing, though it is somewhat related. I would suggest changing the name to something more precise; perhaps “faithfulness”, since what you’re really testing is the correspondence between the relations the model encodes and the relations that exist in the output? Or maybe I’m missing something, and you can elaborate on how this test reflects a more general definition of compositionality.
3. There is an inconsistency in the presentation of the systematicity test. In Equation 4, the test checks whether Psrc => Pflw. Later on, at line 340: this is condition is written as being bidirectional: Psrc ⇔ Pflw. Which one of these is right?
4. It's not obvious how humans would do under this kind of linguistic evaluation. It would be interesting to do human evaluations of this method. In other words, use human annotators in the role of the model, and see how well they do compared to neural models as a baseline.
- What is the training data for your probes in the compositionality and transitivity experiments?
- For the transitivity experiments, does 50% constitute a random baseline? If so, do you have any potential explanation why models are so substantially undershooting random guessing?
- The notation of x1 and x2 vs. x = (xa, xb) in the compositionality section is a bit confusing, especially when x1 and x2 are already themselves pairs. I would suggest changing this. Maybe you could have x and x’, or rewrite xa as x_{11} and xb as x_{12}. It was unclear whether x_1 was a pair or single sentence on my first read.
- Nit: In Def 2.2, is v a constant across all R, or can we pick a different v for each R? This is only a technical concern when the number of examples is countably infinite.
- Line 405: This last sentence seems a bit circular. You’ve defined systematicity such that the geometric property is satisfied if and only if f is systematic. If this is all you mean, I think the sentence could be clarified, but currently, it seems like it may be making a stronger claim than this. - Line 530: Use \citep not \citet | - For the transitivity experiments, does 50% constitute a random baseline? If so, do you have any potential explanation why models are so substantially undershooting random guessing? |
ICLR_2021_1960 | ICLR_2021 | Fig. 4 is not clear. For each value of decile on the x-axis, the error rate is computed on that 10% of data. And the error rate is shown to be higher for larger values of VoG. However, the maximum error rate even for the maximum value of VoG is 20-40%. Therefore, there are clearly upto 80% of examples with high VoG that are still correctly classified. Authors don't explain why this is the case.
Fig. 7: It is not clear why the error rate associated with the difficult examples that have low VoG early in training is low. Shouldn't the errors associated with difficult examples remain the same throughout training i.e., the network never learns to classify these examples correctly. Why would the error rate degrade? Or are the authors reporting percentage of total errors on the y-axis. In which case while the asbolute number of errors associated with difficult examples remains the same, their relative ratio as compared to the overall number of errors increases as training proceeds. Please clarify.
Fig. 8: Out of distribution examples e.g., deliberately shuffled labels are shown to be associated with slightly higher VoG score values. Can the authors include a significance test to show this is a material difference. There is a high variance in VoG values for shuffled examples. Why would these examples exhibit lower VoG? Can the authors provide some intuition behind this.
Conclusion: Overall, my decision is to accept the paper because this is a powerful proposal that deserves to be investigated further. However, I have some reservations about the empirical results as described above. If authors can explain/clarify these aspects it would be a much stronger submission.
In addition to those listed above, the authors should address the questions below in future work:
How come not all or even a majority of examples that are misclassified by a network have high VoG?
Will results hold across various types of models? What is the relationship of VoG with capacity of models?
Will results hold across domains? | 7: It is not clear why the error rate associated with the difficult examples that have low VoG early in training is low. Shouldn't the errors associated with difficult examples remain the same throughout training i.e., the network never learns to classify these examples correctly. Why would the error rate degrade? Or are the authors reporting percentage of total errors on the y-axis. In which case while the asbolute number of errors associated with difficult examples remains the same, their relative ratio as compared to the overall number of errors increases as training proceeds. Please clarify. Fig. |
ICLR_2023_4044 | ICLR_2023 | Weakness: 3. Low-rank matrix/tensor network compression has been well studied for CNN networks. Furthermore, neural architecture search for low-rank CNN compression has also been studied. This work seems to be an incremental work that combines the two known tools. 4. The neural architecture search is utilized to automatically determine the ranks. Besides NAS, what about the current research on searching for the automatic rank in low-rank network compression? That is to say, the authors should present the advantage of NAS for automatic rank selection, including proving that the searched architecture is what we want and that the related NAS method is better than the previous rank selection method. 5. The experimental results illustrate the advantage of the searched architecture. However, it is hard to distinguish whether the efficiency improvement comes from the low-rank compression strategy or from the proposed rank selection via NAS. Furthermore, additional analysis of the search algorithm should be added. | 4. The neural architecture search is utilized to automatically determine the ranks. Besides NAS, what about the current research on searching for the automatic rank in low-rank network compression? That is to say, the authors should present the advantage of NAS for automatic rank selection, including proving that the searched architecture is what we want and that the related NAS method is better than the previous rank selection method. |
NIPS_2022_738 | NIPS_2022 | W1) The paper states that "In order to introduce epipolar constraints into attention-based feature matching while maintaining robustness to camera pose and calibration inaccuracies, we develop a Window-based Epipolar Transformer (WET), which matches reference pixels and source windows near the epipolar lines." It claims that it introduces "a window-based epipolar Transformer (WET) for enhancing patch-to-patch matching between the reference feature and corresponding windows near epipolar lines in source features". To me, taking a window around the epipolar line into account seems like an approximation to estimating the uncertainty region around the epipolar lines caused by inaccuracies in calibration and camera pose and then searching within this region (see [Förstner & Wrobel, Photogrammetric Computer Vision, Springer 2016] for a detailed derivation of how to estimate uncertainties). Is it really valid to claim this part of the proposed approach as novel?
W2) I am not sure how significant the results on the DTU dataset are: a) The difference with respect to the best performing methods is less than 0.1 mm (see Tab. 1). Is the ground truth sufficiently accurate enough that such a small difference is actually noticeable / measurable or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. 4. Does the claim "It can be seen from the table that our proposed modules improve in both accuracy and completeness" really hold? Why not use another dataset for the ablation study, e.g., the training set of Tanks & Temples or ETH3D?
W3) I am not sure what is novel about the "novel geometric consistency loss (Geo Loss)". Looking at Eq. 10, it seems to simply combine a standard reprojection error in an image with a loss on the depth difference. I don't see how Eq. 10 provides a combination of both losses.
W4) While the paper discusses prior work in Sec. 2, there is mostly no mentioning on how the paper under review is related to these existing works. In my opinion, a related work section should explain the relation of prior work to the proposed approach. This is missing.
W5) There are multiple parts in the paper that are unclear to me: a) What is C in line 106? The term does not seem to be introduced. b) How are the hyperparameters in Sec. 4.1 chosen? Is their choice critical? c) Why not include UniMVSNet in Fig. 5, given that UniMVSNet also claims to generate denser point clouds (as does the paper under review)? d) Why use only N=5 images for DTU and not all available ones? e) Why is Eq. 9 a reprojection error? Eq. 9 measures the depth difference as a scalar and no projection into the image is involved. I don't see how any projection is involved in this loss.
Overall, I think this is a solid paper that presents a well-engineered pipeline that represents the current state-of-the-art on a challenging benchmark. While I raised multiple concerns, most of them should be easy to address. E.g., I don't think that removing the novelty claim from W1 would make the paper weaker. The main exception is the ablation study, where I believe that the DTU dataset is too easy to provide meaningful comparisons (the relatively small differences might be explained by randomness in the training process.
The following minor comments did not affect my recommendation:
References are missing for Pytorch and the Adam optimizer.
Post-rebuttal comments
Thank you for the detailed answers. Here are my comments to the last reply:
Q: Relationship to prior work.
Thank you very much, this addresses my concern.
A: Fig. 5 is not used to claim our method achieves the best performance among all the methods in terms of completeness, it actually indicates that our proposed method could help reconstruct complete results while keeping high accuracy (Tab. 1) compared with our baseline network [7] and the most relevant method [3]. In that context, we not only consider the quality of completeness but also the relevance to our method to perform comparison in Fig. 5.
As I understand lines 228-236 in the paper, in particular "The quantitative results of DTU evaluation set are summarized in Tab. 1, where Accuracy and Completeness are a pair of official evaluation metrics. Accuracy is the percentage of generated point clouds matched in the ground truth point clouds, while Completeness measures the opposite. Overall is the mean of Accuracy and Completeness. Compared with the other methods, our proposed method shows its capability for generating denser and more complete point clouds on textureless regions, which is visualized in Fig. 5.", the paper seems to claim that the proposed method generates denser point clouds. Maybe this could be clarified?
A: As a) nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation, b) the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets), c) the final results are the average across 22 test scans, we think that fewer errors could indicate better performance. However, your point about the accuracy of DTU GT is enlightening, and we think it's valuable future work.
This still does not address my concern. My question is whether the ground truth is accurate enough that we can be sure that the small differences between the different components really comes from improvements provided by adding components. In this context, stating that "the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets)" does not answer this question as, even though DTU has the most accurate GT, it might not be accurate enough to measure differences at this level of accuracy (0.05 mm difference). If the GT is not accurate enough to differentiate in the 0.05 mm range, then averaging over different test scans will not really help. That "nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation" does also not address this question. Since the paper claims improvements when using the different components and uses the results to validate the components, I do not think that answering the question whether the ground truth is accurate enough to make these claims in future work is really an option. I think it would be better to run the ablation study on a dataset where improvements can be measured more clearly.
Final rating
I am inclined to keep my original rating ("6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."). I still like the good results on the Tanks & Temples dataset and believe that the proposed approach is technically sound. However, I do not find the authors' rebuttals particularly convincing and thus do not want to increase my rating. In particular, I still have concerns about the ablation study as I am not sure whether the ground truth of the DTU dataset is accurate enough that it makes sense to claim improvements if the difference is 0.05 mm or smaller. Since this only impacts the ablation study, it is also not a reason to decrease my rating. | 5. As I understand lines 228-236 in the paper, in particular "The quantitative results of DTU evaluation set are summarized in Tab. 1, where Accuracy and Completeness are a pair of official evaluation metrics. Accuracy is the percentage of generated point clouds matched in the ground truth point clouds, while Completeness measures the opposite. Overall is the mean of Accuracy and Completeness. Compared with the other methods, our proposed method shows its capability for generating denser and more complete point clouds on textureless regions, which is visualized in Fig. |
ICLR_2021_375 | ICLR_2021 | 1. The biggest problem with this article is that the contribution of the article is insufficient and lacks originality. This article proposes a benchmark for off-policy evaluation and verifies different OPE methods, but this article does not compare with other similar benchmarks to verify whether the benchmark proposed in this article is effective. 2. In the experimental part, this paper verifies different metrics for different OPE methods. However, in Figure 4 and Figure 5, the different methods in the two sets of benchmarks proposed in this article are quite different in different OPE methods. I hope the author can give some comments on the differences between the two sets of evaluation methods. 3. The author uses the value function in formula 1 to estimate the effect of the strategy. I doubt this method. Because of the attenuation factor here, I think it has an impact on the final value calculated by different methods. Because bellman equation is only an estimate of the value of a certain state, not an absolute strategy benefit, I hope the author will give some explanations here. | 1. The biggest problem with this article is that the contribution of the article is insufficient and lacks originality. This article proposes a benchmark for off-policy evaluation and verifies different OPE methods, but this article does not compare with other similar benchmarks to verify whether the benchmark proposed in this article is effective. |
NIPS_2018_567 | NIPS_2018 | (bias against subgroups, uncertainty on certain subgroups), in applications for fair decision making. The paper is clearly structured, well written and very well motivated. Except for minor confusions about some of the math, I could easily follow and enjoyed reading the paper. As far as I know, the framework and particularly the application to fairness is novel. I believe the general idea of incorporating and adjusting to human decision makers as first class citizens of the pipeline is important for the advancement of fairness in machine learning. However, the framework still seems to encompass a rather minimal technical contribution in the sense that both a strong theoretical analysis and exhaustive empirical evaluation are lacking. Moreover, I am concerned about the real world applicability of the approach, as it mostly seems to concern situations with a rather specific (but unknown) behavior of the decision maker, which typically does not transfer across DMs, needs to be known during training. I have trouble thinking of situations where sufficient training data, both ground truth and the DMs predictions, are available simultaneously. While the authors do a good job evaluating various aspects of their method (one question about this in the detailed comments), those are only two rather simplistic synthetic scenarios. Because of the limited technical and experimental contribution, I heavy-heartedly tend to vote for rejection of the submission, even though I am a big fan of the motivation and approach. Detailed Comments - I like the setup description in Section 2.1. It is easy to follow and clearly describes the technical idea of the paper. - I have trouble understanding (the proof of) the Theorem (following line 104). You show that eq (6) and eq (7) are equal for appropriately chosen $\gamma_{defer}$. However, (7) is not the original deferring loss from eq (3). Shouldn't the result be that learning to defer and rejection learning are equivalent if for the (assumed to be) constant DM loss, $\alpha$ happens to be equal to $\gamma_{reject}$? In the theorem it sounds as if they were equivalent independent of the parameter choices for $\gamma_{reject}$ and $\alpha$. The main takeaway, namely that there is a one-to-one correspondence between rejection learning with cost $\gamma_{reject}$ and learning to defer with a DM with constant loss $\alpha$, is still true. Is there a specific reason why the authors decided to present the theorem and proof in this way? - The authors highlight various practical scenarios in which learning to defer is preferable and detail how it is expected to behave. However, this practicability seems to be heavily impaired by the strong assumptions necessary to train such model, i.e., availability of ground truth and DM's decisions for each DM of interest, where each is expected to have their own specific biases/uncertainties/behaviors during training. - What does it mean for the predictions \hat{Y} to follow an (independent?) Bernoulli equation (12) and line 197? How is p chosen, and where does it enter? Could you improve clarity by explicitly stating w.r.t. what the expectations in the first line in (12) are taken (i.e., where does p enter explicitly?) Shouldn't the expectation be over the distribution of \hat{Y} induced by the (training) distribution over X? - In line 210: The impossibility results only hold for (arguably) non-trivial scenarios. 
- When predicting the Charlson Index, why does it make sense to treat age as a sensitive attribute? Isn't age a strong and "fair" indicator in this scenario? Or is this merely for illustration of the method? - In scenario 2 (line 252), does $\alpha_{fair}$ refer to the one in eq (11)? Eq. (11) is the joint objective for learning the model (prediction and deferral) given a fixed DM? That would mean that the autodmated model is encouraged to provide unfair predictions. However, my intuition for this scenario is that the (blackbox) DM provides unfair decisions and the model's task is to correct for it. I understand that the (later fixed) DM is first also trained (semi synthetic approach). Supposedly, unfairness is encouraged only when training DM as a pre-stage to learning the model? I encourage the authors to draw the distinction between first training/simulating the DM (and the corresponding assumptions/parameters) and then training the model (and the corresponding assumptions/parameters) more clearly. - The comparison between the deferring and the rejecting model is not quite fair. The rejecting model receives a fixed cost for rejecting and thus does not need access to DM during training. This already highlights that it cannot exploit specific aspects (e.g., additional information) of the DM. On the other hand, while the deferring model can adaptively pass on those examples to DM, on which the DM performs better, this requires access to DM's predictions during training. Since DMs typically have unique/special characteristics that could vary greatly from one DM to the next, this seems to be a strong impairment for training a deferring model (for each DM individually) in practice? While the adaptivity of learning to defer unsurprisingly constitutes an advantage over rejection learning, it comes at the (potentially large) cost of relying on more data. Hence, instead of simply showing its superiority over rejection learning, one should perhaps evaluate this tradeoff? - Nitpicking: I find "above/below diagonal" (add a thin gray diagonal to the plot) easier to interpret than "above/below 45 degree", which sounds like a local property (e.g., not the case where the red line saturates and has "0 degrees"). - Is the slight trend of the rejecting model on the COMPAS dataset in Figure 4 to defer less on the reliable group a property of the dataset? Since rejection learning is non-adaptive, it is blind to the properties of DM, i.e., one would expect it to defer equally on both groups if there is no bias in the data (greater variance in outcomes for different groups, or class imbalance resulting in higher uncertainty for one group). - In lines 306-307 the authors argue that deferring classifiers have higher overall accuracy at a given minimum subgroup accuracy (MSA). Does that mean that at the same error rate for the subgroup with the largest error rate (minimum accuracy), the error rate on the other subgroups is on average smaller (higher overall accuracy)? This would mean that the differences in error rates between subgroups are larger for the deferring classifier, i.e., less evenly distributed, which would mean that the deferring classifier is less fair? - Please update the references to point to the conference/journal versions of the papers (instead of arxiv versions) where applicable. Typos line 10: learning to defer ca*n* make systems... line 97: first "the" should be removed End of line 5 of the caption of Figure 3: Fig. 3a (instead of Figs. 3a) line 356: This reference seems incomplete? 
| - Is the slight trend of the rejecting model on the COMPAS dataset in Figure 4 to defer less on the reliable group a property of the dataset? Since rejection learning is non-adaptive, it is blind to the properties of DM, i.e., one would expect it to defer equally on both groups if there is no bias in the data (greater variance in outcomes for different groups, or class imbalance resulting in higher uncertainty for one group). |
NIPS_2019_263 | NIPS_2019 | weakness into a strength: Watermarking deep neural networks by backdooring." 27th {USENIX} Security Symposium ({USENIX} Security 18). (2018). Second, since we know that neural networks can contain backdoors, the motivation is a little bit fuzzy. The authors wrote: "...The regulators would mainly check two core criteria before deploying such a system; the predictive accuracy and fairness ... The interpretation method would obviously become an important tool for checking this second criterion. However, suppose a lazy developer finds out that his model contains some bias, and, rather than actually fixing the model to remove the bias, he decides to manipulate the model such that the interpretation can be fooled and hide the bias". In that case, what would the model interpretations look like? Would they be suspicious? Would they make sense? If so, maybe the lazy developer fixed it? What is the motivation for using Passive attacks? Generally speaking, it would make the paper much stronger if the authors would provide experiments to support their motivation. Third, did the authors try to investigate the robustness of such attacks, i.e. to explore how easy it is to remove the attack? For example, if one fine-tuned the attacked model using the original objective with the original training set, would the attack still work? Lastly, there are some spelling mistakes, to name a few: - "Following summarizes the main contribution of this paper:" - "However, it clear that a model cannot..." | - "Following summarizes the main contribution of this paper:" - "However, it clear that a model cannot..."
ICLR_2022_851 | ICLR_2022 | Weakness - If I understand correctly, quite a few factors contribute to the final superior results of MobileViT on three benchmarks, including MobileViT model architecture, multi-scale training, label smoothing, and EMA. The paper claims the model architecture and multi-scale training as the main contributions. Thus, it is important to measure how much label smoothing and EMA improves the final results, and note whether other competing approaches use them during external comparisons. - The paper mainly uses number of the parameters to compare model complexity. This is a theoretical metric. I would also like to compare GFLOPS between MobileViT and other competing models (e.g. MobileNet family models, DEIT, MnasNet, PiT). - Table 1 (b), comparisons with heavy-weight CNN are not quite convincing. More recent backbones should be considered, such as MobileNetV3, NAS-FPN, DET-NAS. - Table 2, PASCAL VOC 2012 is a small benchmark for segmentation. It would be more convincing to include results on more recent bencmarks such as COCO, LVIS. The competing approaches MobileNetv1, v2 and R-101 are not up-to-date. Please consider more recent backbones such as MobileNetV3, NAS-FPN, DET-NAS. - Table 2, is there a strong reason to not compare with transformer-based backbone, such as Multi-scale Vision Transformer and Pyramid Vision Transformer. - Technical exposition: After Eqn (1), “Another convolutional layer is then used to fuse local and global features in the concatenated tensor”. This is a bit confusing. According to Figure 1 (b), the concatenation takes the original feature and another feature computed by local conv and global self-attention building block | - Table 2, is there a strong reason to not compare with transformer-based backbone, such as Multi-scale Vision Transformer and Pyramid Vision Transformer. |
oLLZhbBSOU | ICLR_2024 | The main weakness of this paper is that I am suspicious of the results due to some confusion. Please answer the questions below, and I would be happy to revise my rating upward if they are satisfactory. [Edit: updated rating since questions were addressed]
Aside from the questions, below are a few recommendations for improving clarity and a few grammatical errors I caught.
Clarity recommendations:
- In the intro, add a more intuitive explanation for how RLIF is able to exceed the expert’s performance without access to a task reward signal.
- In the intro, you describe the theoretical analysis performed, but don't state the bottom line. What does the analysis tell us? Grammar:
- “…for selecting when to intervene lead to good performance…”
- “We leave the of DAgger analysis under…” | - In the intro, add a more intuitive explanation for how RLIF is able to exceed the expert’s performance without access to a task reward signal. |
NIPS_2022_33 | NIPS_2022 | The clarity and quality of presentation could be improved, which is a weakness that is hopefully straightforward to fix. Below, I will detail some points that were particularly unclear to me, in the hope that it is useful for the authors as they craft a response. I put explicit questions in "bullet points", with the remaining text as context. I am willing to increase my score if these points can be sufficiently clarified.
(1) Unclear significance of analysis of required rates for $(\epsilon, \lambda)$
As I understand it, the main goal of the analysis is to understand the extra error incurred by using a finite-differencing approach, above and beyond using analytic derivatives: As in Theorem 1, if this error decays fast enough, then one can use empirical derivatives while preserving $O_P(n^{-1/2})$ rates of estimation. This seems like a very clear type of result, but for an application where the analytic derivative is already well known. Meanwhile, the results in the remaining sections do not go "all the way" to a result like Theorem 1, but instead stop at giving some side-by-side comparison of the analytic and empirical derivatives.
It seems that there are two conclusions the reader is supposed to draw from this analysis, in the context of the remainder of the work: First, that $(\epsilon, \lambda)$ can decay slower than we might generically "expect", for problems with special structure. Second, that this special structure is present in the dynamic treatment regime (DTR) functional, but not in the policy optimization functional. This second point is implied to be surprising, because all three problems exhibit some double-robustness structure (see lines 279-280).
I had trouble drawing such conclusions, though I suspect this is mostly an issue of presentation / clarity.
(1a) First, it seems in several places that the reader should have a "baseline" result in mind, to contrast with the results presented here, but this baseline was not entirely clear. A few examples:
- Line 188: "can be a slower rate than implied by the generic analysis of finite differences". What kind of rate would that be, and is there a reference for such results?
- Lines 190-191: "potential improvement...could be on the order of generic rate improvements implied by a central difference scheme". What is this referring to?
- Lines 278-280: "does not appear that rate-double robustness would admit weaker numerical requirements on $\epsilon$". Weaker requirements than what? The "generic analysis" referenced above?
- Are there "conservative" rates on $(\epsilon, \lambda)$ that will always preserve $O_P(n^{-1/2})$ rates of estimation, obtainable via some generic analysis?
The first three questions are about contextualizing the results, but the last one is important for clarifying whether (a) there is always a generic approach to derive rates on $(\epsilon, \lambda)$ that preserve $O_P(n^{-1/2})$ convergence, and this is just an improved analysis for specific estimands that shows slower rates are possible, or (b) it is generally necessary to do a "Theorem 1-style" analysis to verify $O_P(n^{-1/2})$ convergence. The latter conclusion seems much more restrictive than the former.
(1b) Second, conclusions are often made by comparing the form of the empirical and analytical derivatives directly, but these were somewhat difficult to follow:
- Proposition 2 (DTR) "verifies that the requirements...are similar in $\epsilon$ as in the case of a single-timestep", but there is no $O(\epsilon^2)$ term in either Proposition 1 or Corollary 1. Could you clarify what is meant here?
- Propositions 3 & 4 differ not only in an additive term, but also in the usage of perturbed nuisances, which makes them difficult to compare directly (this also applies to Corollary 1, as noted on line 170). Is there a reason why a direct comparison (e.g., isolating only an additive difference) is unnecessary here?
(2) Unclear significance of limitations of Empirical Gateaux derivatives:
As outlined in the introduction, constructive / algorithmic approaches to bias adjustment are very appealing, particularly for problems where small changes require re-derivation of the analytic derivative. This paper strikes an appropriate note of humility in the conclusion, giving limitations of Empirical Gateaux derivatives as a "completely general approach" (lines 329-335), namely the fact that (a) pathwise differentiability and (b) the second-order nature of the remainder must be verified analytically. However, these limitations do seem to undercut the general value of the approach. With that in mind, a few relevant questions:
- Are there existing scenarios where the analytic form of the Gateaux derivative is non-obvious, but where these conditions (pathwise differentiability, second-order remainder) can nonetheless be verified to hold? Or does verifying these conditions always require derivation of the analytic form?
- More broadly, are there scenarios where we can apply this approach (with appropriately conservative rates on $(\epsilon, \lambda)$) and have confidence in achieving $O_P(n^{-1/2})$ rates, without deriving the analytic derivative? E.g., would the constrained MDP with arbitrary linear constraints be such an example?
If there exist some space of problems where the answer to these questions is "yes", then it would go a long way towards mitigating the impact of these limitations.
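For reference, the construction I have in mind when asking about a "generic" recipe is roughly the following (my own shorthand, not the paper's exact statement): for a functional $T(\cdot)$, a smoothed empirical distribution $\tilde{P}$, and a kernel-smoothed point mass $\delta_o^\lambda$ at an observation $o$, the empirical Gateaux derivative is the finite difference
$$\hat{\varphi}(o) \approx \frac{T\big((1-\epsilon)\tilde{P} + \epsilon\,\delta_o^\lambda\big) - T(\tilde{P})}{\epsilon},$$
which is then plugged into a one-step (debiased) estimator $T(\tilde{P}) + \frac{1}{n}\sum_{i=1}^{n}\hat{\varphi}(O_i)$. My question is whether there is a generic, conservative choice of $(\epsilon, \lambda) \to 0$ under which this plug-in-plus-correction retains the $O_P(n^{-1/2})$ rate, or whether the admissible rates must be re-derived case by case as in Theorem 1.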
Comments on soundness: Regarding technical soundness, I have a (hopefully minor) question or two on Lemma 1:
- In the display following line 563, it is claimed that the following holds due to Cauchy-Schwarz. I'm not sure I see the application of CS here: is there another reason why we would expect the cross term $2\,\mathbb{E}[(\tilde{\mu}_\epsilon(X) - \tilde{\mu}(X))(\tilde{\mu}(X) - \mu(X))]$ to be non-positive?
$$\mathbb{E}(\tilde{\mu}_\epsilon(X) - \mu(X))^2 \le \mathbb{E}(\tilde{\mu}_\epsilon(X) - \tilde{\mu}(X))^2 + \mathbb{E}(\tilde{\mu}(X) - \mu(X))^2$$
(I write out the expansion I have in mind after these comments.)
- In the display following line 562, the last inequality seems like a non-trivial jump, would you mind walking through the logic explicitly?
Otherwise, the proofs seem correct to me. Note that I only read through the proofs for Section 3 in depth, and only skimmed the proofs of other relevant results (e.g., Propositions 3 and 4). While I am well-versed in the causal inference literature, I am not otherwise an expert on non-parametric / semi-parametric statistics, so I may have missed something.
As an aside, it may be helpful to include a citation in the proof for some of the assumed results regarding kernels. [58] is referenced in the main text, referring to kernel smoothing more broadly, but it seems some of the prerequisite results could be cited more precisely (e.g., Lemma 25.1 of [58] appears to be a relevant result).
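To make the Cauchy-Schwarz question concrete, the expansion I would have expected (my own derivation, written in the lemma's notation but not quoted from the paper) picks up a factor of 2:
$$\mathbb{E}(\tilde{\mu}_\epsilon - \mu)^2 = \mathbb{E}(\tilde{\mu}_\epsilon - \tilde{\mu})^2 + \mathbb{E}(\tilde{\mu} - \mu)^2 + 2\,\mathbb{E}[(\tilde{\mu}_\epsilon - \tilde{\mu})(\tilde{\mu} - \mu)],$$
$$2\,\mathbb{E}[(\tilde{\mu}_\epsilon - \tilde{\mu})(\tilde{\mu} - \mu)] \le 2\sqrt{\mathbb{E}(\tilde{\mu}_\epsilon - \tilde{\mu})^2\,\mathbb{E}(\tilde{\mu} - \mu)^2} \le \mathbb{E}(\tilde{\mu}_\epsilon - \tilde{\mu})^2 + \mathbb{E}(\tilde{\mu} - \mu)^2,$$
where the first step is Cauchy-Schwarz and the second is AM-GM. This only yields $\mathbb{E}(\tilde{\mu}_\epsilon - \mu)^2 \le 2\,\mathbb{E}(\tilde{\mu}_\epsilon - \tilde{\mu})^2 + 2\,\mathbb{E}(\tilde{\mu} - \mu)^2$, so the stated bound without the factor of 2 seems to additionally require the cross term to be non-positive.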
Other Minor Feedback
I consider the following points to be minor feedback re: presentation / notation / possible typos; they did not meaningfully influence my score, and they do not require an explicit response from the authors (some are stated as questions only because I am unsure if they are typos).
Suggestions on clarity:
- (Lines 15-19) This and some other sentences are a bit long and difficult to parse, and could perhaps be split into multiple sentences.
- (Line 71) Is the introduction of projections onto the semi-parametric model necessary, given Remark 1's statement that this work focuses on nonparametric models? CLvdL (Equation 2.2) seems to refer to Luedtke, Carone, and van der Laan (2015) as a reference for the equation between lines 71-72 holding generally in a non-parametric model.
Other typos / inconsistencies:
- Example 1 uses $\mathbb{E}_P$ for the outer expectation, but not for the inner expectation.
- Proposition 1 uses $\tilde{\mathbb{E}}_{\tilde{P}_\epsilon}$ in one place; perhaps the tilde on $\mathbb{E}$ was not intended?
- Line 65, what is the observation $\tilde{o}$? This is not referenced anywhere; I assume this is meant to be $o'$.
- Footnote 1, should the kernel be $\lambda^{-d} K(u/\lambda)$ instead of $h^{-d} K(u/\lambda)$? There is also a reference to an $O(h^J)$ error term on line 568 that should perhaps be $O(\lambda^\beta)$?
- Algorithm 1: Should it be $\tilde{P}$ on lines 3-4? Should it likewise be $P_{\epsilon,\lambda}^i$ instead of $P_\epsilon^i$?
- Lemma 1: Should $\tilde{e}_\epsilon(X) = \tilde{p}_\epsilon(A=1, X) / \tilde{p}(X)$ instead of the current formulation, which uses $\tilde{p}_\epsilon(A=1 \mid X)$ in the numerator? This would also make it consistent with usage in the proof (see e.g., line 561).
- Line 154, there seems to be a missing parenthesis.
- Line 149, "analyses of from kernel density estimation".
- Assumption 1 (iv), should the equation refer to $\tilde{\mu}_\epsilon$ or just $\tilde{\mu}$? Additionally, should the product-rate condition be $o_p(n^{-1})$ as written or $o_p(n^{-1/2})$?
- Line 561 says that to bound the perturbed $e$ one should "argue similarly", seemingly in reference to the (later) bound on the perturbed $\mu$; perhaps the order was swapped.
- Line 562, following equation, third equality: missing a square on the final $\tilde{p}(A=1, x)$ term.
- Line 572, $\mathbb{E}[\Gamma(O; e_\epsilon, \mu)]$ should be $\mathbb{E}[\Gamma(O; \tilde{e}_\epsilon, \tilde{\mu})]$.
- Line 637, the indicator is missing an $\epsilon$ on the left-hand side.
- Supplement, Section D.2, the reference is to citation [20], but I believe this should be citation [18].
Undefined notation:
- Line 66, I did not see a definition of the function $g(u)$.
- I'm unsure if the dimension $d$ was precisely defined prior to usage in Lemma 1, though it is fairly obvious from context.
- Eq. 7: I'm not sure if $\mu_a$ is defined anywhere, aside from being the optimization variable, and similarly for $\mu^*(s, a)$ in Equation 8. $\nu$ is used a few times in the proofs without being defined (end of the equations starting on lines 576 and 562), presumably referring to a strong-overlap constant.
I think the authors do a fine job of explaining limitations of the approach. | 7: I'm not sure if $\mu_a$ is defined anywhere, aside from being the optimization variable, and similar for $\mu^*(s,a)$ in Equation 8. $\nu$ is used a few times in the proofs without being defined (end of equation starting on 576, end of equation starting on 562), presumably referring to a strong-overlap constant. I think the authors do a fine job of explaining limitations of the approach.
NIPS_2017_357 | NIPS_2017 | - the manuscript is mainly a continuation of previous work on OT-based DA
- while the derivations are different, the conceptual difference to previous work is limited
- theoretical results and derivations are w.r.t. the loss function used for learning (e.g.
hinge loss), which is typically just a surrogate, while the real performance measure would
be 0/1 loss. This also makes it hard to compare the bounds to previous work that used 0-1 loss
- the theorem assumes a form of probabilistic Lipschitzness, which is not explored well.
Previous discrepancy-based DA theory does not need Prob.Lipschitzness and is more flexible
in this respect.
- the proved bound (Theorem 3.1) is not uniform w.r.t. the labeling function $f$. Therefore,
it does not suffice as a justification for the proposed minimization procedure.
- the experimental results do not show much better results than previous OT-based DA methods
- as the proposed method is essentially a repeated application of the previous work, I would have
hoped to see real-data experiments exploring this. Currently, performance after different number
of alternating steps is reported only in the supplemental material on synthetic data.
- the supplemental material feels rushed in some places. E.g. in the proof of Theorem 3.1, the
first inequality on page 4 seems incorrect (as the integral is w.r.t. a signed measure, not a
probability distribution). I believe the proof can be fixed, though, because the relation holds without
absolute values, and it's not necessary to introduce these in (3) anyway (a toy illustration of the issue follows after this list).
- In the same proof, Equations (7)/(8) seem identical to (9)/(10)
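To illustrate the signed-measure issue mentioned above (a toy example of my own, not the paper's actual display): for a signed measure one cannot in general bound an integral by the integral of the absolute value of the integrand. With $\nu = \delta_a - \delta_b$ and $f(a) = 1$, $f(b) = -1$,
$$\int f \, d\nu = f(a) - f(b) = 2 \qquad\text{but}\qquad \int |f| \, d\nu = |f(a)| - |f(b)| = 0,$$
so inserting absolute values inside the integral can strictly decrease it once the measure has a negative part; the step only goes through when the measure is non-negative.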
questions to the authors:
- please comment if the effect of multiple BCD on real data is similar to the synthetic case ***************************
I read the author response and I am still in favor of accepting the work. | - the experimental results do not show much better results than previous OT-based DA methods - as the proposed method is essentially a repeated application of the previous work, I would have hoped to see real-data experiments exploring this. Currently, performance after different number of alternating steps is reported only in the supplemental material on synthetic data. |
ACL_2017_201_review | ACL_2017 | Since this paper essentially presents the effect of systematically changing the context types and position sensitivity, I will focus on the execution of the investigation and the analysis of the results, which I am afraid is not satisfactory.
A) The lack of hyper-parameter tuning is worrisome. E.g. - 395 Unless otherwise notes, the number of word embedding dimension is set to 500.
- 232 It still enlarges the context vocabulary about 5 times in practice.
- 385 Most hyper-parameters are the same as Levy et al' best configuration.
This is worrisome because lack of hyperparameter tuning makes it difficult to make statements like method A is better than method B. E.g. bound methods may perform better with a lower dimensionality than unbound models, since their effective context vocabulary size is larger.
B) The paper sometimes presents strange explanations for its results. E.g. - 115 "Experimental results suggest that although it's hard to find any universal insight, the characteristics of different contexts on different models are concluded according to specific tasks."
What does this sentence even mean? - 580 Sequence labeling tasks tend to classify words with the same syntax to the same category. The ignorance of syntax for word embeddings which are learned by bound representation becomes beneficial. These two sentences are contradictory: if a sequence labeling task classifies words with the "same syntax" into the same category, then syntax becomes a very valuable feature. The bound representation's ignorance of syntax should cause a drop in performance, just as in the other tasks, which does not happen.
C) It is not enough to merely mention Lai et al. 2016, who have also done a systematic study of word embeddings; similarly, the paper "Evaluating Word Embeddings Using a Representative Suite of Practical Tasks" (Nayak, Angeli, and Manning), which appeared at the RepEval workshop at ACL 2016, should have been cited. I understand that the focus of Nayak et al.'s paper is not exactly the same as this paper, however they provide recommendations about hyperparameter tuning and experiment design and even provide a web interface for automatically running tagging experiments using neural networks instead of the "simple linear classifiers" used in the current paper.
D) The paper uses a neural bag-of-words classifier for the text classification tasks but a simple linear classifier for the sequence labeling tasks. What is the justification for this choice of classifiers? Why not use a simple neural classifier for the tagging tasks as well? I raise this point since the tagging task seems to be the only task where bound representations are consistently beating the unbound representations, which makes this task the odd one out. - General Discussion: Finally, I will make one speculative suggestion to the authors regarding the analysis of the data. As I said earlier, this paper's main contribution is an analysis of the following table.
(context type, position sensitive, embedding model, task, accuracy) So essentially there are 120 accuracy values that we want to explain in terms of the aspects of the model. It may be beneficial to perform factor analysis or some other pattern mining technique on this 120 sample data. | - 115 "Experimental results suggest that although it's hard to find any universal insight, the characteristics of different contexts on different models are concluded according to specific tasks." What does this sentence even mean? |
ICLR_2021_1539 | ICLR_2021 | - The authors claim a well-balanced robustness trade-off using their method and also claim that their major objective is only to improve network generalization on clean data. There is some ambiguity regarding the major contribution of this paper; the authors could make this point clearer. - Isn’t the hypothesis that is stated as “new” in this work already discussed in AdvProp, i.e., that using different batch normalization for clean and adversarial images improves network generalization, which in turn leads to the conclusion that the rescaling operation of batch norm could control the robustness and generalization trade-off? Why is this hypothesis considered “new” then? - It is not clear how the two learned adversarial maskings discussed in Section 3.2 are generated. - Results demonstrate that the proposed approach improves generalization but the performance gain is minimal (only 1%-2%) and not so significant compared to the baselines. Minor point:
Final thoughts: The proposed method is clearly motivated. Although the performance gains on network generalization are minimal compared to the baselines, this work cleverly addressed the limitations of previous work and extend it with simple modifications. I tend to accept this paper. However, I suggest the authors to also consider the evaluations carried out in AdvProp (Xie et al. (2020)) to improve the significance of their work. | - Results demonstrate that the proposed approach improves generalization but the performance gain is minimal (only 1%-2%) and not so significant compared to the baselines. Minor point: |
FGBEoz9WzI | EMNLP_2023 | 1. The paper missed some strong prompt selection baselines for few-shot learning.
2. The notations in section 3.2 of the paper are poorly defined and explained, making it difficult to read.
3. As mentioned in the Limitations section, the paper does not include experiments with recent closed-source LLMs, such as ChatGPT. It is worth noting that ChatGPT may be both cheaper and stronger than text-davinci-002.
4. The relationship between the proposed method and the four motivation factors is still unclear. | 2. The notations in section 3.2 of the paper are poorly defined and explained, making it difficult to read. |
Na4DonsjLx | EMNLP_2023 | - The motivation behind applying a contrastive learning approach to mitigate the issue of the information gap is unclear. The proposed selection of negative samples might not be a straightforward approach to fill the information gap. I think the Non-Optional Generation does not always produce a wrong (negative) output, causing the model to struggle with distinguishing between positive/negative samples.
- While the authors claim the “Current LLMs, such as ChatGPT, lack the called “inductive reasoning” ability” in line 39, this paper evaluated only LLaMA model as the current LLM in Table 4. Therefore, I didn’t understand how current LLMs can perform inductive reasoning and suggest that other LLMs, such as ChatGPT, might be compared with proposed methods.
- As discussed in Sec. 6.5, I understand evaluating inductive reasoning is challenging. But, I failed to understand that overlap-based metrics (e.g., BLEU) do not reflect its abilities. CICERO Dataset contains many samples requiring inductive reasoning abilities, and therefore I think there is some extent to which overlap-based metrics can be employed to evaluate inference ability. | - While the authors claim the “Current LLMs, such as ChatGPT, lack the called “inductive reasoning” ability” in line 39, this paper evaluated only LLaMA model as the current LLM in Table 4. Therefore, I didn’t understand how current LLMs can perform inductive reasoning and suggest that other LLMs, such as ChatGPT, might be compared with proposed methods. |
kJFIH23hXb | ICLR_2024 | 1. It would be better if some detailed results on the generation process are presented (see Q1).
2. Ablation studies could be improved to provide a full picture on the proposed techniques/tricks (see Q2). | 2. Ablation studies could be improved to provide a full picture on the proposed techniques/tricks (see Q2). |
ICLR_2022_186 | ICLR_2022 | - Relationship to the existing work
I appreciate that the authors summarize the relationship to the existing work in Section 4. I understand that the proposed generation procedure is different from the existing ones, but I don't understand why the difference is important. For example, the authors state that "Our STGG framework differs from this line of research since it proposes a new type of graph-based operations for generating the molecular graph", but do not clarify how it is different, and why the difference is important. Such a comparison is important for readers to understand the essence of the proposed method.
- No theoretical guarantee to generate valid molecules
I am not convinced by the mechanism for complying with the valence rule, and I wonder whether there is a theoretical guarantee that this mechanism can comply with the rule, or whether there is a counter-example where this mechanism cannot guarantee it. When constructing a ring, it is desirable that the tail atom has one remaining valence, and the ring closes by adding a residual edge. Is it possible that the tail atom has no remaining valence and the ring cannot be closed? For example, C*=C-C≡C seems not to be rejected by the mechanism, but we cannot close the ring. % This may be a question, rather than a weakness. If there is any misunderstanding, please correct it.
- Relationship to the classical VAE+BO approaches
As discussed in Section 5.3, one of the major issues in the plogP optimization task is that unrealistic molecules can optimize the score. I believe there has been an implicit agreement that the optimized molecules should resemble the training data, which leads to the classical molecular optimization method combining a VAE trained on real-world molecules with Bayesian optimization. As far as I am aware, the method by Kajino [Kajino, 19] achieves the best scores among the methods using this approach. While the proposed method can control the trade-off between the score and realisticness, it seems the proposed method is not better than VAE+BO approaches in this setting.
[Kajino, 19] Molecular Hypergraph Grammar with Its Application to Molecular Optimization, ICML-19.
Given the discussion below, all of my concerns have been addressed. | - Relationship to the existing work I appreciate that the authors summarize the relationship to the existing work in Section 4. I understand that the proposed generation procedure is different from the existing ones, but I don't understand why the difference is important. For example, the authors state that "Our STGG framework differs from this line of research since it proposes a new type of graph-based operations for generating the molecular graph", but do not clarify how it is different, and why the difference is important. Such a comparison is important for readers to understand the essence of the proposed method.
ICLR_2021_2038 | ICLR_2021 | Weakness
The paper lacks sufficient analysis of the behavior of the learned structural landmarks, with only an analysis of the choice of their number.
The time complexity of the developed method is not analyzed, and a running time comparison with other baseline methods is also missing.
An analysis of how the choice of each part of the features learned in graph pooling would affect the results is missing.
The experimental part lacks an analysis of why the method performs very well on some datasets while not performing well on others.
Summary: This paper studies the problem of graph classification on chemical and social datasets. Existing graph classification methods with graph neural networks learn node embeddings via aggregation of neighbors and combine all node features into a final graph feature for classification, while such operations usually lack the ability to identify and model the inner interactions of substructures. To remedy the information loss in graph pooling, the authors leverage the learned substructure landmarks and project graphs onto them to model the interacting relations between the component parts of a graph. In this regard, an inductive neural network model for structural landmarking and interaction modelling is developed to resolve potential resolution dilemmas in graph classification and capture inherent interactions in graph-structured systems. Empirical experiments on both chemical and social datasets validate the effectiveness of the method. Generally speaking, the paper is well written and easy to follow, with clear motivation and organization. However, I have concerns about the lack of analysis of the learned structural landmarks, since in the paper only the choice of their number is well discussed. Also, the time complexity of the developed method is not well studied. The detailed comments and suggestions of this paper are as follows.
1. The paper proposes to learn structural landmarks and obtain representations of graphs by projecting graphs onto them. Therefore, the quality of the learned landmarks is crucial, yet the paper lacks sufficient analysis of them. I suggest providing a more comprehensive analysis of them.
2. The proposed method generates various kinds of graph-level features while lacking enough analysis of their impact on the results. I suggest conducting more ablation experiments for this part.
3. The detailed statistics of the benchmark datasets, such as the distribution of the number of nodes per graph, are not mentioned.
4.Although the results are competitive compared with other baselines, the authors didn’t explain why the method performs well on some datasets while not performing well on others. I suggest analyzing the reasons comprehensively.
5. Only one evaluation metric, accuracy, is used in the experiments. Since it is a classification task, I suggest using various metrics to show the effectiveness.
6. There are some typos in the paper that require double checking: For example, "breath-first search " -> "breadth-first search", "an molecule" -> "a molecule" | 4.Although the results are competitive compared with other baselines, the authors didn’t explain why the method performs well on some datasets while not performing well on others. I suggest analyzing the reasons comprehensively.
tCEtFcrq8n | EMNLP_2023 | 1. One potential weakness of this paper is that it does not compare its proposed framework with other state-of-the-art few-shot cross-domain NER methods. A more comprehensive evaluation of the proposed framework's performance against other methods would provide a better understanding of its strengths and limitations.
2. Another potential weakness is that the paper does not provide a detailed analysis of the impact of different hyperparameters or design choices on the proposed framework's performance. A more thorough analysis of these factors would help readers understand how to best apply the framework to different scenarios.
3. The paper does not provide a detailed analysis of the computational complexity of the proposed method, which could be a concern for large-scale applications.
4. The paper does not compare the proposed method to more recent LLM-based NER methods. | 1. One potential weakness of this paper is that it does not compare its proposed framework with other state-of-the-art few-shot cross-domain NER methods. A more comprehensive evaluation of the proposed framework's performance against other methods would provide a better understanding of its strengths and limitations. |
ICLR_2023_2864 | ICLR_2023 | Weakness
Two major questions I have:
Section 3.1 is structured as first presenting the main result (i.e., Theorem 3.1) and then explaining the key ideas of proving this result, along with some lemmas along the way. Here, I do not understand how these lemmas lead to Theorem 3.1.
Specifically, Thm 3.1 is about the training loss decaying to zero. Lemma 3.3 is about bounding the activation flipping probability when the change in the network parameters is not too large. However, I don't see a discussion of 1) why we expect the change in parameters to not be too large, and 2) what the benefit of bounding the activation flipping probability is. Then, Lemma 3.4 provides an error bound on the initial loss $L(0)$, but there is no discussion of why we need to bound it given that we only need to bound the ratio $L(t) / L(0)$.
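For context, the chain of reasoning I would have expected here is the standard overparameterized-network argument (sketched in my own notation; the paper's exact constants and assumptions may differ): as long as the NTK/Gram matrix satisfies $\lambda_{\min}(H(t)) \ge \lambda_0/2$, gradient flow gives
$$\frac{d}{dt} L(t) \le -\lambda_0\, L(t) \quad\Rightarrow\quad L(t) \le e^{-\lambda_0 t}\, L(0),$$
which in turn controls the total parameter movement,
$$\|W(t) - W(0)\| \le \int_0^t \|\nabla L(s)\|\, ds \lesssim \frac{\sqrt{L(0)}}{\lambda_0};$$
a small movement bounds the fraction of flipped activations (Lemma 3.3), which keeps $\lambda_{\min}(H(t))$ close to its initialization and closes the loop, and the bound on $L(0)$ (Lemma 3.4) is what makes the movement radius explicit. If this is indeed the intended role of Lemmas 3.3 and 3.4, stating the chain explicitly would resolve my confusion.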
The paper is about sparsity in activation maps, but there is in general a lack of discussion on 1) how / whether sparsity helps or provides benefits with proving the convergence / generalization results, and 2) how / whether sparsity facilitates convergence / improves generalization by their theoretical results, and whether such results align with what can be observed in practice (e.g. faster convergence and better generalization).
Other comments:
Theorem 3.1: Maybe I missed it but I did not find where delta on the RHS of the inequality is defined.
Remark 3.2: Is the improvement over Song et al. obtained because this paper considers trainable bias while Song et al. considers non-trainable bias?
Definition 3.10: Data-dependent Region. This assumption is the key for the third contribution, but it seems quite arbitrary in the sense that we don't know if or how likely it is going to be satisfied in practice.
Additional Comments
There are some references on sparse activation in existing network architectures. These are particularly relevant as their sparsity emerges automatically from regular training.
Rhu, Minsoo, et al. "Compressing DMA engine: Leveraging activation sparsity for training deep neural networks." 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2018.
Andriushchenko, Maksym, et al. "SGD with large step sizes learns sparse features." arXiv preprint arXiv:2210.05337 (2022).
Li, Zonglin, et al. "Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers." arXiv preprint arXiv:2210.06313 (2022). | 1) how / whether sparsity helps or provides benefits with proving the convergence / generalization results, and |
NIPS_2018_986 | NIPS_2018 | The paper does not provide enough details on the game and various concepts in the game to justify the design of the proposed infrastructure. Due to the lack of sufficient details on the game to justify and inform the design of the deep network architecture, it is hard to convince readers why the approach works or does not work. The reproducibility is therefore affected. The take-away from the paper or generalization beyond StarCraft is also limited. Other general comments: - Tables and figures should appear near where they are mentioned, so Table 4 and Figure 4 should not be in the appendix. - Too many papers referenced are from non-peer reviewed websites, such as Arxiv and CoRR, weakening the paper in terms of argumentation. The authors are highly encouraged to replace these papers by their peer-reviewed counterparts - The selected baselines are too simple. They are all simple, rule-based, and myopic, with no attempt to learn anything from the provided training data, which the authors' architecture makes use of. It would be a fairer comparison should authors have tried to design a little more sophisticated baselines that incorporate some form of learning (via classical prediction models, for example). Other detailed comments: - Various statements about the game and how players behave/act in the game miss explanation and/or supporting reference * Page 1, line 31: "Top-level human players perform about 350 actions per minute" * Page 3, line 86: "... humans generally make forward predictions on a high level at a long time-scale; ..." * Page 6, line 240-241: "Since units often move very erratically...": not really, units often move according to some path. This also illustrates why the selected baselines are too weak; they make no use of the training data to try to infer the path of the moving units. - Page 2, line 81: Full state is not defined properly. What are included in a full state? - Page 3, line 96: what are "walk tiles"? - Page 3, line 119: What is "the faction of both players"? - (minor) Figure 4 does not print well in black and white - Page 3, line 131-132: The author refers to the g_op_b task that is introduced later in the paper. It would be better if the authors can describe such concepts in a section devoted to description of the games and related tasks of interest. - Page 4, line 139: "h" is not defined yet. - Page 5, line 208-210: What game rules were used for this baseline? In general, how do game rules help? - Page 7, line 271-276: This part needs more elaboration on what modules are available in the bots, and how they can use the enhanced information to their advantage in a game. - Page 7, line 286-287: The reason to exclude Zerg is not convincing nor well supported externally. The author is encouraged to include a Zerg bot regardless, then in the discussion section, explain the result and performance accordingly. | - Various statements about the game and how players behave/act in the game miss explanation and/or supporting reference * Page 1, line 31: "Top-level human players perform about 350 actions per minute" * Page 3, line 86: "... humans generally make forward predictions on a high level at a long time-scale; ..." * Page 6, line 240-241: "Since units often move very erratically...": not really, units often move according to some path. This also illustrates why the selected baselines are too weak; they make no use of the training data to try to infer the path of the moving units.
NIPS_2017_182 | NIPS_2017 | Weakness
1. Paper misses citing a few relevant recent related works [A], [B], which could also benefit from the proposed technique and use region proposals.
2. Another highly relevant work is [C], which does efficient search for object proposals in a similar manner to this approach, building on top of the work of Lampert et al. [22].
3. It is unclear what SPAT means in Table 2.
4. How was Fig. 6 b) created? Was it by random sub-sampling of concepts?
5. It would be interesting to consider a baseline which just uses the feature maps (used in the work, say shown in Fig. 2) and the phrases and simply regresses to the target coordinates using an MLP. Is it clear that the proposed approach would outperform it? (*) (A rough sketch of what I mean follows after this list.)
6. L130: It was unclear to me how the geometry constraints are exactly implemented in the algorithm, i.e. the exposition of how the term k2 is computed was unclear. It would be great to provide details. A clear explanation of this seems especially important since the performance of the system seems highly dependent on this term (as it is trivial to maximize the sum of scores of, say, detection heat maps by considering the entire image as the set).
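To be concrete about the baseline suggested in point 5 (purely illustrative; the encoders, dimensions, and names below are placeholders I made up, not the authors' setup), a minimal sketch in PyTorch:

import torch
import torch.nn as nn

class PhraseBoxRegressor(nn.Module):
    # Naive baseline: globally pool the image feature map, concatenate a
    # phrase embedding, and regress the 4 box coordinates with an MLP.
    def __init__(self, feat_channels=512, phrase_dim=300, hidden=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling of the feature map
        self.mlp = nn.Sequential(
            nn.Linear(feat_channels + phrase_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),  # (x1, y1, x2, y2), normalized to [0, 1]
            nn.Sigmoid(),
        )

    def forward(self, feature_map, phrase_emb):
        # feature_map: (B, C, H, W); phrase_emb: (B, phrase_dim)
        pooled = self.pool(feature_map).flatten(1)
        return self.mlp(torch.cat([pooled, phrase_emb], dim=1))

Trained with a smooth-L1 loss against ground-truth boxes, such a regressor would quantify how much the proposed structured search actually buys over direct regression.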
Preliminary Evaluation
The paper has a neat idea which is implemented in a very clean manner, and is easy to read. Concerns important for the rebuttal are marked with (*) above.
[A] Hu, Ronghang, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2016. "Modeling Relationships in Referential Expressions with Compositional Modular Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1611.09978.
[B] Nagaraja, Varun K., Vlad I. Morariu, and Larry S. Davis. 2016. "Modeling Context Between Objects for Referring Expression Understanding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1608.00525.
[C] Sun, Qing, and Dhruv Batra. 2015. "SubmodBoxes: Near-Optimal Search for a Set of Diverse Object Proposals." In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 1378-86. Curran Associates, Inc. | 2.4. How was Fig. 6 b) created? Was it by random sub-sampling of concepts?
NIPS_2022_539 | NIPS_2022 | Weakness
It is not clear to me how the variance in Theorem 3.2 turns into the variance in Theorem 3.1 in the limiting case $\alpha = 0$; could the author(s) elaborate on that?
Variance is not the only reason causing InfoNCE to perform sub-optimally. Discussions from [5-7] have presented diverse perspectives on how contrastive learning can be improved based on numerical or optimization views, also backed up by strong mathematical arguments. These works are complementary to the approach adopted here and should be discussed.
Misc
Theorem 3.1 is related to the result from [1] on the exponential variance issue for MI estimation, which should be discussed.
The objective function for the skewed $\chi^2$ divergence presented in [Tsai et al. 2021] (which this work is based on) is related to the spectral contrastive learning objective [5], which is simpler and also demonstrated strong performance gains over the vanilla InfoNCE objective. The Rényi divergence and $\chi^2$ divergence have also been successfully applied in the context of generative modeling [2-4], where the goal of minimizing $D_\alpha(P_{XY} \,\|\, P_X P_Y)$ or $D_\alpha(P_X \,\|\, Q_X)$ is shared.
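For reference, the two objects being related here are, in their standard textbook forms (not quoted from the paper),
$$\mathcal{L}_{\mathrm{InfoNCE}} = -\,\mathbb{E}\left[\log \frac{e^{f(x, y)}}{\tfrac{1}{K}\sum_{j=1}^{K} e^{f(x, y_j)}}\right], \qquad D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}\log \mathbb{E}_{Q}\left[\left(\tfrac{dP}{dQ}\right)^{\alpha}\right],$$
with the $\chi^2$ divergence corresponding, up to a monotone transform, to $\alpha = 2$. As I understand it, the spectral contrastive loss of [5] replaces the log-ratio with a linear term on positive pairs and a squared (second-moment) penalty on negative pairs, which is what makes the comparison to the skewed $\chi^2$ objective natural.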
Potential typos: $\alpha$-CPC is actually proposed by [B Poole et al. ICML 2019], not [J Song, et al. NeurIPS 2020].
[1] McAllester, D. and Stratos, K. Formal limitations on the measurement of mutual information. AISTATS 2020 [2] Y Li, et al. Rényi Divergence Variational Inference. NeurIPS 2016 [3] L Chen, et al. Variational inference and model selection with generalized evidence bounds. ICML 2018 [4] C Tao. Chi-square Generative Adversarial Network. ICML 2018 [5] J HaoChen, et al. Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. NeurIPS 2021 [6] Q Guo, et al. Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization. 2021 [7] J Chen, et al. Simpler, Faster, Stronger: Breaking The log-K Curse On Contrastive Learners With FlatNCE. 2021
I don't have any concerns on limitations and potential negative societal impact. | 2021 I don't have any concerns on limitations and potential negative societal impact. |
ICLR_2023_4518 | ICLR_2023 | - The proposed idea of predicting the motion (trajectory) of objects within the masked regions is a bit misleading and confusing. If an object is never seen by the model (esp. considering a tube-based masking strategy was used), how can its motion be predicted?
- The novelty and contributions of the proposed method are limited. The overall idea of predicting masked information is similar to several previous works, as also mentioned by the authors. The idea of predicting the motion or tracking objects is also similar to several prior works [*1, *2]. The idea of interpolating frames was also derived from an existing work [Xiang et al. 2020].
- The claim of the proposed method "learns long-term fine-grained motion clues" lacks justification to back it up. It is unclear how and why the proposed method learns "long-term" motion and the same for "fine-grained".
- Missing evidence to support the claim of "the model is easy to finish this mask-and-predict task by only..." (bottom on page 1) in video representation learning.
- The statement of "interpolating dense trajectories...without increasing computational costs" (bottom on page 2) is overclaimed. Please justify why the added operation did not lead to any additional computational costs.
- Missing reference to works [*1, *2] with a similar idea of the proposed method, i.e. predicting the motion or tracking the objects. Besides, when reviewing the related work about "mask-and-predict", reference to key earlier works (e.g. [*3]) was also missing. For the "self-supervised video representation learning" related works, there are quite a few prior works missing, e.g. [*4]. Suggest the authors carefully and thoroughly do the literature review.
[*1] Wang, J., et al. "Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[*2] Wang, Xiaolong, Allan Jabri, and Alexei A. Efros. "Learning correspondence from the cycle-consistency of time." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[*3] Pathak, Deepak, et al. "Context encoders: Feature learning by inpainting." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
[*4] Wei, Donglai, et al. "Learning and using the arrow of time." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
- Missing (a)-(d) in Figure 4.
- Duplicated paragraphs for the "Dataset" and "Model" in the Reproducibility statement. | - The statement of "interpolating dense trajectories...without increasing computational costs" (bottom on page 2) is overclaimed. Please justify why the added operation did not lead to any additional computational costs. |
070DFUdNh7 | ICLR_2024 | - Performance on some node and edge tasks is not state-of-the-art.
- Limited analysis of what properties are learned during pretraining and their utility.
- Does not experiment with very large models in the billions of parameters range. | - Limited analysis of what properties are learned during pretraining and their utility. |
ICLR_2021_78 | ICLR_2021 | weakness in comparisons to comparable virtual environments is given later.
d. The different variant agents being compared in the task benchmark are clearly explained and form an appropriate experimental design.
2. Novelty/Impact
a. The work aims to showcase a new challenge task, evaluation platform, and benchmark agent performance for goal recognition followed by collaborative planning. The work as described compares favourably to similar work in evaluation platforms and benchmarks referenced in the related work section and appendix. The differences are made clear, though the use of some featured distinctions is not demonstrated in the paper (e.g. visual observations are possible but not used in the benchmarking).
3. Experimental Rigour
a. This work is not primarily about demonstrating the benefits of a particular approach over others in a particular application. It demonstrates benchmarks for agent performance in a newly introduce problem setting. From that perspective, the paper has strong experimental rigour. The experimental design is appropriate, comparing multiple baselines and oracles with several sets of experimental variants from both automated planning and reinforcement learning communities.
b. The comparison of experimental variants is conducted with both a computationally-controlled agent and a human-controlled avatar to evaluate collaboration performance in increasingly realistic settings.
c. The claim that the computationally-controlled Alice agent is human-like is repeated throughout the paper. This is not justified in the actual text of the main paper, but is supported to a moderate degree (human-like in strategy/planning if not movement/behaviour) through experiments with human subjects that are described in the appendix.
d. Effort paid to ensure diverse avatars in experimentation.
4. Reproducibility
a. The work is almost entirely reproducible, with details of all agent architectures used for experiments provided with hyperparameters and architecture design. The authors describe that the environment will be released as open-source, which will then make the article wholly reproducible. This reviewer appreciated the level of additional detail provided in the appendix to improve this area of evaluation.
3. Weaknesses
1. This paper uses the term social intelligence to motivate the context for this challenge task. Social intelligence is a much broader term than what is actually being evaluated here and would require evaluating capabilities beyond goal recognition and coordinated planning/task execution. It is suggested to replace this claim with "goal recognition and collaborative planning".
2. From the motivation provided, i.e. evaluating social perception and collaboration, why is the task specifically about watching then helping and not both together or just helping or just watching or some combination of these activities fluidly occurring throughout the interaction?
3. Further, the work itself does not explicitly motivate why this specific challenge task for goal recognition followed by collaborative planning is necessary for moving the state of the art in human-AI collaboration forward. However, it is a small leap to see the impact of this platform/task in evaluating applications like service robotics, social robotics, collaborative human-agent task performance, video games, etc. This reviewer can understand the impact of the work, but it would be clearer to explicitly discuss this.
4. It would be clearer to specify that this task is limited to situations where there is explicitly only one goal throughout the entire demonstration + execution episode. This is important since it precludes using this challenge task for research into agents that need to use goal recognition after the initial demonstration, potentially continuously over the course of execution. This second kind of continuous goal monitoring is more similar to real-world applications of watching and helping or assistive agents or social robotics, since the human collaborator can (and often will) change their mind.
5. Similarly, it should be noted that there is an explicit limitation of this challenge task and the evaluation metrics to scenarios where the entire success or failure of the approach is purely based on the final team accomplishment. This is similar to situations like team sports, where all that matters is the final game score. Many real-world scenarios for human-AI collaboration, differ by also requiring individual collaborators to do well or for the primary human user to do better with collaboration (than without). For example, in a computer game where Bob represents a team-mate to Alice who is a human player, Bob can choose to steam-roll Alice and win the game by itself. However, this leads to lower subjective user experience for the human team-mate. In this case, the score might be greater than what Alice could accomplish on their own and the game might be won faster than Alice could on their own, but the experience would be different based on whether they are truly collaborating or one is over-shadowing the other.
6. A final assumption is that there is no difference in expertise between Alice and Bob. The human is expected to be able to competently finish the task, and Bob is expected to linearly cut down the time taken to perform this task. There are many real-world tasks in human-AI collaboration where this assumption does not hold and there could be non-linear interactions between success-rate and speed-up due to different levels of expertise between Alice and Bob.
7. The fixed Alice agent is called human-like throughout the article and this was not properly justified anywhere in the main text of the paper. However, the appendix actually describes results that compare the performance of the computationally-controlled and human-controlled variants of Alice to human observers. This potentially mitigates this weakness. For clarity, it would be valuable to refer to the presence of this validation experiment in the main paper.
8. Why aren't there benchmark results (more than one) for the goal recognition component similar to the planning task experimentation? If both parts of the task are important, it would be valuable to provide additional experiments to show comparisons between goal recognition approaches as well, even if that is in the appendix for space reasons.
9. There could be more analysis of the benchmark agent performance, 1) Why does the purely random agent work relatively well across tasks? 2) Why doesn't HRL work better? Is this due to less hyperparameter tuning compared to other approaches or due to some intrinsic aspect of the task itself? 3) Perhaps I missed this, but why not try a flat model-based or model-free RL without a hierarchy?
10. There are several comments about other environments in the related work section and appendix being toy environments. However, the tasks in the environment demonstrated in this paper only use a small set of predicates as goals. Similarly, it CAN generate visual observations but that isn't used by any of the baselines in the paper. Several comparisons to related virtual environments are made in appendix, but some of the features aren't used here either (humanoid agent - this challenge task works equally well with non-humanoid avatars/behaviours and realism - visual realism is present but it isn't clear if behavioural or physical realism is present due to seeming use of animations instead of physical simulation).
11. None of the tasks described allow the use of communication between agents or evaluate that. Other multi-agent environments like Particle Environments (below) allow for that. Communication is a natural part of collaboration and should have been mentioned if only to distinguish future work or work out of current scope.
a. @article{mordatch2017emergence, title={Emergence of Grounded Compositional Language in Multi-Agent Populations}, author={Mordatch, Igor and Abbeel, Pieter}, journal={arXiv preprint arXiv:1703.04908}, year={2017}}
12. "planning and learning based baselines", "and multiple planning and deep reinforcement learning (DRL) baselines", etc. - There is potential for confusion with the use of terms "planning" and "learning" methods to do what both fields (automated/symbolic planning and reinforcement learning) would potentially consider as planning tasks. It would be clearer to indicate this distinction in terminology.
13. The human-likeness evaluation experiment asked subjects to evaluate performance one agent video at a time. A more rigorous evaluation might compare two agents side by side and ask the human observer to judge which one is human-controlled. This could also be in addition to the current evaluation. The current evaluation is a ceiling on performance while the comparative evaluation is a potential floor.
4. Recommendation:
1. I recommend accepting this paper, which was clear, novel, empirically strong, and supremely reproducible. The strengths conveyed above outweighed the weaknesses.
5. Minor Comments/Suggestions:
1. Some minor typos in the manuscript:
a. Using the inferred goals, both HP and Hybrid can offer effective. - page 6
b. IN(pundcake, fridge) - appendix table 2
c. This closeness perdition - appendix page 19 | 2. From the motivation provided, i.e. evaluating social perception and collaboration, why is the task specifically about watching then helping and not both together or just helping or just watching or some combination of these activities fluidly occurring throughout the interaction? |
ICLR_2022_142 | ICLR_2022 | 1. The proposed model is tied to the StyleGAN2 model, while the baseline method (Yu et al. 2021) is agnostic to models. It would be helpful if the authors could demonstrate how the same mechanism can work with other GAN models; 2. How the scalability of the model is demonstrated can be improved. As shown in section 4.3, the authors only provided the relationship between the detection accuracy and the training set size. The results suggest that there need to be at least 10k samples in the training set to reach high fingerprint detection accuracy in the testing set. It remains unclear if the baseline method, which fingerprints the images of the training set, also requires the same number of or more samples. It is possible that the baseline method requires a smaller sample size than the proposed method. It is also unclear how the training time of the proposed model is compared to the baseline model. It would be helpful if the authors could provide some information concerning the sample size required for the baseline method and the training time required for the baseline method and the proposed method. Alternatively, the authors can provide a brief explanation on this matter. 3. Even though the proposed method can allow for initiating a large number of models with different fingerprints efficiently once a model is trained, training the model requires at least 10k samples from the fingerprint space. This indicates that even if the model creator only needs to release a small number of models, the training will still need to be conducted to a large amount of fingerprints. If the authors can provide some additional insights about this issue that would be very helpful. Other comments: 1. It would be helpful to add some discussion on comparing the robustness and immunizability of the proposed method and the method proposed in Yu et al. 2021. 2. It seems like the baseline results are directly obtained from the published manuscript; it would be helpful to make this clear. | 2. It seems like the baseline results are directly obtained from the published manuscript; it would be helpful to make this clear. |
NIPS_2019_776 | NIPS_2019 | 1. When the authors say `white box attacks`, I assume this means that the adversary can see the full network with the final layers, every network in the ensemble, every rotation used by networks in the ensemble. I would like them to confirm this is correct. 2. Did the authors study whether the number of bits in the logits helps against a larger epsilon in the PGD attack? Because intuition suggests that having a 32 bit logit should improve robustness against a more powerful adversary. This experiment isn't absolutely necessary, but it would strengthen the paper. 3. Did the authors study the same approach on Cifar? It seems like this approach should be readily applicable there as well. ---Edit after rebuttal--- I am updating my score to 8. The improved experiments on Cifar10 make a convincing argument for your method. | 1. When the authors say `white box attacks`, I assume this means that the adversary can see the full network with the final layers, every network in the ensemble, every rotation used by networks in the ensemble. I would like them to confirm this is correct. |
NIPS_2017_217 | NIPS_2017 | - The paper is incremental and does not have much technical substance. It just adds a new loss to [31].
- "Embedding" is an overloaded word for a scalar value that represents object ID.
- The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end without any post-processing.
- Keypoint detection results should be included in the experiments section.
- Sometimes the predicted tag value might be in the range of tag values for two or more nearby people, how is it determined to which person the keypoint belongs?
- Line 168: It is mentioned that the anchor point changes if the neck is occluded. This makes training noisy since the distances for most examples are computed with respect to the neck.
Overall assessment: I am on the fence for this paper. The paper achieves state-of-the-art performance, but it is incremental and does not have much technical substance. Furthermore, the main improvement comes from running [31] in a post-processing stage. | - Sometimes the predicted tag value might be in the range of tag values for two or more nearby people, how is it determined to which person the keypoint belongs? |
ICLR_2022_46 | ICLR_2022 | Experiments are only conducted on four custom environments. Why not use existing environments from NGE or [1] (see references below)? Also, three random seeds are far below the standard.
Little analysis on the empirical results, given no theoretical justification of the algorithm. The analysis can be further enhanced from several aspects: 1) Discussing comparison with NGE (probably the strongest baseline) about similarities and differences and how those differences lead to a huge performance increase; 2) Enriching ablation studies by training with skeleton transformation disabled and attribute transformation disabled respectively. Concerns:
I can't entirely agree with the argument that this approach enables first-order optimization of agent design. Technically, both this approach and ES-based methods do not have access to the ground-truth gradient and estimate first-order gradients based on the gathered experiences. So, this approach is still zeroth-order optimization, and it's not appropriate to claim that the sample efficiency comes from the first-order nature of the method.
While I agree that ES-based methods have a high-dimensional search space for design, your approach does not essentially reduce that search space. Instead, the search space for policy is much larger in your formulation, i.e., the dimension of MDP becomes much higher.
Could you provide more intuition on the comparison against NGE? By looking at the performance curves, it outperforms NGE by a large margin even without all JSMLPs. Is it because of the new formulation of the design optimization, or the attribute transform is optimized (unlike sampling from uniform distributions in NGE), or other reasons?
I'm skeptical about the reported performance of NGE for several reasons: 1) In the original NGE paper, it outperforms RGS by a large margin, while in Figure 3 of this paper, the improvement is marginal; 2) The reason I believe NGE should be much better than RGS is that NGE also uses GNN policies that allow experience sharing across different designs; 3) If experience sharing via GNN is not effective in NGE, why is it effective in your approach, as you claimed? A good way to address my concerns would be to run NGE's original implementation alongside your own implementation and report the performance.
Other suggestions:
Section 4 can be shortened to include more analysis in Section 5.
In many practical use cases, probably we already have a decent hand-designed agent at the beginning. If your approach can also effectively improve upon that and is better than other baseline methods, the results will be more solid.
The result will look even stronger if you can show your method even beat the previous works on continuous design optimization [2,3] (probably with attribute transform only and no skeleton transform). Typos:
JSMPLs -> JSMLPs in the caption of Figure 4. References:
[1] Gupta, Agrim, et al. "Embodied Intelligence via Learning and Evolution." arXiv preprint arXiv:2102.02202 (2021).
[2] Luck, Kevin Sebastian, Heni Ben Amor, and Roberto Calandra. "Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning." Conference on Robot Learning. PMLR, 2020.
[3] Schaff, Charles, et al. "Jointly learning to construct and control agents using deep reinforcement learning." 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019.
--------- Post Rebuttal Update ----------
The authors adequately addressed my main concerns through extra experiments and analysis. | 1) Discussing comparison with NGE (probably the strongest baseline) about similarities and differences and how those differences lead to a huge performance increase; |
SJYTfbI59J | EMNLP_2023 | 1. The paper claims that [1] does not meet the "genuine zero-shot" scenario, as T0 has undergone fine-tuning on question generation (QG) tasks and there is task leakage to the downstream QLM ranking task (Lines 078 to 087). This is one of the key arguments of this paper, but it is not explained/proved in the paper why undergoing fine-tuning on question generation (QG) tasks disqualifies it as a "genuine" zero-shot approach, nor is it supported by any references
2. The weight for the interpolation strategy is set to 0.2 without conducting any grid search (Line 158). No explanation is provided for the choice of the value of this parameter. The main results for this paper's claims depend on the choice of this parameter. Thus, it is important to find the optimal value of this parameter for different LLMs in different settings. It is possible that the key finding of the paper, that instruction fine-tuning degrades the ranking performance, arises because a value of 0.2 may be better for the non-instruction-tuned LLM and worse for the instruction-tuned LLM; choosing the optimal value of this parameter would allow a fairer comparison.
3. The paper lacks a detailed discussion of its empirical observation, i.e., an analysis of why further instruction fine-tuning degrades the ranking performance when the question-generation task is not present in the fine-tuning data. It would be interesting to understand the theoretical basis behind such behavior.
4. Lacks original ideas in the contributions: (i) The main QLM setting is the same as the one introduced in [1,2]. (ii) The interpolation of relevancy scores with the first-stage retriever is taken from [3]. (iii) The prompt template for the few-shot setting is taken from [4]. The paper carries out the ranking experiments using some newer LLMs which were not explored in [1,2].
5. It would be important to also have experimental results using first-stage retrievers other than BM25, in order to validate whether the observation that instruction tuning hinders performance when the QG task is not included is consistent across different settings.
6. The experiments are performed only on a small subset (4 datasets) of the BEIR benchmark. This is not sufficient to support the claims made in this paper. Thorough evaluation on a larger set is important to support the paper's claims. References:
1. https://aclanthology.org/2022.emnlp-main.249/
2. https://link.springer.com/chapter/10.1007/978-3-030-72240-1_49
3. https://dl.acm.org/doi/abs/10.1145/3471158.3472233
4. https://arxiv.org/pdf/2202.05144.pdf | 5. It would be important to also have experimental results using first-stage retrievers other than BM25, in order to validate whether the observation that instruction tuning hinders performance when the QG task is not included is consistent across different settings. |
NIPS_2018_15 | NIPS_2018 | weakness of this paper is its lack of clarity and aspects of the experimental evaluation. The ResNet baseline seems to be just as good, with no signs of overfitting. The complexity added to the hGRU model is not well motivated and better baselines could be chosen. What follows is a list 10 specific details that we would like to highlight, in no particular order: 1. Formatting: is this the original NIPS style? Spacing regarding sections titles, figures, and tables seem to deviate from the template. But we may be wrong. 2. The general process is still not 100% clear to us. The GRU, or RNNs in general, are applied to sequences. But unlike other RNNs applied to image classification which iterate over the pixels/spatial dimensions, the proposed model seems to iterate over a sequence of the same image. Is this correct? 2.1 Comment: The general high-level motivation seems to be map reading (see fig 1.c) but this is an inherently sequential problem to which we would apply sequential models so it seems odd that one would compare to pure CNNs in the first place. 3. Section 2 begins with a review of the GRU. But what follows doesn't seem to be the GRU of [17]. Compare eq.1 in the paper and eq.5 in [7]. a) there doesn't seem to be a trained transformation on the sequence input x_i and b) the model convolves the hidden state, which the standard GRU doesn't do (and afaik the convolution is usually done on the input stream, not on the hidden state). c) Since the authors extend the GRU we think it would make section 2 much more readable if they used the same/similar nomenclature and variable names. E.g., there are large variations of H which all mean different things. This makes it difficult to read. 4. It is not clear what horizontal connections are. One the one hand, it seems to be an essential part of the model, on the other hand, GRU is introduced as a method of learning horizontal connections. While the term certainly carries a lot of meaning in the neuroscience context, it is not clear to us what it means in the context of an RNN model. 5. What is a feed forward drive? The equations seem to indicate that is the input at every sequence step but the latter part of the sentence describes it as coming from a previous convolutional layer. 6. The dimensions of the tensors involved in the convolution don't seem to match. The convolution in a ConvNet is usually a 2D discrete convolution over the 2 spatial dimensions. If the image is WxHxC (width, height, and, e.g., the 3 colour channels), and one kernel is 1x1xC (line 77) then we believe the resulting volume should be WxHx1 and the bias is a scalar. The authors most certainly want to have several kernels and therefore several biases but we only found this hyper-parameter for the feed forward models that are described in section 3.4. The fact that they have C biases is confusing. 7. Looking very closely at the diagram, it seems that the ResNet architectures are as good if not even slightly better than the hGRU. Numerical measurements would probably help, but that is a minor issue. It's just that the authors claim that "neural networks and their extensions" struggle in those tasks. Since we may include ResNets in that definition, their own experiment would refute that claim. The fact that the hGRU is using many fewer parameters is indeed interesting but the ResNet is also a more general model and there is (surprisingly) no sign of overfitting due to a large model. So what is the motivation of the authors of having fewer parameters? 
8. Given the fact that ResNets perform so well on this task, why didn't the authors consider the earlier and closely related highway (HW) networks [high1]? HWs use a gating mechanism which is inspired by the LSTM architecture, but for images. Resnets are a special case of HW, that is, HW might make an even stronger baseline as it would also allow for a mix and gain-like computation, unlike ResNets. 9. In general, the hGRU is quite a bit more complex than the GRU. How does it compare to a double layer GRU? Since the hGRU also introduces a two-layer like cell (inhibiton part is seperated by a nonlinearity from the exhibition part) it seems unfair to compare to the GRU with fewer layers (and therefore smaller model complexity) 10. Can the authors elaborate on the motivation behind using the scalars in eq 8-11? And why are they k-dimensional? What is k? 11. Related work: The authors focus on GRU, very similar to LSTM with recurrent forget gates [lstm2], but GRU cannot learn to count [gru2] or to solve context-free languages [gru2] and also does not work as well for translation [gru3]. So why not use "horizontal LSTM" instead of "horizontal GRU"? Did the authors try? What is the difference to PyramidLSTM [lstm3], the basis of PixelRNNs? Why no comparison? Authors compare against ResNets, a special case of the earlier highway nets [high1]. What about comparing to highway nets? See point 8 above. [gru2] Weiss et al. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. Preprint arXiv:1805.04908. [gru3] Britz et al (2017). Massive Exploration of Neural Machine Translation Architectures. Preprint arXiv:1703.03906 [lstm2] Gers et al. âLearning to Forget: Continual Prediction with LSTM.â Neural Computation, 12(10):2451-2471, 2000. [lstm3] Stollenga et al. Parallel Multi-Dimensional LSTM, With Application to Fast Biomedical Volumetric Image Segmentation. NIPS 2015. Preprint: arxiv:1506.07452, June 2015. [high1] Srivastava et al. Highway networks. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (Jul 2015). Also at NIPS'2015. After the rebuttal phase, this review was edited by adding the following text: Thanks for the author feedback. However, we remain unconvinced. The baseline methods used for performance comparisons (on a problem on which few compete) are not the state of the art methods for such tasks - partially because they throw away spatial information the deeper they get, while shallower layers cannot connect the dots (literally) due to the restricted field of view. Why don't the authors compare to a state of the art baseline method that can deal with arbitrary distances between pixels - standard CNNs cannot, but the good old multi-dimensional (MD) RNN can (https://arxiv.org/abs/0705.2011). For each pixel, a 2D-RNN implicitly uses the entire image as a spatial context (and a 3D-RNN uses an entire video as context). A 2D-RNN should be a natural competitor on this simple long range 2D task. The RNN is usually LSTM (such as 2D-LSTM) but could be something else. See also MD-RNN speedups through parallelization (https://arxiv.org/abs/1506.07452). The submission, however, seems to indicate that the authors donât even fully understand multi-dimensional RNNs, writing instead about "images transformed into one-dimensional sequencesâ in this context, although the point of MD-RNNs is exactly the opposite. Note that an MD-RNN in general does have local spatial organization, like the model of the authors. 
For any given pixel, a 2D-RNN sees this pixel plus the internal 2D-RNN states corresponding to neighbouring pixels (which already may represent info about lots of other pixels farther away). Thatâs how the 2D-RNN can recursively infer long range information despite its local 2D spatial neighbourhood wiring. So any good old MD-RNN is in fact strongly spatially organised, and in that sense even biologically plausible to some extent, AFAIK at least as plausible as the system in the present submission. The authors basically propose an alternative local 2D spatial neighbourhood wiring, which should be experimentally compared to older wirings of that type. And to our limited knowledge of biology, it is not possible to reject one of those 2D wirings based on evidence from neuroscience - as far as we can judge, the older 2D-RNN wiring is just as compatible with neurophysiological evidence as the new proposal. Since the authors talk about GRU: they could have used a 2D-GRU as a 2D-RNN baseline, instead of their more limited feedforward baseline methods. GRU, however, is a variant of the vanilla LSTM by Gers et al 2000, but lacking one gate, thatâs why it has those problems with counting and with recognising languages. Since the task might require counting, the best baseline method might be a 2D-LSTM, which was already shown to work on challenging related problems such as brain image segmentation where the long range context is important (https://arxiv.org/abs/1506.07452), while I donât know of similar 2D-GRU successes. We also agree with the AC regarding negative weights. Despite some motivation/wording that might appeal to neuroscientists, the proposed architecture is a standard ML model that has been tweaked to work on this specific problem. So it should be compared to the most appropriate alternative ML models (in that case 2D-RNNs). For now, this is a Machine Learning paper slightly disguised as a Computational Neuroscience paper. Anyway, the paper has even more important drawbacks than the baseline dispute. Lack of clarity still makes it hard to re-implement and reproduce, and a lot of complexity is added which is not well motivated or empirically evaluated through, say, an ablation study. Nevertheless, we encourage the authors to produce a major revision of this interesting work and re-submit again to the next conference! | 9. In general, the hGRU is quite a bit more complex than the GRU. How does it compare to a double layer GRU? Since the hGRU also introduces a two-layer like cell (inhibiton part is seperated by a nonlinearity from the exhibition part) it seems unfair to compare to the GRU with fewer layers (and therefore smaller model complexity) 10. Can the authors elaborate on the motivation behind using the scalars in eq 8-11? And why are they k-dimensional? What is k? |
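To make the 2D-RNN argument above concrete, here is a minimal sketch of a two-dimensional recurrent scan in which each cell's state mixes its own pixel with the already-computed left and top states, so a single corner pixel can influence the opposite corner. The function name `scan_2d`, the scalar weights, and the tanh update are illustrative assumptions, not the hGRU or any published MD-RNN/2D-LSTM implementation.

```python
import numpy as np

def scan_2d(image, w_in=0.5, w_left=0.4, w_up=0.4):
    # image: 2D array of pixel values; state: same-shaped array of cell states.
    # Each state depends on the local pixel plus the left/top neighbour states,
    # so the cell at (i, j) indirectly sees every pixel above and to its left.
    H, W = image.shape
    state = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            left = state[i, j - 1] if j > 0 else 0.0
            up = state[i - 1, j] if i > 0 else 0.0
            state[i, j] = np.tanh(w_in * image[i, j] + w_left * left + w_up * up)
    return state

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[0, 0] = 1.0  # a single "on" pixel in one corner
    s = scan_2d(img)
    # A small but nonzero value at the opposite corner shows the corner pixel's
    # influence has propagated across the whole grid through the recursion.
    print(float(s[-1, -1]))
```

A full MD-RNN would replace the scalar update with an LSTM cell and run scans from all four corners so that every pixel's full 2D context is covered.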
NIPS_2020_788 | NIPS_2020 | - My primary concern is insufficient comparison with the existing literature on online LP, like the two works cited [Agrawal '14, Kesselheim et al. '14]: - The paper claims novelty in the sublinear competitive ratios obtained in those works of the form O(1 - \eps(m,n)), so that \eps(m,n) * OPT is the regret. From a glance at the works cited by [Agrawal '14], "Online stochastic packing applied to display ad allocation" [Feldman et al. '10] has a 1/OPT term in this competitive ratio, giving a sublinear regret bound. Some clarifying discussion is necessary here. - Moreover, the standard in the literature on this problem, starting with [Kleinberg '05], is to prefer dependences on B := min_i b_i (from the notation of [Agrawal '14], underbar-d in this paper) rather than OPT; see the related work section in that reference. Some discussion and a clearer comparison is necessary, since this line of work is so well-established. - It seems (at a glance; I haven't verified completely) that the positivity assumptions in those cited works, the removal of which is pointed out as a novel contribution in this work, come not from some fundamental mathematical reason, but rather to simplify the sign conventions when the authors choose to quantify their results using competitive ratio on a positive utility function rather than regret. Some clarification about this would be appreciated; in any case, the manuscript should discuss this at greater depth. - It seems that the original algorithms in these cited works operate under the model where constraint violation is not permitted, while expected violation is considered as a cost here. Could the authors clarify this discrepancy? - In summary, I like the ideas in the paper; however, since this line of work is so well-established and the problem is so concrete, this work needs to be more concrete and thorough in establishing its relationship with prior work. | - Moreover, the standard in the literature on this problem, starting with [Kleinberg '05], is to prefer dependences on B := min_i b_i (from the notation of [Agrawal '14], underbar-d in this paper) rather than OPT; see the related work section in that reference. Some discussion and a clearer comparison is necessary, since this line of work is so well-established. |
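As a small clarification of the competitive-ratio/regret terminology used in the comment above (a standard identity, not a claim about the submission's specific bounds): a guarantee of the form "at least a (1 - eps) fraction of the offline optimum" is the same statement as an additive regret bound of eps times the optimum,

```latex
\mathrm{ALG} \;\ge\; \bigl(1-\epsilon(m,n)\bigr)\,\mathrm{OPT}
\quad\Longleftrightarrow\quad
\mathrm{OPT}-\mathrm{ALG} \;\le\; \epsilon(m,n)\,\mathrm{OPT},
```

so a competitive ratio of the form 1 - eps(m,n), with eps shrinking in m and n, is equivalent to regret that is sublinear in OPT.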
ICLR_2021_2196 | ICLR_2021 | Weakness:
the paper needs a major rewrite to improve fluency and to better state motivation and contribution
the empirical validation is weak.
Reasons for accept: The advantages of this paper are: 1) this paper proposes a new evaluation benchmark and dataset to promote related research on online continual learning; 2) the proposed plastic gate allows the model to distribute different distributions among different experts, which shows some benefit in the experimental results.
Reasons for reject: The shortcomings of this paper are: 1. This paper is not novel enough and has not contributed enough to research on continual learning; 2. The core motivation of this paper is not clear enough. The abstract mentioned that "it is hard to demarcate task boundaries in actual tasks", and then said that a new benchmark, new metrics, and a gating technique are proposed. Stacked statements like this can hardly capture the main problem to be solved. 3. The advantages of the new metrics are not clear, because from the experimental results PPL and PPL@sw have a strong correlation. Therefore, please explain their advantages in detail (including the advantages of this evaluation framework compared with the evaluation frameworks of related literature, and verify them). 4. The baseline uses LSTM and does not use CNN, Transformer, etc., which shows that its generalization is limited. 5. Can you provide the experimental results for other values of λ and for different numbers of modules? 6. Because what you are proposing is a continuous language modeling evaluation framework, is it possible to evaluate some of the latest online continual learning systems? For example:
Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis
Learning a Unified Classifier Incrementally via Rebalancing, or other Task-Free Continual Learning related work. This would be a good test of the versatility of your evaluation framework. | 5. Can you provide the experimental results for other values of λ and for different numbers of modules? 6. Because what you are proposing is a continuous language modeling evaluation framework, is it possible to evaluate some of the latest online continual learning systems? For example: Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis; Learning a Unified Classifier Incrementally via Rebalancing; or other Task-Free Continual Learning related work. This would be a good test of the versatility of your evaluation framework. |
NIPS_2018_849 | NIPS_2018 | Weakness: - Some training details are missing. It may be better to provide more details in order for others to reproduce the results. In particular, a. K-Means is applied to initialize the centers. However, in the beginning of training stage, the feature maps are essentially random values. Or, is K-Means applied to a pretrained model? b. KL divergence between p(Q) and p is included in the loss. May be interesting to see the final learned p(Q) compared to p. c. It is not clear to the reviewer how one could backpropagte to x, w, and \sigma in Equation (2). Also, will the training be sensitive to \sigma? - Is it possible to visualize how the graph representations evolve during training? - What is the extra cost in terms of computation resources and speed when employing the GCU? - 4 GCUs with (2, 4, 8, 32) vertices are used for semantic segmentation. It may be informative to discuss the improvement of each added GCU (e.g., which one is more helpful for segmentation). - It may be also informative to provide the learned graph representations for object detection and instance segmentation. - The proposed model attains similar performance as non-local neural network. It may be informative to provide some more discussion about this (e.g., extra computational cost for each method.). - In line 121, what does \sigma_k = sigmoid(\sigma_k) mean? \sigma_k appears on both sides. In short, the reviewer thinks the proposed method is very interesting since it is able to capture the long-range context information for several computer vision tasks. It will be more informative if the authors could elaborate on the issues listed above. | - The proposed model attains similar performance as non-local neural network. It may be informative to provide some more discussion about this (e.g., extra computational cost for each method.). |
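The KL term questioned in the review above compares a learned assignment distribution p(Q) with a prior p. Purely as a generic illustration (the probability vectors below are hypothetical, not the paper's learned p(Q) or its prior), the divergence between two categorical distributions can be inspected as follows:

```python
import numpy as np

def kl_categorical(p, q, eps=1e-12):
    # KL(p || q) for two categorical distributions given as probability vectors.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    prior = [0.25, 0.25, 0.25, 0.25]       # hypothetical prior p over 4 vertices
    learned = [0.40, 0.30, 0.20, 0.10]     # hypothetical learned assignment p(Q)
    # A small value means the learned assignment stays close to the prior.
    print(kl_categorical(learned, prior))
```

Reporting this quantity at convergence would directly answer the reviewer's question about how far the learned p(Q) ends up from p.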
ICLR_2021_2829 | ICLR_2021 | (i) It is not clear, and insufficiently explained, why the action/perceptual incongruity can yield a good intrinsic reward signal of “novelty”. The paper is overall easy to follow, but the clarity (especially of the method section) could be improved.
(ii) The core idea and the mathematical formulation of the multimodal contrastive learning (section 3.2) are exactly the same as Contrastive Multiview Coding (CMC), which the authors did not cite or compare against. A novelty/contribution of this work would be to use such techniques for intrinsic-reward exploration, but the significance and novelty are a bit limited.
(iii) Technical soundness: It is not convincing that the action incongruity (the variance of actions across different combinations of multisensory inputs) yields a useful exploration signal. (Please refer to the comments below.)
(iv) I do not think the multimodal setting in robotics control is convincing enough. RGB/depth information are possible modalities, but there are only two of them. It would have been much more significant if the modalities had included proprioceptive and touch information.
(v) Overall, the experiments study the method on continuous and discrete environments, but the choice of environments is limited. I do not think the evaluation setup is strong enough (see the comments below). Besides, on Atari, due to the nature of the environment, multimodal information might not help as much -- the benefit of the audio signal might not be significant enough to yield a novel exploration signal. Also, qualitative examples and additional analysis (e.g. Appendix A.4) do not seem comprehensive enough.
Detailed/Additional comments:
Technical soundness about the action variance: In most cases, the number of modalities is limited; for example, M=2 in the experiments, which gives only 3 possible inputs in total. Is that enough for measuring variances? (A small numerical sketch of this few-sample issue is given after these comments.) Also, in many cases it is likely that the policy is ignoring one modality and using another to make a decision despite the uncertainty. However, no in-depth justification was made. Also, how is the action variance measured in the Atari domains?
Empirically, the paper does not have a diverse set of environments for evaluation. For continuous control, only the FetchPush environment was considered, and there are only 3-4 environments reported from Atari. Experiments like Section 4.3/Figure 7 need to be evaluated on all sets of environments but only one was provided. Also, the experiment should focus on “success rate” or learning control ability in Fetch environments rather than interaction rate --- I doubt this is a valid metric to measure the exploration quality.
Some ablation studies are missing, such as balancing between perceptual incongruity and action incongruity. How critical is this balancing hyperparameter? From Table 1, I see SEMI(P) nearly approaches SEMI(PA) in performance, so my understanding is that the action incongruity itself is not contributing as much. What would be the performance of SEMI(A) where the action incongruity is used solely?
To me, the meaning of “alignment” in the “alignment predictor” sounds a bit unclear. It sounded like temporal alignment at first, but to my understanding, this appears to mean agreement between two multimodal observations.
Atari: The details of the policy network are missing. Question: 4 hidden layers are used, but this looks deeper than usual. Also, why does the action space consist of 12 actions (in ALE, it is usually 18)?
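To illustrate the few-sample concern raised earlier in these comments (with purely hypothetical numbers, not taken from the paper): with M=2 modalities there are only three masked-input combinations, so the action-variance "incongruity" signal is a variance over three samples and can swing widely on a single outlier.

```python
import numpy as np

# Hypothetical scalar actions produced by the same policy under the three input
# combinations (modality A only, modality B only, both modalities together).
actions_case_1 = np.array([0.10, 0.12, 0.11])
actions_case_2 = np.array([0.10, 0.12, 0.60])

print(np.var(actions_case_1))  # ~6.7e-05: the state looks "familiar"
print(np.var(actions_case_2))  # ~0.053: looks "novel", driven entirely by one sample
```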
Overall recommendation
Overall, the biggest weaknesses of the paper are the experiment and clarity/justification of the method. Although the proposed method achieves a better performance compared to the baseline, the paper would have benefited by having more diverse environments and more careful analysis. | 4 hidden layers are used, but this looks deeper than usual. Also, why does the action space consist of 12 actions (in ALE, it is usually 18)? Overall recommendation Overall, the biggest weaknesses of the paper are the experiment and clarity/justification of the method. Although the proposed method achieves a better performance compared to the baseline, the paper would have benefited by having more diverse environments and more careful analysis. |
ACL_2017_588_review | ACL_2017 | and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems.
- Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task.
2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary.
This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
- Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
2) Have you tried building a classifier that just takes h_i^e as inputs?
I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores. | 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. |
ICLR_2023_2354 | ICLR_2023 | 1. For example, in the 3s_vs_5z map of SMAC, why does the performance reported in MARLlib differ so obviously from EPyMARL?
2. For other environments (MPE, GRF, MAMuJoCo), why are only the performances of MARLlib included? 3. The authors could clearly illustrate the benefits obtained from agent-level distributed dataflow, a unified agent-environment interface, and effective policy mapping in terms of implementations or experiments. 4. Some statements, like “value iteration used by VDN and QMIX prefers a dense reward function”, lack solid explanations. | 1. For example, in the 3s_vs_5z map of SMAC, why does the performance reported in MARLlib differ so obviously from EPyMARL? |
BgZzJISvpY | ICLR_2024 | * **The novelty and technical contribution are trivial and very limited**. Since TQC has already considered the truncation technique to stabilize the RL training process, it is very incremental to use already-known extreme value theory to consider this problem again without any technical contribution. Besides, the authors claim that the advantage of the proposed algorithm over TQC is its lower computational cost, which is not important as both algorithms already largely increase the computational burden compared with other distributional RL algorithms. For example, EMD uses critic networks to output scale and location parameters, raising stability concerns compared with other distributional RL baselines with only one critic. More importantly, the performance of EMD is very similar to that of TQC.
* **The methodology is questionable and lacks justification**. The authors try to carry some existing statistical techniques and conclusions over to RL problems without investigating whether those results remain valid in real problems. Generally speaking, existing results in statistics typically rely on distribution assumptions, which are normally not applicable to RL problems without rigorous justification. In particular, extreme value theory covers a few limiting types, and the paper here only considers the iid case with the exponential family; the classical Gumbel limit in question is restated after this list for reference. Note that in online RL training, it is not clear whether the resulting distribution is exponential or not, and I am afraid it generally is not. Therefore, the proposed method relies on a very strong distribution assumption, which is typically not valid in complex environments. Also, the Gumbel distribution is an asymptotic result, and in practice we are more likely to care about the non-asymptotic regime. I thus doubt the rationale behind the proposed algorithm, especially given that the empirical improvement is very limited.
* **Necessary theoretical parts are missing.** To begin with, I suspect that the theorems presented in this paper are very likely directly based on existing results in extreme value theory. Without providing the reference, the paper would suffer from an academic integrity issue. Apart from that, it is not clear whether the distributional Bellman operator under the extreme value truncation is convergent or not, which should be rigorously discussed. In addition, Theorem 2 is stated in a very non-mathematical way, which is less rigorous and not convincing to me.
* **Missing literature.** Truncating the extreme value is also highly linked with risk-sensitive RL, but the relevant literature is missing. It is not clear whether truncating the extreme value is helpful or not as it is a trade-off between exploration and stability. However, the paper fails to justify this trade-off rigorously.
* **Experiments.** Experiments are restricted to continuous control cases, and the improvement in Figure 2 is not significant. Since the overestimation issue is more commonly studied in value-based RL, experiments on Atari games are necessary, and the current experimental results are weak.
* **Writing**. The writing should be substantially improved, and the choice of some words is casual. Some paragraphs, such as the review of continuous control algorithms, cover material that is well-known in the literature. Most parts of Section 4.2 are trivial and similar to existing works. | * **Necessary theoretical parts are missing.** To begin with, I suspect that the theorems presented in this paper are very likely directly based on existing results in extreme value theory. Without providing the reference, the paper would suffer from an academic integrity issue. Apart from that, it is not clear whether the distributional Bellman operator under the extreme value truncation is convergent or not, which should be rigorously discussed. In addition, Theorem 2 is stated in a very non-mathematical way, which is less rigorous and not convincing to me. |
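For reference, the classical iid result appealed to above is the Gumbel case of the Fisher-Tippett-Gnedenko theorem: for iid variables whose distribution lies in the Gumbel domain of attraction (for example, exponential-type tails), the suitably normalized maximum converges in distribution to a Gumbel law,

```latex
M_n = \max(X_1,\dots,X_n), \qquad
\Pr\!\left(\frac{M_n - b_n}{a_n} \le x\right) \;\xrightarrow[\,n\to\infty\,]{}\; \exp\!\bigl(-e^{-x}\bigr)
```

for some normalizing sequences a_n > 0 and b_n. Whether the return samples encountered during online RL training satisfy the independence and tail assumptions behind this limit is exactly what the review calls into question.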
rEEjYlzXUD | ICLR_2025 | 1. Limited baselines
The only baselines in the paper are standard SDE and artificial temperature. Including comparisons with other sampling techniques, such as steered molecular dynamics and transition path sampling to estimate the committor function, would make the results more robust.
2. Lack of comparison with prior works
Similar to W1, the paper lacks comparison with, or differentiation from, prior work. The authors have noted in Section 2 that the proposed approach generalizes prior sampling strategies and have cited related papers. However, the difference/novelty of DASTR does not seem to have been discussed clearly; could the authors add such a discussion? | 1. Limited baselines The only baselines in the paper are standard SDE and artificial temperature. Including comparisons with other sampling techniques, such as steered molecular dynamics and transition path sampling to estimate the committor function, would make the results more robust. |
ARR_2022_113_review | ARR_2022 | - Although BFS is briefly introduced in Section 3, it is still hard to understand for people who have not studied the problem. More explanation is preferable.
- Algorithm 1, line 11: the function s(·) should accept a single argument according to line 198.
- Figure 6: the font size is a little bit small. | - Although BFS is briefly introduced in Section 3, it is still hard to understand for people who have not studied the problem. More explanation is preferable. |
ICLR_2023_1195 | ICLR_2023 | - The assumption that a set of analytical derivative functions is available is a very strong hypothesis so the number of cases where this method can be applied seems limited. - The high dimensional tensor can be also compactly represented by the set of derivative functions avoiding the curse of dimensionality, so it is not clear what is the advantage of replacing the original compact representation by the TT representation. Maybe the reason is that in TT-format many operations can be implemented more efficiently. The paper gives not a clear explanation about the necessity of the TT representation in this case. - It is not clear in which cases the minimum rank is achieved by the proposed method. Is there a way to check it? - In the paper it is mentioned that the obtained core tensors can be rounded to smaller ranks with a given accuracy by clustering the values of the domain sets or imposing some error decision epsilon if the values are not discrete. It is not clear what is, in theory, the effect on the approximation in the full tensor error. Is there any error bound in terms of epsilon? - The last two bullets in the list of main contributions and advantages of the proposed approach are not clear to me (Page 2). - The method is introduced by an application example using the P_step function (section 2.2). I found this example difficult to follow and maybe not relevant from the point of view of an application. I think, a better option would be to use some problem easier to understand, for example, one application to game theory as it is done later in the paper. - Very relevant ideas and results are not included in the main paper and referred instead to the Appendix, which makes the paper not well self-contained. - The obtained performance in terms of complexity for the calculation of the permanent of a matrix is not better than standard algorithms as commented by the authors (Hamilton walks obtained the result with half of the complexity). It is not clear what is the advantage of the proposed new method for this application. - The comparison with the TT-cross method is not clear enough. What is the number of samples taken in the TT-cross method? What is the effect to increase the number of samples in the TT-cross method. I wonder if the accuracy of the TT-cross method can be improved by sampling more entries of the tensor.
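One reason the TT format is attractive even when the derivative functions already give a compact description (a generic observation about TT, not the paper's specific construction): once the cores are available, any entry of the full tensor is a short chain of small matrix products, and many algebraic operations reuse that structure. A minimal sketch with hypothetical core sizes and random cores:

```python
import numpy as np

def tt_entry(cores, index):
    # cores[k] has shape (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1.
    # The entry T[i_1, ..., i_d] is the product of the selected core slices.
    out = np.ones((1, 1))
    for core, i in zip(cores, index):
        out = out @ core[:, i, :]
    return float(out[0, 0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 3-dimensional tensor, mode sizes (4, 5, 6), TT-ranks (1, 2, 3, 1).
    cores = [rng.normal(size=(1, 4, 2)),
             rng.normal(size=(2, 5, 3)),
             rng.normal(size=(3, 6, 1))]
    print(tt_entry(cores, (1, 2, 3)))
```

The storage and the per-entry cost grow with the mode sizes and TT-ranks rather than with the full tensor size, which is the usual argument for converting an implicit representation into TT cores.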
Minor issues: - Page 2: “an unified approach” -> “a unified approach” - Page 2: “and in several examples is Appendix” -> “and in several examples in the Appendix” - In page 3, “basic vector e” is not defined. I think the authors refers to different elements of the canonical base, i.e., vectors containing all zeros except one “1” in a different location. This should be formally introduced somewhere in the paper. - Page 9: “as an contraction” -> “as a contraction” | - The method is introduced by an application example using the P_step function (section 2.2). I found this example difficult to follow and maybe not relevant from the point of view of an application. I think, a better option would be to use some problem easier to understand, for example, one application to game theory as it is done later in the paper. |
NIPS_2019_854 | NIPS_2019 | weakness I found in the paper is that the experimental results for Atari games are not significant enough. Here are my questions: - In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not included? Also, why is $\sum_a N(s, a)$ (but not $N(s, a)$) used for $\lambda_s$ in Equation (7)? - In Figure 3, when $d=5$, MENTS performs slightly worse than UCT at the beginning (for about 20 simulation steps) and then suddenly performs much better than UCT. Any hypothesis about this? It makes me wonder whether the algorithm scales with larger tree depth $d$. - In Table 1, what are the standard errors? Is it just one run for each algorithm? There is no learning curve showing whether each algorithm converges. What about the final performance? Itâs hard for me to justify the significance of the results without these details. - In Appendix A (experimental details), there are sentences like ``The exploration parameters for both algorithms are tuned from {}.ââ What are the exact values of all the hyperparameters used for generating the figures and tables? What hyperparameters is the algorithm sensitive to? Please make it more clear to help researchers replicate the results. To summarize based on the four review criteria: - Originality: To the best of my knowledge, the algorithm presented is original: it builds on previous work (a combination of MCTS and maximum entropy policy optimization), but comes up with a new idea for selecting actions in the tree based on the softmax value estimate. - Quality: The contribution is technically sound. The proposed method is shown to achieve an exponential convergence rate to the optimal solution, which is much faster than the polynomial convergence rate of UCT. It is also evaluated on two test domains with some good results. The experimental results for Atari games are not significant enough though. - Clarity: The paper is clear and well-written. - Significance: I think the paper is likely to be useful to those working on developing more sample efficient online planning algorithms. UPDATE: Thanks for the author's response! It addresses some of my concerns about the significance of the results. But it is still not strong enough to cause me to increase my score as it is already relatively high. | - In Figure 3, when $d=5$, MENTS performs slightly worse than UCT at the beginning (for about 20 simulation steps) and then suddenly performs much better than UCT. Any hypothesis about this? It makes me wonder whether the algorithm scales with larger tree depth $d$. |
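For context on the "softmax value estimate" discussed in the review above: the standard maximum-entropy (soft) state value used in this line of work is the temperature-scaled log-sum-exp of the action values, with the corresponding softmax policy (this is the generic definition, not necessarily the exact backup used in the paper):

```latex
V_{\tau}(s) \;=\; \tau \,\log \sum_{a} \exp\!\left(\frac{Q(s,a)}{\tau}\right),
\qquad
\pi_{\tau}(a \mid s) \;=\; \frac{\exp\!\left(Q(s,a)/\tau\right)}{\sum_{a'} \exp\!\left(Q(s,a')/\tau\right)}
```

As the temperature tau goes to zero, the soft value approaches the hard maximum over actions, which is why softmax-backup tree search can be compared directly against UCT-style maximization.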
ICLR_2021_78 | ICLR_2021 | weakness in comparisons to comparable virtual environments is given later.
d. The different variant agents being compared in the task benchmark are clearly explained and form an appropriate experimental design.
2. Novelty/Impact
a. The work aims to showcase a new challenge task, evaluation platform, and benchmark agent performance for goal recognition followed by collaborative planning. The work as described compares favourably to similar work in evaluation platforms and benchmarks referenced in the related work section and appendix. The differences are made clear, though the use of some featured distinctions are not demonstrated in the paper (e.g. visual observations are possible but not used in the benchmarking).
3. Experimental Rigour
a. This work is not primarily about demonstrating the benefits of a particular approach over others in a particular application. It demonstrates benchmarks for agent performance in a newly introduced problem setting. From that perspective, the paper has strong experimental rigour. The experimental design is appropriate, comparing multiple baselines and oracles with several sets of experimental variants from both the automated planning and reinforcement learning communities.
b. The comparison of experimental variants is conducted with both a computationally-controlled agent and a human-controlled avatar to evaluate collaboration performance in increasingly realistic settings.
c. The claim that the computationally-controlled Alice agent is human-like is repeated throughout the paper. This is not justified in the actual text of the main paper, but is supported to a moderate degree (human-like in strategy/planning if not movement/behaviour) through experiments with human subjects that are described in the appendix.
d. Effort paid to ensure diverse avatars in experimentation.
4. Reproducibility
a. The work is almost entirely reproducible, with details of all agent architectures used for experiments provided with hyperparameters and architecture design. The authors describe that the environment will be released as open-source, which will then make the article wholly reproducible. This reviewer appreciated the level of additional detail provided in the appendix to improve this area of evaluation.
3. Weaknesses
1. This paper uses the term social intelligence to motivate the context for this challenge task. Social intelligence is a much broader term than what is actually being evaluated here and would require evaluating capabilities beyond goal recognition and coordinated planning/task execution. It is suggested to replace this claim with "goal recognition and collaborative planning".
2. From the motivation provided, i.e. evaluating social perception and collaboration, why is the task specifically about watching then helping and not both together or just helping or just watching or some combination of these activities fluidly occurring throughout the interaction?
3. Further, the work itself does not explicitly motivate why this specific challenge task for goal recognition followed by collaborative planning is necessary for moving the state of the art in human-AI collaboration forward. However, it is a small leap to see the impact of this platform/task in evaluating applications like service robotics, social robotics, collaborative human-agent task performance, video games, etc. This reviewer can understand the impact of the work, but it would be clearer to explicitly discuss this.
4. It would be clearer to specify that this task is limited to situations where there is explicitly only one goal throughout the entire demonstration + execution episode. This is important since it precludes using this challenge task for research into agents that need to use goal recognition after the initial demonstration, potentially continuously over the course of execution. This second kind of continuous goal monitoring is more similar to real-world applications of watching and helping or assistive agents or social robotics, since the human collaborator can (and often will) change their mind.
5. Similarly, it should be noted that there is an explicit limitation of this challenge task and the evaluation metrics to scenarios where the entire success or failure of the approach is purely based on the final team accomplishment. This is similar to situations like team sports, where all that matters is the final game score. Many real-world scenarios for human-AI collaboration, differ by also requiring individual collaborators to do well or for the primary human user to do better with collaboration (than without). For example, in a computer game where Bob represents a team-mate to Alice who is a human player, Bob can choose to steam-roll Alice and win the game by itself. However, this leads to lower subjective user experience for the human team-mate. In this case, the score might be greater than what Alice could accomplish on their own and the game might be won faster than Alice could on their own, but the experience would be different based on whether they are truly collaborating or one is over-shadowing the other.
6. A final assumption is that there is no difference in expertise between Alice and Bob. The human is expected to be able to competently finish the task, and Bob is expected to linearly cut down the time taken to perform it. There are many real-world tasks in human-AI collaboration where this assumption does not hold, and there could be non-linear interactions between success rate and speed-up due to different levels of expertise between Alice and Bob.
7. The fixed Alice agent is called human-like throughout the article, and this was not properly justified anywhere in the main text of the paper. However, the appendix actually describes results that compare the performance of the computationally-controlled and human-controlled variants of Alice as judged by human observers. This partially addresses the concern. For clarity, it would be valuable to refer to the presence of this validation experiment in the main paper.
8. Why aren't there benchmark results (more than one) for the goal recognition component similar to the planning task experimentation? If both parts of the task are important, it would be valuable to provide additional experiments to show comparisons between goal recognition approaches as well, even if that is in the appendix for space reasons.
9. There could be more analysis of the benchmark agent performance, 1) Why does the purely random agent work relatively well across tasks? 2) Why doesn't HRL work better? Is this due to less hyperparameter tuning compared to other approaches or due to some intrinsic aspect of the task itself? 3) Perhaps I missed this, but why not try a flat model-based or model-free RL without a hierarchy?
10. There are several comments about other environments in the related work section and appendix being toy environments. However, the tasks in the environment demonstrated in this paper only use a small set of predicates as goals. Similarly, it CAN generate visual observations, but that isn't used by any of the baselines in the paper. Several comparisons to related virtual environments are made in the appendix, but some of the compared features aren't used here either (humanoid agent: this challenge task works equally well with non-humanoid avatars/behaviours; realism: visual realism is present, but it isn't clear whether behavioural or physical realism is, given the apparent use of animations instead of physical simulation).
11. None of the tasks described allow the use of communication between agents or evaluate that. Other multi-agent environments like Particle Environments (below) allow for that. Communication is a natural part of collaboration and should have been mentioned if only to distinguish future work or work out of current scope.
a. @article{mordatch2017emergence, title={Emergence of Grounded Compositional Language in Multi-Agent Populations}, author={Mordatch, Igor and Abbeel, Pieter}, journal={arXiv preprint arXiv:1703.04908}, year={2017}}
12. "planning and learning based baselines", "and multiple planning and deep reinforcement learning (DRL) baselines", etc. - There is potential for confusion with the use of terms "planning" and "learning" methods to do what both fields (automated/symbolic planning and reinforcement learning) would potentially consider as planning tasks. It would be clearer to indicate this distinction in terminology.
13. The human-likeness evaluation experiment asked subjects to evaluate performance one agent video at a time. A more rigorous evaluation might compare two agents side by side and ask the human observer to judge which one is human-controlled. This could also be in addition to the current evaluation. The current evaluation is a ceiling on performance while the comparative evaluation is a potential floor.
4. Recommendation:
1. I recommend accepting this paper, which was clear, novel, empirically strong, and supremely reproducible. The strengths conveyed above outweighed the weaknesses.
5. Minor Comments/Suggestions:
1. Some minor typos in the manuscript:
a. Using the inferred goals, both HP and Hybrid can offer effective. - page 6
b. IN(pundcake, fridge) - appendix table 2
c. This closeness perdition - appendix page 19 | 2. Novelty/Impact a. The work aims to showcase a new challenge task, evaluation platform, and benchmark agent performance for goal recognition followed by collaborative planning. The work as described compares favourably to similar work in evaluation platforms and benchmarks referenced in the related work section and appendix. The differences are made clear, though the use of some featured distinctions are not demonstrated in the paper (e.g. visual observations are possible but not used in the benchmarking). |
NIPS_2018_612 | NIPS_2018 | Weakness: - Two types of methods are mixed into a single package (CatBoost) and evaluation experiments, and the contribution of each trick would be a bit unclear. In particular, it is unclear whether CatBoost is basically for categorical data or whether it would also work with numerical data only. - The bias under discussion is basically the one that occurs at each step, and its impact on the total ensemble is unclear. For example, randomization as seen in Friedman's stochastic gradient boosting can work for debiasing/stabilizing this type of overfitting bias. - The examples of Theorem 1 and the biases of TS are too specific, and it is not convincing how these statements can be practical issues in general. Comment: - The main unclear point to me is whether CatBoost is mainly for categorical features or not. If sections 3 and 4 are independent, then it would be informative to separately evaluate the contribution of each trick. - Another unclear point is that the paper presents specific examples of biases of target statistics (section 3.2) and prediction shift of gradient values (Theorem 1), so we know that the bias can happen, but on the other hand, we are not sure how general these situations are. - One important thing I'm also interested in is that the latter bias, 'prediction shift', is caused at each step, and its effect on the entire ensemble is not clear. For example, I guess the effect of the presented 'ordered boosting' could be related to Friedman's stochastic gradient boosting cited as [13]. This simple trick just applies bagging to each gradient-computing step of gradient boosting, which randomly perturbs the exact computation of the gradient. Each step would be just randomly biased, but the entire ensemble would be expected to be stabilized as a whole. Both XGBoost and LightGBM have this stochastic/bagging option, and we can use it when we need it. Comment After Author Response: Thank you for the response. I appreciate the great engineering effort to realize a nice & high-performance implementation of CatBoost. But I'm still not sure how 'ordered boosting', one of the two main ideas of the paper, gives the performance improvement in general. As I mentioned in the previous comment, the bias occurs at each base learner h_t. But it is unclear how this affects the entire ensemble F_t that we actually use. Since each h_t is a "weak" learner anyway, any small biases can be corrected to some extent through the entire boosting process. I couldn't find any comments on this point in the response. I understand the nice empirical results of Tab. 3 (Ordered vs. Plain gradient values) and Tab. 4 (Ordered TS vs. alternative TS methods). But I'm still unsure whether this improvement comes only from the 'ordering' ideas to address the two types of target leakage. Because the compared models have many different hyperparameters and (some of?) these are tuned by Hyperopt, the improvement may not come only from addressing the two types of leakage.
For example, it would be nice to have something like the following comparisons to focus only on the two ideas of ordered TS and ordered boosting in addition: 1) Hyperopt-best-tuned comparisons of CatBoost (plain) vs LightGBM vs XGBoost (to make sure no advantage exists for CatBoost (plain)) 2) Hyperopt-best-tuned comparisons of CatBoost without column sampling + row sampling vs LightGBM/XGBoost without column sampling + row sampling 3) Hyperopt-best-tuned comparisons of CatBoost (plain) + ordered TS without ordered boosting vs CatBoost (plain) (any other randomization options, column sampling and row sampling, should be off) 4) Hyperopt-best-tuned comparisons of CatBoost (plain) + ordered boosting without ordered TS vs CatBoost (plain) (any other randomization options, column sampling and row sampling, should be off) | 1) Hyperopt-best-tuned comparisons of CatBoost (plain) vs LightGBM vs XGBoost (to make sure no advantage exists for CatBoost (plain)) |
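For readers unfamiliar with the option this reviewer refers to, Friedman-style stochastic gradient boosting is simply per-iteration row subsampling, and all three libraries expose it through a single parameter. A minimal sketch follows; the concrete values are illustrative assumptions, not recommended settings, and the CatBoost option names are assumed rather than taken from the paper.

```python
from sklearn.ensemble import GradientBoostingRegressor

# scikit-learn: subsample < 1.0 turns plain gradient boosting into
# Friedman's stochastic gradient boosting (a fresh row sample per tree).
sk_model = GradientBoostingRegressor(subsample=0.66)

# Roughly equivalent knobs in the libraries discussed in the review.
xgb_params = {"eta": 0.1, "subsample": 0.66}                          # XGBoost
lgbm_params = {"learning_rate": 0.1,
               "bagging_fraction": 0.66, "bagging_freq": 1}           # LightGBM
catboost_params = {"learning_rate": 0.1,
                   "bootstrap_type": "Bernoulli", "subsample": 0.66}  # CatBoost (assumed names)
```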
NIPS_2022_857 | NIPS_2022 | In the multimodal temporal contrastive loss, is there an assumption that the clip content and sentence will not repeat in the video? Otherwise, lines 50-51 will not make sense.
Line 157-158: what is the sampling strategy here for K and V? If these two sets are far away from each other, samples in the positive pair may not match each other.
The effectiveness of each component is not well analysed. There are some ablation studies in Table 5. However, it is not easy to see the effectiveness of L_time alone, L_mlm, L_vtm, \lambda_1 and \lambda_2. It would be great to show how each part contributes to the final performance.
Missing details in the comparison: Tables 1-3 demonstrate the performance of this paper and previous works. In addition to the pretraining dataset, these details are also important: 1) Computational cost. 2) Running time. 3) Additional dataset (used in pretraining and the pretrained model). 4) Input resolution. Combined with weakness No. 3, it is hard to distinguish which part of this paper is the key to the superior performance. | 4) Input resolution. Combined with weakness No. 3, it is hard to distinguish which part of this paper is the key to the superior performance. |
XK7kyCVjqr | ICLR_2024 | - The novelty of this paper is limited. The synthetic generation and alignment-ascending curriculum learning seem simple and straightforward.
- The discussion of the baseline is not sufficient. I mean, I cannot tell what contribution this paper achieved.
- During the generation of synthetic code, can you generate the snippet-level alignment? | - The discussion of the baseline is not sufficient. I mean, I cannot tell what contribution this paper achieved. |
NIPS_2016_370 | NIPS_2016 | , and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and Theorem 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach? | 2. 
I thought that the introduction was a bit grandiose in comparing itself to PAC learning. |
NIPS_2022_2398 | NIPS_2022 | Details are missing, especially about the baseline, and this weakens the otherwise compelling results. 1) KPT is a method to expand the verbalizers; what verbalizers did you get? I don't understand why the results would be worse than LM-BFF, as it seems to be LM-BFF + additional verbalizers. 2) For LM-BFF, how did you get the scores with demonstrations in a zero-shot setting?
Some design choices are not well justified or ablated: 1) Why specifically use FocalLoss? How does it compare to a linear combination of the original cross-entropy loss and a KNN loss similar to KNN-LM, which would be closer to the retrieval setup at test time? 2) Why does neural demonstration happen at the embedding layer?
Yes, the authors discuss the efficiency overhead of the proposed method in the appendix. | 2) For LM-BFF, how did you get the scores with demonstrations in a zero-shot setting? Some design choices are not well justified or ablated: |
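For context on the ablation being requested, a minimal sketch of the two objectives involved: focal loss, and a linear combination of cross-entropy with a kNN-based term in the spirit of kNN-LM. The interpolation weight `lam` and the shape conventions are assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t): down-weights easy, confident examples.
    log_p = F.log_softmax(logits, dim=-1)                         # (B, C)
    log_pt = log_p.gather(-1, target.unsqueeze(-1)).squeeze(-1)   # (B,)
    pt = log_pt.exp()
    return -((1.0 - pt) ** gamma * log_pt).mean()

def ce_plus_knn_loss(logits, knn_logits, target, lam=0.5):
    # The alternative the reviewer asks about: a linear combination of the
    # standard cross-entropy and a cross-entropy on retrieval (kNN) scores.
    # (kNN-LM itself interpolates the *distributions* at inference time.)
    return (1.0 - lam) * F.cross_entropy(logits, target) + lam * F.cross_entropy(knn_logits, target)
```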
ARR_2022_105_review | ARR_2022 | 1) The data labeling methods transform the prediction score in a ranked way. However, different evaluation approaches may produce different orders. In this paper, only one evaluation approach is used to generate the prediction score.
2) Writing should be improved as the paper is not easy to follow.
1) As an abbreviation, PLM in line 67 should be defined first. 2) The authors should give some intuitive examples of hypothesis, source and reference in the experiments. 3) The introduction of Hard MRA needs improvement. I thought monotonic means the sequence h->s->r. Because h->s is not allowed, the hard MRA forbids the h->s, s->r, and h->r. 4) Line 081, learning-xbased should be learning-based | 2) Writing should be improved as the paper is not easy to follow. |
NIPS_2022_138 | NIPS_2022 | Weakness: 1. It says that many existing algorithms for bilevel optimization suffer from approximation errors. But it only analyzes the "Unrolled Optimization Scheme" 2. Step 9 in Algorithm 1 is using approximation, which is not precise. How can one implement it? 3. Any concrete example to show the other algorithms issue would be much better. 4. It only solves the bilevel optimization to stationary point and is not able to solve it to global optimality.
It says that many existing algorithms for bilevel optimization suffer from approximation errors. But it only analyzes the "Unrolled Optimization Scheme"
Step 9 in Algorithm 1 is using approximation, which is not precise. How can one implement it?
Any concrete example to show the other algorithms issue would be much better.
It only solves the bilevel optimization to stationary point and is not able to solve it to global optimality. | 4. It only solves the bilevel optimization to stationary point and is not able to solve it to global optimality. It says that many existing algorithms for bilevel optimization suffer from approximation errors. But it only analyzes the "Unrolled Optimization Scheme" Step 9 in Algorithm 1 is using approximation, which is not precise. How can one implement it? Any concrete example to show the other algorithms issue would be much better. It only solves the bilevel optimization to stationary point and is not able to solve it to global optimality. |
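For context, the "Unrolled Optimization Scheme" referenced in this review approximates the inner problem with a few explicit gradient steps and backpropagates through them. The sketch below is a generic illustration of that idea, not the paper's Algorithm 1; the step counts and learning rates are assumptions.

```python
import torch

def unrolled_bilevel_step(x, y, inner_loss, outer_loss,
                          inner_steps=5, inner_lr=0.1, outer_lr=0.01):
    """One hypergradient step for min_x outer_loss(x, y*(x)), where
    y*(x) ~ argmin_y inner_loss(x, y) is approximated by unrolling.
    x is assumed to be a leaf tensor with requires_grad=True."""
    y = y.clone().requires_grad_(True)
    for _ in range(inner_steps):
        g = torch.autograd.grad(inner_loss(x, y), y, create_graph=True)[0]
        y = y - inner_lr * g            # keep the graph so dy/dx is tracked
    hyper_grad = torch.autograd.grad(outer_loss(x, y), x)[0]
    with torch.no_grad():
        x -= outer_lr * hyper_grad      # outer (hypergradient) update
    return x, y.detach()
```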
NIPS_2019_34 | NIPS_2019 | weakness of the paper is the results on the full navigation task, which are weak in comparison to past work. As the paper points out, this past work has used a variety of training and inference conditions which improve performance, and are likely orthogonal to the contributions here. However, much of this past work has also reported results without these augmentations, and those results are comparable or better than the navigation performance here. It would be clearer if the paper presented these results (for example the "co-grounding" and "greedy decoding" ablation of Ma et al. which obtains 42 SR and 28 SPL on the val-unseen environments, and the Behavioral Cloning (IL) ablation of Tan et al, which obtains 43.6 SR and 40 SPL on val-unseen) rather than the augmented settings, or explained why they are not comparable. In particular, since this paper uses the panoramic state representation of Fried et al, and an action space similar to theirs, it seems that their "panoramic space" ablation model might be a more appropriate baseline than the non-panoramic Seq2Seq model compared to here. However, all these differences seem at least partly explainable due to the use of different ResNet visual features than these past works. In addition, the results on the goal prediction task show a substantial improvement over the strong LingUNet model. *Clarity* I found the paper overall extremely clear about the model details and the intuition for each part, and the motivation for the work. There were a few minor details about the training procedure that were underspecified: - Is the true state sequence in 245 always the human trajectory, or does it include the exploration that is done by the model during training? - When training the policy with cross-entropy loss, are the parameters of the rest of the network (e.g. the filter and semantic map) also updated (or are they just updated by the filter supervision)? - Is the mapper-filter in the experiments 5.2 produced by training without the policy, or does this take a trained full model and remove the policy component? (it seems likely to be the first one, but the text is a bit unclear) *Significance* While the results on the full navigation task don't show an improvement over past work, I think that the model class is still likely to be built upon by researchers in this area. Past work has seen two high level problems in this task, which models like this one may be able to address: (1) Substantial improvements from exploration of the environment during inference time. Having a model with an explicit simulated planning component makes it possible to see how much the simulated planning using the learned environment representation could reduce the need for actual exploration. (2) A generalization gap in novel environments. It seems promising that this model has no gap in performance between seen and unseen environments, although the reason for this is not explained in this work. *Minor comments* - 238: The policy does seem to indirectly have access to a representation of the instruction and semantic map through the belief network; this could be clarified by saying that it has no direct access. - I found it surprising that incorporating the agent's heading into the belief state has such a large impact on performance, given the panoramic visual representation and action space. Some discussion of this would be helpful. 
| - I found it surprising that incorporating the agent's heading into the belief state has such a large impact on performance, given the panoramic visual representation and action space. Some discussion of this would be helpful. |
ARR_2022_28_review | ARR_2022 | The main concern with this paper is that it doesn't fully explain some choices in the model (see the comments/questions section). Moreover, some parts of the paper are actually not fully clear. Finally, some details are missing, making the paper incomplete.
- Algorithm 1 is not really explained. For example, at each step (1, 2, 2a, 3, 3a) are you sampling a different batch from S and T? Does the notation L(X) mean that you optimize only the parameters X of the architecture?
- Line 232: When you say you "mine", what do you exactly mean? Does this mean you sample P sentences from the set of sentences of S and T with similar constraints?
- Lines 237-238 and Line 262: Why would you want to use the representation from the critic's last layer? - Line 239: "Ci are a set of constraints for a sentence" should be moved earlier.
- Table 1: It seems that the results for DRG and ARAE are not averaged over 5 runs (they're exactly the same as in the previous version of the paper) - Table 1: How did you choose p=0.6?
- Table 1: row=ARAE, column=POLITICAL-FL It seems this value should be the one in bold.
- Lines 349-353: It seems you're comparing results for ARAE + CONTRA, ARAE + CLF and ARAE + CONTRA + CL with respect to simple ARAE, while in the text you mention only ARAE + CONTRA and ARAE + CLF.
- Line 361: and SIM to -> and SIM with respect to - Figure 3: Please rephrase the caption about the error bars (or explain it in the text). It is not clear what you mean.
- Line 389: You mention here that you used different p values than in Table 1. This table doesn't report results with different values for p. - Lines 422-423: why use nucleus sampling when the best results were with greedy decoding? Where does 0.9 come from?
- In general, in the experiments, what are the source and target domains?
- Line 426-Table4: What do you want to demonstrate here? Could you add an explanation? What constraints/attributes are preserved? What is the source domain? What is the target domain?
- Lines 559-560: This is not entirely true. In Cycle Consistency loss you can iterate between two phases of the reconstructions (A-B-A and B-A-B) with two separate standard backpropagation processes.
- Line 573: works focuses -> works focus | - Lines 422-423: why use nucleus sampling when the best results were with greedy decoding? Where does 0.9 come from? |
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work. | 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? |
NIPS_2018_185 | NIPS_2018 | Weakness: ##The clarity of this paper is medium. Some important parts are vague or missing. 1) Temperature calibration: 1.a) It was not clear what the procedure for temperature calibration is. The paper only describes an equation, without mentioning how to apply it. Could the authors list the steps they took? 1.b) I had to read Guo 2017 to understand that T is optimized with respect to NLL on the validation set, and yet I am not sure the authors do the same. Is the temperature calibration applied on the train set? The validation set (like Guo 2017)? The test set? 1.c) Guo clearly states that temperature calibration does not affect the prediction accuracy. This contradicts the results in Tables 2 & 3, where DCN-T is worse than DCN. 1.d) About Eq (5) and Eq (7): Does it mean that we perform temperature calibration twice? Once for source classes, and again for target classes? 1.e) It is written that temperature calibration is performed after training. Does it mean that we first do a hyper-param grid search for those of the loss function, and afterward we search only for the temperature? If yes, does it mean that this method can be applied to other already trained models, without the need to retrain? 2) Uncertainty calibration: From one point of view it looks like temperature calibration is independent of uncertainty calibration, with the regularization term H. However, in lines 155-160 it appears that they are both required to do uncertainty calibration. (2.a) This is confusing because the training regularization term (H) requires temperature calibration, yet temperature calibration is applied after training. Could the authors clarify this point? (2.b) Regarding H: Reducing the entropy makes the predictions more confident. This is against the paper's motivation to calibrate the networks, since they are already over-confident (lines 133-136). 3) Do the authors do uncertainty calibration on the (non-generalized) ZSL experiments (Tables 2 & 3)? If yes, could they share the ablation results for DCN:(T+E), DCN:T, DCN:E? 4) Do the authors do temperature calibration on the generalized ZSL experiments (Table 4)? If yes, could they share the ablation results for DCN:(T+E), DCN:T, DCN:E ? 5) The network structure: 5.a) Do the authors take the CNN image features as is, or do they incorporate an additional embedding layer? 5.b) What is the MLP architecture for embedding the semantic information? (number of layers / dimension / etc.) ##The paper ignores recent baselines from CVPR 2018 and CVPR 2017 (CVPR 2018 accepted papers were announced in March and were available online). These baseline methods' performance supersedes the accuracy reported in this paper. Some can be considered complementary to this work, but the paper can't simply ignore them. For example: Zhang, 2018: Zero-Shot Kernel Learning; Xian, 2018: Feature Generating Networks for Zero-Shot Learning; Arora, 2018: Generalized zero-shot learning via synthesized examples; CVPR 2017: Zhang, 2017: Learning a Deep Embedding Model for Zero-Shot Learning. ## Title/abstract/intro is overselling: The authors state that they introduce a new deep calibration network architecture. However, their contributions are a novel regularization term and a temperature calibration scheme that is applied after training. I wouldn't consider a softmax layer a novel network architecture.
Alternatively, I would suggest emphasizing a different perspective: the approach in the paper can be considered more general and can potentially be applied to any ZSL framework that outputs a probability distribution. For example: Atzmon 2018: Probabilistic AND-OR Attribute Grouping for Zero-Shot Learning; Ba 2015: Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions. Other comments: It would make the paper stronger if there were an analysis that provides support for the uncertainty calibration claims in the generalized ZSL case, which is the focus of this paper. The introduction could be improved: the intro only motivates why (G)ZSL is important, which is great for a new audience, but there is no interesting information for the ZSL community. It would be useful to describe the main ideas in the intro. Also, confidence vs. uncertainty were only defined in section 3, while they were used in the abstract / intro. This was confusing. Related work: It is worth mentioning transductive ZSL approaches, which use unlabeled test data during training, and then discriminating this work from the transductive setting. For example: Tsai, 2017: Learning robust visual-semantic embeddings; Fu 2015: Transductive Multi-view Zero-Shot Learning. I couldn't understand the meaning of lines 159, 160. Lines 174-179: the point is not clear. Sounds redundant. Fig 1 is not clear. I understand the motivation, but I couldn't understand Fig 1. | 4) Do the authors do temperature calibration on the generalized ZSL experiments (Table 4)? If yes, could they share the ablation results for DCN:(T+E), DCN:T, DCN:E ? |
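For reference, the Guo et al. (2017) procedure the reviewer asks about fits a single scalar temperature T on held-out validation logits by minimizing NLL after training; because dividing logits by T does not change the argmax, it cannot change accuracy, which is why the accuracy gap between DCN and DCN-T is being questioned. A minimal sketch, where the optimizer choice and iteration count are assumptions:

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    # Post-hoc temperature scaling: minimize NLL of softmax(logits / T) on a
    # held-out validation set. Dividing logits by a scalar T > 0 does not
    # change the argmax, so classification accuracy is unaffected.
    log_t = torch.zeros(1, requires_grad=True)       # T = exp(log_t) > 0
    opt = torch.optim.LBFGS([log_t], lr=lr, max_iter=steps)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()
```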
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? 
How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). |
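The bandit-setting reduction sketched in this review (keep K possibly-overlapping data subsamples per arm and act greedily with respect to a randomly chosen subsample's empirical mean each step) could look like the following. This is the reviewer's proposed simplification, not bootstrapped DQN itself, and the constants are assumptions.

```python
import random
from collections import defaultdict

class BootstrappedGreedyBandit:
    """Each arm keeps K 'heads' (data subsamples); every step we pick one head
    at random and play the arm whose head-estimate is largest."""

    def __init__(self, n_arms, n_heads=10, keep_prob=0.5):
        self.n_arms = n_arms
        self.n_heads = n_heads
        self.keep_prob = keep_prob       # chance a new reward is added to a given head
        self.sums = defaultdict(float)   # (arm, head) -> running reward sum
        self.counts = defaultdict(int)   # (arm, head) -> sample count

    def select_arm(self):
        h = random.randrange(self.n_heads)
        def estimate(a):
            c = self.counts[(a, h)]
            return float("inf") if c == 0 else self.sums[(a, h)] / c
        return max(range(self.n_arms), key=estimate)

    def update(self, arm, reward):
        for h in range(self.n_heads):    # overlapping subsamples via coin flips
            if random.random() < self.keep_prob:
                self.sums[(arm, h)] += reward
                self.counts[(arm, h)] += 1
```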
NIPS_2016_241 | NIPS_2016 | /challenges of this approach. For instance... - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. - The results are only reported after a bunch of training has occurred, but in RL we are often also interested in how the agent behaves *while* learning. I presume that early in training the model parameters are essentially garbage and the planning component of the network might actually *hurt* more than it helps. This is pure speculation, but I wonder if the CNN is able to perform reasonably well with less data. - I wonder whether more could be said about when this approach is likely to be most effective. The navigation domains all have a similar property where the *dynamics* follow relatively simple, locally comprehensible rules, and the state is only complicated due to the combinatorial number of arrangements of those local dynamics. WebNav is less clear, but then the benefit of this approach is also more modest. In what kinds of problems would this approach be inappropriate to apply? ---Clarity--- I found the paper to be clear and highly readable. I thought it did a good job of motivating the approach and also clearly explaining the work at both a high level and a technical level. I thought the results presented in the main text were sufficient to make the paper's case, and the additional details and results presented in the supplementary materials were a good compliment. This is a small point, but as a reader I personally don't like the supplementary appendix to be an entire long version of the paper; it makes it harder to simply flip to the information I want to look up. I would suggest simply taking the appendices from that document and putting them up on their own. ---Summary of Review--- I think this paper presents a clever, thought-provoking idea that has the potential for practical impact. I think it would be of significant interest to a substantial portion of the NIPS audience and I recommend that it be accepted. | - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. |
ACL_2017_201_review | ACL_2017 | Since this paper essentially presents the effect of systematically changing the context types and position sensitivity, I will focus on the execution of the investigation and the analysis of the results, which I am afraid is not satisfactory.
A) The lack of hyper-parameter tuning is worrisome. E.g. - 395 Unless otherwise notes, the number of word embedding dimension is set to 500.
- 232 It still enlarges the context vocabulary about 5 times in practice.
- 385 Most hyper-parameters are the same as Levy et al' best configuration.
This is worrisome because lack of hyperparameter tuning makes it difficult to make statements like method A is better than method B. E.g. bound methods may perform better with a lower dimensionality than unbound models, since their effective context vocabulary size is larger.
B) The paper sometimes presents strange explanations for its results. E.g. - 115 "Experimental results suggest that although it's hard to find any universal insight, the characteristics of different contexts on different models are concluded according to specific tasks."
What does this sentence even mean? - 580 Sequence labeling tasks tend to classify words with the same syntax to the same category. The ignorance of syntax for word embeddings which are learned by bound representation becomes beneficial. These two sentences are contradictory: if a sequence labeling task classified words with "same syntax" to the same category, then syntax becomes a very valuable feature. Bound representation's ignorance of syntax should cause a drop in performance just like in other tasks, which does not happen.
C) It is not enough to merely mention Lai et al. 2016, who have also done a systematic study of word embeddings, and similarly the paper "Evaluating Word Embeddings Using a Representative Suite of Practical Tasks" by Nayak, Angeli, and Manning, which appeared at the RepEval workshop at ACL 2016, should have been cited. I understand that the focus of Nayak et al.'s paper is not exactly the same as this paper's; however, they provide recommendations about hyperparameter tuning and experiment design and even provide a web interface for automatically running tagging experiments using neural networks instead of the "simple linear classifiers" used in the current paper.
D) The paper uses a neural BOW words classifier for the text classification tasks but a simple linear classifier for the sequence labeling tasks. What is the justification for this choice of classifiers? Why not use a simple neural classifier for the tagging tasks as well? I raise this point, since the tagging task seems to be the only task where bound representations are consistently beating the unbound representations, which makes this task the odd one out. - General Discussion: Finally, I will make one speculative suggestion to the authors regarding the analysis of the data. As I said earlier, this paper's main contribution is an analysis of the following table.
(context type, position sensitive, embedding model, task, accuracy) So essentially there are 120 accuracy values that we want to explain in terms of the aspects of the model. It may be beneficial to perform factor analysis or some other pattern mining technique on this 120 sample data. | - 385 Most hyper-parameters are the same as Levy et al' best configuration. This is worrisome because lack of hyperparameter tuning makes it difficult to make statements like method A is better than method B. E.g. bound methods may perform better with a lower dimensionality than unbound models, since their effective context vocabulary size is larger. B) The paper sometimes presents strange explanations for its results. E.g. |
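The factor-analysis suggestion at the end of this review can be made concrete with a factorial regression / ANOVA over the 120 (context type, position sensitivity, embedding model, task, accuracy) tuples. A sketch assuming the results are stored in a CSV with the column names shown; the file name and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per experiment with the factors and the accuracy.
df = pd.read_csv("embedding_results.csv")  # columns: context_type, bound, model, task, accuracy

ols = smf.ols(
    "accuracy ~ C(context_type) + C(bound) + C(model) + C(task)"
    " + C(context_type):C(task) + C(bound):C(task)",
    data=df,
).fit()

# Type-II ANOVA: how much accuracy variance does each factor (and interaction) explain?
print(sm.stats.anova_lm(ols, typ=2))
```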
NIPS_2018_356 | NIPS_2018 | The paper doesn't have one message. Theorem 3 is not empirically investigated. TYPOS, ETC - Abstract. To state that the papers "draws useful connections" is uninformative, if the abstract doesn't state *what* connections are drawn. - Theorem 1. Is subscript k (overloaded later in Line 178, etc) necessary? It looks like one can simply restate the theorem in terms of alpha -> infinity? - Line 137 -- do the authors confuse VAEs with GANs's mode collapse here? - The discussion around equation (10) is very terse, and not very clearly explained. - Line 205. True posterior over which random variables? - Line 230 deserves an explanation, i.e. why conditioning p(x_missing | x_observed, x) is easily computable. - Figure 3: which Markov chain line is red and blue? Label? | - Line 137 -- do the authors confuse VAEs with GANs's mode collapse here? |
ICLR_2021_1409 | ICLR_2021 | weakness of the submission is a lack of clarity in the contributions and presentation. Key statements are vague, basic notation is not defined and the writing and supporting figures would benefit from an additional pass. More details below.
My initial recommendation is towards a rejection. I think the paper needs major revisions to improve the presentation rather than a set of minor improvements that are easily made. But I am open to increasing my score depending on the results of the discussion period if the issues below are addressed.
Major issues
Unclear theoretical contributions.
It is unclear from Section 2 whether the proposed algorithm and Theorem 1 are novel theoretical contributions or an $\epsilon$-modification to existing work, such as the works of Diakonikolas et al. and Lugosi et al., that is more amenable to applications. Either is a valuable contribution, but this needs to be made clear. If Thm. 1 is a significant theoretical contribution, the theoretical novelty needs to be expanded upon and the differences with existing work made explicit. If the results are straightforward from existing theory but Alg. 1 is more convenient for applications, the issues with existing estimators need to be clearly highlighted.
Basic notation is not defined. I could only infer the following by reading referenced material:
the dimensionality $p$ (p. 2 or Eq. 3)
the unit ball $S^p$ (p. 2)
an outer product $(v)^{\otimes 2}$ (Alg. 1)
a constraint set $\Theta$ (Alg. 2)
the constant $d_2$ (Thm. 2, Eq. 5) is still mysterious
Clarity about the heuristic nature of Alg. 3.
There is nothing wrong with simplifications to allow the method to scale to large datasets, even if at the cost of some rigor. But the section needs to make clear that the simplifications are heuristic in nature, and not attempt to cover it with technical but wrong language. Eg: "reusing previous gradient samples to improve concentration" (p. 6) is inaccurate as the samples are not independent.
"it is unreasonable to expect that [the distribution of gradients] are vastly different in most cases, due to the smoothness of the objective" (p. 6) Reasonability is subjective, and the presented argument is wrong. Neural network objectives are most often non-smooth. This holds whether smoothness refers to differentiability, due to ReLU activations, or the Lipschitzness of the function or the gradient due to multiple layers.
Minor comments:
The figures need additional work as they are currently unreadable when printed.
The writing needs improvement, as some sentences are incomplete or contain duplicated words. Eg "This eigenpair is not required to be recomputed the current iteration" (p. 6), "sufficiently large enough" (remarks), "achieves the the optimal sub-Gaussian" (remarks)
Some of the cited preprints have been published (e.g. Che et al., Cherapanamjeri et al.). Please make sure your references are accurate and up-to-date. | 1 is a significant theoretical contribution, the theoretical novelty needs to be expanded upon and the differences with existing work made explicit. If the results are straightforward from existing theory but Alg. |
ACL_2017_37_review | ACL_2017 | Weak results/summary of "side-by-side human" comparison in Section 5. Some disfluency/agrammaticality.
- General Discussion: The article proposes a principled means of modeling utterance context, consisting of a sequence of previous utterances. Some minor issues: 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration.
2. Some reader confusion may be eliminated by explicitly defining what "segment" means in "segment level", as occurring on line 269. Previously, on line 129, this seemingly same thing was referred to as "a sequence-sequence [similarity matrix]". The two terms appear to be used interchangeably, but it is not clear what they actually mean, despite the text in section 3.3. It seems the authors may mean "word subsequence" and "word subsequence to word subsequence", where "sub-" implies "not the whole utterance", but not sure.
3. Currently, the variable symbol "n" appears to be used to enumerate words in an utterance (line 306), as well as utterances in a dialogue (line 389). The authors may choose two different letters for these two different purposes, to avoid confusing readers going through their equations.
4. The statement "This indicates that a retrieval based chatbot with SMN can provide a better experience than the state-of-the-art generation model in practice." at the end of section 5 appears to be unsupported. The two approaches referred to are deemed comparable in 555 out of 1000 cases, with the baseline better than the proposed method in 238 our of the remaining 445 cases.
The authors are encouraged to assess and present the statistical significance of this comparison. If it is weak, their comparison at best permits the claim that their proposed method is no worse (rather than "better") than the VHRED baseline.
5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text.
6. There is a pervasive use of "to meet", as in "a response candidate can meet each utterace" on line 280, which is difficult to understand.
7. Spelling: "gated recurrent unites"; "respectively" on line 133 should be removed; punctuation on line 186 and 188 is exchanged; "baseline model over" -> "baseline model by"; "one cannot neglects". | 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration. |
NIPS_2019_894 | NIPS_2019 | 1. Novelty/Significance/Soundness 1.1 The idea of clustering unlabeled unseen classes (transductive setting) was explored in [34, A] (Please discuss the difference with A.) The discussion on the differences between this work and [34] in L42-46 does not suggest that the proposed idea is more than incremental, especially on 2) and 3) which are about optimization details and an extension of transductive setting rather than about the method itself. 1) is also too ambiguous/not precise enough to allow me to quantify its significance. In my opinion, it would be better to describe [34] in more detail and explicitly point out why the proposed formulation is much better than what was proposed there. [A] Shojaee and Baghshah. Semi-supervised Zero-Shot Learning by a Clustering-based Approach. 2016. 1.2 The proposed approach is grounded in the existence of discriminative clusters. This is shown in Fig. 1 but on other datasets especially on find-grained datasets or where the number of unseen classes is large, this assumption could break or will not operate as well. If possible, the authors could show something similar to Fig. 1 for all datasets. Another drawback of the proposed approach is that using nearest center classifiers makes knowing of the number of clusters very important (In this paper, it is assumed to be the number of unseen classes; please make this clear in $3.2). Table 4 supports this fact and this is on the easy dataset AwA2 in which there are only 10 unseen classes. 1.3 The proposed extension of âtransductiveâ setting ($3.5) seems adhoc and there is not really any evaluation to support that it works well. 2. Experiments Experiments can be geared more toward showing that the domain shift problem has been resolved. Can we use quantitative measures / intrinsic evaluation of centers, before and after matching, to showcase this? In $4.2, the âvanillaâ harmonic mean is problematic. See Sect. 4.4.2 and especially Fig. 4 in [C] and [B]. This makes the discussion in L277-279 kind of invalid. I would also encourage evaluation using AUSUC. [B] Le Cacheux et al. From Classical to Generalized Zero-Shot Learning: A Simple Adaptation Process. 2019. [C] Changpinyo et al. Classifier and Exemplar Synthesis for Zero-Shot Learning. IJCV, 2019. 3. Discussion of related work should be improved. Besides discussion with respect to [34] which is my main concern above, more credit should be given to zero-shot learning approaches that take advantage of clustering structures by using nearest center classifiers for zero-shot learning. Besides [31] which is mentioned in this paper, see [C, D, E]. [D] Mensink et al. Distance-based image classification: Generalizing to new classes at near-zero cost. TPAMI 2013. [E] Changpinyo et al. Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning. ICCV 2017. Note that L150-154 is precisely what [E] mentions. Please also cite related work regarding the domain shift problem as well as generalized zero-shot learning. Please fix L237; Splits on all datasets do not belong to [14]. See Table 1 in [C]. 4. The paper is recommended to be revised for its English usage. Examples are a few instances of âNeed to note thatâ, âthe real casesâ (L45), âvalidâ (L167), âre-implementâ -> âtestâ? (L252). ### Updated Reviews ### Overall, several of my concerns are resolved and experiments do look stronger. However, I still have a strong concern regarding the relationship to [34] (see below) and in general discussion of related work. 
Given my current understanding of [34] and related work, the degree of the method's significance in terms of novelty, simplicity, or technical depth is in question, making it hard for me to be on the acceptance side. Taking the rebuttal and other reviews into account, I am happy to increase my score to 5. ***Key differences with [34]*** IMO, the paper did not make it clear about the relationship of this work to [34]. After reading the rebuttal, I was still unsure; thus, I checked [34] myself. With my limited understanding of what [34] did, I still have the following questions/concerns. First, the statement in the rebuttal â[34] is to improve label assignment over naive NN using a fixed project function while we aim to learn better projection function by only using naive NN assignmentâ doesnât add much to L47-58. In particular, I do not see a clear difference; why should we consider the process of reassigning cluster labels (and thus the centers) in [34] as simply improving label assignment but not adapting the projection function? Second, [34] builds an adaptation/transduction technique on top of JSLE [33] which is a much weaker baseline than VCL (see the comparison in Table 1, especially on CUB and SUN10 --- the datasets where see larger improvement of the proposed three variants over [34]). In other words, how much of the gain is simply due to the focus on nearest center classifier in the visual feature space? Third, the rebuttal mentions that one of the variants WDVSc is different because it uses soft matching vs. hard matching. This point is valid but leaves the question of whether what would happen if the method in [34] uses soft matching. Please also note that most of the baselines, both in the main text and the rebuttal, are inductive zero-shot learning methods. In particular, the only comparison we have to transductive ZSL baselines ([34] and 4 others) are only presented in Table 1. To be fair, this is in part due to the fact that the new proposed splits in and the use of ResNet [28] were proposed in 2017 and not yet adopted by the transductive ZSL community. ***Key differences with [A]*** I am good with not comparing with the unpublished [A] in detail but I think the paper can still discuss it. ***Dependence on discriminative clusters and known cluster number K*** This concern in my review 1.2 is to point out that this is where the proposed method could break down. I reread L303 -317 and am OK with the argument that the predicted cluster labels could still be useful even though the features may not be so discriminative. Moreover, ImageNet results the rebuttal help resolve the concern to some degree regarding the number K. However, it would be great to discuss why the method is less brittle on ImageNet, in stark contrast to the results in Table 6. It would also be nice to have results when K > the number of unseen classes and in the more adversarial setting proposed in this paper ($3.5). Finally, I do not know the details on these experiments due to the space limit on the rebuttal, but it would be nice to know why VCL is much worse than EXEM for Hit@K >=10 even though the two methods are very similar. ***New setting*** My opinion toward the new setting both in terms of the approach ($3.5) and results ($4.3) remain the same. The approach looks quite straightforward/adhoc and the results look preliminary, so it is hard for me to count this as a significant contribution. This is also orthogonal to the main contribution of the paper. 
***AUSUC and domain shift results in Table 2 of the rebuttal.*** I appreciate these new results. | 1. Novelty/Significance/Soundness 1.1 The idea of clustering unlabeled unseen classes (transductive setting) was explored in [34, A] (Please discuss the difference with A.) The discussion on the differences between this work and [34] in L42-46 does not suggest that the proposed idea is more than incremental, especially on |
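For context on point 1.2 and the rebuttal discussion, the kind of nearest-center transductive step at issue clusters the unlabeled unseen-class features with K set to the number of unseen classes and then matches cluster centers to per-class exemplars. The sketch below is a generic illustration of that idea, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def transductive_nearest_center(test_feats, class_exemplars):
    """test_feats: (N, d) unlabeled visual features of unseen-class test images.
    class_exemplars: (K, d) predicted exemplars/centers, one per unseen class.
    Assumes K equals the true number of unseen classes (the sensitivity the
    review worries about)."""
    K = class_exemplars.shape[0]
    km = KMeans(n_clusters=K, n_init=10).fit(test_feats)
    # Match each cluster center to the closest class exemplar, one-to-one.
    cost = np.linalg.norm(km.cluster_centers_[:, None] - class_exemplars[None], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    cluster_to_class = dict(zip(rows, cols))
    return np.array([cluster_to_class[c] for c in km.labels_])
```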
ICLR_2023_2919 | ICLR_2023 | I don't think the authors can claim contribution for presenting a "new perspective" on backdoor attacks as it is common knowledge that backdoor triggers can be discriminative features of the data. Also, I think this analysis applies more to the "clean label" setting - a point I would make clearer in the manuscript.
The work only deals with rudimentary backdoor attacks. It would have been nice to test on modern backdoor attacks - some of which even disguise the "trigger" e.g. [1,2].
The empirical results seem lukewarm - compared to some older defenses, the proposed approach wins out, but often it is not the most successful defense.
There is a lot of clunkiness with the definitions/assumptions. For example:
You might be missing an $S'$ in the Definition 2 - $f(x; S')$ vs. $f(x; S)$.
I don't like the presentation for Definition 1. If you're calling $\phi$ the feature, $\phi: S \to \{0, 1\}$, then define another symbol to refer to the preimage of $1$. When you reuse $\phi$ to define $\phi(S) = \{x \mid \overline{\phi(x)} = 1\}$. Also, what is $\overline{\phi(x)}$? Also what is feature $f$? Is this a typo?
Why have $S = (X \times Y)^n$? This seems unnecessary. Just define $S$ as $X \times Y$ for some input space $X$ and label space $Y$.
[1] Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden trigger backdoor attacks." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 07. 2020.
[2] Souri, Hossein, et al. "Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch." arXiv preprint arXiv:2106.08970 (2021). | 34. No.07. 2020. [2] Souri, Hossein, et al. "Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch." arXiv preprint arXiv:2106.08970 (2021). |
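To make concrete what "rudimentary" attacks look like next to the hidden-trigger attacks in [1, 2]: a BadNets-style poisoning step stamps a small visible patch and, in the dirty-label case, relabels the poisoned samples to the target class, while the clean-label variant the review mentions keeps the original labels. A generic sketch, not the attack evaluated in the paper; all parameter values are assumptions.

```python
import numpy as np

def poison_batch(images, labels, target_class, rate=0.05, patch=3, clean_label=False, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; stamps a solid patch in the corner of a
    random subset and, unless clean_label=True, flips their labels to target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0      # the "trigger": a small white patch
    if not clean_label:
        labels[idx] = target_class              # dirty-label (BadNets-style) poisoning
    return images, labels
```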
P2gnDEHGu3 | ICLR_2024 | Although I believe the idea is interesting, and there may be some
valuable finding in the paper, I have difficulties seeing a clear
take-home message based on the results presented, and probably also
due to the way they are presented. I have some concrete points of
criticism listed in the comments below (with approximate order of importance).
- The main claim, additivity of the multiple mechanisms, is not very
clearly demonstrated in the paper. The separation of the
subject/relation heads (as displayed in Fig. 2) is impressive.
However, the roles of the "mixed head" mechanism and the MLP, and the
additivity of all these mechanisms, are not clearly demonstrated.
- The dataset is rather small and it is not described in the paper at
all. The description in the appendix is also rather terse,
containing only a few examples. Given the data set size (hence the
lack of diversity), and the possible biases (not discussed) during
the data set creation, it is unclear if the findings can generalize
or not. In fact, some of the clear results (e.g., the results in
Fig. 2) may be due to the simple/small/non-diverse examples.
- I also have difficulty fully understanding the insights the
present "mechanisms" would provide. To me, it seems we do not get
any further insights than the obvious expectation that the models
have to make their decisions based on different parts of the input
(and meaningful segments may provide independent contributions). I
may be missing something here, but it is likely that many other
readers would miss it, too.
- Visualizations are quite useful for observing some of the results.
However, the discussion of findings based on a more quantitative
measure (e.g., DLA difference between factual and counterfactual
attributes) would be much more convincing, precise, repeatable, and general.
- Overall, the paper is somewhat difficult to follow, relying on data
in the appendix for some of the main claims and discussion points.
Appendices should not really be used to circumvent page limits.
Ideally, most readers should not even need to look at them.
- The head type (subject/relation) definition uses an arbitrary
threshold. Although it sounds like a rather conservative choice, it
would still be good to know how it was determined. | - I also have difficulty fully understanding the insights the present "mechanisms" would provide. To me, it seems we do not get any further insights than the obvious expectation that the models have to make their decisions based on different parts of the input (and meaningful segments may provide independent contributions). I may be missing something here, but it is likely that many other readers would miss it, too. |
lWlBAJTFOm | EMNLP_2023 | - A. The motivation proposed by the authors that "... decomposition and solution generation .... need distinct capabilities" (Line 13) is not sufficiently supported. The citation from the neuroscience literature in Line 67 ("cerebral functions are specialized and spatially localized") does not support the need for separate language models for decomposition and solution generation, for two reasons: (1) the connection assumes that language models and human brains work in the same way, which is a risky conjecture, and (2) multiple cognitive abilities of language models can also be localized in terms of model parameters, without the need for separate models; in fact, pre-trained language models already demonstrate a wide variety of abilities. On the contrary, concurrent work [1] shows that step-by-step supervision of a monolithic LLM can greatly improve complex reasoning performance, without the need for specialized models.
- A1. In my opinion, the paper does not need to include this speculative motivation. The well-known issue of {hallucinations/errors from chain-of-thought reasoning, and the limitations of existing prompting methods to mitigate this issue} provide sound motivation to apply more explicit forms of supervision via RL to guide the model. The motivation to train a separate external model could come from the fact that foundational LLMs are typically not available for fine-tuning and it is computationally prohibitive to do so. In fact, the versatility of the proposed method, which only requires fine-tuning of a *small* language model and can be applicable to *black-box* models for the underlying solver model, is a major strength of the proposed method. This could be highlighted further in the introduction.
- B. The case study (Section 7) only pertains to one sample, suggesting that it may have been cherry-picked. While this is effective at highlighting the strength of the proposed method, overall qualitative analysis based on a number of random samples would help readers to understand the overall strengths and weaknesses. This could also be supported with more samples provided in the paper, highlighting both success and failure cases of the method. Comprehensive analysis of both strengths and weaknesses would greatly improve the paper.
- C. If results are available, it would be helpful for the readers to provide a full performance table for GPT-4 baselines (near Line 515), even if the results may not be favorable for the proposed method.
- D. The writing is generally easy to follow, but off-putting or erroneous at times. It is evident that the writing could benefit from more meticulous attention. I would encourage the authors to revise the text for a more polished and refined presentation, given the significant contribution of their work. A non-exhaustive list of potential issues is noted in "Typos Grammar Style And Presentation Improvements".
- D1. The organization and writing for page 6 are notably very good.
I'd like to note that the idea and contribution of the paper are quite strong, but my initial scores will have to reflect the issues above. I am open to adjusting my score if these issues are addressed.
[1] Lightman 2023, Let's Verify Step by Step | - D1. The organization and writing for page 6 are notably very good. I'd like to note that the idea and contribution of the paper are quite strong, but my initial scores will have to reflect the issues above. I am open to adjusting my score if these issues are addressed. [1] Lightman 2023, Let's Verify Step by Step |
NIPS_2020_1143 | NIPS_2020 | - There are several places with inaccurate or misleading descriptions: e.g., for IBP [29], the method is actually a "certification method" because it introduces the interval bounds in the training. It is not based on the "verification" method. They use interval bounds in the training, and some of their results use the MIO verifier to evaluate the best test errors they can get. Also, the computational complexities of [12, 17, 18] in Line 27 are totally different. Some are polynomial time, some are NP-complete. Usually in this field, only formal-verification-based methods such as [17] would be described as computationally expensive. For line 29, those methods are not used for detecting adversarial examples, because they provide a certified region with classifications consistent with the input example x. They can only be used to detect guaranteed "non"-adversarial examples of x given a new input x'. - There are also some places that are not clear: e.g., it's not clear how to train the prototypes w in equation 2. It looks like d is pre-defined and the only parameters are w. Also, what is the relation of Eq. 3 to Eq. 2? - The NPC models are not as commonly used as other models such as NN in my understanding, so it's not clear how useful/important the robustness analysis is in this regard. | - The NPC models are not as commonly used as other models such as NN in my understanding, so it's not clear how useful/important the robustness analysis is in this regard. |
ICLR_2021_2329 | ICLR_2021 | Weaknesses of this paper: 1. The novelty of this paper might be limited. Previous works have explored the possibility of utilizing text as weak supervision for video representation learning (MIL-NCE); from the reviewer’s perspective, the main difference is that a different loss function is adopted. 2. Compared with methods that adopt other information (such as audio) as weak supervision, there is an inherent advantage of using text as supervision since pretrained text models such as BERT can be utilized as guidance. So a meaningful comparison would be the comparison with TWS and MIL-NCE; although the proposed method can achieve comparable performance with other methods with much less data, the authors do not give an analysis of what design in the proposed method enables this. 3. The performance comparison is not convincing enough. From Table 3, we can see that different backbones are used for different methods; the reviewer worries that the superiority of the proposed method might come from a stronger backbone. | 2. Compared with methods that adopt other information (such as audio) as weak supervision, there is an inherent advantage of using text as supervision since pretrained text models such as BERT can be utilized as guidance. So a meaningful comparison would be the comparison with TWS and MIL-NCE; although the proposed method can achieve comparable performance with other methods with much less data, the authors do not give an analysis of what design in the proposed method enables this. |
ARR_2022_153_review | ARR_2022 | Some evaluation results are mixed. Please see the detailed comments below.
Questions: 1. In line 257, it's strange that including the collected LIV-EN parallel data for finetuning actually makes the NMT system perform worse. Could you provide more discussion/explanation of this?
2. Could you provide some insight into why the multilingual model is "noticeably weaker" (line 236) on the ET→EN and LV→EN evaluations?
3. It's interesting that the LV→EN model performs better than the ET→EN model, especially since in section 2 the authors mentioned that the Livonian and Latvian languages are similar in many aspects. Minor: 1. Maybe the information that "the translation is done by hired experts (section 3)" could be added to footnote 2 on page 2 (section 1)? I was a bit confused when first reading that footnote since I wasn't sure how the translation is done.
2. Line 194, maybe I'm not familiar with the context, but could you explain what "implement the support of ..." means?
3. For future work, it may be worth considering adding cross-lingual contrastive learning to the training.
4. Line 210, on what ET, EN, LV data is the SentencePiece tokenizer obtained? Those in the 4-language parallel corpus?
5. Line 255: typo, "perform performed" | 3. For future work, it may be worth considering adding cross-lingual contrastive learning to the training. |
EcDO5EXFdH | ICLR_2024 | 1. One concern is the novelty of the paper. SiGeo feels like a combination proxy of ZiCO, FR norm, and loss functions.
2. SiGeo is Sub-One-Shot, but the authors did not use the warm-up to analyze the correlation and search accuracy in the main experiments, including NAS Benchmarks and CIFAR-10/CIFAR-100. As the authors mentioned, SiGeo is equivalent to a simplified ZiCO without warm-up, so the performance improvement in Tables 2 and 3 is also marginal.
3. It would have been preferable to conduct experiments on the NAS-Bench-201 benchmark and the ImageNet dataset. | 2. SiGeo is Sub-One-Shot, but the authors did not use the warm-up to analyze the correlation and search accuracy in the main experiments, including NAS Benchmarks and CIFAR-10/CIFAR-100. As the authors mentioned, SiGeo is equivalent to a simplified ZiCO without warm-up, so the performance improvement in Tables 2 and 3 is also marginal. |
NIPS_2019_1049 | NIPS_2019 | - While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world. - The assumption of disentangled factors seems to be a strong one given factors are often dependent in the real world. The authors do include a way to disentangle observations though, which helps to address this limitation. Originality: The problem of causal misidentification is novel and interesting. First, identifying this phenomenon as an issue in imitation learning settings is an important step towards improved robustness in learned policies. Second, the authors provide a convincing solution as one way to address distributional shift by discovering the causal model underlying expert action behaviors. Quality: The quality of the work is high. Many details are not included in the main paper, but the appendices help to clarify some of the confusion. The authors evaluated the approach on multiple domains with several baselines. It was particularly helpful to see the motivating domains early on with an explanation of how the problem exists in these domains. This motivated the solution and experiments at the end. Clarity: The work was very well-written, but many parts of the paper relied on pointers to the appendices so it was necessary to go through them to understand the full details. There was a typo on page 3: Z_t → Z^t. Significance: The problem and approach can be of significant value to the community. Many current learning systems fail to identify important features relevant for a task due to limited data and due to the training environment not matching the real world. Since there will almost always be a gap between training and testing, developing approaches that learn the correct causal relationships between variables can be an important step towards building more robust models. Other comments: - What if the factors in the state are assumed to be disentangled but are not? What will the approach do/in what cases will it fail? - It seems unrealistic to query for expert actions at arbitrary states. One reason is that states might be dangerous, as the authors point out. But even if states are not dangerous, parachuting to a particular state would be hard practically. The expert could instead simply be presented with a state and asked what they would do hypothetically (assuming the state representations of the imitator and expert match, which may not hold), but it could be challenging for an expert to hypothesize what he or she would do in this scenario. Basically, querying out of context can be challenging with real users. - In the policy execution mode, is it safe to execute the imitator's learned policy in the real world? The expert may be capable of acting safely in the world, but given that the imitator is a learning agent, deploying the agent and accumulating rewards in the real world can be unsafe. - On page 7, there is a reference to equation 3, which doesn't appear in the main submission, only in the appendix. - In the results section for intervention by policy execution, the authors indicate that the current model is updated after each episode. How long does this update take? - For the Atari game experiments, how is the number of disentangled factors chosen to be 30? In general, this might be hard to specify for an arbitrary domain. - Why is the performance for DAgger in Figure 7 evaluated at fewer intervals?
The line is much sharper than the intervention performance curve. - The authors indicate that GAIL outperforms the expert query approach but that the number of episodes required is an order of magnitude higher. Is there a reason the authors did not plot a more equivalent baseline to show a fair comparison? - Why is the variance on Hopper so large? - On page 8, the authors state that the choice of the approach for learning the mixture of policies doesn't matter, but disc-intervention clearly obtains much higher reward than unif-intervention in Figures 6 and 7, so it seems like it does make a difference. ----------------------------- I read the author response and was happy with the answers. I especially appreciate the experiment on testing the assumption of disentanglement. It would be interesting to think about how the approach can be modified in the future to handle these settings. Overall, the work is of high quality and is relevant and valuable for the community. | - The authors indicate that GAIL outperforms the expert query approach but that the number of episodes required is an order of magnitude higher. Is there a reason the authors did not plot a more equivalent baseline to show a fair comparison? |
ACL_2017_67_review | ACL_2017 | The main weaknesses for me are evaluation and overall presentation/writing.
- The list of baselines is hard to understand. Some methods are really old and it doesn't seem justified to show them here (e.g., Mpttern).
- Memb is apparently the previous state-of-the-art, but no reference is given for it.
- While it looks like the method outperforms the previous best-performing approach, the paper is not convincing enough. Especially on the first dataset, the difference between the new system and the previous state-of-the-art one is pretty small.
- The paper seriously lacks proofreading, and could not be published until this is fixed – for instance, I noted 11 errors in the first column of page 2.
- The CilinE hierarchy is very shallow (5 levels only). However, apparently it has been used in the past by other authors. I would expect that the deeper the hierarchy, the more difficult it is to branch new hyponym-hypernym pairs. This can explain the very high results obtained (even by previous studies)... - General Discussion: The approach itself is not really original or novel, but it is applied to a problem that has not been addressed with deep learning yet. For this reason, I think this paper is interesting, but there are two main flaws. The first and easiest to fix is the presentation. There are many errors/typos that need to be corrected. I started listing them to help, but there are just too many of them.
The second issue is the evaluation, in my opinion. Technically, the performances are better, but it does not feel convincing as explained above.
What is Memb? Is it the method from Shwartz et al. (2016), maybe? If not, what performance did this recent approach have? I think the authors need to reorganize the evaluation section in order to properly list the baseline systems, clearly show the benefit of their approach, and show where the others fail.
Significance tests also seem necessary given the slight improvement on one dataset. | - The list of baselines is hard to understand. Some methods are really old and it doesn't seem justified to show them here (e.g., Mpttern). |
ICLR_2022_3058 | ICLR_2022 | . At the end of section 2, the authors tried to explain noisy signals are harmful for the OOD detection. It's obvious that with more independent units the variance of the output is higher. But this affects both ID and OOD data. The explanation is not clear.
. The analysis in section 6 is kind of superficial. 1) Lemma 2: the conclusion is under the assumption that the mean is approximately the same. However, as DICE is not designed to guarantee this assumption, the conclusion in Lemma 2 may not apply to DICE. 2) mean of output: the scoring function used for OOD detection is max_cf_c(x). The difference of mean is not directly related to the detection scoring, so the associated observation may not be used to explain why the algorithm works.
. Overall, it is not well explained why the proposed algorithm would work for some OOD detection. 1) From the observation, although DICE can reduce the variance of both ID and OOD data, the effect on OOD seems more significant. This may due to the large difference between ID and OOD. Therefore, it would be interesting to exam the performance of DICE by varying the likeness between OOD and ID. 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical mean as the assumption. These conditions are crucial for DICE, but is not well discussed, eg., how to ensure DICE meet these conditions.
. In the experiment, the OOD samples generally are significantly different from ID samples (thus less challenging). As pointed out in the above comment, it would be interesting to compare the performance of DICE by varying the OODness of test samples. For example, the ID data is 8 from MNIST, OOD datasets can be 1) 3 from MNIST; 2) 1 from MNIST; 3) FMNIST; and 4) CIFAR-10.
. The comparison between DICE and generative-based model (Table 3) is unfair as DICE is supervised while the benchmarks are unsupervised. It's not surprising that DICE is better. The authors should add comments on that.
. It is claimed in the experimental part that the in-distribution classification accuracy can be maintained under DICE. Only the result on CIFAR-10 is shown. Please provide more results to support the conclusion if possible.
. Instead of using directed sparsification, one possible solution may be just using a simpler network. Of course this would change the original network architecture. But as one part of the ablation study, it would be interesting to know whether a simpler network would be more beneficial for the OOD detection. | 1) From the observation, although DICE can reduce the variance of both ID and OOD data, the effect on OOD seems more significant. This may due to the large difference between ID and OOD. Therefore, it would be interesting to exam the performance of DICE by varying the likeness between OOD and ID. |
NIPS_2022_1048 | NIPS_2022 | and comments: 1. This paper mainly focused on group sufficiency as the fairness metric. Is it possible to derive similar results under criteria of demographic parity or equalized odds? What are the potential challenges for other fairness metrics? Under these settings, is it still possible to achieve both fairness and accuracy for many subgroups? 2. The regularization coefficient λ seems to have a joint optimal value in 0.1-2. Could you elaborate more on why both fairness and accuracy drop when λ is large? 3. Is it possible to assume a general Gaussian distribution rather than an isotropic Gaussian in the proposed algorithm? What is the difference? 4. Can the proposed theoretical analysis be extended for a regression or segmentation task? For example, could we obtain the same results as the classification task? 5. Could you explain a bit more on the intuition of group sufficiency? Is there any relation to the well-known sufficient statistics? Other comments: 1. Could we extend the protected feature A to a vector form? For instance, A represents multiple attributes. 2. In the Introduction part, the authors introduced a medical therapy instance to present the importance of group sufficiency. Could you explain a bit more about the difference between sufficiency and DP/EO metrics in real-world application scenarios? 3. In line 225 and line 227, the mathematical expression of the Gaussian distribution is ambiguous. 4. In section 4.4, it mentions the utilization of the Monte Carlo sampling method. I am curious about the influence of different sampling numbers.
================================================ Thanks for the effort from the authors, and I am satisfied with the rebuttal. I would like to raise my score to 8. | 5. Could you explain a bit more on the intuition of group sufficiency? Is there any relation to the well-known sufficient statistics? Other comments: |
ARR_2022_12_review | ARR_2022 | I see no major weaknesses. But there are some minor issues that could be improved.
- The description of the dataset is incomplete. It will be helpful to describe some demographic information of the singers in the dataset, such as gender and age range. What is the percentage of male and female singers? Are English songs recorded by native speakers? Male and female singers may pose slightly different challenges, so it will be good to know this.
- Have the forced alignment results been manually checked? Montreal Forced Aligner works well with modal speech but may not work well on singing. Will this affect the evaluation results?
- Figure 4 is not easy to interpret. All pitch contours are mixed together. Is it possible to highlight the specific areas where DTW and CTW make errors? In this way it will be easier to spot the errors and validate the alignment result. - The same for Figure 3. Since this is a new distance measure, it might not be familiar to readers. It will be immensely helpful if a numerical example could be provided. For example, this could be "if a point falls into region n with an angle of 30 degrees, what is the numerical distance between this point and the anchor point?" | - The description of the dataset is incomplete. It will be helpful to describe some demographic information of the singers in the dataset, such as gender and age range. What is the percentage of male and female singers? Are English songs recorded by native speakers? Male and female singers may pose slightly different challenges, so it will be good to know this. |
ICLR_2022_350 | ICLR_2022 | Writing. The writing quality and the presentation of the paper can be substantially improved. For example, in section 2, it might be better to move the loss functions to the experimental section, as they are not part of the algorithm.
Novelty. Overall, I don't see enough originality, and it's not very challenging to adapt the sequential testing to this setting. The main idea is to detect the bad distribution shifts by nonparametric sequential testing based on comparing the risk function on the target data and the risk function on the source data. The monitoring statistics are the risk functions (and their CI).
Comments. 1. The proposed method requires the labels from the test data, which is a restrictive setting for practical use. The paper claims that the label can be revealed in a delayed fashion, but we could also do batch detection for the past observations (and wait for a period and redo the testing). The paper doesn't provide any expected sample size analysis. I think it'd be better to give some comparison with the batch detection algorithm. 2. Since the algorithm requires computing the CI at each time step, how efficient is it? | 1. The proposed method requires the labels from the test data, which is a restrictive setting for practical use. The paper claims that the label can be revealed in a delayed fashion, but we could also do batch detection for the past observations (and wait for a period and redo the testing). The paper doesn't provide any expected sample size analysis. I think it'd be better to give some comparison with the batch detection algorithm. |
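The rows above are a static snapshot of the dataset. For programmatic access, a minimal loading sketch using the Hugging Face `datasets` library is shown below; the repository path `org/focused-review-points` is a hypothetical placeholder (the actual dataset name is not given on this page), and the `train` split name is likewise an assumption.

```python
# Minimal sketch for loading and inspecting the review/point pairs.
# NOTE: "org/focused-review-points" is a hypothetical placeholder path,
# not the real repository name, and the "train" split is an assumption.
from datasets import load_dataset

ds = load_dataset("org/focused-review-points", split="train")

# Each row pairs a full focused review with the single point extracted from it,
# using the column names shown in the viewer header.
for row in ds.select(range(3)):
    print(row["paper_id"], "|", row["venue"])
    print("point:", row["point"][:120].replace("\n", " "), "...")
    print()
```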