Dataset Viewer
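The column summary below is previewed from the dataset viewer: each row pairs a full review (`focused_review`) with a single weakness point (`point`) that is typically excerpted from it. As a quick orientation, here is a minimal sketch of how a split with this schema could be loaded and inspected with the Hugging Face `datasets` library; the repository id in the snippet is a placeholder assumption, not the dataset's actual location.

```python
# Minimal sketch of loading a split with the schema previewed below.
# The repo id "example-org/focused-review-points" is a placeholder assumption;
# replace it with the dataset's actual Hub id or a local path.
from datasets import load_dataset

ds = load_dataset("example-org/focused-review-points", split="train")

# The four previewed columns: paper_id, venue, focused_review, point.
print(ds.column_names)

# One row: the full review text and the single weakness point extracted from it.
row = ds[0]
print(row["paper_id"], row["venue"])
print(row["point"])

# Sanity-check the string-length range reported for the `point` column.
point_lengths = [len(r["point"]) for r in ds]
print(min(point_lengths), max(point_lengths))
```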
paper_id (string, lengths 10-19) | venue (string, 15 classes) | focused_review (string, lengths 7-9.67k) | point (string, lengths 55-634) |
---|---|---|---|
NIPS_2019_899 | NIPS_2019 | Weakness: - Latent language seems to add a more complex intermediate problem. You are now introducing text understanding which might be a harder problem. Do we really need text? why not use a program for guidance? Surely, a program is more expressive than macro-action and interpretable and you dont have language understanding challenges. Maybe a clever data collection strategy can collect programs. - The problem is solved using supervised learning without any exploration based learning. This makes me wonder how easy the setup is. Did you try comparing against agents that are trained without text but use reinforcement learning? The environment must have some reward (score, number of enemies killed etc.). Of course, once you consider exploration it is not clear how accurate your instruction model would be. Maybe this is a limitation of this direction? Questions and Other Comments - Why would any instruction be repeated at all? I am trying to understand the purpose of using one-hot vector encoding for instructions. How many instructions occur more than once? - Did you evaluate against rule-based bots? How is the performance against another model trained using the same strategy (i.e. self play). - I believe authors first generate a text using the instruction model and then re-encode the text using an encoder. Why not do something like this: create an instruction embedding g(f(s)) from state encoding f(s). Pass this instruction encoding directly to the executor (as opposed to generated text). Then you can add an auxiliary objective which will try to bring g(f(s)) closer to the gold instruction encoding. This might work better as you are not adding discretization in between which can fail due to a single wrong decoding (say as based on a tie). - Equation on line 214 should have different state s for each time "i". Otherwise your state representation is not changing while you take new actions. - Many missing citations (see Tellex et al., AAAI 2011, Tellex et al., RSS 2014, Chaplot et al., AAAI 2017, Bahdanau et al., ICLR 2017, Misra et al., EMNLP 2018, Mirowski et al., 2019, Chen et al., CVPR 2019) etc. | - Many missing citations (see Tellex et al., AAAI 2011, Tellex et al., RSS 2014, Chaplot et al., AAAI 2017, Bahdanau et al., ICLR 2017, Misra et al., EMNLP 2018, Mirowski et al., 2019, Chen et al., CVPR 2019) etc. |
NIPS_2018_710 | NIPS_2018 | - My general reservation about this paper is that while it was helpful in clarifying my own understanding of BN, a lot of the conclusions are consistent with folk wisdom understanding of BN (e.g. well-conditioned optimization), and the experimental results were not particularly surprising. Questions: - Taking Sharp Minima Hypothesis at face value, Eq 2 suggests that increasing gradient variance improves generalization. This is consistent with the theme that decreasing LR or decreasing minibatch size make generalization worse. Can you comment on how to reconcile this claim with the body of work in black-box optimization (REBAR, RELAX, VIMCO, Reinforcement Learning) suggesting that *reducing* variance of gradient estimation improves generalization & final performance? - Let's suppose that higher SGD variance (eq 2) == better generalization. BN decreases intra-unit gradient variance (Fig 2, left) but increases intra-minibatch variance (Fig 4, right). When it is applied to a network that converges for some pair \alpha and B, it seems to generalize slightly worse (Fig 1, right). According to the explanations presented by this paper, this would imply that BN decreased M slightly. For what unnormalized architectures does the application of BN increase SGD variance, and for what unnormalized architectures does BN actually decrease SGD variance? (requiring LR to be increased to compensate?) How do inter-layer gradient variance and inter-minibatch gradient variance impact on generalization? - For an unnormalized network, is it possible to converge AND generalize well by simply using a small learning rate with a small batch size? Does this perform comparably to using batch norm? - While Section 4 was well-written, the argument that BN decreases exponential condition number is not new; this situation has been analyzed in the case of training deep networks using orthogonal weights (https://arxiv.org/pdf/1511.06464.pdf, https://arxiv.org/pdf/1806.05393.pdf), and the exploding / vanishing gradient problem (Hochreiter et al.). On the subject of novelty, does this paper make a stronger claim than existing folk wisdom that BN makes optimization well-conditioned? | - My general reservation about this paper is that while it was helpful in clarifying my own understanding of BN, a lot of the conclusions are consistent with folk wisdom understanding of BN (e.g. well-conditioned optimization), and the experimental results were not particularly surprising. Questions: |
dapU3n7yfp | ICLR_2024 | - Lack of baselines: The attack algorithm is based on HotFlip (2018), which is a bit old and less effective than the recently proposed baselines. I am wondering if the authors have compared with adversarial attack baselines proposed more recently such as Seq2sick [1], which shows better optimization effectiveness for text-to-text generation tasks.
- Lack of defense models (detoxified models): While I appreciate the authors’ efforts in comparing different pretrained models, it would also be interesting to evaluate against different defense approaches/detoxification approaches, such as [2,3,4], and confirm whether the attack is still effective.
- Validity of toxicity evaluation: The authors mention that their evaluation setup “can be applied to bridge the evaluation of toxicity in different PLMs”, and “speculate that the success rate of ASRA attack might be positively correlated with the toxicity of language models.” However, I do not see any evidence about whether the ASR here can be a good proxy to reflect model toxicity. Given that the model toxicity evaluation is conducted by evaluating model responses with a lot of different inputs and contexts, the setup of this paper is to evaluate model responses given “unnatural” prompts. I am thus suspicious about whether the test above can give an accurate evaluation of model toxicity.
- There is an important concern regarding the potential misuse of this work by malicious users to bypass safety controls of LLMs and elicit model toxicity. I believe it would be valuable for the paper to include a discussion of this aspect.
[1] Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples. (AAAI 2020)
[2] Plug and Play Language Models: A Simple Approach to Controlled Text Generation (ICLR 2020)
[3] DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (ACL 2021)
[4] Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models (NeurIPS 2022) | - Lack of defense models (detoxified models): While I appreciate the authors’ efforts in comparing different pretrained models, it would also be interesting to evaluate against different defense approaches/detoxification approaches, such as [2,3,4], and confirm whether the attack is still effective. |
ICLR_2023_3305 | ICLR_2023 | 1.The review of related work on uncertainty in meta-learning is small and it is difficult to locate the main contribution of this paper. 2.The Ood detection only extends from classification to regression, which is not very innovative. 3.The experiments only focus on the comparison between various models proposed by authors and lack of comparison with other methods. | 2.The Ood detection only extends from classification to regression, which is not very innovative. |
NIPS_2020_1507 | NIPS_2020 | 1. The proposed method is based on a pre-defined causal graph, which has limitations if the causal graph is unavailable. In the experimental results sections, the authors only showed the results with the graph constructed by the PC algorithm. It is not clear how the way of graph construction affects the final results. 2. The optimization details for the objective function is missed. 3. This paper lacks the complexity analysis for the proposed method. 4. Only validating the proposed method on one real dataset (i.e., Adult) cannot guarantee its applicability in the wide spectrum of real-world applications. 5. The assumption about the mutual independency of the exogenous variables are too strong to be satisfied in real-world applications. | 1. The proposed method is based on a pre-defined causal graph, which has limitations if the causal graph is unavailable. In the experimental results sections, the authors only showed the results with the graph constructed by the PC algorithm. It is not clear how the way of graph construction affects the final results. |
oIwoBDsJJI | ICLR_2024 | 1. The graph Foster distance is a direct application of the optimal transport problem on the graph Foster distributions.
2. Compared with the Fused Gromov-Wasserstein Distance (FGW), the improvement in the computation time and the classification accuracy for the graph Foster distance in the experiments is very marginal. | 2. Compared with the Fused Gromov-Wasserstein Distance (FGW), the improvement in the computation time and the classification accuracy for the graph Foster distance in the experiments is very marginal. |
3LdaPmAnji | EMNLP_2023 | 1. Experimental results do not show the effectiveness of fine-grained classes.
2. Perhaps lacks Inter-annotator Agreement to better demonstrate the quality of EDeR. | 1. Experimental results do not show the effectiveness of fine-grained classes. |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other rather than just modeling different incentives? Probably not enough space in the paper but actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N-th agent's
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - Consider increasing the axes margins? Markers at 0 and 12 are cut off. |
ACL_2017_333_review | ACL_2017 | The criticisms are very minor: - It would be best to report ROUGE F-Score for all three datasets. The reasons for reporting recall on one are understandable (the summaries are all the same length), but in that case you could simply report both recall and F-Score. - The Related Work should come earlier in the paper. - The paper could use some discussion of the context of the work, e.g. how the summaries / compressions are intended to be used, or why they are needed. - General Discussion: - ROUGE is fine for this paper, but ultimately you would want human evaluations of these compressions, e.g. on readability and coherence metrics, or an extrinsic evaluation. | - It would be best to report ROUGE F-Score for all three datasets. The reasons for reporting recall on one are understandable (the summaries are all the same length), but in that case you could simply report both recall and F-Score. |
NIPS_2016_208 | NIPS_2016 | 1. The novelty is a little weak. It is not clear what's the significant difference and advantage compared to NCA [6] and "Small codes and large image databases for recognition", A. Torralba et al., 2008, which used NCA in deep learning. 2. In the experiment of face recognition, some state-of-the art references are missing, such as Baidu' work "Targeting Ultimate Accuracy: Face Recognition via Deep Embedding", http://vis-www.cs.umass.edu/lfw/results.html#baidu. In that work, the triplet loss is also used and it reported the result trained on the dataset containing 9K identities and 450K images, which is similar with Webface. The VRF can achieve 98.65% on LFW which is better than the result in Table 3 in this paper. | 1. The novelty is a little weak. It is not clear what's the significant difference and advantage compared to NCA [6] and "Small codes and large image databases for recognition", A. Torralba et al., 2008, which used NCA in deep learning. |
NIPS_2021_1872 | NIPS_2021 | Weakness: 1. The background on linear GCNs may not be clear. Why do we need to study linear GCN? Most of current models are non-linear. 2. The difference between with self loop and without self loop? 3. The authors used the step-size to be T/K. Why exactly T/K? can we use larger or smaller than T/K? | 2. The difference between with self loop and without self loop? |
NIPS_2021_1907 | NIPS_2021 | There is little improvement empirically. Furthermore, it is unclear if the gains in this paper are due solely to the confidence widths or if the design of the algorithm is important too. For the empirical study, it is unclear how the other experiments would perform if they had access to the same confidence widths presented in this work. This may make the algorithmic comparison fairer since the differences in performance would be solely due to the sampling procedures. Also, (and I am torn on this since the setup is nice and clear) it is worth noting that the authors are most of the way through page 5 before any results are presented.
Other comments and questions: - Does theorem 1 hold for an adaptive sequence of x_n’s or a fixed sequence? The theorem just seems to specify a set of (x,y)’s that have been collected. Ie, is this a truly anytime result or for a fixed sequence? In the case of a linear kernel, the gap in the confidence widths between an anytime and fixed confidence bound is O(\sqrt(d)) which behaves like O(sqrt(\gamma_n)) in that setting. I guess that the algorithm is using these as an adaptive sequence which is maybe okay from a Bayesian perspective. - Same question for Thm 2 - For the result in remark 2, do other works get the same factor of d since log(N^d) = dlog(N)? This work is tighter in terms of \sqrt(\gamma) but is the d dependence the same? - Why is MVR the right sampling objective? - Regarding the statement in Section 6 about simple and cumulative regret bounds, it is somewhat expected that the cumulative regret is linear if you do this well on simple regret as your objective is largely one of exploration. Take for example the SE kernel as the variance \sigma -> 0. In this setting, we recover standard multiarmed bandits where http://sbubeck.com/ALT09_BMS.pdf for instance show that there cannot be an algorithm that is simultaneously optimal in both simple and cumulative regret.
Minor comments: - Make sure that the colors chosen for the plots are colorblind friendly. There are a variety of palettes in python for this. - Some of the axes in the plots in the main body and especially Appendix G are hard to read.
The authors do a good job discussing the limitations of their work, though more consideration should be given to potential negative societal impacts than simply saying “our work is theoretical, therefore we can do no wrong.” | - Does theorem 1 hold for an adaptive sequence of x_n’s or a fixed sequence? The theorem just seems to specify a set of (x,y)’s that have been collected. Ie, is this a truly anytime result or for a fixed sequence? In the case of a linear kernel, the gap in the confidence widths between an anytime and fixed confidence bound is O(\sqrt(d)) which behaves like O(sqrt(\gamma_n)) in that setting. I guess that the algorithm is using these as an adaptive sequence which is maybe okay from a Bayesian perspective. |
NIPS_2020_773 | NIPS_2020 | * It's not clear how the Kalman Filtering perspective provides any new insight. Both the global query-specific prior and frequency capping are trivial to specify in the standard attention framework. The Kalman Filtering perspective seems like an unnecessary distraction from what is in reality two simple modifications to a standard attention mechanism. It's especially confusing since a Kalman filter is traditionally specified as a sequential mechanism, more similar to an RNN than a Transformer. Section 3.5 doesn't sufficiently address these questions. For example, is the difference between expectation and estimation important in this setting? I currently view the KF perspective as a negative that distracts from the core modeling changes. * There are only two benchmark results. While the improvements are statistically significant, it's unclear whether they are nontrivial improvements. The authors need to provide more context here, especially for the real-world system. For example, is a +4.4% CTRgain big or small for this system? | * There are only two benchmark results. While the improvements are statistically significant, it's unclear whether they are nontrivial improvements. The authors need to provide more context here, especially for the real-world system. For example, is a +4.4% CTRgain big or small for this system? |
NIPS_2017_320 | NIPS_2017 | #ERROR! | - I believe in section 2 it could be made clearer when gradients are calculated by a solver and when not. |
bxltAqTJe2 | EMNLP_2023 | 1. I strongly recommend that the authors validate the quality of the GFC API output to ensure the accuracy of the ground truth. This step is crucial in ensuring the reliability of the findings.
2. Since the ground truth output can be assess on the Internet, which may lead to test data leakage problem. I encourage authors to discuss the data leakage problem and its implications in this work.
3. The novelty of the proposed CACN method appears limited, as it seems to be a direct combination of CoT and Reverse check worthiness. Additionally, the claim normalization task could be seen as a form of text augmentation or summarization.
4. The description of the baseline is not sufficiently clear. For instance, it is unclear whether the finetuning refers to full-parameter or parameter-efficient prompt tuning. Clarifying this aspect would enhance the understanding of the experimental setup.
5. It is essential to ensure fairness in the comparisons made. For example, the same method should be applied to different models, and the same base model should be evaluated with different prompt paradigms. This would provide a more comprehensive and unbiased analysis.
6. Table 4 lacks results for the proposed CACN method. It is important to include these results to provide a complete evaluation of the proposed approach under 0 and few shot setting.
7. The performance degradation in few-shot learning compared to 0-shot learning requires further explanation. I recommend that the authors examine the few-shot learning exemplars and provide additional insights in this aspect. | 2. Since the ground truth output can be assess on the Internet, which may lead to test data leakage problem. I encourage authors to discuss the data leakage problem and its implications in this work. |
ARR_2022_63_review | ARR_2022 | 1. There are some other contemporary state-of-the-art models, the authors can consider citing and including them for an extensive comparison.
2. It will be good to see some analysis and insights on different combinations of pre-training datasets introduced in Table 1.
Here are some questions: 1. Since some of the sub-tasks, like dialogue state tracking, require a fixed format of the output, if the model generation is incomplete or in an incorrect format, how can we tackle this issue?
2. The dialogue multi-task pre-training introduced in this work is quite different from the original language modeling (LM) pre-training scheme of backbones like T5. Thus I was curious about why not pre-train the language backbone on the dialogue samples first with the LM scheme, then conduct the multi-task pre-training? Will this bring some further improvement?
3. It will be good to see some results and analysis on the lengthy dialogue samples. For instance, will the performance drop on the lengthy dialogues? | 1. Since some of the sub-tasks, like dialogue state tracking, require a fixed format of the output, if the model generation is incomplete or in an incorrect format, how can we tackle this issue? |
NIPS_2020_1491 | NIPS_2020 | 1. The algorithms require some prior knowledge of the problem such as the number of tasks and switches, time horizon, and the full-information feedback, which is due to the binary loss. 2. I think the authors need a further survey in the contextual bandit with switching regret. Here the task index resembles the context. [Luo et al, 2018] would be close to this paper. Moreover, the idea of "meta-experts" can also be seen in [Wu et al, 2019]. [Luo et al, 2018] Luo, Haipeng, et al. "Efficient contextual bandits in non-stationary worlds." Conference On Learning Theory. 2018. [Wu et al, 2019] Wu, Yi-Shan, Po-An Wang, and Chi-Jen Lu. "Lifelong Optimization with Low Regret." The 22nd International Conference on Artificial Intelligence and Statistics. 2019. | 2. I think the authors need a further survey in the contextual bandit with switching regret. Here the task index resembles the context. [Luo et al, 2018] would be close to this paper. Moreover, the idea of "meta-experts" can also be seen in [Wu et al, 2019]. [Luo et al, 2018] Luo, Haipeng, et al. "Efficient contextual bandits in non-stationary worlds." Conference On Learning Theory. 2018. [Wu et al, 2019] Wu, Yi-Shan, Po-An Wang, and Chi-Jen Lu. "Lifelong Optimization with Low Regret." The 22nd International Conference on Artificial Intelligence and Statistics. 2019. |
ICLR_2022_3267 | ICLR_2022 | Weakness:
Rigorousness: This paper is a purely theoretical work in my opinion (though it has numerical simulations). Unfortunately, I did not find the claims in the paper is rigorous enough for a theoretical work. Specifically, the paper made several strong claims without rigorous mathematical analysis. For example, in section 2, the paper tries to explain the transfer phenomenon of adversarial examples from the lens of holomorphic optimal Bayes classifier. But there is no theorem/proof or detailed analysis, making the claim not convincing at all. Another example, in section 3, the paper lists the primal and dual problem for learning the maximum margin classifier, but there is again no rigorous mathematical derivation. Although the paper states that its claims are evident from the definition, as a reader, I did not find it evident at all. To sum up, the problem of the current presentation of this paper is that I cannot tell which parts are rigorous theorems and which parts are just intuitions. Therefore, I am very concerned about the rigorousness of this paper, and I do not think it qualifies as a theoretical work. I recommend the authors to 1) state clearly which parts are rigorous math and which are just intuitions/illustrations 2) summarize all the rigorous parts into mathematical theorems, 3) then give rigorous proofs. | 1) state clearly which parts are rigorous math and which are just intuitions/illustrations |
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a billinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses billinear pooling for learning joint question image representations. Right now the manner in which things are presented a novice reader might think this is the first application of billinear operations for question answering (based on reading till the related work section). Billinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match with the equation in the section. Specifically the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (with the \alpha_v computation seems to do) might be ill conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions belong correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented billinear layer is different from other approaches which do billinear pooling. Is the major difference the dimensionality of embeddings? How is the billinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. "Neural Module Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. "Simple Baseline for Visual Question Answering." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 2. Given that the paper uses a billinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses billinear pooling for learning joint question image representations. Right now the manner in which things are presented a novice reader might think this is the first application of billinear operations for question answering (based on reading till the related work section). Billinear pooling is compared to later. |
NIPS_2019_634 | NIPS_2019 | see section 5 ("improvements") below. Originality: while the methods are not particularly novel (autoregressive and masked language modelling pretraining have both been used before for ELMo and BERT; this work extends these objectives to the multi-lingual case), the performance gains on all four tasks are still very impressive. - Quality: This paper's contributions are mostly empirical. The empirical results are strong, and the methodology is sound and explained in sufficient technical details. - Clarity: The paper is well-written, makes the connections with the relevant earlier work, and includes important details that can facilitate reproducibility (e.g. the learning rate, number of layers, etc.). - Significance: The empirical results constitute a new state of the art and are important to drive progress in the field. ---------- Update after authors' response: the response clearly addressed most of my concerns. I look forward to the addition of supervised MT experiments on other languages (beyond the relatively small Romanian-English dataset) on subsequent versions of the paper. I maintain my initial assessment that this is a strong submission with impressive empirical results, which would be useful for the community. I maintain my final recommendation of "8". | - Quality: This paper's contributions are mostly empirical. The empirical results are strong, and the methodology is sound and explained in sufficient technical details. |
NIPS_2022_768 | NIPS_2022 | .
W1: The presentation can be improved. There is no overview of the approach to explain the components, and a few components and concepts appear without much prior context. For example, "encoder" appears without where it is exactly being used. Same for "topic aggregation". "count vector" was used only once without definition.
W2: This paper does not provide the related work description for the existing ETM this work is based on in the related work section. It also doesn't distinguish that in the method section. Thus, it is hard to evaluate the novel contribution of this paper compared to the existing approach. For example, much of the framework comes from the SawTooth paper, but this paper failed to include or summarize the overall structure.
W3: Many parts of the paper are not justified or explained enough. For example, how modeling the relation with a Bernoulli distribution complement Phi modeling the relations between layers in SawTooth, or why the proposed approach is better was not explained. How is the adaptive structure different from just fine-tuning the parameters? The paper quickly attributes the improved performance to the joint tree likelihood, but how exactly is it better than the other approaches such as SawTooth or TopicNet? What information does it capture that SawTooth and TopicNet do not? | .W1: The presentation can be improved. There is no overview of the approach to explain the components, and a few components and concepts appear without much prior context. For example, "encoder" appears without where it is exactly being used. Same for "topic aggregation". "count vector" was used only once without definition. |
ICLR_2022_1919 | ICLR_2022 | The proposed Plug-In inversion method is introduced very late in the paper (at the end of Section 3.4) even though it consists only of combining the augmentation and search space restriction techniques provided in the previous sections (3.1 - 3.3). It would have been much clearer for the reader if this fact was fully described earlier in Section 3 (for example, by moving Section 3.4 earlier in the paper), instead of delaying the presentation of the full method.
My greatest concern is the fact that I am not sure whether the claim of the authors that the method can be applied to multiple networks without tuning is fully supported by the experimental results. More specifically, in Figures 7 and 8 the method is applied in a variety of networks, resulting in images which, while intelligible, are of varying quality. As such, to fully support the argument of the lack of need of extensive hyperparameter tuning, one would also need to show that using the same regularizer (for example, TV, which can be applied to any model) with the same hyperparameter leads to more extensive degradation, when applied to different models. I believe that a small example to demonstrate this motivation would greatly improve the paper.
The above point is made even more unclear by the fact that one of the models considered by the authors does use a TV regularizer (which leads to drastic improvement in image quality).
The ColorShift augmentation proposed by the authors is, unless I am mistaken, a variant of color jittering (in the sense that the adjustment is made directly to the pixels of the image, rather than on hue, saturation, contrast etc.). Given that applying some form of adjustment to the color of the image is a form of data augmentation which has been used in prior work (Krizhevsky et al., 2012, section 4.1), I am not sure if this data augmentation is as novel as the authors claim (although I do appreciate the qualitative analysis of its effects in the context of inversion).
Questions: - The hyperparameters for ColorShift are fixed to a = b = 1, and if I understand correctly this comes from a qualitative analysis of the results in a few simple experiments. Is this correct?
Minor comments/typos: - As a minor comment related to the above, I believe the authors should indicate in the captions of the figures which model the respective images come from. - There is a space missing in the second-to-last line of page 6. - The caption in Figure 10 in the appendix is incomplete.
References: Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25, 1097-1105. | - As a minor comment related to the above, I believe the authors should indicate in the captions of the figures which model the respective images come from. |
NIPS_2018_840 | NIPS_2018 | 1. It is confusing to me what the exact goal of this paper is. Are we claiming the multi-prototype model is superior to other binary classification models (such as linear SVM, kNN, etc.) in terms of interpretability? Why do we have two sets of baselines for higher-dimensional and lower-dimensional data? 2. In Figure 3, for the baselines on the left hand side, what if we sparsify the trained models to reduce the number of selected features and compare accuracy to the proposed model? 3. Since the parameter for sparsity constraint has to be manually picked, can the authors provide any experimental results on the sensitivity of this parameter? Similar issue arises when picking the number of prototypes. Update after Author's Feedback: All my concerns are addressed by the authors's additional results. I'm changing my score based on that. | 3. Since the parameter for sparsity constraint has to be manually picked, can the authors provide any experimental results on the sensitivity of this parameter? Similar issue arises when picking the number of prototypes. Update after Author's Feedback: All my concerns are addressed by the authors's additional results. I'm changing my score based on that. |
ZyAwBqJ9aP | ICLR_2025 | 1. The paper does not introduce a novel method, it simply applies a typical graph neural network (GAT) and protein language model (ESM) to solve a binary classification problem. If the paper is to be improved, it will need to introduce a novel approach that brings with it significantly improved performance on this task.
2. The task is presented as a new task, however much work has been done predicting the substrates of CYPs (a leaderboard can be found on the Therapeutics Data Commons) including novel approaches to solve this problem. This paper takes a subset of the data used on the ESP model (Kroll et al) and applies a very similar architecture to a subset of the data.
3. The authors compare the performance of their model to the performance of other models and do not achieve the best results across all of the Isoforms. They claim this is because the DeepP450 model is trained on a smaller dataset which might have an impact on generalizability but this claim is not substantiated with evidence.
4. Almost two pages of the paper are dedicated to explaining how Graph Attention Networks work (which were not developed by the authors). Do you think that modeling the mechanics of P450 reactions could be a better approach? | 3. The authors compare the performance of their model to the performance of other models and do not achieve the best results across all of the Isoforms. They claim this is because the DeepP450 model is trained on a smaller dataset which might have an impact on generalizability but this claim is not substantiated with evidence. |
NIPS_2020_1671 | NIPS_2020 | I have some major concerns with the evaluation part of the paper. 1. Paper compared their method with influence functions and representer selection. A simple baseline could be a loss based selection method. Simply select training points based on loss change. A recent paper [DataLens IJCNN 20] shows that a simple loss based selection outperforms both influence functions and representer selection on mislabelled data identification when the mislabeled data is small. As the fraction of mislabelled data increases, influence function works better than loss based method. 2. Paper doesn't show the performance of TrackIn with varying amounts of mislabelled data. As pointed above, I expect TrackIn to perform poorly when we increase the mislabelled data. 3. Checkpoint ensembling is a widely used technique in machine translation [MT ensemble WMT16], semi-supervised learning [Temporal Ensembling ICLR17], knowledge distillation [KD Distillation NAACL 19] so it's not surprising that it helps in TrackIn. One can argue that influence functions can also benefit from the checkpoint ensembling. The authors should explain that. Also, the paper should cite prior work related to checkpoint ensembling as a motivation for picking multiple checkpoints. | 1. Paper compared their method with influence functions and representer selection. A simple baseline could be a loss based selection method. Simply select training points based on loss change. A recent paper [DataLens IJCNN 20] shows that a simple loss based selection outperforms both influence functions and representer selection on mislabelled data identification when the mislabeled data is small. As the fraction of mislabelled data increases, influence function works better than loss based method. |
4wAKqlfV5t | EMNLP_2023 | 1. The authors say that ''Although previous methods have proposed multimodal representations and achieved promising results, most of them focus on forming positive and negative pairs, neglecting the variation in sentiment scores within the same class'', do you mean previous MSA research mainly use contrastive learning for representation learning?
2. I can't find equation 3.2 because of the format mismatch. Do you mean equation (4)?
3. The benefit of the loss function defined in equation (11) should be detailed. Why it can model the volume of the difference of a pair of samples?
4. The modality-losing problem has been solved by several previous works, but the authors don't discuss them. For example, Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities and Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities.
5. More recent baselines should be compared, such as Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis.
6. The performance improvement in the experimental section is not significant. | 4. The modality-losing problem has been solved by several previous works, but the authors don't discuss them. For example, Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities and Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities. |
ACL_2017_395_review | ACL_2017 | My main concern with the paper is the magnification of its central claims, beyond their actual worth.
1) The authors use the term "deep" in their title and then several times in the paper. But they use a skip-gram architecture (which is not deep). This is misrepresentation.
2) Also reinforcement learning is one of the central claims of this paper.
However, to the best of my understanding, the motivation and implementation lacks clarity. Section 3.2 tries to cast the task as a reinforcement learning problem but goes on to say that there are 2 major drawbacks, due to which a Q-learning algorithm is used. This algorithm does not relate to the originally claimed policy.
Furthermore, it remains unclear how novel their modular approach is. Their work seems to be very similar to EM learning approaches, where an optimal sense is selected in the E step and an objective is optimized in the M step to yield better sense representations. The authors do not properly distinguish their approach, nor motivative why RL should be preferred over EM in the first place.
3) The authors make use of the term pure-sense representations multiple times, and claim this as a central contribution of their paper. I am not sure what this means, or why it is beneficial.
4) They claim linear-time sense selection in their model. Again, it is not clear to me how this is the case. A highlighting of this fact in the relevant part of the paper would be helpful. 5) Finally, the authors claim state-of-the-art results. However, this is only on a single MaxSimC metric. Other work has achieved overall better results using the AvgSimC metric. So, while state-of-the-art isn't everything about a paper, the claim that this paper achieves it - in the abstract and intro - is at least a little misleading. | 4) They claim linear-time sense selection in their model. Again, it is not clear to me how this is the case. A highlighting of this fact in the relevant part of the paper would be helpful. |
ICLR_2022_1895 | ICLR_2022 | 1.It is obvious that this paper applies CVAE to the OOD data detection. The question is why to select CVAE as the efficient model to generate the OOD data. What is the motivation? 2.This paper claims that we can already produce comparable results to existing SOTA contrastive learning models but much more efficient. Why? The detailed explanation is necessary. 3.The contribution is mainly the metrics. | 1.It is obvious that this paper applies CVAE to the OOD data detection. The question is why to select CVAE as the efficient model to generate the OOD data. What is the motivation? |
NIPS_2019_573 | NIPS_2019 | of the paper: - no theoretical guarantees for convergence/pruning - though experiments on the small networks (LeNet300 and LeNet5) are very promising: similar to DNS [16] on LeNet300, significantly better than DNS [16] on LeNet5, the ultimate goal of pruning is to reduce the compute needed for large networks. - on the large models authors only compare GSM to L-OBS. No motivation given for the choice of the competing algorithm. Based on the smaller experiments it should be DNS [16], the closest competitor, rather than L-OBS, showed quite poor performance compared to others. - Authors state that GSM can be used for automated pruning sensitivity estimation. 1) While graphs (Fig 2) show that GSM indeed correlates with layer sensitivity, it was not shown how to actually predict sensitivity, i.e. no algorithm that inputs model, runs GSM, processes GSM result and output sensitivity for each layer. 2) Authors don't explain the detail on how the ground truth of sensitivity is achieved, lines 238-239 just say "we first estimate a layer's sensitivity by pruning ...", but no details on how actual pruning was done. comments: 1) Table 1, Table 2, Table 3 - "origin/remain params|compression ratio| non-zero ratio" --- all these columns duplicate the information, only one of the is enough. 2) Figure 1 - plot 3, 4 - two lines are indistinguishable (not even sure if there are two, just a guess), would be better to plot relative error of approximation, rather than actual values; why plot 3, 4 are only for one value of beta while plot 1 and 2 are for three values? 3) All figures - unreadable in black and white 4) Pruning majorly works with large networks, which are usually trained in distributed settings, authors do not mention anything about potential necessity to find global top Q values of the metric over the average of gradients. This will potentially break big portion of acceleration techniques, such as quantization and sparsification. | - Authors state that GSM can be used for automated pruning sensitivity estimation. |
NkmJotfL42 | ICLR_2024 | I find the formal results stated in Sections 5 and 6 to be extremely difficult to follow. While the informal statements in Section 2 are understandably vague, Sections 5 and 6 failed to clarify my confusions from Section 2. I think this is due to two issues:
1. Section 4 did a poor job at explaining the formal notation. I feel that Definitions 1 and 2 are not particularly well-motivated (more on this later). And shoving everything else into the Appendix does not help either.
2. Certain points in the intro were not explained properly in later sections. a) The term "vacuous" was never formally defined, b) the connection between tightness of generalization bound (eq 1) and the notion of estimability is also not discussed in depth (why are they equivalent? I know the argument is not hard, but this is provides important contexts for the main theorems).
3. The theorem statements are pretty mouthful themselves and except for Theorem 2, the authors did not offer discussions that helps with parsing the theorem statements.
Next, some *major gaps* I found in the paper:
4. The definition of over-parameterization (Definition 2) in this paper is not standard and my impression is that they are phrased to make the proofs simpler. While Definition 2 does intuitively fit the idea of over-parameterization, I feel strongly that the author should add: a) detailed discussions on why this definition is consistent with the standard setting in the literature, b) examples.
5. Similar to the previous point, in Theorem 3, the condition on TV distance is unmotivated and seems to only exist to make the problem easier.
6. In the proof of Theorem 1, the authors did not show the existence of Bayes-like Random ERM.
A few minor comments regarding the proofs (I did not check Appendix G or H):
Page 18: the theorem statement is about Theorem 2, *not* 1.
Page 19: the final sentence should start with "the second equality holds"
Page 20: the result "Theorem 1 in Angel & Spinka (2021)" is just a standard fact on the existence of coupling, so it is better to just say that directly.
Page 20: the big equation block looks atrocious, please left align the lines and use indentation to make the + sign on the second line more visible.
Page 20: Please define what event $B$ is before that big equation block. Also in the definition of $B$, the final inequality $L_{D_{I_1}}(A(S_1)) \ge \alpha$ is missing its RHS. | 2. Certain points in the intro were not explained properly in later sections. a) The term "vacuous" was never formally defined, b) the connection between tightness of generalization bound (eq |
NIPS_2021_776 | NIPS_2021 | weakness:
1 The theoretical parts (Section 3.2 and 3.3) are a bit hard to follow.
1-1. too many symbols are used. It would be more clear if the table list of all symbols is provided.
1-2. I am not sure if the following assumption is really valid:
1-2-1. lines 143-144: "Besides, under mild assumptions, if (E) ! 0 then % ! 0".
1-2-2. lines 148--149: "Though it is hard to conduct task-agnostic analysis on (t) term, we believe that perfect alignment is still an adequate criteria in minimizing (t) term".
For the assumption made at lines 148--149, I'd like to know (t) is really reduced by using a perfect alignment encoder (PA-SF) in the robotic arm control benchmarks.
also, lines 145--146 (and lines 631--636 in appendix): "Theoretically speaking, when # is a perfect alignment encoder, an goal-conditioned RL policy trained over the encoded space {z(s)}_{s∈S} will minimize (E) to 0."
I think this statement is not valid if multiple optimal policies \pi_G exist. (say that there are two optimal policies \pi_{G,1} and \pi_{G,2} and that \pi is converged to \pi_{G,1}, \epsilon^{e_i}(\pi || \pi_{G, 2}) is still can be greater than zero.)
Minor comments: Typos?:
line 145: an goal-conditioned -> a goal-conditioned
line 149: criteria -> criterion
line 330: is formally formulated -> is formulated
line 369: mdps -> {MDP}s
Reference style is not consistent (e.g., the abbreviated style of conference name is used in some references, and not in the others).
---Edit after reading the author response and the other reviews:------
I would like to improve my score, as the author has basically adequately addressed my concerns: WR->WA
I still have some concerns about the assumptions made in the theoretical analysis. For example, I saw the author's response regarding the term (t), but was not sufficiently convinced.
I also think that the presentation of the theoretical analysis section needs to be improved (e.g., as suggested by Reviewer S2fL).
Yes. # Discussion about societal impact is not contained in the paper, but I think it is not really necessary for this research. | 1 The theoretical parts (Section 3.2 and 3.3) are a bit hard to follow. 1-1. too many symbols are used. It would be more clear if the table list of all symbols is provided. 1-2. I am not sure if the following assumption is really valid: 1-2-1. lines 143-144: "Besides, under mild assumptions, if (E) ! |
ICLR_2023_1584 | ICLR_2023 | Weakness:
1. The proposed method relies on a pretrained object detection network that contains the sufficient semantic information for the in-distribution data. When the semantic of in-distribution data like medical images is not covered by the object detection network (pre-trained on natural image), the built semantic graph can be incomplete or even erroneous. If we use additional annotations to train a sufficiently strong object detection network, the effort will be extremely expensive comparing the existing methods.
2. The paper is not polished and not ready to publish, with missing details in related work / experiment / writing. See more in "Clarity, Quality, Novelty And Reproducibility". | 1. The proposed method relies on a pretrained object detection network that contains the sufficient semantic information for the in-distribution data. When the semantic of in-distribution data like medical images is not covered by the object detection network (pre-trained on natural image), the built semantic graph can be incomplete or even erroneous. If we use additional annotations to train a sufficiently strong object detection network, the effort will be extremely expensive comparing the existing methods. |
hn0B3jTlwE | EMNLP_2023 | - The paper lacks some results on the performance of the models on other tasks. While perplexity does not increase significantly, it would be interesting to know the impact of the method on other probing tasks.
- A limitation of the method is its dependency on human-annotated datastores. The paper lacks some results on the quality of the datastores data on the final performance of the model. | - A limitation of the method is its dependency on human-annotated datastores. The paper lacks some results on the quality of the datastores data on the final performance of the model. |
ICLR_2022_1648 | ICLR_2022 | Weakness/concerns: 1. The theoretical analysis is limited to regular graph for influence score and non-attribute graph for expressiveness of link representation. Could them be generalized to more applicable graphs? 2. The authors concatenate node representation as link representation. In this way, the expressiveness of link representation is highly related to the expressiveness of node representation. Therefore, it seems powerful GNNs for node representation or node classification can be directly used for link representation or link prediction. But it seems that P-GNN conflicts with this claim as the performance of P-GNN is really bad though the authors mention some concerns about it in supplements. 3. As the experimental results show, virtual nodes do not always benefit the link prediction, such as on Cora and Pubmed. Although the authors give some analysis, readers may still be confused about in what situations virtual nodes are recommended and vice versa. I would appreciate if the authors can give a table to further explain on it, especially clarify ambiguous expression in the article. For example, what “cora/pubmed have no cluster structure” means, when both Cora and Pubmed have clearly defined classes and previous works have shown that their data points have underlying clusters. 4. The proposed method seems to rely heavily on sophisticated hyperparameter searching. (as shown in Appendix D) | 4. The proposed method seems to rely heavily on sophisticated hyperparameter searching. (as shown in Appendix D) |
ICLR_2023_1400 | ICLR_2023 | - While the paper shows improvements on CIFAR derivatives, it lacks analysis or results on other datasets (e.g., ImageNet derivatives). Verifying the effectiveness of the framework on ImageNet-1k or even ImageNet-100 is important. These results ideally can be presented in the main paper.
- The authors should add some details on how to solve the optimization in the main paper. It's an important piece of information currently lacking in the paper.
- Some baselines such as [1] are not considered and should be added.
I feel that influence function can be replaced by other influence estimation methods such as datamodels[2] or tracin[3]. It will be beneficial to understand if the updated framework results in better pruning than the baselines. I am assuming it would result in better pruning results, however it would be beneficial to understand which influence based methods are particularly suitable for pruning.
[1]. https://arxiv.org/pdf/2107.07075
[2]. https://arxiv.org/abs/2202.00622
[3]. https://arxiv.org/abs/2002.08484 | - Some baselines such as [1] are not considered and should be added. I feel that influence function can be replaced by other influence estimation methods such as datamodels[2] or tracin[3]. It will be beneficial to understand if the updated framework results in better pruning than the baselines. I am assuming it would result in better pruning results, however it would be beneficial to understand which influence based methods are particularly suitable for pruning. [1]. https://arxiv.org/pdf/2107.07075 [2]. https://arxiv.org/abs/2202.00622 [3]. https://arxiv.org/abs/2002.08484 |
NIPS_2016_482 | NIPS_2016 | of the method (see above) would clearly help in making the case for its impact. Clarity: The paper is very clearly written and easy to follow. It would be interesting to see a version of Fig. 1 including error bars estimated from the method - it seems that currently only the estimated means are ever used. More emphasis could be put on explaining the big picture of when the method is actually useful too. Other comments/questions: 1. In Eq. (1), why does y depend on theta but not f? 2. It would be nice to know the source of the variance seen in Fig. 1. | 2. It would be nice to know the source of the variance seen in Fig. |
ICLR_2022_3056 | ICLR_2022 | - It is claimed that the generated OOD samples cover larger diversity ranges. If the generated OOD samples all belong to the same classes as those in the training set, how can such diversity be quantified?
- The experiments show that OOD generalization improves with more synthesized data. Then the question is: what if we use only synthesized data for training classifiers? What's the performance? Is there any tradeoff between using real data and synthesized data?
- The authors emphasize the superior performance of the proposed algorithm. However, with more synthetic data the training time would be significantly increased. It would be better to add a discussion of both the advantages and limitations of the proposed algorithm for a fair comparison with benchmarks.
- In Algorithm 1, line 3: how is \theta_i picked to update \theta_j? To achieve better performance, do you have to carefully pick a network for fine-tuning?
- How are the interpolation coefficients (Eq. 1) set in your experiments? How do these parameters affect OOD generalization performance?
- In Table 1, what's the difference between the in-distribution check mark and cross mark? What's the OOD data here? Are the results averaged over different OOD datasets or given for some particular OOD dataset?
- Results on colored Fashion-MNIST: the described numbers are incorrect. They are from Table 1, not Table 2. | - The authors emphasize the superior performance of the proposed algorithm. However, with more synthetic data the training time would be significantly increased. It would be better to add a discussion of both the advantages and limitations of the proposed algorithm for a fair comparison with benchmarks. |
HZtBP6DZah | ICLR_2024 | One major weakness is in clarity and presentation. This is a complicated model with many components, and I found it difficult to follow.
Specific suggestions:
- The authors may consider rewriting or reorganizing the last two paragraphs of the Introduction.
- Please provide a table of notation and variables in the Appendix.
- Equation (1) does not make sense mathematically. Is Group_j a set, a vector, or a scalar?
For the others, please see the list of questions below. | - Equation (1) does not make sense mathematically. Is Group_j a set, a vector, or a scalar? For the others, please see the list of questions below. |
l8zRnvD95l | ICLR_2025 | 1. The dataset was compiled from multiple sources with various modalities, which may introduce inconsistency or OOD samples during model training. Careful data analysis would be helpful.
2. The experiments show the proposed EcoPerceiver outperformed the current SOTA approach for most IGBP types, especially WET, WAT, and ENF. However, the paper did not include an ablation study to show why the proposed model achieved this performance.
3. There is only one baseline compared against, and there is no single-modality model. | 3. There is only one baseline compared against, and there is no single-modality model. |
en3NwykrHW | ICLR_2025 | 1. There are no experiments.
2. In the upper bound of Theorem 7, the last three terms dominate. In contrast, the abstract and the introduction claim that the upper bound is determined by the first term. The authors should clarify under what conditions, if any, the first term dominates. If the first term is not asymptotically dominant, the authors should explain why they only focus on the first term in the abstract and the introduction.
3. The introduction claims that the developed algorithm for RL with trajectory feedback achieves the same asymptotically optimal regret bound as the standard RL. The authors should explain why trajectory feedback does not lead to a worse regret bound and what properties of their algorithm allow them to overcome the information disadvantage of only receiving trajectory feedback.
4. Section 3 should clarify how the expected trajectory reward is a linear function of the state-action visitation frequencies (the standard identity is sketched after this list).
5. Some mathematical derivations are not intuitive. The authors can add explanations about what the mathematical properties mean and how they are derived. Here are some examples.
5.1 The second key observation on page 5.
5.2 Inequality (3).
5.3 The equation in (8).
6. There are a few typos: The third term of the upper bound in Theorem 7. P1 in Line 4 of Algorithm 2. D2 in Line 6 of Algorithm 2. | 3. The introduction claims that the developed algorithm for RL with trajectory feedback achieves the same asymptotically optimal regret bound as the standard RL. The authors should explain why trajectory feedback does not lead to a worse regret bound and what properties of their algorithm allow them to overcome the information disadvantage of only receiving trajectory feedback. |
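For point 4 of the review above, the identity presumably being asked for is the standard occupancy-measure decomposition of the expected trajectory reward, stated here for a finite-horizon MDP in generic notation as a reviewer-side reconstruction rather than the paper's exact statement:

```latex
\mathbb{E}_{\pi}\!\left[\sum_{t=1}^{H} r(s_t, a_t)\right]
  = \sum_{s,a}\left(\sum_{t=1}^{H} \Pr_{\pi}\!\left[s_t = s,\, a_t = a\right]\right) r(s,a)
  = \sum_{s,a} \rho_{\pi}(s,a)\, r(s,a),
```

so for a fixed reward vector the expected trajectory reward is linear in the state-action visitation frequencies \rho_{\pi}.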
XhdckVyXKg | ICLR_2025 | * In general, I believe the quality of the writing, presentation and conclusions in the paper can improve significantly. There are several unbacked claims and missing details throughout the paper (see below), which make the paper very hard to follow. I highly suggest authors consider revising the manuscript write up to provide a better flow and additional information. I have done my best to provide several examples in below, but I’m sure there are more improvements that can be made.
* The number of subjects in the pre-training and evaluation datasets means the conclusions do not transfer to large datasets, as required for claims of “NormWear as a foundation model”. A foundation model is really a generalist model that can perform well on a variety of corner cases and downstream applications. Some modalities (e.g., EEG) have fewer than 50 pre-training/evaluation subjects; for example, the evaluation of “Driver Fatigue detection” has only 12 * 20% = [2-3] subjects in the test set, which is too low to support conclusions of generalizable performance for health applications. I believe this weakens the conclusion of NormWear being “[the first] *foundation model* specifically designed for wearable sensing data, capable of processing any number of multivariate signals from sources such as the heart, skin, brain, and physical body.” I recommend the authors revise the language or provide additional empirical backing for NormWear being a foundation model.
* There are a variety of inadequate references and claims throughout the paper. I recommend authors take a pass through the claims in the paper and revisit them as needed. I provide some examples below:
* “Despite the great potential of these works across various tasks such as forecasting, anomaly detection, and classification, they are not easily transferable to wearable health applications for two main reasons“: Transformers with images or spectrograms, have been previously used for physiological signals, so authors may reconsider this claim [1], [2].
* “When modeling this type of data, relying solely on modality-specific backbone feature encoders, such as RNNs (Yu et al., 2019) or transformer-based (Vaswani et al., 2023) neural networks, is insufficient. Therefore, it becomes essential to incorporate established signal processing techniques, such as the short-term Fourier transform (Brigham, 1988) and wavelet transform (Torrence & Compo, 1998)”. It would be great if authors justify these claims. To the best of my knowledge, Transformers (without Fourier transforms) are widely used for physiological signals, and it is not clear to me how transforming the time-series to frequency domain, can remove modality-to-modality variations. If authors provide theoretical/empirical justification for this, it can improve the motivation.
* “Nevertheless, this method completely ignores information in the frequency domain, leading to significant information loss and suboptimal performance in downstream tasks.“: In my opinion, this is incorrect. Just because a model is trained on time domain, does not mean it *completely ignores information in frequency domain* as there’s a duality between frequency and time domain. I suspect authors may have meant to claim that it’s easier to capture certain frequency-related information if the input in frequency domain is directly given to the model. If yes, it’s a different claim, but please note that a powerful enough encoder with enough data, should be able to capture frequency-related information from time-domain input as well. I recommend authors provide more empirical/theoretical evidence for this claim, or reconsider the writing.
* “Another important point to consider is that although empirical studies (Nie et al., 2023; Abbaspourazad et al., 2023) show that channel-independent structures effectively capture local patterns, they fail to account for relationships across channels.”: Please provide reasoning for such claims, it’s not clear to me how these conclusions are made from these prior papers.
* “In order to stay consistent with the literature on foundational representation learner (Devlin et al.,2019; Dosovitskiy et al., 2020; Gong et al., 2021), the backbone of our proposed model consists of a convolutional patching layer followed by 12 standard Transformer blocks (Vaswani et al., 2023).”, there are a lot of different representation learning approaches (masked auto encoder, variational auto encoders, contrastive learning, autoregressive pre-training, ...), so perhaps authors can more accurately rewrite this sentence.
* “With the state-of-the-art (SoTA) back- bone model for modeling time series data, each intermediate layer will output tensors that contain the timestamp dimension”, what does this mean? Can authors provide back up for this claim or provide more information?
* “Such a visualization pipeline can assist researchers and clinicians by offering insights into how the model reaches its final predictions” It’s not clear to me whether these visualizations provide any gradient signal or they’re random. To the best of my knowledge, the relationship between PPG and diabetes is not well-understood, so not sure if I can directly conclude that the shown results match with the well-known concepts in the literature. It would be great if the authors can relate this to the literature and present the efficacy of their visualization method.
* “However, recent works have shown that features extracted from deep learning methods generally outperform handcrafted features in most cases (Yan et al., 2023a; Krizhevsky et al., 2012; Luo et al., 2024).”. I’m not sure how AlexNet is relevant to tokenization discussion in Section 2.2 here, also not very recent :). Can the authors reconsider the discussion here.
* Many important details of the technical implementation are missing from the paper; I recommend the authors incorporate all necessary information to aid the reader. I provide a few examples below:
* Information about how patches are selected and how many patches there are for each segment appears to be missing.
* Architectural hyperparameters regarding the tokenizer, the reconstruction module (de-tokenization), the details of the encoder/decoder transformer (token dimension, number of attention heads, positional encoding, dimension of MLP hidden layer, normalization, ...) appear to be missing.
* The details regarding the downstream evaluations (linear probing) appear to be missing.
* The details about how sentences are chosen in Section 3.2, what language model (or encoder) was used to get the “question semantic” embeddings appear to be missing.
* Hyperparameters of equation 1/2 and L295-311 appear to be missing from the paper.
* Details of masking strategies in Table 8 are missing.
* Several major claims in the paper seem overstated. For example, the delta between NormWear and Chronos in Table 1 seems very small considering that Chronos is not even a proper foundation model for physiological signals (Chronos is just a model trained on some time-series datasets, and to the best of my knowledge, there’s no prior work showing that Chronos is even close to SOTA for physiological signals such as PPG/ECG/EEG). Despite this shortcoming of Chronos, the gap between it and NormWear in the first 8 evaluations is very small, and in some cases Chronos is even better. Similarly, the authors make several big claims about processing the frequency domain and CWT (see examples above); however, in Table 9, they show that the difference between processing with CWT vs. raw input is not that large (76.25 vs. 78.27). I recommend the authors provide further explanation/discussion regarding these claims.
* It would be great if the authors provide details about how confidence bounds are selected in Tables, e.g., Table 1. It is surprising that they get such narrow confidence bounds with such small N (e.g., 2/3 for Driver Fatigue detection if I understand correctly)?
* Please consider fixing typo and formatting issues, for example:
* L42: missing space
* L157: missing space.
* Table captions not being above the tables.
[1] Mathew, G., Barbosa, D., Prince, J., & Venkatraman, S. (2024). Foundation models for cardiovascular disease detection via biosignals from digital stethoscopes. npj Cardiovascular Health, 1(1), 25.
[2] Vaid, A., Jiang, J., Sawant, A., Lerakis, S., Argulian, E., Ahuja, Y., ... & Nadkarni, G. N. (2023). A foundational vision transformer improves diagnostic performance for electrocardiograms. NPJ Digital Medicine, 6(1), 108. | * Details of masking strategies in Table 8 are missing. |
NIPS_2021_2024 | NIPS_2021 | below). Using the related literature on active interventions would require full identification of the underlying DAG. It is emphasized that matching only the means can be done with significantly smaller number of interventions, and this is the difference from previous works. - Identifiability in terms of Markov equivalence classes (MEC) is well discussed. Graphical characterization of the proposed shift-interventional (shift-I) MEC, and its refinement over the general interventional MEC is given clearly. Assumptions are reasonable within the given setting. - Extending the decomposition of intervention essential graphs to shift interventional essential graphs is sound. Both of the proposed approaches for solving the problem, clique tree and supermodular strategies are reasonable. Use of a lower bound surrogate function to enable supermodularity is clever. - The paper is organized clearly, and the theoretical claims are well supported.
Weaknesses: I have several concerns about the importance of the proposed settings and the usefulness of the results. - Although the causal matching problem seems interesting and new, it is not well motivated. To the reviewer’s knowledge, interventions on a causal model are tied to inferring the underlying structure (it does not need to be the whole structure of the model). In this regard, it is not clear how exactly matching the means of a causal system is preferable to performing more relaxed cases of soft interventions. The authors are encouraged to further explain how this setting can be beneficial. - Deterministic shift interventions are useful to test the applicability of the proposed ideas. However, restricting the problem setting to only shift interventions is quite limited and leads to some rather trivial results. For instance, the existence and uniqueness results for the matching shift intervention in Lemma 1, and the properties of source nodes in Lemma 2, are immediate observations in a DAG. - The clique tree approximation is just a minor modification of the cited central node algorithm (Greenewald et al., 2019). - The submodularity approach subroutine uses the SATURATE algorithm (Krause et al., 2008), and its complexity is said to scale with N^5
in Appendix D.4. It is worth commenting on the feasibility of this approach. For instance, what are the runtimes of the simulations for large models in Section 6? - It is a nice result that the number of proposed interventions is only a logarithmic factor above the lower bound. However, the baselines in the simulations are not strong enough to demonstrate the usefulness. Though the coloring approach of Shanmugam et al., 2015 is a related active intervention design, its goal is broader than finding a matching intervention. For instance, a simple random upstream search, the other baseline, performs much better than coloring due to the simpler objective. That being said, the reviewer understands that the proposed task is new and fair comparisons may not be easy.
Although this paper has several nice properties, the overall contribution, constraints on the problem, and the importance of the results are not adequate for publication at NeurIPS.
Main limitations of the work, which are also stated in the above review, and potential impact of the work, which is not very imminent, are adequately addressed in the discussion section. | - The paper is organized clearly, and the theoretical claims are well supported. Weaknesses: I have several concerns on the importance of the proposed settings and usefulness of the results. |
OhTzuWzO6Q | ICLR_2024 | - The proposed method seems to heavily depend on how good AD is. Indeed, for common image and text tasks, it might be easy to find such a public dataset. But for more sensitive tasks on devices, such a public dataset might not exist.
- Scale of experiments is small, where the tasks such as MNIST or CIFAR10 are relatively simple. It is hard to know whether the method can generalize to larger models or harder tasks by just sharing the model outputs.
- The local DP noise results with such a small epsilon seem unreasonably good, as they are nearly all better than the non-DP baseline for CIFAR. From Theorem 2, with $(\epsilon, \delta)=(5, 10^{-4}), E=200, K=2000$, we get $\rho\approx 1.7 * 10^{-6}$, so the noise standard deviation is about 767, which is much larger than the output scale (a back-of-the-envelope check is sketched after this review). It would be great if the authors could explain how local prior optimization is not impacted by DP noise and still outperforms the non-DP baselines.
- Also, since the authors assume a public dataset is available, the DP baselines should be methods that make the same assumption, such as [1].
- Minor: presentation of the hierarchy in Algorithm 1 can be improved. References
[1] Li, Tian, et al. "Private adaptive optimization with side information." International Conference on Machine Learning. PMLR, 2022. | - Also, since the authors assume a public dataset is available, the DP baselines should be methods that make the same assumption, such as [1]. |
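A back-of-the-envelope check of the noise magnitude quoted in the DP point of the review above, assuming a Gaussian mechanism calibrated to a per-round zCDP budget of rho ~ 1.7e-6; the sensitivity value is a placeholder, since the exact constant comes from the paper's Theorem 2, so the reviewer's 767 is matched only up to that constant:

```python
import math

rho = 1.7e-6                  # per-round zCDP budget quoted by the reviewer
sensitivity = 1.0             # placeholder; the true clipping/sensitivity constant is set by Theorem 2
sigma = sensitivity / math.sqrt(2.0 * rho)   # Gaussian mechanism satisfies rho-zCDP when rho = (Delta/sigma)^2 / 2
print(f"noise std ~ {sigma:.0f}")            # ~542 with unit sensitivity, the same order as the quoted 767
```

Either way, the noise standard deviation lands in the hundreds, which is the reviewer's point about it dwarfing the output scale.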
NIPS_2019_1348 | NIPS_2019 | 0. My first concern is the assumption that a human risk measure is gold standard when it comes to fairness. There are many reasons to question this assumption. First, humans are the worst random number generators, e.g. the distribution over random integers from 1 to 10 is highly skewed in the center. Similarly, if humans perceive a higher risk in the tails of a distribution, it doesn't necessarily mean that minimizing such risk makes the model fair. This still needs to be discussed and proven. 1. The paper suggests that using EHRM has fairness implications. These fairness implications are obtained as a side effect of using different hyperparameter setting for the skewness of the human risk distribution. There is no direct relationship between fairness consideration and the risk metric used. 2. In the Introduction, the authors choose to over-sell their work by presenting their work as a "very natural if simple solution to addressing these varied desiderata" where the desiderata include "fairness, safety, and robustness". This is a strong statement but incorrect at the same time. The paper lacks any connection between these objectives and the proposed risk metric. One could try to investigate these connections before claiming to address them. 3. One example of connection would be the definition of Calibration used in, for example, Kleinberg et al. and connect it to a human calibration measure and derive a Human risk objective from there as well. It is a straightforward application but the work lacks that. 4. There are no comparison baselines even when applying to a fairness problem which has a number of available software to get good results. Agarwal 2018: "A Reductions Approach to Fair Classification" is seemingly relevant as it reduces fairness in classification to cost-sensitive learning. In this case, the weighting is done on the basis of the loss and not the group identities or class values, but it may be the reason why there is a slight improvement in fairness outcomes. Since the EHRM weights minorities higher, it might be correlated to the weights under a fair classification reduction and hence giving you slight improvements in fairness metrics. 5. There were a few typos and some other mistakes: - doomed -> deemed (Line50) - Line 74: Remove hence. The last line doesn't imply this sentence. It seems independent. | 2. In the Introduction, the authors choose to over-sell their work by presenting their work as a "very natural if simple solution to addressing these varied desiderata" where the desiderata include "fairness, safety, and robustness". This is a strong statement but incorrect at the same time. The paper lacks any connection between these objectives and the proposed risk metric. One could try to investigate these connections before claiming to address them. |
ICLR_2023_1957 | ICLR_2023 | • The experimental datasets were very simple. I would like to see more complex datasets, such as ImageNet/Tiny Imagenet. • Please expand on the contribution of the compromised clients in the model update. It’s not clear whether the attack success rate is low because the compromised clients have a low genuine score or if their updates result in weak backdoor success.
• It is okay to include a few baseline comparisons. However, many of the defenses compared were not intended for backdoor attacks. Please show comparisons to other new defenses aimed for backdoor defense. • The paper relies on prior work for reverse engineering backdoor triggers and target class. I would like to see more about this. What limitations does this have? If the compromised clients use larger L1 norm triggers, does this fail? | • Please expand on the contribution of the compromised clients in the model update. It’s not clear whether the attack success rate is low because the compromised clients have a low genuine score or if their updates result in weak backdoor success. |
ICLR_2023_2664 | ICLR_2023 | 1. There is no evidence showing that the relationship between NC and transferability is robust. As the authors already mentioned, a large NC might not lead to good transfer performance either, e.g., when the model is randomly initialized and not trained. This is a simple sanity check that the correlation between NC and transferability does not pass.
2. It is unclear what the causality is between NC, transferability, and the diversity of features. To me, learning diverse features in pre-training is the cause, while NC and transferability are consequences. Therefore, using NC to understand transferability is misleading in this sense. The idea that learning less diverse features leads to bad transfer performance is not novel; [1] contains ideas like this. Self-supervised learning yields more transferable features than supervised learning, so it is more transferable.
3. It is also not surprising at all that NC on the downstream tasks correlates well with the performance on downstream tasks. Even without pre-training, this correlation should hold, because a small NC means the margin in the classification problem is large. [5] already showed that this is the case for few-shot learning.
4. Lacking comparison with other works on transferability. There exists a line of work on predicting the transferability of models with various metrics. See [2] and references therein. The authors should provide a comparison with them to give the readers a sense of how NC performs as a metric of transferability.
5. Lacking justification on larger datasets. It’s better to provide the NC results on ImageNet in addition to CIFAR-100 and CIFAR-10. Evaluating the numbers on ImageNet should not be difficult with the publicly available pre-trained models. This would give readers more confidence in this phenomenon.
6. The proposed transfer algorithm is not novel. Training more than one layer should definitely perform better than linear probing. The authors should also provide the numbers for fine-tuning as a comparison. Besides, there are a bunch of works on efficient fine-tuning, such as fine-tuning only the bias [3] and adapters [4]. It would be better to compare with them as well. Minor comments:
Abstract, “when pretrain models”, Should be pre-training
Basics of NNs, Is the layer index notation used anywhere else? If not, including it here will only make it more cluttered. Besides, for resnets, it is not correct definition. The definition here only applies to MLPs.
[1] Self-supervised Learning is More Robust to Dataset Imbalance.
[2] LogME: Practical Assessment of Pre-trained Models for Transfer Learning.
[3] BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models.
[4] VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks.
[5] Unraveling meta-learning: understanding feature representations for few-shot tasks. | 5. Lacking justification on larger datasets. It’s better to provide the NC results on ImageNet in addition to CIFAR-100 and CIFAR-10. Evaluating the numbers on ImageNet should not be difficult with the publicly available pre-trained models. This would give readers more confidence in this phenomenon. |
ICLR_2023_4133 | ICLR_2023 | 1. The structure of this paper is confusing and difficult to follow.
2. The motivation for introducing graph information into the attention calculation is not clear, and the model is not novel enough.
3. More explanation is needed for the experiment that calculates the standard deviation of attention scores for 1-, 2-, and 3-hop neighbors.
4. Some important methods [1,2,3] for dealing with heterophily graphs should either be discussed in related work or compared against.
5. The writing needs to be improved. There are many grammatical mistakes and vague expressions.
[1] Breaking the Limit of Graph Neural Networks by Improving the Assortativity of Graphs with Local Mixing Patterns, KDD, 2021.
[2] Adaptive Universal Generalized PageRank Graph Neural Network, ICLR, 2021.
[3] Graph Neural Networks with Heterophily, AAAI, 2021. | 4. Some important methods [1,2,3] for dealing with heterophily graphs should either be discussed in related work or compared against. |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other rather than just modeling different incentives? Probably not enough space in the paper but actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N -th agent's
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - Increase space between the main caption and sub-caption. ** Line 299: From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. |
ICLR_2023_2869 | ICLR_2023 | Weakness:
1. The technical quality of this paper is not sufficient, and it seems like a direct combination of Evidential Theory and Reinforcement Learning.
2. The paper is not sound, as there are many exploration methods in the RL literature, such as count-based methods and intrinsic motivation (RND, ICM), but the paper does not discuss or compare against these methods.
3. The theoretical analysis is not novel, as it is a direct result of RL theory.
4. The update rule of the critic network does not follow Double DQN, but follows the clipped double Q-learning of the well-known TD3 algorithm (the corresponding target is recalled after this review).
5. The paper does not provide a specification of the experimental setup. Did the authors build a simulator? If not, how is the performance of each policy evaluated in the offline setting?
6. Why is SAC not compared in Table 2, given that SAC is compared in Figure 6?
7. How can one verify that the performance improvement over previous RL methods indeed comes from the evidential reward? As we can see, you use some advanced techniques, such as Eq. (11), that are not deployed in the previous baselines. | 5. The paper does not provide a specification of the experimental setup. Did the authors build a simulator? If not, how is the performance of each policy evaluated in the offline setting? |
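For point 4 of the review above, the critic target being referred to is the clipped double-Q target of TD3, recalled here in generic notation (a reconstruction for reference, not the paper's equations):

```latex
y = r + \gamma \min_{i=1,2} Q_{\theta'_i}\!\left(s', \tilde{a}'\right),
\qquad
\tilde{a}' = \pi_{\phi'}(s') + \epsilon,\quad
\epsilon \sim \mathrm{clip}\!\left(\mathcal{N}(0, \sigma^2), -c, c\right),
```

whereas Double DQN would instead use $y = r + \gamma\, Q_{\theta'}\!\left(s', \arg\max_a Q_{\theta}(s', a)\right)$, i.e., the online network's greedy action evaluated by the target network.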
kN25ggeq1J | ICLR_2025 | This paper is not suitable for a computer-science/ML conference like ICLR. It seems best suited to a cognitive psychology or philosophy conference.
The paper starts off in line 42 with "From the perspective of human cognitive psychology, reasoning can be viewed as a process of memory retrieval," and this is the perspective taken. There is insufficient mathematical theory or implementation to show results relevant to ICLR.
There are several claims made without definition and/or validation. Examples include:
- LLMs perform better on System 1 tasks than on System 2 tasks. ???? Must define System 1 tasks and System 2 tasks for this to make any sense.
- l. 52: "We believe that, similar to humans, there is no significant distinction between memorizing and reasoning tasks for LLM
----this is a scientific document NOT a statement of unprovable beliefs.
- l. 112: what is "the execution space"?
- l. 115: what is "abstract level understanding of function’s behavior" or "perform abductive inference" ? | - LLMs perform better on System 1 tasks than on System 2 tasks. ???? Must define System 1 tasks and System 2 tasks for this to make any sense. |
NIPS_2017_40 | NIPS_2017 | . Are other methods such as Barak, Kelner, Steuer 2014 "Rounding sum-of-squares relaxations" relevant?
6. Sec 4 Experiments.
When you run BP-SP, you obtain marginals. How do you then compute your approximate MAP solution? Do you use the same CLAP rounding approach or something else? This may be important since in your experiments, BP-SP performs very well.
Since you use triangles as regions for PSOS(4), could you try the same for GBP to make the comparison more similar? Particularly since it appears somewhat odd that the current GBP with 4-sets is not doing better than BP-SP.
Times should be reported for all methods to allow more meaningful comparisons [I recognize this can be tricky with non-optimized code but the pattern as larger models are examined would still be helpful].
If possible, it would be instructive to add experiments for larger planar models with no singleton potentials, where it is feasible to compute the exact MAP score.
Minor points:
In a few places, claims are perhaps stronger than justified - e.g. in the Abstract, "significantly outperforms BP and GBP" ; l. 101 perhaps remove "extensive"; l. 243 - surely the exact max was obtained only for the small experiments; you don't know for the larger models?
A few capitalizations are missing in the References, e.g. Ising, Burer-Monteiro, SDP =======================
I have read the rebuttal and thank the authors for addressing some of my concerns. | . Are other methods such as Barak, Kelner, Steuer 2014 "Rounding sum-of-squares relaxations" relevant? |
AQiuwWLvim | EMNLP_2023 | * Empathy is very difficult to capture with automatic metrics, hence human evaluation is a must to verify the improvements. However, the authors only report automatic metrics. Moreover, the differences in automatic scores between approaches are small. Therefore it is hard to tell whether the approach in the paper is effective or not.
* Although the authors highlight dialogue act labels as their main contribution, the results do not favor their claim, because the scores are better when the dialogue act label for the source target is not given (implicit vs. explicit). Moreover, plain Target prompting, which does not include dialogue act labels, is comparable to DA-Pairwise. Again, further human evaluation is needed to assess the true effectiveness of the authors’ approach. | * Although the authors highlight dialogue act labels as their main contribution, the results do not favor their claim, because the scores are better when the dialogue act label for the source target is not given (implicit vs. explicit). Moreover, plain Target prompting, which does not include dialogue act labels, is comparable to DA-Pairwise. Again, further human evaluation is needed to assess the true effectiveness of the authors’ approach. |
ACL_2017_145_review | ACL_2017 | The comparison against similar approaches could be extended.
- General Discussion: The main focus of this paper is the introduction of a new model for learning multimodal word distributions formed from Gaussian mixtures for multiple word meanings. i. e. representing a word by a set of many Gaussian distributions.
The approach extends the model introduced by Vilnis and McCallum (2014), which represented a word as a unimodal Gaussian distribution. By using a multimodal representation, the current approach addresses the problem of polysemy (a sketch of a standard similarity between such mixture representations is given after this review).
Overall, a very strong paper, well structured and clear. The experimentation is correct and the qualitative analysis in Table 1 shows results as expected from the approach. There’s not much that can be faulted, and all my comments below are meant to help the paper gain additional clarity. Some comments: _ It may be interesting to include a brief explanation of the differences between the approach of Tian et al. 2014 and the current one. Both split a single word representation into multiple prototypes by using a mixture model. _ There are some missing citations that could be mentioned in related work, such as: Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space, Neelakantan, A., Shankar, J., Passos, A., McCallum, EMNLP 2014; Do Multi-Sense Embeddings Improve Natural Language Understanding?, Li and Jurafsky, EMNLP 2015; Topical Word Embeddings, Liu Y., Liu Z., Chua T., Sun M., AAAI 2015. _ Also, the inclusion of the results from those approaches in Tables 3 and 4 could be interesting. _ A question to the authors: What do you attribute the loss of performance of w2gm against w2g in the analysis of SWCS to?
I have read the response. | _ There are some missing citations that could be mentioned in related work, such as: Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space, Neelakantan, A., Shankar, J., Passos, A., McCallum, EMNLP 2014; Do Multi-Sense Embeddings Improve Natural Language Understanding?, Li and Jurafsky, EMNLP 2015; Topical Word Embeddings, Liu Y., Liu Z., Chua T., Sun M., AAAI 2015. _ Also, the inclusion of the results from those approaches in Tables 3 and 4 could be interesting. |
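For reference on the multimodal representation discussed in the review above, one standard way to score similarity between two words represented as Gaussian mixtures is the expected likelihood kernel, sketched below; the mixture weights, means, and covariances are made up for illustration, and this is a reviewer-side reconstruction rather than the paper's exact training objective:

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_likelihood(weights_f, means_f, covs_f, weights_g, means_g, covs_g):
    """Expected likelihood kernel: sum_ij pi_i pi_j N(mu_i; mu_j, Sigma_i + Sigma_j)."""
    sim = 0.0
    for wi, mi, Ci in zip(weights_f, means_f, covs_f):
        for wj, mj, Cj in zip(weights_g, means_g, covs_g):
            sim += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Ci + Cj)
    return sim

d = 5
rng = np.random.default_rng(0)
word_a = ([0.5, 0.5], [rng.normal(size=d) for _ in range(2)], [np.eye(d)] * 2)  # two senses
word_b = ([1.0],      [rng.normal(size=d)],                   [np.eye(d)])       # one sense
print(expected_likelihood(*word_a, *word_b))
```

The closed form follows from the Gaussian product identity, i.e., the integral of the product of two Gaussian densities is a Gaussian density evaluated at the difference of the means with the summed covariances.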
NIPS_2021_2082 | NIPS_2021 | weakness of the existing works. 3. In Table 1 we can see that, aside from the CAMELYON16 dataset, the baseline MIL-based methods showed much lower performance than max-pooling. Please discuss the reason. 4. For the ablation study, Table 2 and Fig. 5 were not mentioned in the manuscript. What do the values stand for in Table 2 and Fig. 5? Why is the detailed discussion given in the Appendix? I suggest moving the discussion to the main manuscript. 5. For Fig. 6, what is the purpose of showing the zoom-in view of the heatmap? I cannot see anything special in this area. 6. For Fig. 7, the initial accuracies of the MIL-based baseline models were higher than those of the converged models, especially for the NSCLC dataset. Why? | 5. For Fig. 6, what is the purpose of showing the zoom-in view of the heatmap? I cannot see anything special in this area. |
NIPS_2019_374 | NIPS_2019 | ---------- 1. Except for the new definition of the Generalized Gauss-Newton matrix (which is not pursued), no other proposition in the paper is original. 2. As the authors point out themselves, analyzing the EF as a variance adaptation method would have explained its efficiency and strengthened the paper: "This perspective on the empirical Fisher is currently not well studied. Of course, there are obvious difficulties ahead:" Overcoming these difficulties is what a research paper is about, not only discussing them. 3. The main point of the paper lies in paragraph 3.2. This requires clear and sound propositions such as: for a well-specified model and a consistent estimator, the empirical Fisher matrix converges to the Hessian at a rate ... (a toy numerical check of this distinction is sketched after this review). It is claimed to be specified in Appendix C.3 but there seems to be a referencing problem in the paper. This would highlight both the reasoning of previous papers and the difference with the actual approximation made here. Minor comments: --------------- Typos: - Eq. 5: no square for the gradient of a_n - Eq. 8: the subscript theta should be under p, not log - Replace the occurrences of Appendix A with Appendix C Conclusion: ---------- Overall I think this is a good lecture on natural gradient and its subtleties, yet not a research paper, since almost no new results are demonstrated. Yet, if the choice has to be made between another paper that uses the empirical Fisher and this one that explains it, I'll advocate for this paper. Therefore I tend to marginally accept this paper, though I think its place is in lecture notes (in fact Martens' long review of natural gradient [New insights and perspectives on the natural gradient method, Martens 2014] should incorporate it; that is where this paper belongs, in my opinion). After discussion -------------------- After the discussion, I increased my score. I don't think that it is a top paper, as it does not have new results, but it should clearly be accepted, as it would be much more helpful than "another state-of-the-art technique for deep learning" with some misleading approximations, like ADAM. Note that though refining the definition of a generalized Gauss-Newton method seems to be a detail, I think it could have real potential for further analysis in optimization. | 2. As the authors point out themselves, analyzing the EF as a variance adaptation method would have explained its efficiency and strengthened the paper: "This perspective on the empirical Fisher is currently not well studied. Of course, there are obvious difficulties ahead:" Overcoming these difficulties is what a research paper is about, not only discussing them. |
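A toy numerical illustration of the empirical-Fisher-vs-Hessian point raised in the review above, for well-specified linear regression with unit noise (illustrative only; it shows the empirical Fisher matching the Hessian near the optimum of a well-specified model and inflating far from it):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5000, 3
X = rng.normal(size=(N, d))
theta_star = np.array([1.0, -2.0, 0.5])
y = X @ theta_star + rng.normal(size=N)      # well-specified model, unit noise

def fisher_vs_hessian(theta):
    r = y - X @ theta                        # per-example residuals
    grads = -r[:, None] * X                  # per-example gradients of 0.5 * r_n^2
    hessian = X.T @ X                        # exact Hessian of the summed loss
    emp_fisher = grads.T @ grads             # empirical Fisher: sum of g_n g_n^T
    return np.linalg.norm(emp_fisher - hessian) / np.linalg.norm(hessian)

print(fisher_vs_hessian(theta_star))         # small (a few percent): EF ~ Hessian at the optimum
print(fisher_vs_hessian(theta_star + 2.0))   # large once residuals are large: EF no longer tracks the Hessian
```

This is exactly the kind of "well-specified model, near the optimum" caveat the review asks to be stated as an explicit proposition with a rate.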
NIPS_2017_217 | NIPS_2017 | - The model seems to really require the final refinement step to achieve state-of-the-art performance.
- How does the size of the model (in terms of depth or number of parameters) compare to competing approaches? The authors mention that the model consists of 4 hourglass modules, but do not say how big each hourglass module is.
- There are some implementation details that are curious and will benefit from some intuition: for example, lines 158-160: why not just impose a pairwise relationship across all pairs of keypoints? the concept of anchor joints seems needlessly complex. | - There are some implementation details that are curious and will benefit from some intuition: for example, lines 158-160: why not just impose a pairwise relationship across all pairs of keypoints? the concept of anchor joints seems needlessly complex. |
ACL_2017_108_review | ACL_2017 | The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present!
As for the empirical evaluation, the achieved results are better than some previous approaches, but not really by a large margin. I would not really call the slight improvements "outperforming", as is done in the paper. What is the effect size? Does it really matter to some user that there is an improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud, which should also be able to detect and link entities, have achieved on the datasets? Just to put the results into contrast with existing "commercial" systems.
As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? This in my opinion is the more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only or also detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part.
General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more.
So, I am only lukewarm about the paper with maybe a slight tendency to rejection. It just seems yet another try without really emphasizing the in my opinion important question of crossing entities.
Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one?
- e.g.: why in italics?
- time linear in n: when n is sentence length, does it really matter whether it is linear or cubic?
- spurious structures: in the introduction it is not clear, what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking?
- Since they set: who?
- pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?!
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph?
- in Figure 2 it was not obvious at first glance why this is a hypergraph.
colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one?
- denoting ...: sometimes in brackets, sometimes not ... why?
- please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing?
- on whether the separator defines ...: how determined?
- in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here?
- comma after Equation 1?
- to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that?
- nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify?
- the combination of states alone do_es_ not?
- the simple first order assumption: that is what?
- In _the_ previous section - we see that our model: demonstrated? have shown?
- used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible?
- Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size?
- significantly better: in what sense? effect size?
- In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one?
- due to _an_ insufficient - Table 5: all F_1 scores seems rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall?
- converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why? | - I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph? |
ICLR_2022_200 | ICLR_2022 | (1) The progressive distillation process seems to need a much larger computational cost than many previous fast sampling methods, such as DDIM and DDPM respacing. As stated in the paper, its training budget is almost the same as training a diffusion model from scratch. I wonder how this concern can be addressed in practice?
(2) The main claim is that the method can reduce the number of model evaluations in sampling to as small as 4 or 8 steps while retaining high image quality. Currently, the highest resolution of considered datasets is 128x128. I wonder if the claim still holds for datasets with higher resolution? Does the resolution or complexity of the dataset impact the final steps of model evaluations?
(3) Minor issues in the writing. 1) Typos. For example, in the abstract, “as little as 4 steps” => “as few as 4 steps”. 2) Repeated references. For example, “Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis.”, and “Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models.”. 3) Confusing sentences. For example, what do you mean by saying “... unlike the original data point x, since multiple different data points x could conceivably have led to observing noisy data z_t
”? Also, when saying “we found this to work slightly better than starting from a non-zero signal-to-noise ratio as used by e.g. Ho et al. (2020)”, does it refer to the undistilled sampler or the distilled sampler? | 3) Confusing sentences. For example, what do you mean by saying “... unlike the original data point x, since multiple different data points x could conceivably have led to observing noisy data z_t”? Also, when saying “we found this to work slightly better than starting from a non-zero signal-to-noise ratio as used by e.g. Ho et al. (2020)”, does it refer to the undistilled sampler or the distilled sampler? |
TKzERU0kq1 | EMNLP_2023 | - There are many basic writing or grammatical errors. Some sentences are not fluent (for example L259-262, which makes it hard for me to understand the motivation of Sec. 5.2).
- The current pipeline is a sequence of five types of editing, but the contribution of each type of editing is not clear.
- I’m skeptical about the quality of ShortcutQA. Though it’s manually verified that “the edits did not change the semantics”, it’s not guaranteed that they don’t add ambiguity to the texts, leading to the degradation in performance. Can you provide some details about how "answerable" the distracted texts are? Can you provide some examples and qualitative results?
- Some details of the experimental setting are not clear. For example, ShortcutQA has 490 examples but the original subset (natural) has 600 examples? Do you make sure the two settings (Natural vs. Edited) in Table 2 are directly comparable? | - There are many basic writing or grammatical errors. Some sentences are not fluent (for example L259-262, which makes it hard for me to understand the motivation of Sec. 5.2). |
YvOq7jHT6R | ICLR_2025 | 1. The experiments are somewhat weak.
- The main paper only presents ridge regression experiments, while important black-box adversarial experiments are deferred to the appendix.
- I recommend moving key adversarial attack results to the main paper, particularly those demonstrating the practical benefits of bias cancellation in zeroth-order optimization
- No evaluation on real-world large-scale datasets. Specifically, I recommend testing on: a) Sparse feature selection problems using MNIST/CIFAR-10 for computer vision, b) Gene expression datasets like Colon Cancer or Leukemia for bioinformatics applications, c) Text classification with sparse word embeddings using Reuters or 20 Newsgroups datasets. These datasets would demonstrate the practical utility of the proposed methods across diverse domains.
2. As for the theoretical analysis, the discussion of when these assumptions might fail is limited, and there is no analysis of what happens when the conditions are violated.
- The Restricted Strong Convexity (RSC) and Restricted Strong Smoothness (RSS) assumptions are quite strong and their limitations should be discussed. Specifically, these assumptions may fail in:
a) Deep neural network optimization where loss landscapes are highly non-convex.
b) Problems with heavy-tailed noise where smoothness is violated.
c) High-dimensional settings where restricted eigenvalue conditions break down.
- The paper should analyze algorithm behavior when these conditions are violated and propose potential modifications or relaxations of the assumptions. | - The main paper only presents ridge regression experiments, while important black-box adversarial experiments are deferred to the appendix. |
NIPS_2019_494 | NIPS_2019 | of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what S and Xt are). - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like the number of units, etc.), and many other technical details. - the paper is presented well, e.g., quality of graphs is good (though labels on the graphs in Fig 3 could be slightly bigger) Significance: - from just the paper: the results would be more interesting (and significant) if there was a way to reproduce the work more easily. At present I cannot see this work easily taken up by many other researchers, mainly due to lack of detail in the description. The work is interesting, and I like the idea, but with a relatively high-level description of it in the paper it would need a little more than the pseudocode in the materials to convince me to use it (but see next). - In the supplementary material it is stated that the source code will be made available, and in combination with the paper and the information in the supplementary material, the level of detail may be just right (but it's hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for more research in a similar direction. | - the paper is presented well, e.g., quality of graphs is good (though labels on the graphs in Fig 3 could be slightly bigger) Significance: |
ACL_2017_676_review | ACL_2017 | The most annoying point to me is that in the relatively large dataset (ASPEC), the best proposed model is still 1 BLEU point lower than the softmax model. What about some even larger dataset, like the French-English? There are at most 12 million sentences there. Will the gap be even larger?
Similarly, what's the performance on some other language pairs ?
Maybe you should mention this paper, https://arxiv.org/abs/1610.00072. It speeds up decoding by 10x and the BLEU loss is less than 0.5. - General Discussion: The paper describes a parameter-reducing method for the large-vocabulary softmax. By applying the error-correcting code and hybridizing it with softmax, its BLEU approaches that of the original full-vocabulary softmax model.
One quick question: what is the hidden dimension size of the models?
I couldn't find this in the experiment setup.
The 44 bits can achieve 26 out of 31 BLEU on E2J, which is surprisingly good. However, how could you increase the number of bits to increase the classification power? 44 is too small; there's plenty of room to use more bits, and the computation time on GPU won't even change (see the rank-to-bits sketch after this review).
Another thing that is counter-intuitive is that by predicting the binary code, the model is actually predicting the rank of the words. So how should we interpret these bit-embeddings? There seem to be no semantic relations among all the words that have odd rank. Is it because the model is so powerful that it just remembers the data? | - General Discussion: The paper describes a parameter-reducing method for the large-vocabulary softmax. By applying the error-correcting code and hybridizing it with softmax, its BLEU approaches that of the original full-vocabulary softmax model. One quick question: what is the hidden dimension size of the models? I couldn't find this in the experiment setup. The 44 bits can achieve 26 out of 31 BLEU on E2J, which is surprisingly good. However, how could you increase the number of bits to increase the classification power? |
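To make the bit-budget question above concrete, here is a small illustrative sketch of the rank-to-bit-vector view the review describes; the vocabulary size is made up, and the paper's actual scheme assigns redundant error-correcting bits rather than this plain binary encoding:

```python
import math

V = 20000                                   # illustrative vocabulary size
min_bits = math.ceil(math.log2(V))          # 15 bits already identify every rank exactly
print(min_bits)

def rank_to_bits(rank: int, n_bits: int = 44) -> list[int]:
    """Plain binary encoding of a word's rank; bits beyond min_bits are pure redundancy."""
    return [(rank >> i) & 1 for i in range(n_bits)]

def bits_to_rank(bits: list[int]) -> int:
    return sum(b << i for i, b in enumerate(bits))

assert bits_to_rank(rank_to_bits(12345)) == 12345
```

Predicting the 44 output bits is therefore equivalent to predicting the word's rank, which is why a set like "all words with odd rank" has no obvious shared semantics, and why widening to more bits is cheap on a GPU.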
NIPS_2020_530 | NIPS_2020 | There are multiple issues with the claims and evaluations presented in the paper. In particular, as a reader, I am not convinced that the reported gains are due to exploiting gaze information. 1. An improvement over SOTA?: For the paraphrasing task, the paper claims Patro et al. (2018) as SOTA, which is an outdated baseline. [Decom_para ACL19] is a better baseline for comparison. Given that the "No Fixation" method gives a 27.81 BLEU-4 score with 69M params, I doubt that the proposed model's 28.82 BLEU-4 score with 79M is truly better than Patro et al. (2018)'s model. Ideally, the authors should report the performance of baseline models using the same number of parameters. Similarly, on the sentence compression task they should use a baseline with similar model params. With the current evaluation setup, it's not clear if the gains can be attributed to the higher model capacity. 2. Evaluation: The paper reports only BLEU-4 scores for the paraphrase task. People often report multiple metrics to compare methods, as a 1-point improvement in BLEU (27.81->28.82) on a single dataset might not mean anything in general. Usually, people report other metrics such as METEOR and ROUGE along with BLEU for a fair evaluation. For a future revision of the paper, the authors can also consider using more accurate metrics such as [BERTScore ICLR20], [BLEURT ACL20]. 3. Model architecture choice: What is the motivation for adding a transformer layer after a BiLSTM in the text saliency model? The paper claims that this architecture allows us to better capture the sequential context without quantifying what they mean by "better". A Bi-LSTM followed by n transformer layers is a non-standard NLP architecture, so the authors should describe what advantages it provides over a standard Bi-LSTM or a standard transformer model. 4. Impact of pre-training on CNN and Daily Mail: Since the proposed models were pre-trained on CNN and Daily Mail and the baseline models are not pre-trained, it's not clear if the gains are due to the model exploiting gaze information. We know that pre-training models on unlabeled corpora leads to better generalization performance across NLP tasks. I am still not convinced that predicting fixation durations provides any advantage over standard pre-training tasks such as masked language modeling. 5. Task/Dataset Choice: I think text summarization might be a good candidate to show the advantage of adding gaze information. Is there any particular reason for not considering that task? Also, to ensure that these techniques generalize, it's important to report numbers on more than 1 dataset for a given task. 6. Missing important implementation details: For the seq2seq model, the authors mentioned that they used greedy search. Is there any reason for not using standard beam search (a minimal beam-search sketch follows after this review)? | 6. Missing important implementation details: For the seq2seq model, the authors mentioned that they used greedy search. Is there any reason for not using standard beam search? |
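Regarding the greedy-vs-beam-search question in point 6 above, a minimal beam-search sketch over a generic next-token scorer is given below for contrast with greedy decoding; `log_prob_fn`, the toy model, and all constants are hypothetical, not the paper's decoder:

```python
import math

def beam_search(log_prob_fn, bos, eos, beam_size=4, max_len=30):
    """log_prob_fn(prefix) -> {token: log-probability of that token coming next}."""
    beams = [([bos], 0.0)]                      # (token sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:                  # completed hypotheses stop expanding
                finished.append((seq, score))
                continue
            for tok, lp in log_prob_fn(seq).items():
                candidates.append((seq + [tok], score + lp))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    finished.extend(b for b in beams if b[0][-1] == eos)
    return max(finished or beams, key=lambda c: c[1])[0]

# Toy usage: a 3-token "model" that always prefers token 1, then eos (= 2).
toy = lambda seq: {1: math.log(0.6), 2: math.log(0.3), 0: math.log(0.1)}
print(beam_search(toy, bos=0, eos=2, beam_size=2, max_len=5))   # -> [0, 2]
```

Greedy decoding is the beam_size=1 special case; the reviewer's question is simply why the paper does not report the standard beam_size > 1 setting.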
NIPS_2022_1340 | NIPS_2022 | .
The claim regarding the ability of the proposed method to alleviate the popularity bias is not well supported in the paper, neither by theoretical analyses nor by convincing targeted experiments. For example, I would recommend reporting statistics about the popularity distribution of the items recommended by the different baselines. Also, some quantitative and qualitative experiments on how popular/rare items are ranked by the different models for a given set of users would be meaningful (see for instance Figures 2 and 3 in [1]).
Experiments are weak. Three datasets are considered for evaluation: Yelp, MovieLens, and Douban. However, most of the results are on Yelp and/or MovieLens, except in Table 1. I would recommend reporting the results of every experiment across all three datasets.
Additional comments/questions.
In section 4.3, the part related to proposition 1 is a bit hard to follow and connect to the objective of eq. 13. This is due to using different notations for the mutual information terms in the proposition and in eq. 13. Please consider improving the notations.
I would recommend keeping the legend consistent across all the experiments. For instance, CGI is represented by a green bar in figures 3 and 4, while in figure 5 it is represented by a blue bar.
What type of significance test is used in the experiments, and how many trials are performed for every algorithm in Table 1? References.
[1] Liang, Dawen, et al. "Factorization meets the item embedding: Regularizing matrix factorization with item co-occurrence." Proceedings of the 10th ACM conference on recommender systems. 2016. | 13. This is due to using different notations for the mutual information terms in the proposition and in eq. |
ACL_2017_524_review | ACL_2017 | - The evaluation datasets used are small and hence the results are not very convincing (particularly w.r.t. the alchemy45 dataset, on which the best results have been obtained). - It is disappointing to see only F1 scores and coverage scores, but virtually no deeper analysis of the results. For instance, a breakdown by type of error/type of grammatical construction would be interesting. - It is still not clear to this reviewer what the proportion of out-of-coverage items is due to various factors (running out of resources, lack of coverage for "genuine" grammatical constructions in the long tail, lack of coverage due to extra-grammatical factors like interjections, disfluencies, lack of lexical coverage, etc.). - General Discussion: This paper addresses the problem of "robustness" or lack of coverage for a hand-written HPSG grammar (English Resource Grammar). The paper compares several approaches for increasing coverage, and also presents two creative ways of obtaining evaluation datasets (a non-trivial issue due to the fact that gold-standard evaluation data is by definition available only for in-coverage inputs). Although hand-written precision grammars have been very much out of fashion for a long time now and have been superseded by statistical treebank-based grammars, it is important to continue research on these in my opinion. The advantages of high precision and deep semantic analysis provided by these grammars have not been reproduced by non-handwritten grammars as yet. For this reason, I am giving this paper a score of 4, despite the shortcomings mentioned above. | - It is still not clear to this reviewer what the proportion of out-of-coverage items is due to various factors (running out of resources, lack of coverage for "genuine" grammatical constructions in the long tail, lack of coverage due to extra-grammatical factors like interjections, disfluencies, lack of lexical coverage, etc.).
NIPS_2021_1604 | NIPS_2021 | ).
Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6].
After the Authors' Response: My weakness points have been addressed in the authors' response. Consequently, I raised my score.
All unclear parts have been answered.
The authors explained why the chosen baseline makes the most sense. It would be great if this were added to the final version of the paper.
Questions - Do you think there is a way to test beforehand whether I(X_1, Y_1) would be lowered more than I(X_2, Y_1)? - Out of curiosity, did you consider first using Aug and then CF.CDA? Especially for the correlated palate result, it could be interesting to see if CF.CDA can now improve. - Did both CDA and MMI have the same lambda_RL (Eq 9) value? From Figure 6, it seems the biggest difference between CDA and MMI is that MMI has more discontinuous phrases/tokens.
Typos, representation, etc. - Line 69: Is X_2 defined as all features of X not in X_1? Stating this explicitly would be great. - Line 88: What ideas exactly do you take from [19], and how does your approach differ? - Eq 2: Does this mean Y is a value in [0, 1] for two possible labels? Can this be extended to more labels? This should be clarified. - 262: What are the possible Y values for TripAdvisor's location aspect? - The definitions and usage of the various variables are sometimes difficult to follow. E.g., what exactly is the definition of X_2? (see also the first point above). When does X_M become X_1? Sometimes the augmented data has a superscript, sometimes it does not. In line 131 the meaning of x_1 and x_2 is reversed, which can get confusing - maybe x'_1 and x'_2 would make it easier to follow, together with a table that explains the meaning of the different variables? - Section 2.3: Before line 116, which mentions the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it.
Minor points - Line 29: How is the desired relationship between input text and target labels defined? - Line 44: What is meant by "the initial rationale selector is perfect"? It seems that if it were perfect, no additional work would need to be done. - Line 14, 47: A brief explanation of "multi-aspect" would be helpful - Figure 1: Subscripts s and t should be 1 and 2? - 184: Delete "the"
There is a broader impact section which discusses the limitations and dangers adequately. | - Section 2.3: Before line 116, which mentions the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it. Minor points - Line 29: How is the desired relationship between input text and target labels defined?
ARR_2022_101_review | ARR_2022 | 1. The method in this paper is quite similar to BERTScore, but the authors have not cited that paper.
2. Figure 2 does not show the time complexity of SimCSE_{CLS} method.
3. I am confused about the definition of "\vec \mathbf{1}" in Equation(1).
Missing citation: BERTScore: Evaluating Text Generation with BERT (Zhang et al., 2020). For other suggestions, please refer to the weakness section. | 2. Figure 2 does not show the time complexity of SimCSE_{CLS} method.
ICLR_2023_1935 | ICLR_2023 | Missing literature and baselines: there are many learning-based approaches for heuristic search that are not based on L_2 and are not cited in the paper [e.g., 1-4]. [1-2] have specifically focused on Sokoban. [3][4] are older works that avoid problems with L_2 by focusing on learning to rank.
Optimality: the paper seems to focus on A* and is motivated by the "false sense of optimality" in L_2; however, the proposed approach is, to my understanding, not optimal. Specifically:
The theoretical optimality guarantees (Section 3.1) only hold for instances in the training set (i.e., to guarantee optimality in general, we would have to train on all possible instances).
L* is not differentiable, and there is no guarantee that training finds an optimal solution (even with respect to the training set).
Experimental results are not sufficient to evaluate the proposed approach: Despite the focus on optimality, the experiments only focus on coverage and not on solution cost. Some of the baselines are not optimal, including Mercury 14, Stone Soup, and the RL approach, as well as the proposed approach (to my understanding). It is therefore important to analyze the (average) solution quality obtained by each of the methods. Given the lack of optimality and the comparison to non-optimal baselines, it would also be interesting to study the performance of the proposed L* heuristic function in greedy best-first search.
Proofs are said to be in the supplementary material; however, I could not find such material (neither at the end of the document nor uploaded to OpenReview).
[1] Orseau, Laurent, and Levi HS Lelis. "Policy-guided heuristic search with guarantees." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 14. 2021.
[2] Feng, Dieqiao, Carla P. Gomes, and Bart Selman. "The Remarkable Effectiveness of Combining Policy and Value Networks in A*-based Deep RL for AI Planning." (2021).
[3] Garrett, Caelan Reed, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. "Learning to rank for synthesizing planning heuristics." Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. 2016.
[4] Xu, Yuehua, Alan Fern, and Sungwook Yoon. "Learning Linear Ranking Functions for Beam Search with Application to Planning." Journal of Machine Learning Research 10.7 (2009). | 35. No.14. 2021. [2] Feng, Dieqiao, Carla P. Gomes, and Bart Selman. "The Remarkable Effectiveness of Combining Policy and Value Networks in A*-based Deep RL for AI Planning." (2021). [3] Garrett, Caelan Reed, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. "Learning to rank for synthesizing planning heuristics." Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. 2016. [4] Xu, Yuehua, Alan Fern, and Sungwook Yoon. "Learning Linear Ranking Functions for Beam Search with Application to Planning." Journal of Machine Learning Research 10.7 (2009). |
NIPS_2018_243 | NIPS_2018 | I had trouble understanding this paper, but I'm not sure if that's due to the exposition or because it's a little ways outside my normal area of expertise. In particular, I don't immediately see the answer to: 1. When, if ever, is SITE a consistent estimator of the true average ITE? What assumptions are required for the estimator to be a valid estimator of the causal effect? In particular, the loss function is only defined over minibatches, so the objective function for the regression is left implicit. Presupposing that SGD does in fact converge, does this oddball objective function cause any issues? 2. Is the intuitive motivation (representation preserves local similarity) actually supported by the empirical results? It's clear that SITE does somewhat better than the comparitor methods, but it's not clear to me why this is---could it attributed simply to higher model capacity? Experiments showing that SITE actually learns representations that do a better job of preserving local similarity, and that this matters, would strengthen the paper. I note that the authors claim (line 263-265) that the experiments show this already, but I don't understand how. More broadly, the whole procedure in this paper feels a bit ad hoc. It's not clear to me which parts of the model are really important for the results to hold. # Overview Based on the above critiques, I've voted (weakly) to reject. However, if good answers to the above points of confusion can be incorporated into the text, I'd be happy to reconsider. # Further questions and comments 1. The learning procedure described in the paper seems to take propensity scores as inputs. E.g. triplet pair selection relies on this. How are the required propensity scores determined? 2. Does the SGD actually converge? It seems possible that selecting triplets within the mini-batch may lead to issues. 3. What is the summation (\Sum) in equation 7 over? I thought there was exactly one \hat{i}, \hat{j}, etc. per minibatch 4. Is the correct interpretation of the experiment in section 3.2 that the covariates x are generated according to one of the two normal distributions, the response is generated independent of T, and the ground truth ITE is 0? Assuming this is correct, the text should be rewritten to be clearer. I'm also unclear what the inputs to the KL distance are, and why KL is chosen as the distance #### Update I thank the authors for responding to my comments and questions. 1. The ablation study with PDDM is helpful. I think the paper would be improved by adding this, and also one for MPDM loss term. The presented results show that PDDM improves the ITE estimate, but, as far as I can see, it does not show that the PDDM term actually causes the learned representations to better preserve local similarity. 2. I'm still confused by when, if ever, the method described in the paper is a correct estimator for ITE (defining 'correct' sensibly is part of the challenge here). The author response didn't cleared this up for me. For example, it was still unclear why assumptions 2.1 and 2.2 are included given that, as far as I can tell, neither is actually referenced again in the paper. I was sufficiently confused by this that I went back and read the related prior work ("Learning Representations for Counterfactual Inference" Johansson, Shalit, Sontag, and "Estimating Individual Treatment Effect: Generalization Bounds and Algorithms", ref [28] in the paper). 
These papers establish a clear meaning for the use of representation learning for causal inference: the PEHE is bounded by an expression that includes the distance between P(r | t=0) and P(r | t=1), the distributions over the representations for the treated and untreated. I'm convinced this could provide a clear motivation and (hand-wavy) justification for SITE. However, I don't think the paper explains this well---if the method is justified by ref [28] then the key results and the connection should be summarized in the paper! There is presumably some analogous idea making use of the assumption that similar units will have similar responses, but I'm again unsure what such a result is or exactly how it might apply to the proposed method. Although I'm now convinced that the method can probably be justified properly, I don't think this is done in the paper, and I view the confusing presentation as a serious problem. For this reason, I'm still voting to reject. However, I note that reviewer 1 has essentially the same criticism (to wit: this method doesn't seem to actually estimate ITE), but doesn't view this as a serious issue. | 2. Does the SGD actually converge? It seems possible that selecting triplets within the mini-batch may lead to issues.
NIPS_2021_2326 | NIPS_2021 | Weakness
Overview of the main concerns, which are detailed in the paragraphs below:
Improper evaluation of the universal controller.
Missing insights into the data and the strong performance of the pose estimation, which performs better than state-of-the-art pose estimation from a third-person view on established benchmarks, despite the much harder task.
Generalization of the learned tasks is not properly evaluated.
1. Universal controller
First of all, learning robust universal controllers is a challenging task and is known to suffer from instabilities, especially if it has to replicate unseen motions. There is a branch of research [4][5][6] with the sole focus on building controllers that can scale to larger datasets, but have only achieved learning on a subset of AMASS (CMU ~ 2k motion sequences). The proposed method builds on a universal controller that is trained on AMASS (~11k sequences). Hence, the claim of a universal controller that can scale to an order of magnitude larger motion database than state-of-the-art needs to be properly evaluated on its own, since it would mark a substantial improvement in the direction of imitation learning from motion capture. As stated by the authors, the policy is able to execute a wide variety of motion “ranging from dancing to kickboxing” (cf. L.40) and therefore needs to be empirically evaluated on the full AMASS dataset to substantiate this claim. The current controller is only tested on the relatively simple motions contained in H36M (walking, standing, etc.). Furthermore, it is important to see how well it adapts to noisy estimates, because at the beginning of training the dynamic-regulated model, the kinematics policy will likely produce random residuals and hence noisy target poses.
2. Missing insights into data and performance
The universal controller is then utilized in the downstream task of egocentric pose estimation. Since the task is ill-posed, estimating pose from egocentric video without a top-down view is extremely difficult. The results, namely the mean per joint position error, which is very low (~30-40 mm), may indicate that there is very little variation between different motion sequences and that the model overfits to the training data (which is likely similar to the test data). For instance, since there is no way to tell how the occluded arms are moving, such a small error is only possible if the deviation of motion between the sequences in the data is very small. The authors should provide: 1) statistical data on their dataset to be able to better assess their quantitative results (e.g., data on per joint trajectories, pose diversity against available 3D datasets), 2) a proper discussion of the very small reported errors on the test data, and 3) the per-joint error to be able to get better insights into the performance of the model.
This seems even more evident when looking at the supplementary video, where it appears that the agent is always moving in the same way (movement of arms, gait, speed, etc.). Since the authors already provide the statistical data of the speed in the datasets, it would be good to see whether the agent actually learns to adapt to different speeds or just overfits to one single motion.
3. Generalization
As illustrated in the details of the dataset (Appendix D), the interaction objects are always positioned in the same location for both datasets, and hence it would be important to conduct experiments on generalization. For instance, how would the method fare if the objects’ locations were slightly different? Otherwise, the learned policy is likely to overfit to single object instances and not be useful for downstream applications. This should ideally be tested and at least be discussed in the paper. Furthermore, although the trajectory analysis provided in Figure 5 indicates a slight variation in the facing direction towards the object, it seems that the agent is mostly facing the interaction objects and needs to walk straight to reach it (cf. supplementary video). It would be interesting to see whether the agent can act in the scene without being right in front of the object of interest and facing towards it.
Other Comments and Technical Questions
The training procedure proposed for sampling sequences for the universal controller likely imposes quite a large computational overhead, since it has to run all frames of AMASS (4000k) through the value function. How often is this distribution recomputed? Moreover, is there a mistake in the notation of the initialization states, or what is the reason that the target state is the same as the input state and not the next state?
Did the authors run experiments without the redundancy of information in the state space of the universal controller (e.g. to have joint angles in axis angle and quaternion representation or the difference between joint position and target joint position in world and agent-centric coordinates) or what was the incentive behind overloading the state space?
Do the authors employ any technique, such as early stopping, when training the universal controller and the dynamics-regulated kinematic policy to avoid learning from failure cases?
The notation in Appendix C Policy Network architecture seems to be inconsistent. The quaternion difference ⊖
and the minus seem to be used for angle-axis difference and vice-versa, at least according to the dimensions provided.
Will the authors release the dataset in case of publication to foster further research?
A potential missed citation is [7]. The authors use a differentiable physics model to correct their kinematics model for pose reconstruction.
[1] Ye Yuan, Shih-En Wei, Tomas Simon, Kris Kitani, and Jason Saragih, “Simpoe: Simulated character control for 3d human pose estimation”, CVPR, 2021
[2] Soshi Shimada, Vladislav Golyanik, Weipeng Xu, and Christian Theobalt. 2020, “PhysCap: physically plausible monocular 3D motion capture in real time”, ACM Transactions on Grap., 2020.
[3] Davis Rempe, Leonidas J Guibas, Aaron Hertzmann, Bryan Russell, Ruben Villegas, and Jimei Yang, “Contact and human dynamics from monocular video”, ECCV 2020
[4] Jungdam Won, Deepak Gopinath, and Jessica Hodgins, “A scalable approach to control diverse behaviors for physically simulated characters”, ACM Trans. Graph., 2020.
[5] Tingwu Wang, Yunrong Guo, Maria Shugrina, and Sanja Fidler, “Unicon: Universal neural controller for physics-based character motion”, arXiv, abs/2011.15119, 2020.
[6] Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, and Nicolas Heess, “Neural probabilistic motor primitives for humanoid control”, ICLR, 2019
[7] Soshi Shimada, Vladislav Golyanik, Weipeng Xu, Patrick Pérez, and Christian Theobalt, “"Neural PhysCap" Neural Monocular 3D Human Motion Capture with Physical Awareness”, ACM Trans. Graph., 2021.
Post rebuttal
I appreciate the author's extensive answer to my concerns and questions. In the light of how the concerns were addressed, I'm willing to raise my score to 6. What I would like to see in the final version of the paper in case of acceptance is 1) a thorough discussion of the limitations (which is missing in the current main part of the manuscript) 2) the requested evaluations of the universal controller and the dataset statistics. 3) Clarification of how the pose metrics were obtained with respect to failing sequences. It seems that such good numbers for the pose can only be achieved if the sequences are ended for episodes deemed unsuccessful. | 1) statistical data on their dataset to be able to better assess their quantitative results (e.g., data on per joint trajectories, pose diversity against available 3D datasets), |
NIPS_2022_80 | NIPS_2022 | Weakness: 1. There is not enough theory in this article to explain the effectiveness of Structural Knowledge Distillation. 2. In Sec. 4.4, only two SOTA KD methods in detection are used for comparison.
The explanation and theoretical analysis of the proposed method are limited.
The compared SOTA methods are not sufficient. | 2. In Sec. 4.4, only two SOTA KD methods in detection are used for comparison. The explanation and theoretical analysis of the proposed method are limited. The compared SOTA methods are not sufficient.
2tIyA5cri8 | ICLR_2025 | Only minor weaknesses.
1. In the background section on RL, TD is presented for a fixed policy, and then the paper switches to Q-learning, assuming the policy chooses \argmax_a Q(s,a). But this will change the policy as the Q function is updated, so it's not technically the same setting.
2. It was a bit unclear what "control lesion" referred to in Fig. 2F. And more generally, I was not familiar with the "lesion" terminology, so a brief definition would be welcome. I assume it's a form of activation patching?
3. I would have liked slightly more explanation regarding "clamping" the activations. I assume this means setting them to a specific value, but how is that different from deactivating them (i.e. clamping them to zero)? Is the purpose of clamping the activations to show degraded, unchanged, or improved performance?
4. Line 458, mangled sentence "our study is, we have explored". | 4. Line 458, mangled sentence "our study is, we have explored". |
ICLR_2021_2568 | ICLR_2021 | **
Unfortunately, the proposed approach is not described clearly enough for it to be widely useful.
In general, I believe that when formal tools (like group theory) are applied to prove anything outside of their original domain (i.e. when we are using group theory to reason about compositional representations in machine learning), it is crucial to 1) clearly define all involved notions (not only mathematical, but also the ones to which mathematical tools are applied) 2) clearly motivate the application.
** Clarity **
Unfortunately, the clarity of the contribution is not up to the standards of ICLR conference. In general, I believe that clarity concerns are secondary to other evaluation components (experimental support, novelty, etc.). In this case, however, it becomes impossible for me to evaluate other components because I can not fully understand the approach from its description.
For example, while the paper is focused on compositional representations, the actual description/definition of what exactly authors mean by compositional representations comes only on the 4th page (after some formal results were already stated).
The description is as follows: "Compositionality arises when we compare different samples, where some components are the same but others are not. This means compositionality is related to the changes between samples. These changes can be regarded as mappings, and since the changes are invertible, the mappings are bijective. To study compositionality we consider a set of all bijections from a set of possible representation values to the set itself, and construct a group with the following Proposition 4.1.". At the same time, there was no formal definition of "representation" before that paragraph. In the very next paragraph, however, the authors say "We consider two representations and corresponding sets. X is original entangled representation, and Y is compositional representation".
The concerns I described above are related to the overall structure of the contribution. A separate and also a major concern is that the writing itself should be improved too. There are numerous confusingly phrased sentences which make reading difficult.
For example, we can take a look at the very first sentence in the abstract: "Humans naturally use compositional representations for flexible recognition and expression, but current machine learning lacks such ability". It's not clear what is meant by "recognition and expression"; it also seems unnatural to say that "machine learning" lacks a certain ability, because machine learning is a field of study. It may be better to rephrase it to "machine learning methods". While these concerns are minor, they are ubiquitous throughout the paper, which substantially hinders readability.
** Suggestions **
The direction may be promising, but unfortunately, the paper needs a thorough reorganization in order to be publishable.
I would like to suggest moving the standard group theory definitions from the main text into appendix. Some of the proofs could be moved there too. The space obtained this way may be used to
formally define a) what a "representation" is b) what a "compositional representation" is c) the general problem setting
motivate the chosen definitions with some potential applications. I realize that the examples in the end of the article are intended to serve that goal, but in my opinion, neither of them is explored in enough detail.
I understand that a lot of work went into this article, and I hope that the authors won't feel discouraged by the feedback, but use it as an opportunity to improve the paper.
** Update after the authors' response **
I have read the authors' response and other reviews. I still believe that my evaluation is correct at the moment.
At the same time, I believe that the research direction is very promising, and I hope to see the updated version of the manuscript published in the future! | 1) clearly define all involved notions (not only mathematical, but also the ones to which mathematical tools are applied)
NIPS_2018_8 | NIPS_2018 | I personally think the paper does not do justice to weight adaptation. The proposed setup is only valid for classification using same kind of data (modality, appearance etc.) between training and adaptation; however, weight adaptation is a simple method which be used for any problem regardless of change of appearance statistics or even changing the type of the modality. In practice, weight adaptation is proven to be useful in all these cases. For example, when transferred from ImageNet to audio spectrograms or transferred from classification to object detection, weight transfer stays useful. The paper studies a very specific case of transfer learning (k-ITL) and it shows that weight adaptation is not useful in this specific setting. Whether the weight adaptation useful in remaining settings or not still remains as an open question. The authors should clarify that there are more problems in transfer learning beyond k-ITL and they only consider this specific case. As a similar issue, the experimental setup only consider class distribution as a possible difference between training and adaptation. Clearly, low-level appearance statistics does not change in this case. It would be an interesting study to consider this since weight adaptation is shown to be handling it well. One simple experiment would be performing an experiment with training on tinyImageNet and adapting on CIFAR-10/100. This experiment would increase the robustness of the claims of the paper. Although it is little beyond of scope of k-ITL setup, I think it is necessary to be able to make the strong claim in L275-280. MINOR ISSUES: - Averaging the accuracies over different tasks in Figure 6 does not seem right to me since going from 90% to 95% accuracy and going from 10% to 15% should ideally be valued differently. Authors should try to give the same plot for each dataset separately in addition to combining datasets. - It is not direcly obvious that the penultimate layer will discard the inter-class relationship by orthogonalizing each class as stated in L281-L288. This statement is dependent on the type of the loss is used. For example, correctness of the statement is clear to me for L2 loss but not clear for cross-entropy. Either authors should provide some additional proof or justification that it is correct for all loss functions or should give more specifications. In summary, paper is stating an important but hidden in plain-sight fact that few-shot learning and metric learning is closely related and metric learning methods can very well be used for few-shot learning. This unified approach results in a straightforward algorithm which improves the state-of-the-art significantly. Although there are minor issues with the paper, it is an important contribution to both metric learning and few-shot learning communities. UPDATE: I have read the authors response and here are my comments on them: 1.a) Example citation for transfer from image to audio: Amiriparian et al., Snore Sound Classification using Image-Based Deep Spectrum Features. 1.b) Transfer beyond k-ITL: I also agree the proposed method should be able to handle this as well but unless we see an experiment on this, it is mostly a speculation. I think the other answers are satisfactory. I am keeping my score as 8. | - Averaging the accuracies over different tasks in Figure 6 does not seem right to me since going from 90% to 95% accuracy and going from 10% to 15% should ideally be valued differently. 
Authors should try to give the same plot for each dataset separately in addition to combining datasets. |
dVOXsyVcik | EMNLP_2023 | * The equation of agreement@k, which is used to compute most of the results in this study, is not clear. Based on the response of the authors, I will need to revisit my assessment regarding this weakness.
* The authors overlook existing work in event/anomaly detection (e.g.: https://arxiv.org/abs/2007.02500 or https://www.nature.com/articles/s41598-021-03526-y) and implement a new algorithm to define the optimum k dynamically. | * The authors overlook existing work in event/anomaly detection (e.g.: https://arxiv.org/abs/2007.02500 or https://www.nature.com/articles/s41598-021-03526-y) and implement a new algorithm to define the optimum k dynamically. |
NIPS_2021_34 | NIPS_2021 | The strengths and weaknesses of this paper are summarized as follows.
Strengths: 1. The experiments are comprehensive. To show the effectiveness of the proposed method, the authors take SGR as an example and provide comparisons with other SOTA methods. The results with varying noise ratios and real-world noisy data demonstrate the effectiveness of NCR. Besides SGR, experimental results of NCR-SCAN also show the superiority of the proposed method. 2. The technique presented in the paper is novel. To achieve robust cross-modal matching, the authors propose to turn the rectified soft labels into soft margins, which enforce true positive pairs to be closer than negatives by a large margin, while false positives receive only a small margin (see the sketch below).
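To make the soft-margin mechanism in point 2 concrete, here is a rough sketch of a hinge-based triplet ranking loss whose margin is scaled by the rectified soft label. This is my own illustration, not the paper's exact Eq. 7 - the scaling scheme, function name, and example tensors are all assumed for illustration only.

```python
import torch

def soft_margin_triplet_loss(sim_pos, sim_neg, soft_label, base_margin=0.2):
    # A confident (clean) pair keeps the full margin; a likely false positive
    # gets a margin near zero, so it is not forced far away from the negatives.
    margin = base_margin * soft_label
    return torch.clamp(margin + sim_neg - sim_pos, min=0.0).mean()

sim_pos = torch.tensor([0.82, 0.40])      # similarity of the annotated (possibly noisy) pairs
sim_neg = torch.tensor([0.55, 0.38])      # similarity of the hardest negatives
soft_label = torch.tensor([0.95, 0.10])   # rectified confidence that each pair is a true match
print(soft_margin_triplet_loss(sim_pos, sim_neg, soft_label))
```

Written this way, the construction is clearly tied to the hinge/triplet form, which is exactly why I ask in Weakness 1 below how the idea would transfer to a softmax-style loss such as ALIGN's.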
Weaknesses: 1. It seems that the proposed method NCR is only applicable to the triplet loss in cross-modal matching as defined in Eq. 5-6. How can robustness be achieved for other loss formulations, such as the softmax loss in ALIGN (Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision)? 2. NCR recasts the soft labels as soft margins defined in Eq. 7. Why does NCR need this procedure for robust matching? Why does it achieve robustness? 3. Since two models are trained individually in the manner of co-teaching, it inevitably needs more computation time for convergence. Also, how are the final retrieval results obtained with two models?
The paper provides a comprehensive discussion about the potential negative impact in the Broader Impact Statement. | 1. It seems that the proposed method NCR is only applicable to the triplet loss in cross-modal matching as defined in Eq. 5-6. How can robustness be achieved for other loss formulations, such as the softmax loss in ALIGN (Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision)?
NIPS_2019_663 | NIPS_2019 | of their work?"] The submission is overall reasonably sound, although I have some comments and questions: * Regarding the model itself, I am confused by the GRU-Bayes component. I must be missing something, but why is it not possible to ingest observed data using the GRU itself, as in equation 2? This confusion would perhaps be clarified by an explanation in line 89 of why continuous observations are required. As it is written, I am not sure why it you couldn't just forecast (by solving the ODE defined by equation 3) the hidden state until the next measurement arrives, at which point g(t) and z(t) can be updated to define a new evolution equation for the hidden state. I am guessing the issue here is that this update only changes the derivative of the hidden state and not its value itself, but since the absolute value of the hidden state is not necessarily meaningful, the problem with this approach isn't very clear to me. I imagine the authors have considered such a model, so I would like to understand why it wouldn't be feasible here. * In lines 143-156, it is mentioned that the KL term of the loss can be computed empirically for binomial and Gaussian distributions. I understand that in the case of an Ornstein-Uhlenbeck SDE, the distribution of the observations are known to be (conditionally) Gaussian, but in the case of arbitrary data (e.g. health data), as far as I'm aware, few assumptions can be made of the underlying process. In this case, how is the KL term managed? Is a Gaussian distribution assumption made? Line 291 indicates this is the case, but it should be made clear that this is an assumption imposed on the data. For example, in the case of lab test results as in MIMIC, these values are rarely Gaussian-distributed and may not have Gaussian-distributed observation noise. On a similar note, it's mentioned in line 154 that many real-world cases have very little observation noise relative to the predicted distribution - I assume this is because the predicted distribution has high variance, but this statement could be better qualified (e.g. which real-world cases?). * It is mentioned several times (lines 203, 215) that the GRU (and by extension GRU-ODE-Bayes) excels at long-term forecasting problems, however in both experiments (sections 5.2 and 5.3) only near-term forecasting is explored - in both cases only the next 3 observations are predicted. To support this claim, longer prediction horizons should be considered. * I find it interesting that the experiments on MIMIC do not use any regularly-measured vital signs. I assume this was done to increase the "sporadicity" of the data, but it makes the application setting very unrealistic. It would be very unusual for values such as heart rate, respiratory rate, blood pressure and temperature not to be available in a forecasting problem in the ICU. I also think it's a missed opportunity to potentially highlight the ability of the proposed model to use the relationship between the time series to refine the hidden state. I would like to know why these variables were left out, and ideally how the model would perform in their presence. * I think the experiment in Section 5.5 is quite interesting, but I think a more direct test of the "continuity prior" would be to explicitly test how the model performs (in the low v. high data cases) on data which is explicitly continuous and *not* continuous (or at least, not 2-Lipschitz). 
The hypothesis that this continuity prior is useful *because* it encodes prior information about the data would be more directly tested by such a setup. At present, we can see that the model outperforms the discretised version in the low data regime, but I fear this discretisation process may introduce other factors which could explain this difference. It is slightly hard to evaluate because I'm not entirely sure what the discretised version consists of , however - this should be explained (perhaps in the appendix). Furthermore, at present there is no particular reason to believe that the data in MIMIC *is* Lipschitz-2 - indeed, in the case of inputs and outputs (Table 4, Appendix), many of these values can be quite non-smooth (e.g. a patient receiving aspirin). * It is mentioned (lines 240-242, section H.1.3) that this approach can handle "non-aligned" time series well. As mentioned, this is quite a challenging problem in the healthcare setting, so I read this with some interest. Do these statements imply that this ability is unique to GRU-ODE-Bayes, and is there a way to experimentally test this claim? My intuition is that any latent-variable model could in theory capture the unobserved "stage" of a patient's disease process, but if GRU-ODE-Bayes has some unique advantage in this setting it would be a valuable contribution. At present it is not clearly demonstrated - the superior performance shown in Table 1 could arise from any number of differences between this model and the baselines. 2.c Clarity: ["Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)"] While I quite like the layout of the paper (specifically placing related work after a description of the methodology, which is somewhat unusual but makes sense here) and think it is overall well written, I have some minor comments: * Section 4 is placed quite far away from the Figure it refers to (Figure 1). I realise this is because Figure 1 is mentioned in the introduction of the paper, but it makes section 4 somewhat hard to follow. A possible solution would be to place section 4 before the related research, since the only related work it draws on is the NeuralODE-VAE, which is already mentioned in the Introduction. * I appreciate the clear description of baseline methods in Section 5.1. * The comprehensive Appendix is appreciated to provide additional detail about parts of the paper. I did not carefully read additional experiments described in the Appendix (e.g. the Brusselator) out of time consideration. * How are negative log-likelihoods computed for non-probabilistic models in this paper? * Typo on line 426 ("me" instead of "we"). * It would help if the form of p was described somewhere near line 135. As per my above comment, I assume it is a Gaussian distribution, but it's not explicitly stated. 2.d Significance: ["Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?"] This paper describes quite an interesting approach to the modelling of sporadically-measured time series. 
I think this will be of interest to the community, and appears to advance state of the art even if it is not explicitly clear where these gains come from. | * Section 4 is placed quite far away from the Figure it refers to (Figure 1). I realise this is because Figure 1 is mentioned in the introduction of the paper, but it makes section 4 somewhat hard to follow. A possible solution would be to place section 4 before the related research, since the only related work it draws on is the NeuralODE-VAE, which is already mentioned in the Introduction. |
Zes7Wyif8G | ICLR_2025 | 1) Although the paper indicated in 3rd paragraph that it focused on a "particular flavor of neurosymbolic AI", i.e., a neural network feeding into probabilistic inference based on arithmetic circuits, it would be beneficial to reflect it explicitly in other parts, e.g., in the abstract, or a more specific title (Accelerating Arithmetic Circuits in Neurosymbolic AI), to be able to attract the right audience.
2) The actual "interface" and details between the neural network and the used arithmetic circuits remain largely a secret for readers(of course there are pointers to prior arts). It would be beneficial to open up and explain how exactly a neural network is interfaced to the arithmetic circuits, what are the assumptions and domain knowledge etc. at least for one of the tasks (e.g. MNIST addition) in an appendix. | 2) The actual "interface" and details between the neural network and the used arithmetic circuits remain largely a secret for readers(of course there are pointers to prior arts). It would be beneficial to open up and explain how exactly a neural network is interfaced to the arithmetic circuits, what are the assumptions and domain knowledge etc. at least for one of the tasks (e.g. MNIST addition) in an appendix. |
NIPS_2016_314 | NIPS_2016 | The weaknesses I found in the paper include: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for this? 2. I can't find details on how they make the network fit the residual instead of directly learning the input-output mapping. - Is it through the use of skip connections? If so, this argument would make more sense if the skip connections existed after every layer (not every 2 layers). 3. It would have been nice if there was an ablation study on which factor plays the most important role in the improvement in performance: whether it is the number of layers or the skip connections, and how the performance varies when the skip connections are used for every layer. 4. The paper says that almost all existing methods estimate the corruption level first. There is a high possibility that the same is happening in the initial layers of their residual net. If so, the only advantage is that theirs is end to end. 5. The authors mention in the Related Works section that the use of regularization helps the problem of image restoration, but they don't use any type of regularization in their proposed model. It would be great if the authors could address these points (mainly 1, 2 and 3) in the rebuttal. | - Is it through the use of skip connections? If so, this argument would make more sense if the skip connections existed after every layer (not every 2 layers). 3. It would have been nice if there was an ablation study on which factor plays the most important role in the improvement in performance: whether it is the number of layers or the skip connections, and how the performance varies when the skip connections are used for every layer.
NIPS_2022_2182 | NIPS_2022 | Weakness: 1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. 2. There is a gap between the proposed metric and method. Based on post-aggregation node similarity, they propose an aggregation similarity metric. However, the final 3-channel filterbank has nothing to do with the above metric. 3. The novelty of the idea is not enough. In addition to the limitations pointed out above, both new metric and method are relatively straightforward. 4. The improvement in Table 4 does not seem statistically significant because of high variance. 5. There is a problem with the typesetting of the paper.
In addition to the limitations mentioned in the paper, the intrinsic relationship between the proposed metric and method should be taken into consideration. No potential negative societal impact. | 1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. |
ICLR_2023_3879 | ICLR_2023 | - There is no theoretical result or analysis supporting the proposed method. - The method applies only when the testing data is incomplete, requiring complete training datasets, which limits its application in many practical situations where training datasets are also incomplete. - The paper compares the proposed method only against simple and basic imputation methods. There are many other approaches in the literature to train classifiers on incomplete or complete training datasets and apply them to incomplete test datasets. See for example: o C. Caiafa, Z. Wang, J. Sole-Casals and Q. Zhao, "Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 2021, pp. 2621-2630. o Marek Smieja, Lukasz Struski, Jacek Tabor, Bartosz Zielinski, and Przemyslaw Spurek. Processing of missing data by neural networks. In NeurIPS, 2018. - This paper considers only small datasets with small NNs (only 3-layer NNs). The paper would benefit from also considering the application of the method to deeper architectures and larger datasets, such as computer vision datasets (MNIST and CIFAR, for example). - There is no explanation of why a maximum of P/2 features can be removed. Is there any theoretical explanation for such a constraint? How does the algorithm behave when more than P/2 entries are missing? - Results comparing classical training (MLP) with augmented-data training (AMLP) do not include any statistical analysis. In fact, the results are very similar and it is not clear if the differences are statistically significant. | - There is no explanation of why a maximum of P/2 features can be removed. Is there any theoretical explanation for such a constraint? How does the algorithm behave when more than P/2 entries are missing?
NIPS_2017_122 | NIPS_2017 | * It is not clear if the ability of the model to detect fall height is because of the absolute timing of the simulations. Falling from a greater height leads to a longer delay before the first impact (see the short calculation after these comments). This is obvious to an algorithm analyzing fixed-sized wav files, but not to a human listening to sound files with somewhat unknown silent beginnings. A fairer comparison would be to add a random amount of delay before starting the sounds for both listeners.
* The comparison method is changed between the synthetic and real tasks, which seems unfair. If it is necessary to use a more complex comparison method for the real task, then also use it for the synthetic one.
* Line 226 reports several analysis parameters in samples, but never states the sample rate. Please describe these quantities in seconds or ms or provide the sample rate so the reader can perform the conversion themselves.
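To make the first point concrete: under free fall, the time to the first impact scales with drop height as below. The two heights are hypothetical values chosen only to illustrate the size of the cue, not heights taken from the paper.

$$
t = \sqrt{\frac{2h}{g}}, \qquad t(1\,\mathrm{m}) \approx 0.45\ \mathrm{s}, \quad t(2\,\mathrm{m}) \approx 0.64\ \mathrm{s} \quad (g \approx 9.8\ \mathrm{m/s^2}).
$$

A fixed-length waveform aligned at its start therefore encodes height directly in the onset time, which is exactly the cue that randomizing the initial delay would remove.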
Overall, this is a strong paper that has gotten a relatively old and appealing idea to work much better than in the past. | * It is not clear if the ability of the model to detect fall height is because of the absolute timing of the simulations. Falling from a greater height leads to a longer delay before the first impact. This is obvious to an algorithm analyzing fixed-sized wav files, but not to a human listening to sound files with somewhat unknown silent beginnings. A fairer comparison would be to add a random amount of delay before starting the sounds for both listeners. |
CP1PLnFzbr | EMNLP_2023 | Two major reasons to reject this paper.
- The proposed approaches are combinations of well-known algorithms, and the metrics are already popular in the community. The authors' contributions are marginal at best.
- No theory behind the approach. A formal definition and the theoretical background of the proposed method are missing in the paper. | - The proposed approaches are combinations of well-known algorithms, and the metrics are already popular in the community. The authors' contributions are marginal at best.
yNJEyP4Jv2 | ICLR_2024 | ### Correctness and clarity of the theoretical results
The paper formulates an adversarial optimization problem particularly tailored to latent diffusion models (LDMs). The analysis guides the algorithm design to some degree (more on this later). However, due to the lack of clarity and various approximations being introduced without proper justification, the theoretical results become less convincing. I will compile all my questions and concerns from Sections 3 and 4 in one place:
1. I am not sure what the sum $\sum_z$ is over in Eq. (3). The expectation is already over $z$ so I am a bit confused about the summation. My guess is that the sum is over all the latent variables in the diffusion process (different $z$’s in different steps). Is this correct?
2. If my previous understanding is correct, my next question is why should the adversary care about the latent variables in the intermediate steps of the diffusion process instead of, say, the final step of the inverse process before the decoder?
3. Based on the text, Eq. (3) should be equivalent to $\mathbb E_{z \sim p_{\theta}(z|x)}[- \log p_\theta(z|x')]$. My question is that a slightly different formula $\mathbb E_{z \sim p_{\theta}(z|x')}[- \log p_\theta(z|x) + \log p_{\theta}(z|x')]$ also seems appropriate (swapping the order in the KL divergence; both directions are written out after this list). Why should we prefer one to the other?
4. Section 3.2 uses the notation $\mathcal N(\mathcal E(x), \sigma_\phi)$ instead of $\mathcal N(f_{\mathcal E}(x), \sigma_{\mathcal E})$ from Section 2.1. Do they refer to the same quantity?
5. In the last paragraph of page 4, the Monte Carlo method must be used to estimate the mean of $p_\theta(z_{t-1}|x)$, but I cannot find where the mean is actually used. It does not seem to appear in Eq. (10) or in Appendix A.1. I also have the same question for the variance of $p_\theta(z_{t-1}|x)$ mentioned in the first paragraph of page 5.
6. Related to the previous question, it is mentioned that “the variance of $z_{t-1}$ is estimated by sampling and optimizing over multiple $z_{t-1}$.” It is very unclear what “sampling” and “optimizing” refer to here.
7. I do not quite see the purpose of Proposition 1. It acts as either a definition or an assumption to me. The last sentence “one can sample $x \sim w(x)$ from $p_{\theta(x)}(x)$” is also very unclear. Is the assumption that the true distribution is exactly the same as the distribution of outputs of the fine-tuned LDM?
8. $x^{(eval)}$ is mentioned in Section 3.4 but was never defined.
9. In Eq. (11), should both of the $\theta(x)$’s be $\theta(x')$ instead? Otherwise, $x'$ has no effect on the fine-tuning process of the LDM.
10. Section 4.1 is very convoluted (see details below).
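Regarding question 3 above, the two directions of the KL divergence between the conditionals expand as follows; this is the standard identity and assumes nothing about the paper beyond the notation of Eq. (3):

$$
D_{\mathrm{KL}}\big(p_\theta(z\mid x)\,\|\,p_\theta(z\mid x')\big) = \mathbb E_{z \sim p_\theta(z\mid x)}\big[\log p_\theta(z\mid x) - \log p_\theta(z\mid x')\big],
$$
$$
D_{\mathrm{KL}}\big(p_\theta(z\mid x')\,\|\,p_\theta(z\mid x)\big) = \mathbb E_{z \sim p_\theta(z\mid x')}\big[\log p_\theta(z\mid x') - \log p_\theta(z\mid x)\big].
$$

The first direction matches Eq. (3) as I read it; the second is the swapped formula in question 3, and since the KL divergence is asymmetric the two generally give different adversarial objectives.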
### Issues with the offset problem and Section 4.1
**Comment #1**: I do not largely understand the purpose of the "offset" problem in Section 4.1. In my understanding, most of the discussion around the offset can be concluded by simply expanding the second term on the first line of Eq. (13):
$$
\sum_{t \ge 1}\mathbb E_{z_t,z'_t} \Big\| \Delta z_t + \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} \Delta \epsilon \Big\|_2^2
= \sum_{t\ge 1}\mathbb E_{z_t,z'_t} \Big[ \big\|\Delta z_t\big\|_2^2 + \Big\| \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\Delta\epsilon \Big\|_2^2 + \frac{2\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\,\Delta z_t^\top\Delta\epsilon \Big]
$$
So the problem that prevents optimizing just the norm of $\Delta z_t$ and the norm of $\Delta \epsilon_\theta$ directly is the last term in the equation above (the dot product or the cosine similarity). I might be missing something here so please correct me if I’m wrong.
**Comment #2**: It is also unclear to me how the last line of Eq. (13) is reached and what approximation is used.
**Comment #3**: In theory, there is nothing preventing one from optimizing Eq. (13) as is. The issue seems to be empirical, but I cannot find the empirical results showing the failure of optimizing Eq. (13) directly and not using the target trick.
**Comment #4**: The authors “let *offset rate* be the ratio of pixels where the vector $\Delta z_t$ and $\Delta \epsilon_\theta$ have different signs.” If my understanding of the cosine similarity above is correct, this seems unnecessary and imprecise given that the cosine similarity is the exact way to quantify this.
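To make the contrast in Comment #4 concrete, here is a minimal sketch (my own, with made-up tensors) of the two quantities: the sign-disagreement "offset rate" versus the cosine similarity that directly controls the cross term in the expansion above.

```python
import torch

def offset_rate(dz, de):
    # fraction of coordinates where the two difference maps disagree in sign
    return (torch.sign(dz) != torch.sign(de)).float().mean()

def cos_sim(dz, de, eps=1e-8):
    # directly measures the cross term proportional to <dz, de> in the expansion above
    dz, de = dz.flatten(), de.flatten()
    return torch.dot(dz, de) / (dz.norm() * de.norm() + eps)

dz = torch.randn(4, 64, 64)   # stand-in for Delta z_t
de = torch.randn(4, 64, 64)   # stand-in for Delta epsilon_theta
print(offset_rate(dz, de).item(), cos_sim(dz, de).item())
```

Because the offset rate ignores magnitudes, two vectors can agree in sign on most coordinates and still have a small or even negative inner product, which is why the inner product (equivalently, the cosine similarity) seems like the more faithful statistic here.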
**Comment #5**: In the first paragraph of page 7, it is mentioned that “meanwhile, since the original goal is to maximize the mode of the vector sum of…” I think instead of “mode,” it should be “magnitude” or the Euclidean norm?
### Empirical contribution
1. After inspecting the generated samples in Figure 11-15, my hypothesis is that the major factor contributing to the empirical result is the target pattern and the usage of the targeted attack. The pattern is clearly visible on the generated images when this defense is used, and this pattern hurts the similarity scores. This raises the question of whether the contribution comes from the theoretical formulation and optimization of the three objectives or the target. I would like to see an ablation study on this finding: (1) the proposed optimization + untargeted and (2) the prior attacks + targeted.
2. The choice of the target $\mathcal T$ is ambiguous. While the target pattern is shown in the Appendix, there is no justification for why such a pattern is picked over others and whether other patterns have been experimented with.
Overall, I believe that the paper can have a great empirical contribution, but it seems to be clouded by the theoretical analysis which appears much weaker to me. | 7. I do not quite see the purpose of Proposition 1. It acts as either a definition or an assumption to me. The last sentence “one can sample $x \sim w(x)$ from $p_{\theta(x)}(x)$” is also very unclear. Is the assumption that the true distribution is exactly the same as the distribution of outputs of the fine-tuned LDM? |
ICLR_2022_3218 | ICLR_2022 | Weakness: 1) Since this paper focuses on biometric verification learning, a comparison against the state-of-the-art loss functions widely used in face/iris verification should be added (e.g., Center-Loss, A-Softmax, AM-Softmax, ArcFace). 2) The cosine similarity score is more often used in biometric verification, so I wonder if it would work better than the Euclidean distance when computing the Decidability. 3) A large batch size may be significant for the proposed loss. The authors experimented with three settings to select the best batch size. However, it may be better to examine the performance with more settings. For example, what would happen if a small batch size were used? 4) Why can the triplet loss not converge on CASIA-V4? I guess many previous iris verification works have employed such a loss. 5) Figure 5 shows the impact of the D-Loss before and after training the model. It is suggested to compare with other losses on it. | 4) Why can the triplet loss not converge on CASIA-V4? I guess many previous iris verification works have employed such a loss.
GVhfWu5L8D | ICLR_2025 | ## The motivation is good but some method details are quite strange
1. Eq. (7) tries to ensure $(1-\alpha) c + \alpha(c + V_c)=Q_c$. This is not a common Bellman equation for the cost $Q$ and $V$ functions. Instead, this equation is similar to the one for the feasible value function but is still different: $(1-\alpha) h + \alpha \max (h, V_h) =Q_h$. Specifically, for the feasible value function a maximization term exists, but in Eq. (7) the authors directly replace it with a summation term.
2. The definition of $A_r^\pi$ in Eq. (9) is somewhat ad hoc, supported only by some intuitive explanations.
3. It is strange that directly minimizing the $Q_c$ value in Eq. (11) would not lead to a large accumulation of bootstrapping error.
4. If not, this means that the policy in unsafe regions still stays near the behavior policy. So the introduction of TRPO-style optimization in Eq. (12) still tries to ensure a relatively relaxed behavior regularization, which contradicts the motivation of this paper that the behavior regularization for unsafe regions should be dropped.
5. In my view, the potential benefits of BARS are primarily attributable to the definition of $A_r^\pi$. Under this definition, the policy will exhibit more conservative behavior to avoid unsafe regions and is therefore safer. For unsafe regions, it is quite hard for me to judge whether the policy can obtain reasonable behavior, since no behavior regularization exists anymore and the policy can easily exploit the approximation errors of the $Q_c$ value function. It would be better if the authors could show more rollout trajectories for BARS in Figure 1 when starting from an unsafe region.
6. The authors have identified the safe and unsafe regions through expectile regression, but they still need to learn additional $Q$ and $Q_c$ value functions through standard Bellman updates in Eqs. (14-15). This can be unstable due to error accumulation and inefficient due to the costly diffusion sampling process.
## Evaluations
7. Table 3 shows that FISOR produces different results for varied cost limits. However, FISOR is a cost-limit-agnostic method that studies hard constraints and should not behave differently under different cost limits. | 2. The definition of $A_r^\pi$ in Eq. (9) is somewhat ad hoc, supported only by some intuitive explanations. |
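Two of the reviewer's points above lend themselves to a short illustration: the expectile regression used to identify safe regions, and the difference between the summation-style target quoted from Eq. (7) and the max-based target of a feasible value function. The snippet is a generic sketch (function and argument names are illustrative, not the paper's), written the way such pieces commonly appear in offline-RL implementations.

```python
import numpy as np

def expectile_loss(pred, target, tau=0.9):
    """Asymmetric squared loss; tau > 0.5 biases the estimate toward an upper expectile."""
    diff = target - pred
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return float(np.mean(weight * diff ** 2))

def q_target_sum(c, v_c, alpha):
    """Summation-style target the reviewer quotes from Eq. (7): (1-a)c + a(c + V_c)."""
    return (1 - alpha) * c + alpha * (c + v_c)

def q_target_feasible(h, v_h, alpha):
    """Max-based target of the feasible value function, shown for contrast."""
    return (1 - alpha) * h + alpha * np.maximum(h, v_h)

c = np.array([0.0, 1.0, 1.0])
v_c = np.array([0.5, 0.2, 2.0])
print(q_target_sum(c, v_c, 0.9), q_target_feasible(c, v_c, 0.9))
```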
NIPS_2017_110 | NIPS_2017 | weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimensions, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos; here are some that caught my eye: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- In Section 2.2, it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient". |
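The classical longitudinal baseline the review above asks for would typically be a linear mixed-effects model fit by (RE)ML. A minimal sketch on synthetic data is shown below; the patient/time/response names and the generated trajectories are placeholders, not the renal cancer data discussed in the review.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: one noisy, roughly linear trajectory per patient.
rng = np.random.default_rng(0)
rows = []
for patient in range(30):
    slope = 1.0 + 0.3 * rng.normal()
    for t in np.linspace(0.0, 5.0, 8):
        rows.append({"patient": patient, "time": t,
                     "y": 2.0 + slope * t + 0.5 * rng.normal()})
df = pd.DataFrame(rows)

# Random-intercept, random-slope model: the standard classical comparator.
model = smf.mixedlm("y ~ time", df, groups=df["patient"], re_formula="~time")
fit = model.fit(reml=True)  # REML estimation, as mentioned in the review
print(fit.summary())
```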
ZhZFUOV5hb | EMNLP_2023 | - The major concern with this paper is the performance of the proposed model. Since one doc id can be matched to multiple documents, the Recall/MRR metric computation is unfair to the baselines. Though the authors compute the expectation of Recall/MRR, the proposed model essentially still looks for documents at more positions than the baselines.
- Some key technical details are missing. When one doc id is associated with multiple documents, how will those documents be ranked? Are they considered equally relevant to the query?
- The generative retrieval baseline on the ADS dataset is weak. From the experiments on the MS MARCO dataset, SEAL is not the best-performing generative retrieval baseline. The authors should report results for a stronger baseline such as Ultron-Atomic. | - The generative retrieval baseline on the ADS dataset is weak. From the experiments on the MS MARCO dataset, SEAL is not the best-performing generative retrieval baseline. The authors should report results for a stronger baseline such as Ultron-Atomic. |
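The expectation mentioned in the first bullet of the sample above can be sketched directly: when one generated doc id expands to several documents, a relevant document's reciprocal rank is averaged over its possible positions inside that block. This is only an illustration of the fairness concern, not the paper's actual evaluation code.

```python
def expected_mrr(ranked_ids, id_to_docs, relevant_doc):
    """Expected reciprocal rank when each doc id maps to several documents that
    are assumed to be ordered uniformly at random within the id's block."""
    position = 0
    for doc_id in ranked_ids:
        docs = id_to_docs[doc_id]
        if relevant_doc in docs:
            n = len(docs)
            # The relevant document is equally likely to occupy any of the n slots.
            return sum(1.0 / (position + k) for k in range(1, n + 1)) / n
        position += len(docs)
    return 0.0

id_to_docs = {"a": ["d1", "d2", "d3"], "b": ["d4"]}
print(expected_mrr(["a", "b"], id_to_docs, "d2"))  # mean of 1/1, 1/2, 1/3 ≈ 0.611
```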
NIPS_2018_328 | NIPS_2018 | Weakness: - This paper's approach proposes multi-layer representation learning via gradient boosted trees, and on top of that, linear regression/softmax regression is placed for supervised learning. But we can do the opposite: representation learning via neural nets, and on top of that, decision trees can be used (Deep Neural Decision Forests by Kontschieder et al., ICCV 2015). Comment: - Basically, I liked the overall ideas. The paper demonstrates that deep learning is not the only option for multi-layer representation learning. It is also nice that the input-output recovery, similar to an autoencoder, also proceeds in a gradient-descent manner, just like gradient boosting itself. Seemingly, it is simpler than stacked trees such as Deep Forests (Zhou and Feng, IJCAI 2017). - Something is wrong in the description of Algorithm 1. If we initialize G_{2:M}^0 <- null, then the first run of "G_j^t <- G_j^{t-1}" for the j=M to 2 loop would become G_M^1 <- G_M^0 = null, and thus the computation of L_j^inv (as well as the residuals r_k) can be problematic. - This is just a comment, and does not need to be reflected in the paper this time, but there are several interesting points to be investigated in the future. First, the presented multi-layer representation is probably far weaker than the ones possible with current neural networks. The presented representation corresponds to the fully connected (FC) architectures of neural networks, and more flexible architectures such as CNNs and RNNs would not be directly possible. Given this point, it is unclear whether we should use trees for multi-layered representation learning. At least, we could use boosted model trees with linear regression at the leaves, for example. As mentioned in the weakness section, 'differentiable forests' such as Deep Neural Decision Forests have already been proposed, and we can use this type of approach with modern neural nets for the 'representation learning / feature learning' parts. Which situation fits which choice would be one interesting open question. Comments after author response: Thank you for the response. It will be nice to have the revisions to the notation for initialization, and some discussion or mention of the different approaches to integrating tree-based and multi-layer representation learning. | - Something is wrong in the description of Algorithm 1. If we initialize G_{2:M}^0 <- null, then the first run of "G_j^t <- G_j^{t-1}" for the j=M to 2 loop would become G_M^1 <- G_M^0 = null, and thus the computation of L_j^inv (as well as the residuals r_k) can be problematic. |
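The open question at the end of the sample above, whether trees are worth using for representation learning at all, is often probed with the well-known trees-as-features pattern: a gradient-boosted model produces leaf indices that a linear classifier consumes. The sketch below is not the paper's multi-layered architecture, only that simpler pattern on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Trees as a feature (representation) learner ...
gbt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0).fit(X_tr, y_tr)
leaves_tr = gbt.apply(X_tr)[:, :, 0]   # leaf index of each sample in each tree
leaves_te = gbt.apply(X_te)[:, :, 0]

# ... with a linear classifier on top of the one-hot leaf encoding.
enc = OneHotEncoder(handle_unknown="ignore").fit(leaves_tr)
clf = LogisticRegression(max_iter=1000).fit(enc.transform(leaves_tr), y_tr)
print("GBDT-feature + linear head accuracy:", clf.score(enc.transform(leaves_te), y_te))
```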
NIPS_2021_1743 | NIPS_2021 | Weakness: 1. While the paper claims the importance of the language modeling capability of pre-trained models, the authors did not conduct experiments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SQuAD in section 5.3 cannot really reflect the capability of language modeling. The authors may consider including tasks like language modeling, machine translation, or text summarization to strengthen this part, as this is one of the main motivations of COCO-LM. 2. The analysis of SCL in section 5.2 regarding few-shot ability is not convincing. The paper claims that a more regularized representation space induced by SCL may result in better generalization in few-shot scenarios. However, the results in Figure 7(c) and (d) do not meet the expectation that COCO-LM achieves much larger improvements with fewer labels and that the improvements gradually disappear with more labels. Besides, the authors may check whether COCO-LM brings benefits to sentence retrieval tasks with the learned anisotropic text representations. 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to that of other approaches, for example, RoBERTa, ELECTRA, and DeBERTa, which are of similar sizes to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works.
Questions for the Authors 1. In the experimental setup, why did the authors switch the type of BPE vocabulary, i.e., uncased vs. cased? Will the change of BPE cause variance in performance? 2. In Table 2, it looks like COCO-LM especially affects the performance on CoLA and RTE, and hence the final performance. Can the authors provide some explanation of how the proposed pre-training tasks affect these two different GLUE tasks? 3. In section 5.1, the authors say that the benefits of the stop gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim? 4. In section 5.2, the term “Data Argumentation” seems wrong. Did the authors mean data augmentation?
Typos 1. Check the term “Argumentation” in lines 164, 252, and 314. 2. Line 283: “a unbalanced task” should be “an unbalanced task”. 3. Line 326: should “contrast pairs” be “contrastive pairs”, to be consistent throughout the paper? | 3. In section 5.1, the authors say that the benefits of the stop gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim? |
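The stop-gradient question in item 3 (repeated in the extracted point) refers to a standard construction in contrastive learning: gradients flow through only one branch of the loss. The snippet below is a generic InfoNCE-style sketch, not COCO-LM's actual SCL implementation; the shapes, names, and temperature value are made up.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_stopgrad(z_anchor, z_positive, temperature=0.1):
    """InfoNCE-style loss in which the positive branch is detached,
    so gradients reach the encoder only through the anchor branch."""
    a = F.normalize(z_anchor, dim=1)
    p = F.normalize(z_positive.detach(), dim=1)   # stop gradient on one side
    logits = a @ p.t() / temperature              # (N, N) similarity matrix
    labels = torch.arange(a.size(0))              # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z1 = torch.randn(8, 64, requires_grad=True)
z2 = torch.randn(8, 64, requires_grad=True)
contrastive_loss_with_stopgrad(z1, z2).backward()
print(z1.grad is not None, z2.grad is None)  # True True: no gradient flows into z2
```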