Columns: paper_id (string, length 10–19) · venue (string, 15 classes) · focused_review (string, length 7–9.67k) · point (string, length 55–634)
ICLR_2022_1067
ICLR_2022
As the term “online” appears in the title of the paper, this setting should be better introduced and motivated. A clear explanation appears in the related work at the end of the continual learning paragraph. However, the term is still not properly defined and seems to be related to the problem of imbalance. What exactly does imbalance mean? In the reviewer's understanding, the term seems to be defined as in [*] and in (Aljundi 2019b), that is, the task distribution is not i.i.d. Given that definition, why should selecting the core-set beforehand provide an advantage? The reviewer understands that the performance is improved; however, the writing states that selecting the core-set beforehand has a dependency on the imbalanced task distribution and that this is an advantage with respect to (Rebuffi et al., 2017; Aljundi et al., 2019b;a; Chaudhry et al., 2019a;b). If this is not the case and the motivation for selecting the core-set before adaptation is the good empirical results, then this should be remarked on more clearly in the paper. [*] Chrysakis, Aristotelis, and Marie-Francine Moens. "Online continual learning from imbalanced data." International Conference on Machine Learning. PMLR, 2020. Figure 2 shows a dataset that is not addressed by the method and is somewhat misleading. The "Multidataset" used in the paper does not include the CIFAR dataset (i.e., complex objects like dogs and vehicles). In the reviewer's opinion, the claim of large improvement, included in the third contribution, seems somewhat bold. For example, in the CIFAR-100 balanced and unbalanced learning settings in Tab. 1, the performance does not differ much from the herding strategy used in iCaRL (i.e., 60.3 vs 60.5, 51.2 vs 51.4). Although the reviewer noticed that on rotated MNIST and Multidataset the improvements are evident, these datasets do not typically “transfer” their performance to larger datasets (i.e., ImageNet) as CIFAR-100 typically does. The validating hypothesis of Fig. 3 (i.e., learning from MNIST to CIFAR10) seems to favor diversity, which is exactly what herding does not do. In herding, examples are selected closer to the class mean of the feature representation of each class; herding is somewhat orthogonal to diversity. This may partly explain the lack of improvement on CIFAR-100. This part should be discussed in depth (i.e., motivate that the method does not mostly favor datasets with diversity). In the reviewer's opinion, the ablation study of the effect of the gradient is not sufficient to justify its usage. As also remarked in point 3) of this review, the MNIST-family datasets typically do not “transfer” their performance to more complex and bigger datasets. In other words, using the gradient on MNIST does not imply that the gradient on CIFAR or bigger datasets is a good choice. Is the gradient computationally expensive for deeper neural networks? For example, what happens with one or two more orders of magnitude of parameters?
3) of this review, the MNIST-family datasets typically do not “transfer” their performance to more complex and bigger datasets. In other words, using the gradient on MNIST does not imply that the gradient on CIFAR or bigger datasets is a good choice. Is the gradient computationally expensive for deeper neural networks? For example, what happens with one or two more orders of magnitude of parameters?
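A minimal sketch of the herding-style core-set selection the review contrasts with diversity-seeking selection (greedy choice of examples whose running mean tracks the class mean in feature space, as in iCaRL); names are illustrative and this is not the reviewed paper's method:

```python
import numpy as np

def herding_selection(features, k):
    """Greedy herding for one class: pick k examples whose running mean stays
    closest to the class mean. features: (n, d) array; returns selected indices."""
    mu = features.mean(axis=0)                 # class mean in feature space
    selected, running_sum = [], np.zeros_like(mu)
    for t in range(1, k + 1):
        # distance between the class mean and the mean of the selection if we add each candidate
        gaps = np.linalg.norm(mu - (running_sum + features) / t, axis=1)
        gaps[selected] = np.inf                # never pick the same example twice
        idx = int(np.argmin(gaps))
        selected.append(idx)
        running_sum += features[idx]
    return selected
```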
ICLR_2021_1603
ICLR_2021
Weakness: 1. The authors claimed that compared to Diakonikolas 19, they improved the error from eps to sqrt(eps). However, the eps result relies on the fact that the gradient of good data has bounded norm, and I believe in that setting Diakonikolas 19 also achieves eps error. 2. In paragraphs close to Lemma 1 and Lemma 3, the authors mentioned a randomized filtering algorithm, and proved Lemma 1 and Lemma 3 for that algorithm. However, I can’t find the mentioned randomized filtering algorithm in the paper. 3. Theorem 1 and Theorem 4 have no formal proof. 4. Theorem 2 has no proof. 5. In the experiment section, there is no comparison to other state-of-the-art algorithms, for example Diakonikolas 19. Overall, I think the theoretical result in the paper is incomplete, and the experimental evaluation is insufficient.
1. The authors claimed that compared to Diakonikolas 19, they improved the error from eps to sqrt(eps). However, the eps result relies on the fact that the gradient of good data has bounded norm, and I believe in that setting Diakonikolas 19 also achieves eps error.
NIPS_2018_30
NIPS_2018
- As mentioned in section 3.2, MetaAnchor does not show significant improvement for two-stage anchor-based object detection. - The experimental evaluation is only done for one method and on one dataset. It is not very convincing that MetaAnchor is able to work with most anchor-based object detection systems. Rebuttal Response: Overall, I think this approach has value for one-stage frameworks. The experiments are only on COCO, but performed in an extensive way. It would be great to have more consistently good results on another benchmark dataset such as Open Images. But for this submission as it stands, I am willing to upscale my score from 4 to 6.
- As mentioned in section 3.2, MetaAnchor does not show significant improvement for two-stage anchor-based object detection.
NIPS_2020_310
NIPS_2020
The main weakness of the paper is its lack of focus, which is most evident in empirical evaluations and theoretical results that don’t seem relevant to the main ideas of the paper. I don’t think this is because the empirical and theoretical results are not relevant, but because the paper emphasizes the wrong aspects of these results. To reiterate, the main idea of the paper is that the representations learned when minimizing the InfoNCE loss may be useful for continual learning in cases where the environment dynamics don’t change too much but the reward function does. A secondary idea is the addition of the action information to the InfoNCE loss.
About the last experiment in the procgen environment (Section 6.4), the section reads as an attempt to demonstrate that the main algorithm (DRIML) is the best. Not only is this not true, because nothing conclusive can be said with so few runs, but it obfuscates more interesting findings and relevant information. - First, it would be useful to provide some relevant information about why these evaluations were performed in the procgen environment. This choice is important for the main hypothesis because procgen environments are procedurally generated. Hence, if we hypothesize that DRIML will learn a robust representation that captures the environment dynamics and will be better suited to overcome the random variations in the environment, then we would expect DRIML to perform better than other models that are not explicitly designed this way, such as C51. This is indeed what happens, but the text does not emphasize what the main hypothesis is and why this environment is relevant. - Second, there are some interesting findings that are not emphasized enough in Table 1. The impact of the action information on the performance of DRIML is striking. In some environments such as jumper, the performance almost tripled. Additionally, it is possible that the advantage that DRIML has over CURL is due to the action information. Here, it would be good to emphasize this fact and leave it for future work to investigate whether CURL would benefit from including the action information in its architecture. These two additions would make the argument stronger because, instead of a simple comparison to determine which algorithm is best, the emphasis would be on the two main ideas of the paper that motivate the DRIML agent.
About the first and second experiments (Sections 6.1 and 6.2), these sections are great for demonstrating that DRIML is indeed working as intended. However, it is often difficult to tell what the main takeaway from each experiment is because the writing doesn’t emphasize the appropriate parts of the experiments. - In Section 6.1, it seems that the wrong plots are referenced in Lines 217 and 218. The paragraph references Figure 2b and 2c, but it should be referencing 2a and 2b. Moreover, it would be useful to have more details about these two plots: what are the x and y axes, what would we expect to see if DRIML was working as intended, and why do the plots have different scales? For Figure 2c, it is not clear why it is included. It seems to be there to justify the choice of alpha = 0.499; if this is the case, it should be explicitly stated. Figure 2d is never referenced and it’s not clear what the purpose of this figure is, so it should be omitted. - In Section 6.2, it isn’t clear what architecture is used in the experiment and how the DIM similarity is computed. An easy fix for this is to move most of the information about the Ising model from the main text to the appendix (Section 8.6.1) and move the information about the architecture to the main text. In fact, the appendix motivates this experiment fairly well in Lines 511 to 513: “If one examines any subset of nodes outside of [a patch], then the information conserved across timesteps would be close to 0, due to observations being independent in time.” You can motivate the hypothesis of this experiment based on this statement: if the DIM loss in Equation (6) is measuring mutual information across timesteps, then we would expect its output to have a high measure inside of the patches and a low measure outside of the patches. This would make it very clear that the DIM loss is in fact working as intended.
About the theoretical results, the main issue is the organization and the lack of connection between the theoretical results and the main ideas of the paper. - In terms of organization, it seems odd that Theorem 3.1 is introduced on page 3 but is not referenced until page 6, after Proposition 1. It would be easier on the reader to have these two results close together. - More importantly, it is not clear what the connection between the theoretical results and the main idea of the paper is. The proposition is used as evidence that the convergence rate of \tilde{v}_t is proportional to the second eigenvalue of the Markov chain induced by the policy. However, I don’t follow the logic used for this argument since the proposition tells us that if \tilde{v}_t and v_t are close, then v_t and v_\infty are also close. Combined with Theorem 3.1, this tells us that v_t will converge to v_\infty in a time proportional to the second eigenvalue of the Markov chain and the error between v_t and \tilde{v}_t, but it says nothing about the convergence rate of \tilde{v}_t to v_t. Even if this were true, it is not clear how this convergence rate relates to the continual learning setting, which is the motivating problem of the paper. One could make a connection by arguing that in environments where the dynamics don’t change but the reward function does, the convergence rate of the InfoNCE loss remains unchanged. However, this is not what is written in the paper. This, in my opinion, is the weakest part of the paper, to the point where the paper would be better off without it, since it is not adding much to the main argument. Perhaps this proof would be better suited for a different paper that specifically focuses on the convergence rate of the InfoNCE loss.
Finally, there are a few architectural choices that are not well motivated. - It is not clear why the algorithm uses 4 different auxiliary losses: local-local, local-global, global-local, and global-global in Equation (7). To justify this choice, it would be useful to have an ablation study that compares the performance of DRIML with and without each of these losses. - Second, in Algorithm Box 1, it is not clear why each auxiliary loss is optimized separately instead of optimizing all of them at once. - Third, it’s not clear what architecture is used for the DRIML agent. Line 11 in the abstract mentions that the paper augments the C51 agent, but line 259 says that “all algorithms are trained… with the DQN... architecture,” yet Table 2 in the appendix (Section 8.5) shows hyperparameters that are not part of the DQN or C51 architectures. For example, gradient clipping, n-step returns, and soft target updates (tau in Table 2) are not original hyperparameters of the DQN or C51 architectures. The n-step return is more commonly associated with the Rainbow architecture (Hessel et al., 2018) and the soft target updates correspond to the continuous control agent from Lillicrap et al. (2016). There should be some explanation about these choices and, more importantly, the paper should clarify whether the other baselines also use these modifications. Of particular interest to me is the motivation behind gradient clipping since it is not used in any of the 4 architectures mentioned above; is this essential for the DRIML agent? - Finally, how were all these hyperparameters selected? Neither the main text nor the appendix provides an explanation for these choices of hyperparameter values.
- In terms of organization, it seems odd that Theorem 3.1 is introduced on page 3 but is not referenced until page 6, after Proposition 1. It would be easier on the reader to have these two results close together.
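A minimal, generic sketch of the InfoNCE loss the review keeps referring to (not DRIML's exact objective). The action-information idea discussed in the review could be approximated by making each positive depend on the (state, action) pair at time t; only the plain loss is shown here, with illustrative names:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE: row i of `positives` is the positive for row i of `anchors`;
    all other rows in the batch act as negatives. Shapes: (batch, dim)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                     # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))           # cross-entropy with diagonal targets
```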
NIPS_2022_2367
NIPS_2022
1) Since this method needs a forward-backward training process, does it require more time to train the network? How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]? 2) This paper states that it tries to estimate the transition matrix under the sufficiently scattered assumption, what’s the difference between this assumption and the previous anchor point assumption?
1) Since this method needs a forward-backward training process, does it require more time to train the network? How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]?
NIPS_2022_1719
NIPS_2022
1. The new iteration complexity results for nonlinear SA hold under the smoothness assumption on the fixed point. While the paper has justified it in several applications, it does not improve the existing complexity of SA without this smoothness assumption. 2. The paper could do a better job of discussing and highlighting in the main paper how the new proof techniques improve the existing analysis.
1. The new iteration complexity results for nonlinear SA hold under the smoothness assumption on the fixed point. While the paper has justified it in several applications, it does not improve the existing complexity of SA without this smoothness assumption.
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the vector space. The framework is based on the idea of "attract" and "repel" constraints, where attract constraints are used to pull morphological variations close together (e.g. look/looking) and repel constraints are used to push derivational antonyms apart (e.g. responsible/irresponsible). They test their algorithm on multiple different vector spaces and several languages, and show consistent improvements on intrinsic evaluation (SimLex-999 and SimVerb-3500). They also test on the extrinsic task of dialogue state tracking, and again demonstrate measurable improvements over using morphologically-unaware word embeddings. I think this is a very nice paper. It is a simple and clean way to incorporate linguistic knowledge into distributional models of semantics, and the empirical results are very convincing. I have some questions/comments below, but nothing that I feel should prevent it from being published. - Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets. 2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically. 3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end up with is meaningful? E.g. does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis that suggests the morph-fitting results in a more meaningful space, not just better embeddings.
- Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets.
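A small illustration of the analogy check suggested in point 3) of the review above ("looking" − "look" + "walk" ≈ "walking"), assuming `emb` is a dict mapping words to vectors; the function is illustrative and not tied to the paper's code:

```python
import numpy as np

def analogy(emb, a, b, c, topk=5):
    """Nearest neighbours of vec(b) - vec(a) + vec(c); e.g. analogy(emb, "look",
    "looking", "walk") should rank "walking" highly if inflection is encoded as
    a consistent direction in the morph-fitted space."""
    query = emb[b] - emb[a] + emb[c]
    query = query / np.linalg.norm(query)
    sims = {w: float(v @ query / np.linalg.norm(v))
            for w, v in emb.items() if w not in {a, b, c}}
    return sorted(sims, key=sims.get, reverse=True)[:topk]
```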
ICLR_2022_1789
ICLR_2022
Strengths Clear and thoughtful discussion of the motivation, namely the importance of grounded language learning and continual learning. I enjoyed reading it! Straightforward description of the main technical contribution. Weaknesses As presented, I'm not sure if the presented approach has significant technical novelty. The proposed solution boils down to finding a linear transformation for the frozen CLIP representations. I would argue that this is already present in any system that uses a foundational model, e.g., BERT or CLIP, to produce input representations for a downstream task (see [1] for a broad overview of typical practices). It's common practice for the model to include a fully-connected feed-forward network for transformation of these input embeddings. As presented, I don't think the given approach can properly be called "continual learning". CoLLIE re-purposes the CLIP representations for a specific task, but the CLIP model itself is unchanged, and no transfer-learning or fine-tuning happens on the CLIP weights. In sec. 6 it's mentioned that future work could explore incorporating the "learned transformation function into the base model." I agree that this direction is worth exploring and I think it would yield results that would better fit the description of "continual learning". In a similar vein, it's unclear how this approach could be scaled to multiple tasks, which is a vital part of any continual learning system (see comments below). Comments and clarifying questions sec. 4: Regarding the function A(t) = βt + m, could the authors give a brief explanation of their choice of an affine transformation function here? Was this motivated by some previously observed property of the CLIP embedding space? Why should an affine transform be preferred over any other non-linear transform, e.g. quadratic, affine with a non-linear activation etc.? sec. 4: Can the authors comment on how this approach can be scaled to deal with multiple distributional shifts? What happens if the descriptions for some of the tangrams change? Would you propose creating a second A′ and S′? If so, how would they interact with A and S? sec 5.1: What is the "few shot classifier"? If it is a multi-class classifier, then how can it be compared to CoLLIE, which is learning a fundamentally different task, namely a mapping between embedding spaces? sec 5.1: Shouldn't CoLLIE (without scaling) be trained with negative examples in order to give a fair comparison? More exactly, the paper mentions an ablation where the classifier S(.) is removed from the system, so the system only consists of the transformation A(.). Figure 5 is claimed to show that S(.) is necessary to prevent catastrophic forgetting. However, the training for the ablated system only consists of (pseudo-word, image) pairs. So the transformation A does not have any negative examples, which S would normally have access to. sec 6: "stored training examples have a very small footprint": If I understand correctly, these training examples are not stored in memory during prediction, so is it very important that their footprint be small? Suggestions Sec 1: "Unless the model is re-trained entirely from scratch (which is infeasible for large models like CLIP), the updates to the model’s parameters might interfere with previously learned knowledge, resulting in abrupt performance drops." As I understand, this reasoning explains why transfer-learning or fine-tuning on CLIP is undesirable. However, is there reason to believe that catastrophic forgetting will occur for the settings considered in this paper (nonce words, tangram descriptions)? The tasks don't seem too dissimilar from what CLIP sees ordinarily during training. I think it would strengthen your case if Figure 5 or Table 1 had results that show how forgetting occurs when the original embedding model is fine-tuned. I understand that CLIP isn't available for fine-tuning right now, but maybe a conceptually similar multi-modal transformer could be used. Sec 3: Could you report the mean squared error between the transformed vectors T(t) and the target vectors? I ask because it's possible that the similarity ranking for a given embedding may be high relative to the other candidates, but could still be a large distance away from the target. In that vein, do you have a sense of what images your transformed embeddings correspond to? Can images be decoded for them? Sec 5.2: It is claimed that the results in Figure 6 demonstrate that the model can leverage compositional knowledge to generalize. Given that results are averaged over 3000 runs, I am inclined to believe this claim. But it could also be shown explicitly by creating a held-out test set of unseen color+shape combinations. This would increase confidence in the claim that the model exhibits some type of systematic generalization. Citations [1] Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021). [2] Magassouba, Aly, Komei Sugiura, and Hisashi Kawai. "CrossMap Transformer: A Crossmodal Masked Path Transformer Using Double Back-Translation for Vision-and-Language Navigation." arXiv preprint arXiv:2103.00852 (2021).
4: Regarding the function A(t) = βt + m, could the authors give a brief explanation of their choice of an affine transformation function here? Was this motivated by some previously observed property of the CLIP embedding space? Why should an affine transform be preferred over any other non-linear transform, e.g. quadratic, affine with a non-linear activation etc.? sec.
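A minimal sketch of the kind of mapping under discussion: an affine transform A(t) = Bt + m learned on top of frozen embeddings, here fit by ordinary least squares on a few (frozen embedding, target embedding) pairs. The function and variable names are illustrative assumptions, not CoLLIE's actual training procedure:

```python
import numpy as np

def fit_affine(T_src, T_tgt):
    """T_src, T_tgt: (n, d) arrays of frozen and desired embeddings."""
    X = np.hstack([T_src, np.ones((T_src.shape[0], 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(X, T_tgt, rcond=None)          # solves X W ≈ T_tgt
    return W[:-1], W[-1]                                   # B (d x d) and m (d,)

def apply_affine(B, m, t):
    return t @ B + m                                       # A(t) = B t + m
```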
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state of the art in VQA (on the VQA dataset, which has been out for a while and on which a lot of advancement has been made), perhaps because the VQA model used in the paper, on top of which CBN is applied, is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – preferably the ones which are closer to the state of the art (and whose code is publicly available) such as MCB (Fukui et al., EMNLP16) and HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know if the proposed conditional batch norm can advance the state of the art in VQA or not. 2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variations of ResNets) can lead to. 3. In Table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on the test-standard split because one can overfit to the test-dev split by uploading multiple entries. 4. Table 2: applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could the authors please shed some light on this? Why do they think this might be happening? 5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a). 6. The first two bullets about contributions (at the end of the intro) can be combined together. 7. Other errors/typos: a. L14 and 15: repetition of the word “imagine” b. L42: missing reference c. L56: impact -> impacts Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (the results have not been reported on a state-of-the-art VQA model). So, I would like to recommend acceptance of the paper. However, I still have a few comments -- 1. It seems that there is still some confusion about the test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on the test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf). 2. The reproduced performance reported on the MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in the MCB paper). 3. The citation for the MRN model (in the rebuttal) is incorrect. It should be --
@inproceedings{kim2016multimodal,
  title={Multimodal residual learning for visual qa},
  author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak},
  booktitle={Advances in Neural Information Processing Systems},
  pages={361--369},
  year={2016}
}
4. Like AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a).
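A compact sketch of the conditional batch normalization idea discussed in the review above: a conditioning vector (e.g. a question embedding) predicts offsets to the BN scale and shift. This is a generic FiLM-style rendering with illustrative names, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch norm whose scale/shift are modulated by a conditioning vector,
    starting from the identity modulation."""
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.delta_gamma = nn.Linear(cond_dim, num_channels)
        self.delta_beta = nn.Linear(cond_dim, num_channels)

    def forward(self, x, cond):
        h = self.bn(x)                            # (N, C, H, W), no learned affine
        gamma = 1.0 + self.delta_gamma(cond)      # (N, C) predicted scale
        beta = self.delta_beta(cond)              # (N, C) predicted shift
        return gamma[:, :, None, None] * h + beta[:, :, None, None]
```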
NIPS_2018_902
NIPS_2018
Weakness 1) The technical contribution is limited. Throughout this work, there is no insightful technical contribution in terms of either algorithm or framework. 2) The experimental setting is problematic. There are no evaluation criteria. There are no compared methods. So we cannot assess the superiority of the proposed method compared with the existing related methods.
1) The technical contribution is limited. Throughout this work, there is no insightful technical contribution in terms of either algorithm or framework.
1N5Ia3KLX8
EMNLP_2023
- Overall, it is difficult to understand how the whole framework actually works (lack of a pseudocode or a schema). After pretraining, a GMM is estimated (with Expectation-Maximization? — it is not specified). Then the probabilities from the GMM are used in a loss function to learn thresholds ξ. Later, somehow a cross-entropy loss is applied on class probabilities which are computed based on the GMM with hard thresholds (Eq. 4) to update the model's parameters. How are the gradients propagated through a loss function with probabilities computed using hard thresholds (for the Universum class) and a GMM (trained by EM?) to finally reach the classification model? - Experiments. - The authors compare their approach by extending several models with it. However, apart from the baseline, only one other method for outlier exposure (which is unsuitable for universum classes according to the authors) is taken into account. Other related approaches are not studied. - The authors performed some statistical tests, but they do not specify which one. Were the test assumptions properly checked? - The authors formulate a research question "does COOLU provide a more reasonable way of learning representations and classifiers". What does a "reasonable way" of learning classifiers actually mean? This statement is very imprecise. - Lack of some ablation studies, e.g. how well the method works without N-pair loss pretraining? - Some experimental details are not clear. E.g. How was the number of GMM components selected? What is the time complexity of this approach? (How does the training time of the new method compare with the training time of the baseline?) - Motivation The authors claim a lack of representativeness of the Universum class in the training data. However, in most datasets used in the experimental study such a class is quite large, hence it has abundant representation in the data. That would mean that basically all the classes are not represented well enough in the training data. The authors probably have in mind the fact that the Universum class is an aggregation of "all possible implicit classes", but still this claim about lack of representativeness is only supported by a visualization (which is not based on any theoretical model or real data).
- Lack of some ablation studies, e.g. how well the method works without N-pair loss pretraining?
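A hedged sketch of the pipeline as the reviewer reads it: fit a GMM with EM on pretrained features, then compare the mixture density against a threshold ξ to flag Universum examples. All names and the thresholding rule here are illustrative guesses, not the paper's specification:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def universum_flags(features, n_components=4, xi=0.05):
    """features: (n, d) array of pretrained representations."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(features)  # EM fit
    densities = np.exp(gmm.score_samples(features))    # likelihood under the mixture
    return densities, densities < xi                   # low-density points flagged as Universum
```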
ICLR_2023_2048
ICLR_2023
1. Although the authors mentioned that they proposed a robust DCRRNN model to achieve the goal of accurate streamflow prediction, the detailed model architecture is not explained in the article. It should be clarified in the methodology section. 2. The authors only use a set of quantitative analyses to show the effectiveness of the proposed model, but they did not compare their model with state-of-the-art methods such as EA-LSTM, PGRNN, etc. 3. The authors did not show experimental results under limited observation data to further justify the robustness of the proposed model.
1. Although the authors mentioned that they proposed a robust DCRRNN model to achieve the goal of accurate streamflow prediction, the detailed model architecture is not explained in the article. It should be clarified in the methodology section.
ICLR_2021_1785
ICLR_2021
The paper is not clearly written, making it hard to follow. There are two main reasons. First, the motivation of the directed spanning set (DSS) is not clear. When Wx=y has multiple solutions or Wx<0 has solutions, ReLU(Wx) will preclude injectivity, where y>=0 is a constant. Are the two cases related to DSS? Second, the notations are confusing. For example, W is a matrix and is also a set; w is a row vector, while z and b are column vectors; I is an identity matrix and is also an inference network. The main theoretical results (i.e., Corollary 2 and Theorem 2) are highly related to (Lei et al., 2019). However, the paper did not compare with (Lei et al., 2019). What is the advantage of the proposed results over those in (Lei et al., 2019)? It seems that the result in (Lei et al., 2019) (i.e., m>=2.1n) is tighter than the presented one (m>=5.7n). Injectivity for Gaussian weights should be tested by comparing with gradient descent and (Lei et al., 2019). Also, how can Corollary 2 and Theorem 2 be used to construct W in practice? It would be better to test on some real problems, e.g., denoising. Minor comments: 1. In the first paragraph of subsection 2.3, ReLU(x)-ReLU(y) should be ReLU(Wx)-ReLU(Wy). 2. What do C(W) and S(x,W) mean in Theorem 3?
1. In the first paragraph of subsection 2.3, ReLU(x)-ReLU(y) should be ReLU(Wx)-ReLU(Wy).
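A rough empirical probe, under stated assumptions, of the kind of spanning condition behind the DSS discussion and the m ≥ c·n thresholds mentioned in the review above: for a Gaussian W we check how often the rows active at a random x (those with w·x > 0) span R^n. This is only a crude illustration of the expansion-ratio question, not the paper's or (Lei et al., 2019)'s exact injectivity criterion:

```python
import numpy as np

def active_rows_span(n=20, ratio=5.7, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    m = int(ratio * n)
    hits = 0
    for _ in range(trials):
        W = rng.standard_normal((m, n))
        x = rng.standard_normal(n)
        active = W[W @ x > 0]                     # rows not zeroed out by the ReLU at x
        hits += int(np.linalg.matrix_rank(active) == n)
    return hits / trials

print(active_rows_span(ratio=2.1), active_rows_span(ratio=5.7))
```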
NIPS_2019_1089
NIPS_2019
- The paper can be seen as an incremental improvement on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion, including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to, such as “Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019”. - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention that “resulting in a correspondence of our HPFN to an even deeper ConAC”, it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn “flexible and higher-order local and global intercorrelations“? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well, but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs, including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful. - More experimental details such as the neural networks and hyperparameters used should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact” -> “Despite being compact” 2. line 56: “We refer multiway arrays” -> “We refer to multiway arrays” 3. line 158: “HPFN to a even deeper ConAC” -> “HPFN to an even deeper ConAC” 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on… ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization” which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models, with experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines, and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones.
- How can the model be modified to remain useful when there are noisy or missing modalities?
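A toy sketch of the higher-order tensor-product fusion idea the review above is weighing: append a constant 1 to each modality vector so lower-order terms survive, then take repeated outer products. This is purely illustrative of why parameter and feature counts grow quickly with the order; it is not the HPFN layer itself:

```python
import numpy as np

def polynomial_fusion(feats, order=2):
    """feats: list of 1-D modality feature vectors. Returns the flattened
    order-`order` tensor of interactions across all modalities."""
    z = np.concatenate([np.append(f, 1.0) for f in feats])  # joint vector with bias term
    t = z
    for _ in range(order - 1):
        t = np.multiply.outer(t, z)                          # raise the interaction order
    return t.ravel()

# The fused feature dimension is (sum(d_i + 1)) ** order, which explodes with the order.
print(polynomial_fusion([np.ones(3), np.ones(4)], order=2).shape)
```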
7vVWiCrFnd
ICLR_2024
1. The first paragraph is problematic. “implicitly assume that node representations learnt by GNNs are independent conditioned on node features and edges, thereby ignoring the joint dependency among nodes...” does not represent the related works accurately. These works do not ignore dependency among the nodes, which is captured via multiple rounds of message passing similar to loopy belief propagation. I find the first paragraph could be phrased differently to make the distinctions accurate.
2. The introduction of phantom nodes and edges is not a novel development and closely resembles other methods like CIN. However, the problem of inefficiency remains in such methods, i.e., the computational complexity of finding maximal cliques, which can be used for phantom nodes to guarantee the inferential capacity.
3. Certain related works are missing in the paper. Recent works on Factor Graph Neural Networks (FGNN) [1] are highly related in establishing connections between PGMs and GNNs. A small discussion of the relevance would be pertinent.
4. The experimental section could be more elaborate in studying the effectiveness of GNNs in learning higher-order distributions, for example a comparison with Zhen et al. (2023) on inference of higher-order LDPC codes.
**Overall,** I find the paper makes a good contribution to the understanding of GNN expressivity from a new perspective, through the lens of probabilistic graphical models. Although I haven't carefully checked all the proofs, the results are mostly not surprising.
**References:** [1] Zhen Zhang, Mohammed Haroon Dupty, Fan Wu, Javen Qinfeng Shi, Wee Sun Lee. "Factor Graph Neural Networks." Journal of Machine Learning Research 24(181):1-54, 2023.
2. The introduction of phantom nodes and edges is not a novel development and closely resembles other methods like CIN. However, the problem of inefficiency remains in such methods, i.e., the computational complexity of finding maximal cliques, which can be used for phantom nodes to guarantee the inferential capacity.
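A small illustration of the cost concern raised in the point above: enumerating maximal cliques (e.g. as candidate locations for phantom nodes) is worst-case exponential in the graph size. The graph and parameters below are arbitrary, chosen only to show how quickly the count grows on dense graphs:

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=60, p=0.5, seed=0)       # a moderately dense random graph
cliques = list(nx.find_cliques(G))                  # Bron-Kerbosch maximal-clique enumeration
print(len(cliques), max(len(c) for c in cliques))   # count and size of the largest clique
```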
5FXKgOxmb2
ICLR_2025
1. One of the main limitations of fragment-based methods is the absence of synthesizability considerations in the framework design. This significantly limits the applicability of such methods since, to be tested in physical and biological assays, the proposed molecules either require individual and expensive custom synthesis plans or have to be replaced by available analogs, thus drastically under-utilizing the expressivity of the model. It would be interesting to further discuss the limitations of the proposed method w.r.t. synthesizability in the manuscript.
2. On line 371: "We do not report Novelty and Uniqueness, as almost all evaluated models achieve 100% on these metrics." If this claim is made, then the numbers should be presented (at least in the Appendix). The same holds for the mention right after: "For the baselines DiGress, SM-LSTM, and CharVAE, which are not able to achieve 100% Validity, we sample until we obtain 10^4 valid molecules"; it could be informative to include these numbers in the appendix (validity rate for each method).
3. It would be useful to specify a bit more clearly how benchmarking on Guacamol is executed (lines 301-304 and 309-311). Is the model trained on ZINC and then evaluated on some reference set defined by Guacamol (ChEMBL)? Or is the model trained on ChEMBL molecules and compared to a test set also defined in Guacamol? While the provided references are useful, the description of the experiments should be standalone in the paper.
4. In the goal-directed evaluations, it would be interesting to compare MAGNet with methods specifically aimed at goal-directed molecular design, such as RL- and GFlowNet-based methods.
### Elements worth clarifying
1. Figure 1 could be made clearer by further explaining parts a), b) and c) in the caption.
2. The factorisation from graph to scaffold graph, and scaffold graph to molecular graphs, described in Section 3, would be clearer with a supporting figure.
3. The main baselines (PS-VAE, MoLeR and MiCaM) could be described in greater detail.
4. I found the discussion on novel conditioning capabilities very interesting (lines 469-476); however, even with this in mind, it is not clear to me what Figure 5 is showing or how it supports these claims. I think this figure could be improved to better support this discussion.
### Minor comments:
1. Typo on line 155, factorise*
2. Line 150: not sure that App C.6 is the intended link here (or how it relates to the sentence)?
3. Typo on line 302: to evaluate*
4. Error in Figure 3: based on the text and the caption itself, it seems that the graph columns B and C are mixed up in Figure 3.
5. I did not find the caption of Table 1 very natural to read. I would suggest simply specifying in parentheses what underline and bold mean in the table, as opposed to underlining and bolding their description in the caption.
1. One of the main limitations of fragment-based methods is the absence of synthesizability considerations in the framework design. This significantly limits the applicability of such methods since, to be tested in physical and biological assays, the proposed molecules either require individual and expensive custom synthesis plans or have to be replaced by available analogs, thus drastically under-utilizing the expressivity of the model. It would be interesting to further discuss the limitations of the proposed method w.r.t. synthesizability in the manuscript.
XNnFTKCacy
EMNLP_2023
1. The motivation is somewhat unclear to me. Generally, the authors claim that previous efforts cannot handle entity interaction regarding topic coherence and category coherence. The authors could provide more intuitive examples by comparing existing methods, e.g., generative entity linking techniques. 2. The performance improvements are evident, but whether they come from better topic and category coherence is not verified. The visualizations in Figures 4 and 5 only show that the same or similar topics/categories will generate grouped embeddings, and more in-depth analyses are needed from the perspective of coherence. 3. The authors claim their model demonstrates outstanding performance in challenging long-text scenarios. More experiments and discussion on contexts of different lengths would make this claim more convincing.
2. The performance improvements are evident, but whether they come from better topic and category coherence is not verified. The visualizations in Figures 4 and 5 only show that the same or similar topics/categories will generate grouped embeddings, and more in-depth analyses are needed from the perspective of coherence.
NIPS_2017_71
NIPS_2017
- The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper). - Table 4 is incomplete. It should include the results for all four datasets. - In the related work section, the class of binary networks is missing. These networks are also efficient and compact. Example papers are: * XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016 * Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015 Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the paper.
* XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016 * Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015 Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the paper.
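A minimal sketch of the classification-style knowledge distillation loss that the review above says is being carried over to detection (the standard Hinton-style formulation; the paper's detection-specific weighting is not reproduced here):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soften both distributions with temperature T and match them with KL divergence,
    scaled by T^2 so gradient magnitudes stay comparable across temperatures."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```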
NIPS_2022_2646
NIPS_2022
Weakness 1. Overclaims, or several claims that are not evaluated. There are many claims in this paper and I fail to find support for some of them. Thus, this paper could have some overclaims: In L36, how can one verify that MiRe does not have the "data-dependent" spurious correlations? I fear that MiRe is still a data-driven method and it relies heavily on the Mixup operation on the fore/background. In L42, the authors claimed that previous methods require "some distribution of values for an attribute". I would assume MiRe does not. However, the adoption of Grad-CAM indeed assumes that the input data are images with clear foreground and background information. Thus, it is an overclaim. In L44, I fail to find the answer to the question: "how many factors of the latent factors". In L132, why do existing data augmentation approaches lack diversity? Any support? 2. Methodology. Technically, the method is not novel. It is a combination of Grad-CAM, Mixup, and GCN. I admire the application of these existing techniques, but there is no further insight and the motivation is not strong. Plus, there are the following issues: In Figure 3, why crop only 1/8 of the original image? The motivation is lacking. Similarly, in Eq. 4, why adopt cos(f, c_j)/2? Where does the factor 1/2 come from? This also applies to Eq. 6. The second part of the method is too complicated and computationally expensive for multiple domains. Thus I fear the comparison to other methods is not fair. Regarding reproducibility, this approach has introduced many extra hyperparameters to be tuned, to name a few: the hyperparameters in Grad-CAM, the threshold in Eq. 1, 1/8 in Figure 3, 1/2 in Eq. 4 and 6, the Mixup hyperparameter in Figure 3, ξ in Eq. 7, and λ in Eq. 8. Given that there are so many hyperparameters, I highly doubt the reproducibility of this approach. Furthermore, the robustness to the hyperparameters should be reported — not only the accuracy, but also the variance. 3. Experiments. The experiments seem extensive. But after a careful check, I found that there is no justification for comparing with different SOTA results on different datasets (e.g., the comparison methods on PACS and Office-Home are different). Plus, the model selection strategy is also different for each dataset. There is no support or explanation in the paper for why this is done. It is unknown where the authors obtained the results of the comparison methods in the main paper. Then, if you can use DomainBed, why not just use it as the main benchmark? References Overall, the references are good. But you could also cite more recent works from the following DG survey articles: [1] Zhou et al. Domain generalization in vision: a survey. [2] Wang et al. Generalizing to unseen domains: a survey on domain generalization. Minor comments Typos: L26, "has attract". L137, "we propose develop".
6. The second part of the method is too complicated and computationally expensive for multiple domains. Thus I fear the comparison to other methods is not fair. Regarding reproducibility, this approach has introduced many extra hyperparameters to be tuned, to name a few: the hyperparameters in Grad-CAM, the threshold in Eq. 1, 1/8 in Figure 3, 1/2 in Eq. 4 and 6, the Mixup hyperparameter in Figure 3, ξ in Eq. 7, and λ in Eq.
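A hedged sketch of the fore/background Mixup step as the reviewer describes it above: a precomputed Grad-CAM map is thresholded into a foreground mask and the masked foreground of one image is blended onto another image's background. The threshold, blending rule, and names are illustrative assumptions, not MiRe's exact procedure:

```python
import numpy as np

def foreground_mixup(img_fg, img_bg, cam, thresh=0.5, lam=0.8):
    """img_fg, img_bg: (H, W, C) arrays; cam: (H, W) Grad-CAM map scaled to [0, 1]."""
    mask = (cam >= thresh).astype(img_fg.dtype)[..., None]    # hard foreground mask
    return lam * mask * img_fg + (1.0 - lam * mask) * img_bg  # background kept where mask is 0
```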
SnFmGmKTn1
EMNLP_2023
- The work is partially motivated by the challenges faced by triple-based approaches when dealing with long-tail entities. However, the manuscript does not highlight any particular considerations for long-tail triples, and I understand that any performance improvements for such cases (presented in Section 4.4) can be mostly attributed to ChatGPT (i.e. assuming that long-tail entities would lack suitable demonstration examples in their corresponding supplement pool). - Some details about how the experiments are conducted are missing (please refer to Questions A and B below). - RotatE is used as the retriever in the proposed architecture. However, this choice is not sufficiently justified in the manuscript. I believe it would be interesting to see the performance of the end system using at least one more KG embedding model.
- The work is partially motivated by the challenges faced by triple-based approaches when dealing with long-tail entities. However, the manuscript does not highlight any particular considerations for long-tail triples, and I understand that any performance improvements for such cases (presented in Section 4.4) can be mostly attributed to ChatGPT (i.e. assuming that long-tail entities would lack suitable demonstration examples in their corresponding supplement pool).
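For context on the retriever choice questioned in the review above, a generic sketch of the RotatE scoring function (the relation acts as an element-wise rotation in the complex plane); this is the standard formulation, not the paper's retrieval pipeline, and the margin value is arbitrary:

```python
import numpy as np

def rotate_score(head, relation_phase, tail, margin=9.0):
    """head, tail: complex-valued embedding vectors; relation_phase: real phase vector.
    Higher score = more plausible triple: margin - || head * e^{i*phase} - tail ||."""
    rotation = np.exp(1j * relation_phase)          # unit-modulus complex rotation
    return margin - np.linalg.norm(head * rotation - tail)
```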
ICLR_2022_2323
ICLR_2022
Weakness: 1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections: (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (iii) the connection of the paper to (Yao et al., 2021). (i) In the paper, the authors discussed connection (i). However, the experiments shown in Figure 2 do not actually use the MUS algorithm of (Zhang & Li, 2020) to solve (1); instead the algorithm is used to solve the missing-entries case. This seems to be an unfair comparison as MUS algorithms are not designed to handle missing entries. Did the authors run matrix completion prior to applying the algorithm of (Zhang & Li, 2020)? Also, the algorithm of (Zhang & Li, 2020) is expected to fail in the case of dense permutation. (ii) Similar to (i), the methods for unlabeled sensing (US) can also be applied to solve (1), using one column of B_0 at a time. There is an obvious advantage because some of the US methods can handle arbitrary permutations (sparse or dense), and they are immune to initialization. In fact, these methods were used in (Yao et al., 2021) for solving more general versions of (1) where each column of B has undergone arbitrary and usually different permutations; moreover, this can be applied to the d-correspondence problem of the paper. I kindly wish the authors would consider incorporating discussions and reviews of those methods. (iii) Finally, the review of (Yao et al., 2021) is not very accurate. The framework of (Yao et al., 2021), when applied to (1), means that the subspace that contains the columns of A and B is given (when generating synthetic data the authors assume that A and B come from the same subspace). Thus the first subspace-estimation step in the pipeline of (Yao et al., 2021) is automatically done; the subspace is just the column space of A. As a result, the method of (Yao et al., 2021) can handle the situation where the rows of B are densely shuffled, as discussed above in (ii). Also, (Yao et al., 2021) did not consider only "a single unknown correspondence". In fact, (Yao et al., 2021) does not utilize the prior knowledge that each column of B is permuted by the same permutation (which is the case of (1)); instead it assumes every column of B is arbitrarily shuffled. Thus it is a more general setting than (1) and than the d-correspondence problem. Finally, (Yao et al., 2021) discusses theoretical aspects of (1) with missing entries, while an algorithm for this was missing until the present work. 2. In several places the claims of the paper are not very rigorous. For example, (i) Problem (15) can be solved via linear assignment algorithms to global optimality, so why do the authors claim that "it is likely to fall into an undesirable local solution"? Also I did not find a comparison of the proposed approach with linear assignment algorithms. (ii) Problem (16) seems to be "strictly convex", not "strongly convex". Its Hessian has positive eigenvalues everywhere but the minimum eigenvalue is not lower bounded by some positive constant. This is my feeling though, as in the situation of logistic regression; please verify this. (iii) The Sinkhorn algorithm seems to use O(n^2) time per iteration, as in (17) there is a term C(hat{M_B}), which needs O(n^2) time to be computed. Experiments show that the algorithm needs > 1000 iterations to converge. Hence, in the regime where n << 1000 the algorithm might take much more time than O(n^2) (this is the regime considered in the experiments). Also I did not see any report on running times. Thus I feel uncomfortable seeing the authors claim in Section 5 that "we propose a highly efficient algorithm". 3. Even though an error bound is derived in Theorem 1 for the nuclear norm minimization problem, there is no guarantee of success for the alternating minimization proposal. Moreover, the algorithm requires several parameters to tune, and is sensitive to initialization. As a result, the algorithm has very large variance, as shown in Figure 3 and Table 1. Questions: 1. In (3), the last terms r + H(pi_P) and C(pi_P) are very interesting. Could you provide some intuition for how they show up, and in particular give an example? 2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close? 3. Solving the nuclear norm minimization program seems to be NP-hard as it involves optimization over permutation matrices and a complicated objective. Is there any hardness result for this problem? Suggestions: The following experiments might be useful. 1. Sensitivity to permutation sparsity: As shown in the literature of unlabeled sensing, the alternating minimization of (Abid et al., 2017) works well if the data are sparsely permuted. This might also apply to the proposed alternating minimization algorithm here. 2. Sensitivity to initialization: One could present the performance as a function of the distance of the initialization M^0 to the ground truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 so that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases. 3. Sensitivity to other hyper-parameters. Minor comments on language usage (for example): 1. "we typically considers" above (7). 2. "two permutation" above Theorem 1. 3. "until converge" above (14). 4. ...... Please proofread the paper and fix all language problems.
2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close?
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the vector space. The framework is based on the idea of "attract" and "repel" constraints, where attract constraints are used to pull morphological variations close together (e.g. look/looking) and repel constraints are used to push derivational antonyms apart (e.g. responsible/irresponsible). They test their algorithm on multiple different vector spaces and several languages, and show consistent improvements on intrinsic evaluation (SimLex-999 and SimVerb-3500). They also test on the extrinsic task of dialogue state tracking, and again demonstrate measurable improvements over using morphologically-unaware word embeddings. I think this is a very nice paper. It is a simple and clean way to incorporate linguistic knowledge into distributional models of semantics, and the empirical results are very convincing. I have some questions/comments below, but nothing that I feel should prevent it from being published. - Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets. 2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically. 3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end up with is meaningful? E.g. does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis that suggests that morph-fitting results in a more meaningful space, not just better embeddings.
2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically.
ICLR_2021_853
ICLR_2021
are listed as follows: Strengths: 1). The authors propose a simple but efficient indicator-free method to prevent skip connection from dominating the superNet. They also demonstrate the effectiveness of auxiliary branch from the view of gradient flow and the convergence of network weight. 2). Extensive experiments on multiple search spaces and datasets show the effectiveness of the method. 3). The method can combine with DARTS variants to further improve the performance. Weaknesses: 1). I am sort of concerned that the auxiliary skip connection may suppress the weight of the original skip connection, and I hope the authors will have further analysis. 2). I wonder what is the meaning of “DARTS-” since the method actually adds an auxiliary skip connection instead of removing any connections or operations. I strongly suggest the authors change the name to “regDARTS” (regularized), “gradDARTS” (graduated), etc. [1] Zhou, Pan, et al. "Theory-inspired path-regularized differential network architecture search." arXiv preprint arXiv:2006.16537 (2020).
1). The authors propose a simple but efficient indicator-free method to prevent skip connection from dominating the superNet. They also demonstrate the effectiveness of auxiliary branch from the view of gradient flow and the convergence of network weight.
ICLR_2022_2726
ICLR_2022
1. One of the key points in SpaceMAP is to define the similarity based on EEDs, as given in (8) and (10). However, it is not explained why the EED used in (8) is inverted. If α_t and β_t are factors from the ambient space to the intrinsic space, then the EED in (8) should be α_t R_ij β_t, as given in the paragraph after Example 3.1. 2. I find the definition of d_global and d_local not clear. In the second to last paragraph in section 1, d_global is the dimension of the manifold, and d_local is the dimension of a local neighborhood. However, for a manifold its dimension is defined as the dimension of an open subset that a neighborhood around each point on the manifold is homeomorphic to, so under such common definition d_global and d_local seem the same. 3. The MLE of intrinsic dimension in Theorem 3.1 and Lemma 3.2 is not defined or formally introduced. Also, the assumption in Theorem 3.1 is not clear enough. Does R ≤ R_ij mean that R is required to be smaller than R_ij for all j = 1, …, k? 4. Section 3.2.2 seems very unclear, because it tries to explain distortion of similarity functions without defining the similarity functions. In Figure 2(a), f, \tilde{f}_{d→D'}, and \tilde{F}_{D→d} are never defined, and P_ij, Q_ij, and the near field are only defined in the next section. 5. The sentence after equation (7) is not finished. 6. What does the subscript t in equation (8) denote? The so-called FD factors σ_{t,i} in (8) and σ in (10) are not defined.
2. I find the definition of d_global and d_local not clear. In the second to last paragraph in section 1, d_global is the dimension of the manifold, and d_local is the dimension of a local neighborhood. However, for a manifold its dimension is defined as the dimension of an open subset that a neighborhood around each point on the manifold is homeomorphic to, so under such common definition d_global and d_local seem the same.
NIPS_2017_65
NIPS_2017
1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification 2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail Detailed comments below: Methods and Evaluation: The proposed objective is interesting and utilizes ideas from two well studied lines of research, namely, privileged learning and distribution matching, to build classifiers that can incorporate multiple notions of fairness. The authors also demonstrate how some of the existing methods for learning fair classifiers are special cases of their framework. It would have been good to discuss the goal of each of the terms in the objective in more detail in Section 3.3. The part that is probably the weakest in the entire discussion of the approach is the discussion of the optimization procedure. The authors state that there are different ways to optimize the multi-objective optimization problem they formulate without clearly mentioning which procedure they employ and why (in Section 3). There seems to be some discussion about this in the experiments section (first paragraph) and I think what was done is that the objective was first converted into an unconstrained optimization problem and then an optimal solution from the Pareto set was found using BFGS. This discussion is still quite rudimentary and it would be good to explain the pros and cons of this procedure w.r.t. other possible optimization procedures that could have been employed to optimize the objective. The baselines used to compare the proposed approach and the evaluation in general seem a bit weak to me. Ideally, it would be good to employ baselines that learn fair classifiers based on different notions (e.g., Hardt et al. and Zafar et al.) and compare how well the proposed approach performs on each notion of fairness in comparison with the corresponding baseline that is designed to optimize for that notion. Furthermore, I am curious as to why k-fold cross validation was not used in generating the results. Also, was the split between train and test set done randomly? And, why are the proportions of train and test different for different datasets? Clarity of Presentation: The presentation is clear in general and the paper is readable. However, there are certain cases where the writing gets a bit choppy. Comments: 1. Lines 145-147 provide the reason behind x*_n being the concatenation of x_n and z_n. This is not very clear. 2. In Section 3.3, it would be good to discuss the goal of including each of the terms in the objective in the text clearly. 3. In Section 4, more details about the choice of train/test splits need to be provided (see above). While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical details.
3. In Section 4, more details about the choice of train/test splits need to be provided (see above). While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical details.
NIPS_2019_1089
NIPS_2019
- The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to such as “Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019” - Since the model has connections to convolutional arithmetic units, then ConvACs can also be a baseline for comparison. Given that you mention that “resulting in a correspondence of our HPFN to an even deeper ConAC”, it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn “flexible and higher-order local and global intercorrelations”? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative Figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful. - More experimental details such as neural networks and hyperparameters used should be included in the appendix.
- Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact” -> “Despite being compact” 2. line 56: “We refer multiway arrays” -> “We refer to multiway arrays” 3. line 158: “HPFN to a even deeper ConAC” -> “HPFN to an even deeper ConAC” 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on… ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization" which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models, with experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones.
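To make the parameter-count questions above concrete, here is a toy example of plain second-order (bilinear) outer-product fusion and the size of the linear head it induces. This is generic fusion with the usual append-1 trick, not the paper's exact pooling, but it shows how the fused dimension (and hence the head's parameters) grows multiplicatively with every extra order or modality:

    import numpy as np

    def bilinear_fuse(x, y):
        # Second-order fusion: outer product of the two modality vectors, flattened.
        # Appending a constant 1 keeps the unimodal (first-order) terms as well.
        x1 = np.concatenate([x, [1.0]])
        y1 = np.concatenate([y, [1.0]])
        return np.outer(x1, y1).ravel()              # dim = (dx + 1) * (dy + 1)

    dx, dy, n_out = 64, 64, 8
    z = bilinear_fuse(np.random.randn(dx), np.random.randn(dy))
    print(z.size, z.size * n_out)                    # 4225 fused dims -> 33800 head weights

Adding a third same-sized modality at third order would already push the fused dimension to roughly 65^3 ≈ 275k, which is why a parameter column in Table 1 matters.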
- The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets.
NIPS_2021_37
NIPS_2021
, * Typos/Comments) Overall, I like and value the research topic and motivation of this paper and lean positive. However, some details are not clear enough. I would update my rating depending on the authors' feedback. The details are as follows. + Interesting and important research problem. This paper focuses on how to obtain disentangled representations for feature-level augmentation. This topic is interesting and important, and will attract much interest from the NeurIPS community. + Good quality of writing and organization. Overall, the writing quality is good and the paper is well organized. It is comfortable to read this paper, although some details are not clear. + Comprehensive experiments. Experiments are conducted on two synthetic datasets (Colored MNIST and Corrupted CIFAR-10) and two real-world datasets (BAR and Biased FFHQ). - Relative difficulty score and generalized cross-entropy (GCE) loss. It is not clear how the relative difficulty score W(x) in Eq. (1) is used in the pipeline. W(x) is not mentioned again in either the overall objective function Eq. (2) or Algorithm 1. Since readers may not be familiar with the generalized cross-entropy (GCE) loss, it is encouraged to briefly introduce the formulation and key points of the GCE loss to make this paper more self-contained. - How bias-conflicting samples and bias-aligned samples are selected. This weakness follows the first one. It seems that the "bias-conflicting" is determined based on the relative difficulty score, but the details are missing. Also, the ablation study on how the "bias-conflicting" is determined, e.g., setting the threshold for the relative difficulty score, is encouraged to be considered and included. - Disentanglement. It is not clear how disentanglement is guaranteed. Although "Broader Impacts and Limitations" stated that "Obtaining fully disentangled latent vectors ... a limitation", it is still important to highlight how the disentanglement is realized and guaranteed without certain bias types. - Inference stage. It is not clear how the inference is conducted during testing. Which encoders/decoders are preserved during the test stage? - Figure 1 is not clear. First, it seems that the two y towards L_CE are the outputs of C_i, but they are illustrated like labels rather than predictions. Second, the illustration of the re-weighting module is not clear. Does it represent Eq. (4)? - Table 4 reported a much lower performance of "swapping" on BAR compared to the other three datasets. Is there any explanation for this, like the difference of datasets? - Sensitivity to hyperparameters. The proposed framework consists of three important hyperparameters, (λ_dis, λ_{swap_b}, λ_swap). It is not clear whether the framework is sensitive to these hyperparameters and how these hyperparameters are determined. * (Suggestion) Illustration of backpropagation. As introduced in Line 167-168, the loss from C_i is not backpropagated to E_b. It would be clearer if this can be added in Figure 1. * Line 280. Is "the first row and column ... respectively" a typo? It is a little confusing for me to understand this. * Typos in Algorithm 1. Are λ_dis and λ_{swap_b} missing in L_dis and L_swap? * Typo in Line 209. Corrputed -> Corrupted. ============================= After rebuttal =================================== After reading the authors' response to my questions and concerns, I would like to vote for acceptance.
The major strengths of this paper are: The research problem, unbiased classification via learning debiased representation, is interesting and would attract the NeurIPS audience's attention. The proposed method is simple but effective. The method is built on top of LfF [12] and further considers (1) intrinsic and bias feature disentanglement and (2) data augmentation by swapping the bias features among training samples. The paper is clearly written and well organized. These strengths and contributions are also pointed out by other colleague reviewers. My main concerns were: Unclear technical details of the GCE loss and the relative difficulty score. This concern was also shared with Reviewer 8Ai1 and iKKw. The authors' response clearly introduced the details and addressed my concern well. Sensitivity to hyper-parameters. The authors' response provided adequate results to show the sensitivity to hyper-parameters. Other details of implementation and analysis of experimental results. The authors' responses clearly answered my questions. Considering both strengths and the weakness, I am happy to accept this paper. The authors have adequately addressed the limitations and potential negative societal impact of their work.
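For self-containedness, and assuming the standard formulation of Zhang & Sabuncu (2018) is the one used here: the generalized cross-entropy loss is L_q(p, y) = (1 - p_y^q) / q with q in (0, 1], where p_y is the softmax probability assigned to the true class. It recovers the usual cross-entropy as q -> 0 and (up to a constant) MAE at q = 1, and its gradient equals p_y^q times the cross-entropy gradient, which is why it emphasizes easy (here, bias-aligned) samples when training the biased branch. A one-line version of what I assume the implementation looks like:

    import torch
    import torch.nn.functional as F

    def gce_loss(logits, targets, q=0.7):
        # (1 - p_y^q) / q, averaged over the batch; reduces to cross-entropy as q -> 0.
        p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
        return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()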
- Table 4 reported a much lower performance of "swapping" on BAR compared to the other three datasets. Is there any explanation for this, like the difference of datasets?
ARR_2022_110_review
ARR_2022
1. My biggest concern is that the analysis doesn't provide insights on WHICH language benefits WHICH other language in a multilingual setup. The only comparison provided is mono-vs-tri-lingual, but we should also compare vs bi-lingual (For example -- comparing En+Hi, En+Te, Hi+Te vs En+Hi+Te). This analysis may point to interesting observations on which language combination is more effective. I understand that the authors want to show us that multi-lingual representations improve performance, but understanding the relationship between different languages in this context of VLN is an equally important aspect -- the experimental setting of this paper is perfectly poised to provide this analysis, but the results are tucked away in the Appendix -- this should be moved to the main paper. 2. [ presentation of results] Table 2 should also include the best result from Shen et al. 2021 (one of the baselines), so that we can compare the finegrained results on each language and metric. 3. There seems to be some discrepancy between the scores for RxR in Table 1 and the scores reported by Shen et al for RxR (see Table 4 in Shen et al.) -- could you elaborate why? 4. Sec 6.3 results are provided inline, but not as a figure/table. It is difficult to understand the improvements easily. I would strongly recommend making this a table, and also providing SOTA methods for R2R and CVDN in that table (not just the baselines). This will tell us how far away the proposed transferred model fares vs fully supervised models on the respective datasets. 5. Sec 6.3: The definition of "transfer" is unclear. Does it mean that only the navigation step is trained on R2R and CVDN while the representation step is used from RxR? Or are both steps retrained (in that case it isn't a transfer). What about zero-shot transfer from RxR to R2R? How does the CLEAR approach fare on zero-shot? 6. Sec N of the appendix includes another baseline (SimCSE, Gao et al) -- but this is missing from the tables in the main paper. Why? 7. Are LSTMs a better choice for the navigation step? Would a transformer/attention mechanism be better suited for learning longer sequences? This architectural choice should be justified. 8. In Related Work (Sec 2), other datasets/methods for V&L with multilinguality are mentioned (VQA, captioning, retrieval...) -- is your method inspired by/related to any of these methods? If not, how is it different? 1. Sec 6.3 is titled "Generalization to other V&L tasks" -- this is misleading since both eval datasets are also VLN datasets. Other tasks has the connotation of other V&L tasks such as VQA, captioning, ... 2. There seems to be a related cross-lingual VLN dataset (English-Chinese) based on R2R -- https://arxiv.org/abs/1910.11301 . You might want to test CLEAR on this benchmark if possible -- I'm not sure how practical it is, so I've not included this under "Weaknesses". At the very least, it would be prudent to include this in related work. 3. Overall, I would encourage the authors to re-think which analysis should go into the main paper and which in the appendix. In my opinion, every study that directly contributed to the 2 claims from the abstract, should be moved to the main paper. The main contribution of this paper seems to be the multi-lingual training -- as such ablation studies about this part of the algorithm are more important that the ablation study about visual features. 
(Table 9 for example) This paper makes a step in a very interesting direction -- however, I would like to see (1) re-organization of the analysis (between main and appendix), (2) focused and precise analysis of multi-lingual training, and (3) more details and exhaustive experiments in the transfer learning setup. My current score of 3.5 reflects these weaknesses + other unclear details that I've mentioned in "Weaknesses" above. In case the paper doesn't go through to your ideal *ACL venue, please address these issues before resubmitting and I'll be happy to continue as a reviewer of this paper.
7. Are LSTMs a better choice for the navigation step? Would a transformer/attention mechanism be better suited for learning longer sequences? This architectural choice should be justified.
NIPS_2017_201
NIPS_2017
++++++++++ Novelty/Significance: The reformulation of the robust regression problem (Eq 6 in the paper) shows that robust regression is reducible to standard k-sparse recovery. Therefore, the proposed CRR algorithm is basically the well-known IHT algorithm (with a modified design matrix), and IHT has been (re)introduced far too many times in the literature to count. The proofs in the appendix seem to be correct, but also mostly follow existing approaches for analyzing IHT (see my comment below). Note that the “subset strong convexity” property (or at least a variation of this property) of random Gaussian matrices seems to have appeared before in the sparse recovery literature; see “A Simple Proof that Random Matrices are Democratic” (2009) by Davenport et al. Couple of questions: - What is \delta in the statement of Lemma 5? - Not entirely clear to me why one would need a 2-stage analysis procedure since the algorithm does not change. Some intuition in the main paper explaining this would be good (and if this two-stage analysis is indeed necessary, then it would add to the novelty of the paper). +++++++++ Update after authors' response +++++++++ Thanks for clarifying some of my questions. I took a closer look at the appendix, and indeed the "fine convergence" analysis of their method is interesting (and quite different from other similar analyses of IHT-style methods). Therefore, I have raised my score.
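For readers less familiar with it, the IHT template being referred to is, schematically, the following (generic k-sparse recovery; this is not the paper's exact CRR update, whose design matrix is modified as described above):

    import numpy as np

    def hard_threshold(x, k):
        # Keep the k largest-magnitude entries and zero out the rest.
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        out[idx] = x[idx]
        return out

    def iht(A, y, k, step=None, iters=200):
        # Iterative hard thresholding for y ~ A x with x assumed k-sparse.
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative gradient step
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = hard_threshold(x + step * A.T @ (y - A @ x), k)
        return x

The two-stage argument discussed above concerns the convergence analysis of such iterations, not the iteration itself, which is why it is the more interesting part.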
- Not entirely clear to me why one would need a 2-stage analysis procedure since the algorithm does not change. Some intuition in the main paper explaining this would be good (and if this two-stage analysis is indeed necessary, then it would add to the novelty of the paper). +++++++++ Update after authors' response +++++++++ Thanks for clarifying some of my questions. I took a closer look at the appendix, and indeed the "fine convergence" analysis of their method is interesting (and quite different from other similar analyses of IHT-style methods). Therefore, I have raised my score.
zpayaLaUhL
EMNLP_2023
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2. - The input for the analysis is limited to only 100 or 200 samples from wikitext-2. It would be desirable to experiment with a larger number of samples or with datasets from various domains. - Findings are interesting, but there is no statement of what the contribution is or of what practical impact it has on the community or in practical use (Question A). - Results contradicting those reported in existing studies (Clark+'19) are observed but not discussed (Question B). - I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as only position embedding, are quite large at layer 8. It is likely that the behavior is not such that it can be discussed to explain the behavior under normal conditions. Word embeddings may be the only prerequisites for the model to work properly rather than playing an important role in certain attention patterns. - Introduction says to analyze "why attention depends on relative position," but I cannot find content that adequately answers this question. - There is no connection or discussion of relative position embedding, which is typically employed in recent Transformer models in place of learnable APE (Question C).
- I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as only position embedding, are quite large at layer 8. It is likely that the behavior is not such that it can be discussed to explain the behavior under normal conditions. Word embeddings may be the only prerequisites for the model to work properly rather than playing an important role in certain attention patterns.
NIPS_2016_9
NIPS_2016
Weakness: The authors do not provide any theoretical understanding of the algorithm. The paper seems to be well written. The proposed algorithm seems to work very well on the experimental setup, using both synthetic and real-world data. The contributions of the paper are enough to be considered for a poster presentation. The following concerns, if addressed properly, could raise it to the level of an oral presentation: 1. The paper does not provide an analysis on what type of data the algorithm works best and on what type of data the algorithm may not work well. 2. The first claimed contribution of the paper is that unlike other existing algorithms, the proposed algorithm does not take as many points or does not need a priori knowledge about the dimensions of subspaces. It would have been better if there were some empirical justification for this. 3. It would be good to show some empirical evidence that the proposed algorithm works better for the Column Subset Selection problem too, as claimed in the third contribution of the paper.
1. The paper does not provide an analysis on what type of data the algorithm works best and on what type of data the algorithm may not work well.
NIPS_2021_537
NIPS_2021
Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches, where the promising architectures are predicted and selected at each iteration and used to update the predictor model for the next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO, which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade off exploitation and exploration rather than using more principled acquisition functions. 2. If we look at the specific components of the approach, they are not novel either. The weak predictors used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS [5]. In fact, the results of the proposed WeakNAS are almost the same as BRP-NAS as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out, through theoretical analyses or extensive experiments, the reasons why the simple greedy selection approach outperforms more principled acquisition functions (if that’s true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper. Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensembling or adding a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search spaces for NAS-Bench-201 and 101 are relatively small and, as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301 [7], the greedy sampling strategy, which lacks a principled exploitation-exploration trade-off, might not work well. 2. Following the above comment, I suggest the authors evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS [2] and NAS-BOWL [4] or predictor-based methods like BRP-NAS [5], which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and show better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS uses weight-sharing to evaluate the architectures queried, which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net.
Thus, the good final performance of WeakNAS on the MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with an evolutionary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithms or random search. A more convincing result would be to perform the proposed method on the DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS [2] or NAS-BOWL [4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Tables 2 and 3 (at least the BO baselines) in plots like Figs. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradict the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101, but in Table 3, WeakNAS cannot reach the best architecture even after 2000 queries. 7. The results in Table 2, which show that linear-/exponential-decay sampling clearly underperforms uniform sampling, confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performance of architectures in the good subregion is rather close. 8. In Table 1, what does the number of predictors mean? To me, it is simply the number of search iterations. Do the authors reuse the weak predictors from previous iterations in later iterations like an ensemble? I understand that given the time constraint, the authors are unlikely to respond to my comments. I hope these comments can help the authors improve the paper in the future. References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for NAS benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020.
The limitation and social impacts are briefly discussed in the conclusion.
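As an aside, the generic predictor-guided loop I refer to in Weakness 1 and Detailed Comment 1 looks roughly as follows (my own schematic with placeholder functions, not the paper's pseudo-code; architectures are assumed to be hashable encodings such as tuples):

    import random

    def predictor_guided_search(pool, evaluate_arch, train_predictor,
                                n_init=10, per_iter=10, n_iters=10, n_explore=1):
        # Query a few random architectures, then alternate between refitting the
        # predictor and querying mostly predicted-best plus a few random candidates.
        history = {a: evaluate_arch(a) for a in random.sample(pool, n_init)}
        for _ in range(n_iters):
            predictor = train_predictor(history)                  # refit on all queries so far
            unseen = [a for a in pool if a not in history]
            ranked = sorted(unseen, key=predictor, reverse=True)  # exploit predicted accuracy
            picks = ranked[:per_iter - n_explore]
            picks += random.sample(ranked[per_iter - n_explore:], n_explore)  # explore
            for a in picks:
                history[a] = evaluate_arch(a)
        return max(history, key=history.get)

Replacing the greedy/random split with an acquisition function over a predictor that outputs uncertainty is exactly the BO-style ablation I ask for in Detailed Comment 1.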
5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS.
ICLR_2021_1189
ICLR_2021
weakness of the paper is its experiments section. 1. Lack of large scale experiments: The models trained in the experiments section are quite small (80 hidden neurons for the MNIST experiments and a single convolutional layer with 40 channels for the SVHN experiments). It would be nice if there were at least some experiments that varied the size of the network and showed a trend indicating that the results from the small-scale experiments will (or will not) extend to larger scale experiments. 2. Need for more robustness benchmarks: It is impressive that the Lipschitz constraints achieved by LBEN appear to be tight. Given this, it would be interesting to see how LBEN’s accuracy-robustness tradeoff compare with other architectures designed to have tight Lipschitz constraints, such as [1]. 3. Possibly limited applicability to more structured layers like convolutions: Although it can be counted as a strength that LBEN can be applied to convnets without much modification, the fact that its performance considerably trails that of MON raises questions about whether the methods presented here are ready to be extended to non-fully connected architectures. 4. Lack of description of how the Lipschitz bounds of the networks are computed: This critique is self-explanatory. Decision: I think this paper is well worthy of acceptance just based on the quality and richness of its theoretical development and analysis of LBEN. I’d encourage the authors to, if possible, strengthen the experimental results in directions including (but certainly not limited to) the ones listed above. Other questions to authors: 1. I was wondering why you didn’t include experiments involving larger neural networks. What are the limitations (if any) that kept you from trying out larger networks? 2. Could you describe how you computed the Lipschitz constant? Given how notoriously difficult it is to compute bounds on the Lipschitz constants of neural networks, I think this section requires more elaboration. Possible typos and minor glitches in writing: 1. Section 3.2, fist paragraph, first sentence: Should the phrase “equilibrium network” be plural? 2. D^{+} used in Condition 1 is used before it’s defined in Condition 2. 3. Just below equation (7): I think there’s a typo in “On the other size, […]”. 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be more clear to introduce \epsilon when (11) is discussed. 5. Section 4.2, in paragraph “Computing an equilibrium”, first sentence: Do you think there’s a grammar error in this sentence? I might also have mis-parsed the sentence. 6. Section 5, second sentence: There are two “the”s in a row. [1] Anil, Cem, James Lucas, and Roger Grosse. "Sorting out lipschitz function approximation." International Conference on Machine Learning. 2019.
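On question 2 above, for context: the kind of generic empirical sanity check I would at least expect is the standard pairwise-ratio lower bound sketched below (my own sketch, not a claim about what the authors did); certifying a tight upper bound is a separate and much harder question, which is exactly why the procedure should be spelled out.

    import itertools
    import torch

    def lipschitz_lower_bound(model, inputs):
        # max over pairs of ||f(x_i) - f(x_j)|| / ||x_i - x_j||; any such ratio
        # is a valid lower bound on the true Lipschitz constant of the model.
        with torch.no_grad():
            outs = model(inputs)
        best = 0.0
        for i, j in itertools.combinations(range(len(inputs)), 2):
            dx = torch.norm((inputs[i] - inputs[j]).flatten())
            if dx > 0:
                best = max(best, (torch.norm((outs[i] - outs[j]).flatten()) / dx).item())
        return best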
2. D^{+} used in Condition 1 is used before it’s defined in Condition 2.
NIPS_2019_104
NIPS_2019
. Despite the great technical material covered in the paper, its choice of organization makes the paper hard to follow; see detailed comments below. In addition, some related and recent literature on regret minimization in RL seem to be missing. Besides, I have some technical comments, which I detail below. 1. Organization: As said earlier, despite solid theoretical content, one may find the paper hard to follow due to the choice of organization. In particular: (i) one may expect that the algorithm StrongEuler to be fully specified in the main text rather than the supplementary; (ii) The proofs in the supplementary are not well-organized, and are hard-to-follow for most readers; and (ii) there is no conclusion section. 2. About Definition 3.1. The optimism should hold “with high probability”, right? Without this, I am not sure if one can guarantee that \bar Q_{k,h} \ge Q*_{k,h} for all x, a, k, and h. Could you explain? 3. Line 218, the statement “all existing optimistic algorithm”: the aforementioned collection of algorithms is not defined precisely. Since you are making a claim about the regret of such algorithm (which seems to be a strong one), you must specify this set clearly. Otherwise, the statement *should* be relaxed. Another vague statement of this type appears in line 212: “… is unavoidable for the sorts of the optimistic algorithms that we typically see in the literature”: again, I may ask to make the statement more precise and specific; otherwise, the claim needs to be accordingly relaxed. Minor comments: a. Theorem 2.4: \epsilon is not introduce. Does it have the same range as in Theorem 2.3? b. Line 31: The logarithmic regret bound in Jaksch et al. is an exception, as it is non-asymptotic (though it depends on the “wrong gap” gap_*). c. Line 48: the term “almost-gap-dependent” does not appear to be precise enough. d. Line 96: what is \tilde S? Shouldn’t it be S? e. Line 107: P(s’|s,a) is not introduced here yet, and later it seems you use p(s’|s,a). f. Line 107: that can be made optimal --> … **uniquely** optimal (Note the uniqueness here is crucial for the proof in [15] based on change-of-measure argument to work). e. Line 897: I couldn’t verify the second inequality \sqrt{a} + \sqrt{b} \precsim \sqrt{a+b}. Could you explain? Typos: Line 20: number states --> number of states Line 39: worse-case --> worst-case Line 93: discounted regret --> undiscounted regret (Isn’t it correct?) Line 95: [8] give … --> [8] gives Line 120: et seq. --> etc. (??) Line 129: a_{a’} --> a_{h’} Line 166-167: sharped --> sharpened Line 169: in right hand side --> in the right-hand side Line 196: we say than … --> we say that … Line 230: In the regret bound, remove extra “}”. Line 269: \sum_{t=1}^K … \omega_{k,h} --> …\omega_{t,h} Line 268: at the end of the sentence, “)” is missing. Line 275 (and elsewhere): Cauchy-Schwartz --> Cauchy-Schwarz Line 438: the second term does not dependent --> the rest of the sentence is missing. Line 502, 590, and elsewhere: author? --> use correct referencing! Line 525: similar to … --> similarly to … Line 634: Then suppose that … --> is it correct to begin the lemma with “then”? Line 656: proven Section … --> proven in Section … Line 668: We can with a crude comparison … --> The sentence does not make sense. Line 705 – Eq. 19: gap_h --> gap_h(x,a) AND “)” is missing Line 703 – iii : “)” is missing. Line 814: similar to --> similarly to Algorithm 1: Input is not provided. 
Algorithm 2, Line 4: rsum --> rsum_k AND the second rsum should perhaps be rsumsq Line 1085: Lemma 6 in [6] in shows --> remove the second “in” -- Updates (after author feedback) -- I have read the other reviews and have gone through authors' response. The response satisfactorily clarified my raised comments. In particular, the authors provided a precise plan to revise the organization of the paper. I therefore increase my score to 8.
. Despite the great technical material covered in the paper, its choice of organization makes the paper hard to follow; see detailed comments below. In addition, some related and recent literature on regret minimization in RL seem to be missing. Besides, I have some technical comments, which I detail below.
ICLR_2021_2838
ICLR_2021
#ERROR!
1. the baselines are too weak as none of them are designed to specifically handle oriented bounding boxes. Even if comparisons against works such as [1]-[4] can’t be made, I would at least want to see comparisons against “sequence-based” models that directly outputs OBBs.
ICLR_2022_999
ICLR_2022
(I'll expand on this part a bit so that the authors can address the issues): Although the analysis via kGLM is novel, the perspective of considering DEQ iterations from the angle of classical optimization problems is not. For instance, the monotone DEQ paper (sort of) already implies this connection; i.e., the existence and uniqueness of the fixed-point representation is guaranteed by the existence and uniqueness of a global optimum in the underlying convex optimization problem. Moreover, both works start from the optimization problem itself, and eventually propose specialized DEQ layer parameterizations (of the form σ(W_1 z* + W_2 T(Y)), where monDEQ uses T(Y) = Y) to reflect this underlying optimization procedure. Representationally, I wonder if the DEQ inspired by kGLM inner optimization has a weaker capacity than the monotone DEQ model. As mentioned above, the discussion on initialization is interesting and something that I don't see implicit model papers discuss a lot. In particular, if we can derive the "underlying optimization problem" a DEQ layer might be computing, then we can use this information to initialize the DEQ network and make it train in a more stable fashion. This is good, but generally impractical for DEQ models outside the scope of this paper. It is much easier to deduce what the weight-tied layer might look like from the optimization than the other way around (e.g., if g_θ is a residual block with normalization). That said, this paper actually has not answered the question it asks at the end of the first paragraph in Sec. 1: "Do commonly implemented DEQs correspond with any optimization problem?" While the paper addresses the problem of convolution by considering the circulant matrix form of the filters, with the formulation in Appendix E, wouldn't the convolutional kernel a) have symmetric weights; and b) be prohibited from performing striding or dilation? Do the authors have a sense of how this constrained convolution affects model performance? Can the authors also confirm that spectral normalization is only used at initialization, and not training?
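For concreteness, the layer form mentioned above, z* = σ(W_1 z* + W_2 T(Y)), computed by naive fixed-point iteration (a schematic only; real DEQ/monDEQ implementations use accelerated solvers and implicit differentiation, and convergence needs a contraction/monotonicity condition on W_1):

    import numpy as np

    def deq_forward(W1, W2, u, sigma=lambda t: np.maximum(t, 0.0),
                    tol=1e-6, max_iter=500):
        # Find z with z = sigma(W1 @ z + W2 @ u) by Picard iteration,
        # where u plays the role of the injected input features T(Y).
        z = np.zeros(W1.shape[0])
        for _ in range(max_iter):
            z_new = sigma(W1 @ z + W2 @ u)
            if np.linalg.norm(z_new - z) < tol:
                return z_new
            z = z_new
        return z

The representational-capacity question above then amounts to asking which constraints on W_1 (a monotone parameterization vs. the kGLM-induced one) shrink the set of fixed-point maps this iteration can express.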
1: "Do commonly implemented DEQs correspond with any optimization problem?" While the paper addresses the problem of convolution by considering the circulant matrix form of the filters, with the formulation in Appendix E, wouldn't the convolutional kernel a) have symmetric weights; and b) be prohibited from performing striding or dilation? Do the authors have a sense of how this constrained convolution affect model performance? Can the authors also confirm that spectral normalization is only used at initialization, and not training?
NIPS_2016_39
NIPS_2016
: One could possibly object that adversarial domain adaptation is not new, and neither are projections into shared and private spaces and orthogonality constraints. However, these are minor points. I still think that the whole package is sufficiently novel even for a high-level conference such as NIPS. I am also wondering where the exact contribution of the private space actually comes from. The training loss related to the task classifier is unlikely to give any higher performance on the target data (by construction due to the orthogonality constraints). Minor remarks: - In equation (5), I think the loss should be HH^T and not H^T H if orthogonality is supposed to be favored and features are rows. - the task loss is called L_task in the text but L_class in Figure 1
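To make the HH^T vs. H^T H remark concrete, a neutral shape check (assuming H_s and H_p are n x d matrices with one sample per row; which variant equation (5) actually intends is for the authors to clarify):

    import numpy as np

    n, d = 32, 16
    Hs, Hp = np.random.randn(n, d), np.random.randn(n, d)

    pen_samples = np.linalg.norm(Hs @ Hp.T, 'fro') ** 2   # n x n: couples per-sample shared/private codes
    pen_dims    = np.linalg.norm(Hs.T @ Hp, 'fro') ** 2   # d x d: couples feature dimensions across samples
    print(pen_samples, pen_dims)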
- In equation (5), I think the loss should be HH^T and not H^T H if orthogonality is supposed to be favored and features are rows.
aURCCzSuhc
EMNLP_2023
1. The paper assumes that the relationship between new and old categories (non-overlapping, subclasses, etc.) is known. However, in reality, a category relationship judgment module is also needed. 2. The theoretically more optimized PLM algorithm is similar in performance to simple cross-annotation and does not show a significant advantage in large data scenarios.
1. The paper assumes that the relationship between new and old categories (non-overlapping, subclasses, etc.) is known. However, in reality, a category relationship judgment module is also needed.
NIPS_2018_888
NIPS_2018
weakness of the paper is that some parts are a bit vague or unclear, and that it contains various typos or formatting issues. I shall provide details in the following list; typos I found will be listed afterwards. 1. In the section about related work, I was wondering whether there is no related literature from the robust scheduling literature. I am not overly familiar with the area, so I cannot give concrete references, but I am aware that robust scheduling is a very large field, so I would be surprised if there were no robust online scheduling problems/algorithms that are connected to the work at hand in one way or another. Could you please shed some light on this aspect (and provide some reference(s) if appropriate)? 2. From the description in the „main results“ section of the Introduction, the precise connection between competitive ratio and robustness or consistency did not become entirely clear to me. Please provide some additional clarification on what is to be compared here – in particular, I also stumbled over why (as stated in several corresponding theorems) one ends up with competitive ratios „at most min{ robustness-ratio, consistency-ratio }“. At first glance, I had thought that if an algorithm does well for good predictions but the robustness bound is bad, that the latter should dominate. As this is apparently not the case, I ask for a brief explanation/clarification, preferably already in the introduction, as to why the minimum of the two values yields a bound on the competitive ratio. 3. In Algorithm 3, please clarify what „chosen“ means exactly. (Is it „drawn uniformly at random from the k (or l, resp.) values q_i (r_i, resp.)“ ? Or „chosen as index yielding maximal q_i/r_i-value“? Or something else?) Also, on a higher level, what makes q_i and r_i probability distributions? 4. On p.5, line 151 (multi-line inequality chain): I could not immediately verify the 3rd equation (in the middle line of the chain) – if it is based on a known formula, I do not recall it (maybe provide a reference?), otherwise please elaborate. 5. Please provide a reference for the statement that „... no algorithm can yield any non-trivial guarantees if preemptions are not allowed“ (lines 187-188, p. 6, 1st paragraph of Section 3). 6. At the very end of Section 3, you omit a proof for lack of space. If possible, please do include the proof in the revision (at least as supplementary material, but maybe it fits into the main paper after all). 7. Beginning of Section 2.2: should it not be „... must incur a competitive ratio of at most $b$.“ (not „at least“) ? The worst case would be x=1, then the ratio would be b/1=b, but in all other cases (x>1), the ratio is either b/x or eventually b/b=1 (as soon as x>b). 8. In Section 2.4, perhaps add „as in Theorems 2.2 and 2.3, respectively.“ to the sentence „Both Algorithms 2 and 3 extend naturally to this setting to yield the same robustness and consistency guarantees.“ ? Finally, here's a (sorry, quite pedantic) list of typos or formatting improvement suggestions for the authors' consideration: -- line 1: „machine-learned“ (hyphen is missing here, but used subsequently) -- l. 13: I think it should be „aimed at tackling“, not „aimed to tackle“ -- l. 18: missing comma between „Here“ and „the effort...“ -- The dollar sign (used in the context of the ski rental problem) somehow looks awkward; actually, could you not simply refrain from using a specific currency? -- Regarding preemptions, I think „resume“ is more common than „restart“ (ll. 54, 186) -- l. 
76: I think it should be „used naively“, not „naively used“ -- Proof of Lemma 2.1, last bullet point: to be precise, it should be „ … x < b+x-y ... “ -- l. 119: comma is missing after „i.e.“ -- Proof of Theorem 2.2: There is a „d“ that should probably be a „b“ (ll. 134 and 136) -- Throughout the paper, you use four different ways to typeset fractions: in-line, \frac, \tfrac, and that other one with a slanted „frac-line“. I would appreciate a bit more consistency in that regard. In particular, please avoid using \frac when setting fractions in-line, as in Theorem 2.2, where it messes up the line spacing. -- l. 144 (1st sentence of Sect. 2.3): comma missing after „In this section“ -- p. 5: the second inequality chain violates the textwidth-boundary – can probably easily be fixed be linebreaking before the last equality- rather than the first inequality-sign. Also, „similar to“ should be „similarly to“ (ll. 157 and 159). -- l. 216, comma missing after „i.e.“ -- Lemma 3.2: Statement should start with a „The“ (and avoid \frac in-line, cf. earlier comment) Given the end of the proof, the statement could also be made with the less-coarse bound, or writing „less than“ instead of „at most“. -- l. 234: insert „the“ between „...define d(i,j) as“ and „amount of job“ (perhaps use „portion“ instead of „amount“, too) -- l. 238: missing comma between „predicted“ and „the longer job“ -- l. 243: suggest replacing „essentially“ by „asymptotically“ -- l. 249: comma missing after „Finally“ -- Theorem 3.3: Statement should start with a „The“. -- l. 257: comma missing after „e.g.“ -- l. 276: hyphenate „well-modeled“ -- l. 279: comma missing before „where“ -- Ref. [11]: use math-mode for e/(e-1), and TCP should be upper-case -- Ref. [14]: „P.N.“, not „PN“, for Puttaswamy's initials -- Ref.s [15, 16, 17]: Journal/Source ? Page numbers? -- Generally, the reference list is somewhat inconsistent regarding capitalization (in paper and journal titles) and abbreviations (compare, e.g., [11] Proceedings title with [21] Proceedings title).
18: missing comma between „Here“ and „the effort...“ -- The dollar sign (used in the context of the ski rental problem) somehow looks awkward; actually, could you not simply refrain from using a specific currency? -- Regarding preemptions, I think „resume“ is more common than „restart“ (ll. 54, 186) -- l.
ICLR_2023_3875
ICLR_2023
The approach is rather complicated, involving multiple encoders, deep feature losses, and a final vocoder. This premise mentioned in the abstract is absolutely incorrect: "Existing approaches model speech, potentially of multiple speakers, for denoising. Such approaches have an inherent drawback as a separate model is required for each speaker." Nearly all deep learning-based speech enhancement networks are trained to be speaker independent, and do not use a separate model for different speakers. Adding information about speaker identity can certainly improve performance, but speaker dependence is absolutely not necessary to get speech enhancement networks to work well. I don't agree with this central motivation of this paper: "Further, high variation in standards for intelligibility of speech varies with speaker, language, culture, emotion and perception...which makes generalized modelling of a speech signal challenging. To overcome this, instead of attempting to model speech, we propose to model the noise and estimate noise signal from the noisy audio, which we then utilize to obtain the denoised speech signal." I would argue that "noise", i.e. all non-speech sounds, exhibits much, much more variability compared to speech signals. In fact, speech signals are some of the most structured audio sources there are, because of its characteristic shape in the time-frequency domain (e.g. sparse harmonics from voiced sounds) and predictable structure of a sequence of phonemes. I found the paper title somewhat deceiving: "SPEECH DENOISING BY LISTENING TO NOISE" This title gave me the impression that the method was doing some kind of unsupervised or unpaired training for speech denoising. But the method uses supervised clean speech references. I would suggest adding a bit more detail to the title. Thank you for providing qualitative audio demos. However, I have to say, after listening to them, I don't personally think the proposed method sounds much better than the baseline methods, even after the VF model is applied. As mentioned in the paper, the direct outputs of the method sound very robotic, and applying VF on top of this makes the voices sound hollow and hoarse. I would disagree with the paper's assessment that they are "much more understandable". I wonder if the authors should experiment with alternative architectures. Maybe instead of predicting Mel spectrograms and vocoding these, consider using a masking model in the STFT domain or learnable basis domain? You can use Mel spectrogram as the input to the networks, and have the networks predict masks for the full-resolution STFT. Minor comments and typos Multiple occurences: "melspectrogram" -> "Mel spectrogram" "Audioset" -> "AudioSet" "wiener" -> "Wiener" Citations of classic speech enhancement technique neglects the famous MMSE and logMMSE methods by Ephraim and Malah: Ephraim, Yariv, and David Malah. "Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator." IEEE Transactions on acoustics, speech, and signal processing 32, no. 6 (1984): 1109-1121. Ephraim, Yariv, and David Malah. "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator." IEEE transactions on acoustics, speech, and signal processing 33, no. 2 (1985): 443-445. "having the access clean ground truth audio." -> "having access to clean ground-truth audio." 
"signal to onatin" -> "signal to obtain" I find the {\bf a}_i notation for clean speech to be a bit weird, and potentially confusable with {\bf a}, used to indicate the mixture signal. "We use a Vector Quantized VAE (VQVAE) (?) model": fix missing reference "w. GT Noise,": kind of funny to have such a long space I would suggest bolding the best number within each column in the results tables. You could maybe move the "Denoiser" and "w. GT Noise" rows into Table 2, and thus get rid of Table 1. "show how does our method compare" -> "show how our method compares" "native english speaker" -> "native English speaker" "give the resulsts" -> "give the results" "of Mixed audio" -> "of mixed audio"
6 (1984): 1109-1121. Ephraim, Yariv, and David Malah. "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator." IEEE transactions on acoustics, speech, and signal processing 33, no.
ICLR_2023_3377
ICLR_2023
Weakness: 1) The time complexity of the proposed method is not clearly analyzed. In the experiments, why is the proposed method not verified on a large-scale dataset? 2) In the proposed method, a querying strategy that encourages diversity with k-means++ is proposed. However, the main contributions of this strategy are not clearly presented. 3) In the proposed method, Equation (3) mainly deals with binary classification. How does the proposed method deal with the multi-class classification setting?
3) In the proposed method, Equation (3) mainly deals with binary classification. How does the proposed method deal with the multi-class classification setting?
NIPS_2016_232
NIPS_2016
weakness of the suggested method. 5) The literature contains other improper methods for influence estimation, e.g. 'Discriminative Learning of Infection Models' [WSDM 16], which can probably be modified to handle noisy observations. 6) The authors discuss the misestimation of mu, but as it is the proportion of missing observations - it is not wholly clear how it can be estimated at all. 5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly. 7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption effects performance when the data is missing due to more realistic mechanisms.
7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption effects performance when the data is missing due to more realistic mechanisms.
ICLR_2023_314
ICLR_2023
1. The manuscript does not include the analysis of the computational complexities of the proposed method. 2. Regarding the experiments, I think it would be better to include a table summarizing the data sets, e.g., the input dimension and the number of samples. I also think that it would be better if the previous work, Wang et al. (2020) and Xu et al. (2022), had been included in the comparison, as all the datasets used in the experiments already include the domain indices. For the same reason, I think that the experiments did not really explain what the advantages of using the proposed method in practice would be.
1. The manuscript does not include the analysis of the computational complexities of the proposed method.
NIPS_2020_29
NIPS_2020
* Relevant to a minority of the NeurIPS community * The experimental section - limited to on-line regression problems - is a bit disappointing. The experimental part should have addressed true and realistic contextual bandits problems (e.g. all the concrete examples and cases cited in the second paragraph of the introduction; reinforcement learning problems). * There are some important hyper-parameters in the method. How to choose them as a function of the task and the dataset (except by brute force, as is done in Algorithm 3) remains an open issue. EDIT AFTER Rebuttal: @authors: it would be great if you could include the paragraph in the rebuttal that handles that point in the paper as well.
* Relevant to a minority of the NeurIPS community * The experimental section - limited to on-line regression problems - is a bit disappointing. The experimental part should have addressed true and realistic contextual bandits problems (e.g. all the concrete examples and cases cited in the second paragraph of the introduction; reinforcement learning problems).
NIPS_2022_1948
NIPS_2022
"Shaping representations" and "leverage from the learned representations" is what loss-based approaches (IsoMax, Scaled Cosine, SNGP, DUQ, and IsoMax+) have been doing since 2019. The fact that these loss-based approaches for OOD detection were not even cited may explain why the paper understand "shaping representations" and "leverage from the learned representations" as a significant novelty. Unfortunately, it is clearly not the case. Unfortunately, the paper entirely ignores the last three years of relevant advances in loss-based approaches that are completely reshaping the area after the Mahalanobis inference-based approach faded out. The paper insistently compares against Mahalanabos, a four years old approach that has been outperformed very easily in the last three years. 1. Making representation compatible with the OOD Detection procedure is at least three years old. The Mahalanobis paper is from 2018. At that time, inference-based metric learning approaches for OOD were trending. Since 2019, we have observed that loss-based methods are now mainstream. Therefore, training the model using a loss to make training representation and test/inference distribution compatible, which is the central claim of the paper's novelty, is not novel at all. Unfortunately, the paper lacks to realize that training the model to produce train and inference representations compatible with performing OOD has been the mainstream approach since 2019 and was proposed by IsoMax, Scaled Cosine, DUQ, SNGP, and more recently by DisMax. Kapa appears similar to essentially the entropic scale of IsoMax. 2. Forcing representation to the unit hypersphere is not novel. Forcing representation to the unit hypersphere during training to improve OOD detection was done in Scaled Cosine and IsoMax+, and more recently by DisMax. 3. Very limited novelty. Considering that point #1 above showed that leveraging representations/distributions learned during training to perform OOD detection has already been proposed. Moreover, noticing that point #2 above showed that unit hypersphere representations have also already been proposed, we conclude that the novelty of the proposed approach is very limited. 4. Neither cited nor compared with similar previous approaches. Considering that the paper essentially proposes a loss to train the network to improve OOD detection, we believe that the paper should have mentioned and compared with (at least) IsoMax, Scaled Cosine, DUQ, IsoMax+, and SNGP. Considering that this did not happen, we do not know if using vMF helps in any way. 5. Results for CIFAR10 do not look impressive. Apparently, the results for CIFAR10 are not remarkable compared to the approaches mentioned above (a direct comparison is absolutely mandatory). We would also like to see results for CIFAR100. 6. Results for CIFAR10 need to show classification accuracy. We also would like to see classification accuracy results to evaluate a possible classification accuracy drop, which is very usual in loss-based approaches. 7. We need much more experiment results using CIFAR10. Furthermore, we need CIFAR100 results as well. CIFAR10 and CIFAR100 still are the de facto standard datasets on which major OOD detection approaches have published results. The paper requires much more experimentation using CIFAR10 and CIFAR100 datasets. Classification accuracy need also to be shown and analyzed, as classification accuracy drop is a significant issue when using loss-based approaches for OOD detection. 8. 
Unlike recent approaches, the proposed solutions do not allow regular pure end-to-end backpropagation-based training. 9. The proposed approach presents many more hyperparameters than current state-of-the-art approaches. 10. Results should show means and standard deviations. IsoMax: https://arxiv.org/abs/1908.05569 IsoMax (journal): https://arxiv.org/abs/2006.04005 Scaled Cosine: https://arxiv.org/abs/1905.10628 DUQ: https://arxiv.org/abs/2003.02037 SNGP: https://arxiv.org/abs/2006.10108 IsoMax+: https://arxiv.org/abs/2105.14399 DisMax: https://arxiv.org/abs/2205.05874 After rebuttal: I will keep my score because the differences between the proposed model and the cited training-based approaches are not significant. No direct comparison against the methods we presented was shown. Finally, the majority of the concerns we presented were not covered in the rebuttal. The fact that the paper neither cites nor compares against the previous related work from the last four years is hard to accept. The paper does not properly comment on the approach's limitations. Unlike many other cited approaches, it does not allow pure end-to-end backpropagation training. It should also better explain the hyperparameter validation procedures required for the alpha, beta, and the dimension of the last layer.
3. Very limited novelty. Considering that point #1 above showed that leveraging representations/distributions learned during training to perform OOD detection has already been proposed. Moreover, noticing that point #2 above showed that unit hypersphere representations have also already been proposed, we conclude that the novelty of the proposed approach is very limited.
7oaWthT9EO
ICLR_2025
1. The main contributions in this paper, i.e. discretization and persistent training, are common tricks for training WGANs [1,2], which are not novel enough in practical implementation. For example, as is shown in the proof of Proposition 4.1, persistent training seems equivalent to just increasing the generator's iterations in the original WGAN's training. Thus, please clarify how discretization and persistent training differ from the above existing methods. 2. Obtaining Kantorovich potential is a challenging and important step in ODE-based WGAN's training, but in this paper, it's still the same as the original WGAN, leaving some problems for persistent training as discussed in Remark 4.2, harming the consistency between theory and practice. 3. Experimental validation is not comprehensive enough: the datasets used are too small-scale. As in WGAN and WGAN-GP, more common and large-scale baseline datasets should also be used for verification. For example, how does the proposed method perform on CIFAR-10 or CelebA compared to baseline WGAN methods in terms of FID, IS, and convergence speed? 4. Some mathematical explanations and notations should be modified. For example, there is no definition of $m$ in Eq. (2.4). Besides, a more detailed introduction of $\nabla \frac{\delta J}{\delta m}$ is suggested, since $\nabla \frac{\delta J}{\delta m}$ is very important in the whole method; how to understand this gradient and how to get Eq. (3.2) from Eq. (3.1) should be added in the main paper. [1] Variational Wasserstein gradient flow. ICML 2022. [2] Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport. ICML 2024.
2. Obtaining Kantorovich potential is a challenging and important step in ODE-based WGAN's training, but in this paper, it's still the same as the original WGAN, leaving some problems for persistent training as discussed in Remark 4.2, harming the consistency between theory and practice.
NIPS_2021_2257
NIPS_2021
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods are at comparing to a fully supervised pre-trained network. - The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on \emph{combined} datasets. Can the SSL model make use of different kinds of data? - The authors compared two-crop and multi-crop augmentation in section 4, and observed that multi-crop augmentation yielded better performance. One important missing factor is the (possible) computation overhead of multi-crop strategies. My estimation is that it would increase the computation complexity (i.e., slowing the speed) of training. Therefore, one could argue that if we could train the two-crop baseline for a longer period of time it would yield better performance as well. To make the comparison fair, the computation overhead must be discussed. It can also be seen from Figure 7, for the KNN-MoCo, that the extra positive samples are fed into the network \emph{that takes the back-propagated gradients}. It will drastically increase training complexity as the network not only performs forward passing, but also the backward passing as well. - Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment’s policy is obtained by supervise training on ImageNet. Information leaking is likely. Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object or-scene centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether by combining two datasets SSL model can learn better representations. - In L157 the authors discussed that for transfer learning potentially only low- and mid-level visual features are useful. My intuition is that low- and mid-level features are rather easy to learn. Then how does it explain the model’s transferability increasing when we scale up pre-training datasets? Or the recent success of CLIPs? Is it possible that \emph{only} MoCo learns low- and mid-level features? Minor things that don’t play any role in my ratings. - “i.e.” -> “i.e.,”, “e.g.” -> “e.g.,” - In Eq.1, it’s better to write L_{contrastive}(x) = instead of L_{contrastive}. Also, should the equation be normalized by the number of positives? - L241 setup paragraph is overly complicated for an easy-to-explain procedure. L245/246, the use of x+ and x is very confusing. - It’s better to explain that “nearest neighbor mining” in the intro is to mine nearest neighbor in a moving embedding space in the same dataset. Overall, I like the objective of the paper a lot and I think the paper is trying to answer some important questions in SSL. But I have some reservation to confidently recommend acceptance due to the concerns as written in the “weakness” section, because this is an analysis paper and analysis needs to be rigorous. I’ll be more than happy to increase the score if those concerns are properly addressed in the feedback. The authors didn't discuss the limitations of the study. 
I find no potential negative societal impact.
- The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on \emph{combined} datasets. Can the SSL model make use of different kinds of data?
h57gkDO2Yg
ICLR_2024
Despite the overall positive impression, I see several weaknesses: - In the experiments the authors distill the data sets into 1000-2000 examples, for self-supervised learning, without augmentation. The authors do not comment on augmentations when training on the distilled data. This approach might work for the small models and low resolution used in the experiments, but I’m not convinced that it generalizes to larger models, more complex data sets and higher resolution. Data augmentation is a central component in many SSL methods including Barlow Twins, which the authors use. - Unrelated to data augmentation, I feel it would be necessary to run the algorithm on a less small-scale setup, e.g. on 224x224 ImageNet, and on larger downstream models (ResNet18 or similar) to make a convincing case, in particular given the complexity of the algorithm. I know this requires some compute, but one such experiment would still be necessary in my opinion. - Some baselines might be weak; for example MobileNet and ResNet10 from scratch get < 4% accuracy on Cars. Minor comments: - The abstract might be hard to follow for readers unfamiliar with prior works on dataset distillation.
- Some baselines might be weak; for example MobileNet and ResNet10 from scratch get < 4% accuracy on Cars. Minor comments:
CMMpcs9prj
ICLR_2025
1. The improvement on the theoretical convergence result is not significant. Compared to CEDAS, it seems that the only improvement is removing the need for an additional unbiased compressor. To better illustrate this improvement, the authors are expected to validate whether using contractive compressors is more efficient than using unbiased ones. Otherwise, maybe the authors can compare the full convergence complexity (instead of the asymptotic one only) to address the theoretical improvement. 2. The numerical experiments are not persuasive enough. The compared baselines are Choco-SGD and BEER, which were published in 2022 or earlier, and their convergence rate is clearly worse than SOTA as illustrated in Table 1. In contrast, CEDAS, which seems closer to the SOTA convergence rate, is not compared. Maybe the authors can make the experimental results more solid by adding more baselines like CEDAS and DeepSqueeze.
2. The numerical experiments are not persuasive enough. The compared baselines are Choco-SGD and BEER, which were published in 2022 or earlier, and their convergence rate is clearly worse than SOTA as illustrated in Table 1. In contrast, CEDAS, which seems closer to the SOTA convergence rate, is not compared. Maybe the authors can make the experimental results more solid by adding more baselines like CEDAS and DeepSqueeze.
ICLR_2021_1193
ICLR_2021
(cons) 1. The authors should compare their work with GNNs with non-local operations, e.g., LatentGNN [1]. The paper also studies the limitations of local GNNs (not specifically LUMP) but the resulting model is similar to memory augmented GNNs and it has skip connections and is augmented by convolution in the latent node space. 2. It's an interesting technique to improve the expressive power of GNNs but the augmentation requires significant modifications of the original GNNs. If the technique is applicable to general GNNs in a plug-and-play manner, it would be more useful. 3. Depending on the edge weights, the models may behave differently. The handcrafted edge weights from the truncated diffusion matrix naturally raise the question of whether they are necessary to show the effectiveness of the proposed technique. Question. 1. In the aggregation scheme of MemGCN, $M_v^{(l=1)} = \lambda W^{(l)} m_v^{(l)} \cdots$, '$m_v$' has no nonlinear function, whereas MemGAT has a nonlinear function for WRITE, e.g., $\alpha_{vv}^{(l)} \sigma^{(l)}(m_v^{(l)})$. Is this a typo? Otherwise, provide intuition why MemGCN should not use nonlinear activation functions for messages from memory nodes. 2. Unlike memGCN, memGAT adjusts the effect of messages from memory nodes within the attention mechanism. Is this the main reason why MemGAT significantly underperforms MemGCN in Table 2 even though the vanilla GAT consistently outperforms the vanilla GCN? [1] Zhang, Songyang, Xuming He, and Shipeng Yan. "Latentgnn: Learning efficient non-local relations for visual recognition." International Conference on Machine Learning. 2019. --- Post Rebuttal --- I read the author feedback. The typo in Question 1 is fixed and the issue with the edge weights is addressed. However, the proposed method requires model-specific modifications and is not applicable to other tasks on graphs, e.g., link prediction. Due to the limitations, I will keep the original rating.
3. Depending on the edge weights, the models may behave differently. The handcrafted edge weights from the truncated diffusion matrix naturally raise the question of whether they are necessary to show the effectiveness of the proposed technique. Question.
NIPS_2020_770
NIPS_2020
I'm not sure how readable this is by people unfamiliar with homomorphic encryption. Unfortunately, with the low page limit, it just may be that such material is not possible to present at NeurIPS, and/or is simply outside the scope of the conference. There are a couple of issues I have with the paper: - I didn't see data sizes presented anywhere. How large is the encrypted training data, and how large are the key switching keys? - The machine used is very powerful. What was the memory use of the implementation? Is there a chance to run this on a weaker machine? If not, is it purely an implementation issue of the libraries used? - Can you comment on the machines used to evaluate the prior work, and how those machines may compare to your setup? - You mention that HEAAN supports floating point computations better. Is there a reason it was not used, instead of BGV? - Regarding "Broader impact", I would say that one disadvantage of training on encrypted data is that when data is contributed by multiple sources (through public-key encryption) any kind of model poisoning may be impossible to detect during the training phase. Have you considered such issues?
- You mention that HEAAN supports floating point computations better. Is there a reason it was not used, instead of BGV?
NIPS_2017_330
NIPS_2017
- Section 4 is very tersely written (maybe due to limitations in space) and could have benefitted from a slower development for an easier read. - Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed. In all, a rather thorough paper that derives an efficient way to compute gradients for optimization on LDSs modeled using extended subspaces and kernel-based similarity. On the one hand, this leads to improvements over some competing methods. Yet, at its core, the paper avoids handling the harder topics, including convergence and any analysis of the proposed optimization scheme. Nonetheless, the derivation of the gradient computations is interesting by itself. Hence, my recommendation.
- Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed. In all, a rather thorough paper that derives an efficient way to compute gradients for optimization on LDSs modeled using extended subspaces and kernel-based similarity. On the one hand, this leads to improvements over some competing methods. Yet, at its core, the paper avoids handling the harder topics, including convergence and any analysis of the proposed optimization scheme. Nonetheless, the derivation of the gradient computations is interesting by itself. Hence, my recommendation.
NIPS_2018_756
NIPS_2018
It is difficult to assess the practical impact of the paper. On the one hand, the thermodynamic limit and the Gaussianity assumption may be hard to check in practice and it is not straightforward to extrapolate what happens in the finite dimensional case. The idea of identifying the problem's phase transitions is conceptually clear but it is not explicitly specified in the paper how this can help the practitioner. The paper only compares the AMP approach to alternate least squares without mentioning, for example, positive results obtained in the spectral method literature. Finally, it is not easy to understand if the obtained results only regard the AMP method or generalize to any inference method. Questions: - Is the analysis restricted to the AMP inference? In other words, could a tensor that is hard to infer via the AMP approach be easily identifiable by other methods (or the other way round)? - Are the easy-hard-impossible phases related to conditions on the rank of the tensor? - In the introduction the authors mention the fact that tensor decomposition is in general harder in the symmetric than in the non-symmetric case. How is this connected with recent findings about the `nice' landscape of the objective function associated with the decomposition of symmetric (orthogonal) order-4 tensors [1]? - The Gaussian assumption looks crucial for the analysis and seems to be guaranteed in the limit r << N. Is this a typical situation in practice? Is it always possible to compute the `effective' variance for non-Gaussian outputs? Is there a finite-N expansion that characterizes the departure from Gaussianity in the non-ideal case? - For the thermodynamic limit to hold, should one require N_alpha / N = O(1) for all alpha? - Given an observed tensor, is it possible to determine the particular phase it belongs to? [1] Rong Ge and Tengyu Ma, 2017, On the Optimization Landscape of Tensor Decompositions
- Given an observed tensor, is it possible to determine the particular phase it belongs to? [1] Rong Ge and Tengyu Ma, 2017, On the Optimization Landscape of Tensor Decompositions
0C5C70C3n8
EMNLP_2023
1. The improvement of the proposed method over the baseline in terms of automatic evaluation metrics is not obvious, and further validation of the effectiveness of the proposed method in terms of producing informative summaries is needed. 2. In the manual evaluation, the authors counted the entity hallucinations, syntactic agreement errors, and misspelling errors, respectively. Why not further classify the intrinsic entity hallucinations more finely into the two intrinsic hallucinations proposed to be solved in this paper, i.e. the entity-entity hallucinations and the entity-reference hallucinations, in order to further argue whether the two alignment methods proposed in this paper are effective or not in mitigating the entity hallucinations? 3. For the analyses in Tables 5 and 6, there are several observations on the CNNDM and XSum datasets that are not discussed in detail by the authors.
2. In the manual evaluation, the authors counted the entity hallucinations, syntactic agreement errors, and misspelling errors, respectively. Why not further classify the intrinsic entity hallucinations more finely into the two intrinsic hallucinations proposed to be solved in this paper, i.e. the entity-entity hallucinations and the entity-reference hallucinations, in order to further argue whether the two alignment methods proposed in this paper are effective or not in mitigating the entity hallucinations?
ACL_2017_108_review
ACL_2017
The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present! As for the empirical evaluation, the achieved results are better than some previous approaches but not really by a large margin. I would not really call the slight improvements as "outperformed" as is done in the paper. What is the effect size? Does it really matter to some user that there is some improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered, that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud that should also be able to detect and link entities would have achieved on the datasets. Just to put the results also into contrast of existing "commercial" systems. As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? This in my opinion is the more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only or also detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part. General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more. So, I am only lukewarm about the paper with maybe a slight tendency to rejection. It just seems yet another try without really emphasizing the in my opinion important question of crossing entities. Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one? - e.g.: why in italics? - time linear in n: when n is sentence length, does it really matter whether it is linear or cubic? - spurious structures: in the introduction it is not clear, what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking? - Since they set: who? - pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?! 
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph? - in Figure 2 it was not obvious at first glance why this is a hypergraph. colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one? - denoting ...: sometimes in brackets, sometimes not ... why? - please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing? - on whether the separator defines ...: how determined? - in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here? - comma after Equation 1? - to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that? - nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify? - the combination of states alone do_es_ not? - the simple first order assumption: that is what? - In _the_ previous section - we see that our model: demonstrated? have shown? - used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible? - Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size? - significantly better: in what sense? effect size? - In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one? - due to _an_ insufficient - Table 5: all F_1 scores seems rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall? - converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why?
- In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one?
ICLR_2023_217
ICLR_2023
1) In Figure 1 and Figure 5, what is the representation of the curve radian? 2) The α in Figure 2 is not explained in the main paper. 3) Does the model pre-trained on public data introduce security issues to users? How do we make trade-offs between security and performance?
2) The α in Figure 2 is not explained in the main paper.
aHmNpLlUlb
ICLR_2024
1. The paper is hard to read. It is unclear what is input to new meeting components. 2. It is hard to understand the general idea of the model, and Fig. 2 is completely unclear. 3. General formulas (3) and (4) are unclear. 4. In the experimental section we do not have experiments with the ShapeNet-based dataset (see pixelNeRF).
2. It is hard to understand the general idea of the model, and Fig. 2 is completely unclear.
7wJhlDMNH7
EMNLP_2023
The reasons for the rejection are as follows: 1. The proposed problem in the manuscript is no doubt a great contribution, but the insights from the editing techniques and observations are not primarily clear. 2. An analysis of the performance degradation, for instance, after editing with 10 instances, is not available in the manuscript. 3. An appendix with the implementation details can help with reproducibility. Additionally, the wall-clock time to edit one instance and the GPU consumption details would be a useful add-on, and showing this analysis could also help with the insights. 4. The manuscript discusses a range of editing techniques but shows the comparison of editing on only four techniques. The discussion and an analysis of the locate-then-edit approach for multimodal LLMs can show better insights with comparable observations.
3. An appendix with the implementation details can help with reproducibility. Additionally, the wall-clock time to edit one instance and the GPU consumption details would be a useful add-on, and showing this analysis could also help with the insights.
NIPS_2018_917
NIPS_2018
- Results on bAbI should be taken with a huge grain of salt and only serve as a unit-test. Specifically, since the bAbI corpus is generated from a simple grammar and sentences follow a strict triplet structure, it is not surprising to me that a model extracting three distinct symbol representations from a learned sentence representation (therefore reverse engineering the underlying symbolic nature of the data) would solve bAbI tasks. However, it is highly doubtful this method would perform well on actual natural language sentences. Hence, statements such as "trained [...] on a variety of natural language tasks" are misleading. The authors of the baseline model "recurrent entity networks" [12] have not stopped at bAbI, but also validated their models on more real-world data such as the Children's Book Test (CBT). Given that RENs solve all bAbI tasks and N2Ns solve all but one, it is not clear to me what the proposed method adds to a table other than a small reduction in mean error. Moreover, the N2N baseline in Table 2 is not introduced or referenced in this paper, so I am not sure which system the authors are referring to here. Minor Comments - L11: LSTMs have only achieved SotA on some NLP tasks, whereas traditional methods still prevail on others, so stating they have achieved SotA in NLP is a bit too vague. - L15: Again, too vague, certain RNNs work well for certain natural language reasoning tasks. See for instance the literature on natural language inference and the leaderboard at https://nlp.stanford.edu/projects/snli/ - L16-18: The reinforcement learning / agent analogy seems a bit out-of-place here. I think you generally point to generalization capabilities which I believe are better illustrated by the examples you give later in the paper (from lines 229 to 253). - Eq. 1: This seems like a very specific choice of combining the information from entity representations and their types. Why is this a good choice? Why not keep the concatenation of the kitty/cat outer product and the mary/person outer product? Why is instead the superposition of all bindings a good design choice? - I believe section four could benefit from a small overview figure illustrating the computation graph that is constructed by the method. - Eq. 7: At first, I found it surprising why three distinct relation representations are extracted from the sentence representation, but it became clearer later with the write, move and backling functions. Maybe already mention at this point what the three relation representations are going to be used for. - Eq. 15: s_question has not been introduced before. I imagine it is a sentence encoding of the question and calculated similarly to Eq. 5? - Eq. 20: A bit more details for readers unfamiliar with bAbI or question answering would be good here. "valid words" here means possible answer words for the given story and question, correct? - L192: "glorot initalization" -> "Glorot initialization". Also, there is a reference for that method: Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249-256). - L195: α=0.008, β₁=0.6 and β₂=0.4 look like rather arbitrary choices. Where does the configuration for these hyper-parameters come from? Did you perform a grid search? - L236-244: If I understand it correctly, at test time stories with new entities (Alex etc.) are generated.
How does your model support a growing set of vocabulary words given that MLPs have parameters dependent on the vocabulary size (L188-191) and are fixed at test time? - L265: If exploding gradients are a problem, why don't you perform gradient clipping with a high value for the gradient norm to avoid NaNs appearing? Simply reinitializing the model is quite hacky. - p.9: Recurrent entity networks (RENs) [12] is not just an arXiv paper but has been published at ICLR 2017.
- p.9: Recurrent entity networks (RENs) [12] is not just an arXiv paper but has been published at ICLR 2017.
ICLR_2021_973
ICLR_2021
. Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well. Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates? Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say “worse”. - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn’t clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4.
- Bottom of pg.4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S.
ACL_2017_33_review
ACL_2017
- Very close to distant supervision - Mostly poorly informed baselines General Discussion: This paper presents an extension of the vanilla LSTM model that incorporates sentiment information through regularization. The introduction presents the key claims of the paper: Previous CNN approaches are bad when no phrase-level supervision is present. Phrase-level annotation is expensive. The contribution of this paper is instead a "simple model" using other linguistic resources. The related work section provides a good review of sentiment literature. However, there is no mention of previous attempts at linguistic regularization (e.g., [YOG14]). The explanation of the regularizers in section 4 is rather lengthy and repetitive. The listing on p. 3 could very well be merged with the respective subsection 4.1-4.4. Notation in this section is inconsistent and generally hard to follow. Most notably, p is sometimes used with a subscript and sometimes with a superscript. The parameter \beta is never explicitly mentioned in the text. It is not entirely clear to me what constitutes a "position" t in the terminology of the paper. t is a parameter to the LSTM output, so it seems to be the index of a sentence. Thus, t-1 is the preceding sentence, and p_t is the prediction for this sentence. However, the description of the regularizers talks about preceding words, not sentences, but still uses. My assumption here is that p_t is actually overloaded and may either mean the sentiment of a sentence or a word. However, this should be made clearer in the text. One dangerous issue in this paper is that the authors tread a fine line between regularization and distant supervision in their work. The problem here is that there are many other ways to integrate lexical information from about polarity, negation information, etc. into a model (e.g., by putting the information into the features). The authors compare against a re-run or re-implementation of Teng et al.'s NSCL model. Here, it would be important to know whether the authors used the same lexicons as in their own work. If this is not the case, the comparison is not fair. Also, I do not understand why the authors cannot run NSCL on the MR dataset when they have access to an implementation of the model. Would this not just be a matter of swapping the datasets? The remaining baselines do not appear to be using lexical information, which makes them rather poor. I would very much like to see a vanilla LSTM run where lexical information is simply appended to the word vectors. The authors end the paper with some helpful analysis of the models. These experiments show that the model indeed learns intensification and negation to some extent. In these experiments, it would be interesting to know how the model behaves with out-of-vocabulary words (with respect to the lexicons). Does the model learn beyond memorization, and does generalization happen for words that the model has not seen in training? Minor remark here: the figures and tables are too small to be read in print. The paper is mostly well-written apart from the points noted above. It could benefit from some proofreading as there are some grammatical errors and typos left. In particular, the beginning of the abstract is hard to read. Overall, the paper pursues a reasonable line of research. The largest potential issue I see is a somewhat shaky comparison to related work. This could be fixed by including some stronger baselines in the final model. 
For me, it would be crucial to establish whether comparability is given in the experiments, and I hope that the authors can shed some light on this in their response. [YOG14] http://www.aclweb.org/anthology/P14-1074 -------------- Update after author response Thank you for clarifying the concerns about the experimental setup. NSCL: I do now believe that the comparison is with Teng et al. is fair. LSTM: Good to know that you did this. However, this is a crucial part of the paper. As it stands, the baselines are weak. Marginal improvement is still too vague, better would be an open comparison including a significance test. OOV: I understand how the model is defined, but what is the effect on OOV words? This would make for a much more interesting additional experiment than the current regularization experiments.
- Very close to distant supervision - Mostly poorly informed baselines
NIPS_2016_395
NIPS_2016
- I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work.
5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result.
5vJe8XKFv0
ICLR_2024
I think the proposed method is novel, and that it can give comparable performance. What is not clear from the presentation, in terms of both theoretical justification and/or empirical evidence, is the benefit that it can have over FNO. Please see the questions section for specifics. The writing/overall presentation can be improved. - For example, the caption of Table 1 is not at all descriptive; a comparison is being made without first introducing what the order of the transform is. This table is not necessary in my opinion. - After equation (6), $f$ becomes $F$; small things like these should be fixed. - The description of the several PDEs, while good, takes up a lot of space, which could otherwise be devoted to better explaining and further evaluating the use of complex numbers. (Minor) There are several typos, some of which I will point out below: - Reference to Table 7 is broken - The caption of figure 1 is not coherent with the figure and/or text description of the model. These are important, as readers will get super-confused if these are not in place. - Beginning of section 2.3, sentence is redundant
- Reference to Table 7 is broken - The caption of figure 1 is not coherent with the figure and/or text description of the model. These are important, as readers will get super-confused if these are not in place.
NIPS_2021_386
NIPS_2021
1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS with ImageNet dataset. 2. There is no actual new algorithmic or research contribution in this paper. The paper uses the methods of [Nguyen et al., 2021] directly. The only contribution seems to be running large-scale experiments of the same methods. However, compared to [Nguyen et al., 2021], it seems that there are some qualitative differences in the obtained images as well (lines 173-175). The authors do not clearly explain what these differences are, or why there are any differences at all (since the approach is identical). The only thing reviewer could understand is that this is due to ZCA preprocessing which does not sound like a major contribution. 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021. Update: Please see my comment below. I have increased the score from 3 to 5.
1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS with ImageNet dataset.
NIPS_2018_695
NIPS_2018
Weakness: a) There is no quantitative comparison between AE-NAM and VAE-NAM. It is necessary to answer which one, AE-NAM or VAE-NAM, should be used when one-to-many is not a concern. In other words, does the superiority of VAE-NAM come from the V or the AE? b) It is full of little mistakes and missing references. For example: i. Line 31, and Line 35, mix the use of 1) and ii); ii. Line 51, 54, 300, 301, missing reference; iii. Equation 2): use E() but the context is discussing C(); iv. Line 174: what is mu_y? missing \ in latex? v. Line 104: grammar error? 3. Overall evaluation: a) I think the quality of this paper is marginally above the acceptance line. But there are too many small errors in the paper
2): use E() but the context is discussing C(); iv. Line 174: what is mu_y? missing \ in latex? v. Line 104: grammar error?
8Ezv4kDDee
ICLR_2025
- Limited Practical Impact of Theoretical Formulation: Section 3 introduces a theoretical framework with equations involving KL divergence and mutual information to motivate the role of task descriptions. However, these equations, especially Equation (5), are not integrated into the experiments and thus do not guide the empirical work in any meaningful way. The formulation could be streamlined or more directly connected to the paper’s practical findings. - Restricted Applicability of Mutual Information Calculation (Equation 6): The authors quantify task description information in Equation (6) by defining bounds on task parameters. This works well for their synthetic dataset, where parameters are precisely controlled. However, the method lacks applicability to real-world datasets, where task definitions are more complex and less structured. The authors do not demonstrate how to extend this metric to real datasets like CoFE, which limits the study’s relevance beyond synthetic setups. Neither the theoretical analysis nor the form of the synthetic data makes sense for CoFE. - Limited Generalizability of Synthetic Data to Real-World Tasks: The synthetic data structure, based on simple arithmetic equations, does not represent the complexity found in most real-world datasets. Real tasks, such as those in natural language processing, often require interpreting nuanced instructions rather than solving modular arithmetic problems. Consequently, the insights gained from these synthetic tasks may not fully transfer to more realistic settings. - Unclear Notations and Words: The paper contains many abbreviations and notations that readers are left to guess. For example, no ex, 1 ex, 3 ex in Figure 1; Hq(t) in Equation (5); Not Pred Task in Figure 5.
- Limited Practical Impact of Theoretical Formulation: Section 3 introduces a theoretical framework with equations involving KL divergence and mutual information to motivate the role of task descriptions. However, these equations, especially Equation (5), are not integrated into the experiments and thus do not guide the empirical work in any meaningful way. The formulation could be streamlined or more directly connected to the paper’s practical findings.
ICLR_2022_2531
ICLR_2022
I have several concerns about the clinical utility of this task as well as the evaluation approach. - First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling, especially given the known faithfulness issues with text generation? There are some existing approaches for automatic ECG interpretation. How does this work fit into the existing approaches? A portion of the ECG reports from the PTB-XL dataset are actually automatically generated (See Data Acquisition under https://physionet.org/content/ptb-xl/1.0.1/). Do you filter out those notes during evaluation? How does your method compare to those automatically generated reports? - A major claim in the paper is that RTLP generates more clinically accurate reports than MLM, yet the only analysis in the paper related to this is a qualitative analysis of a single report. A more systematic analysis of the quality of generation would be useful to support the claim made in the appendix. Can you ask clinicians to evaluate the utility of the generated reports or evaluate clinical utility by using the generated reports to predict conditions identifiable from the ECG? I think that it’s fine that the RTLP method performs comparable to existing methods, but I am not sure from the current paper what the utility of using RTLP is. - More generally, I think that this paper is trying to do two things at once – present new methods for multilingual pretraining while also developing a method of ECG captioning. If the emphasis is on the former, then I would expect to see evaluation against other multilingual pretraining setups such as the Unicoder (Huang 2019a). If the core contribution is the latter, then clinical utility of the method as well as comparison to baselines for ECG captioning (or similar methods) is especially important. - I’m a bit confused as to why the diversity of the generated reports is emphasized during evaluation. While I agree that the generated reports should be faithful to the associated ECG, diversity may not actually be necessary metric to aim for in a medical context. For instance, if many of the reports are normal, you would want similar reports for each normal ECG (i.e. low diversity). - My understanding is that reports are generated in other languages using Google Translate. While this makes sense to generate multilingual reports for training, it seems a bit strange to then evaluate your model performance on these silver-standard noisy reports. Do you have a held out set of gold standard reports in different languages for evaluation (other than German)? Other Comments: - Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports. - You might consider changing the terminology from “cardiac arrythmia” categories to something broader since hypertrophy (one of the categories) is not technically a cardiac arrythmia (although it can be detected via ECG & it does predispose you to them) - I think it’d be helpful to include an example of some of the tokens that are sampled during pretraining using your semantically similar strategy for selecting target tokens. How well does this work in languages that have very different syntactic structures compared to the source language? 
- Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to setting where you don’t have the associated labels? - What kind of tokenization is used in the model? Which Spacy tokenizer? - It’d be helpful to reference the appendix when describing the setup in section 3/5 so that the reader knows that more detailed architecture information is there. - I’d be interested to know if other multilingual pretraining setups also struggle with Greek. - It’d be helpful to show the original ECG report with punctuation + make the ECG larger so that they are easier to read - Why do you think RTLP benefits from fine-tuning on multiple languages, but MARGE does not?
- What kind of tokenization is used in the model? Which Spacy tokenizer?
ICLR_2022_2123
ICLR_2022
of this submission and make suggestions for improvement: Strengths - The authors provide a useful extension to existing work on VAEs, which appears to be well-suited for the target application they have in mind. - The authors include both synthetic and empirical data as test cases for their method and compare it to a range of related approaches. - I especially appreciated that the authors validated their method on the empirical data and also provide an assessment of face validity using established psychological questionnaires (BDI and AQ). - I also appreciated the ethics statement pointing out that the method requires additional validation before it may enter the clinic. - The paper is to a great extent clearly written. Weaknesses - In Figure 2 it seems that Manner-1 use of diagnostic information is more important than Manner-2 use of this information, which calls into question your choice to set lambda = 0.5 in equation 3. Are you able to learn this parameter from the data? - Also in Figure 2, when applying your full model to the synthetic data, it appears to me that inverting your model seems to underestimate the within-cluster variance (compared to the ground truth). Could it be that your manner-1 use of information introduces constraints that are too strong, as they do not allow for this variance? - It would strengthen your claims of “superiority” of your approach over others if you could provide a statistical test that shows that your approach is indeed better at recovering the true relationship compared to others. Please provide such tests. - There is important information about the empirical study missing that should be mentioned in the supplement, such as the recording parameters for the MRI, the preprocessing steps, and whether the resting state was recorded under the eyes-open or eyes-closed condition. A brief explanation of the harmonization technique would also be appreciated. It would also be helpful to mention the number of regions in the parcellation in the main text. - The validation scheme using the second study is not clear to me. Were the models trained on dataset A and then directly applied to dataset B, or did you simply repeat the training on dataset B? If the latter is the case, I would refer to this as a replication dataset and not a validation dataset (which would require applying the same model on a new dataset, without retraining). - Have you applied multiple testing correction for the FID comparisons across diagnoses? If so, which? If not, you should apply it and please state that clearly in the main manuscript. - It is somewhat surprising that the distance between SCZ and MDD is shorter than between SCZ and ASD, as often the latter two are viewed as closely related. It might be helpful to discuss why that may be the case in more detail. - The third ethics statement is not clear to me. Could you clarify? - The font size in the figures is too small. Please increase it to improve readability.
- Have you applied multiple testing correction for the FID comparisons across diagnoses? If so, which? If not, you should apply it and please state that clearly in the main manuscript.
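The multiple testing correction the reviewer asks for could be applied along the following lines. This is a hedged sketch only; the p-values, the correction method, and the alpha level are placeholders rather than numbers from the submission.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from pairwise FID comparisons across diagnostic groups
pvals = np.array([0.012, 0.034, 0.049, 0.003, 0.210, 0.080])

# Holm controls the family-wise error rate; method="fdr_bh" would control the FDR instead
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for p_raw, p_corr, r in zip(pvals, p_adj, reject):
    print(f"raw p={p_raw:.3f}  corrected p={p_corr:.3f}  significant={r}")
```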
6srsYdjLnV
EMNLP_2023
Some major concerns: - Gender neutrality assumption in the source language. Though some nouns are lexically gender-neutral in English, document context could render these nouns gender-specific. Relevant questions: 1) is context taken into consideration while filtering gender-neutral English sources? 2) did the survey include questions regarding source gender neutrality? - Over-neutralization. For specific nouns as in example C of Table 1, far fewer participants prefer the gender-neutral alternative in Italian. Is it necessary for these target translations to be gender-neutral? Also in Table 2, why would we need to neutralize ii and iii for translations if the source English is already gendered? - On multiple N references. How frequently do the three translations overlap or are they identical? Have you considered multi-reference evaluations? - Sentence vs. document level. How are the sentence-level N-vs-G tags migrated; are they inherited directly from the document level? - Problematic reference-free gender-neutral MT evaluation. The binary N-G classification accuracy only evaluates the scenario in which a gender-neutral noun is translated into Italian. It does not provide any information on the overall translation quality of the documents/sentences; it is thus not linearly related to, and cannot be compared proportionally with, standard MT metrics such as BLEU, TER, and METEOR. Unfortunately, the much higher "classifier" number in Table 5 does not lead to the conclusion that the reference-free evaluation is "promising" (line 625). Minor clarifications are necessary: - There is no concrete accuracy assessment on the two rounds of GPT-generated training data; - "Grammatical gender languages" is a confusing term in this paper. Only concepts that have M-F-N cognates are examined in this paper, but not other nouns that have inherent gender in Italian, such as desks, beds, etc.
1) is context taken into consideration while filtering gender-neutral English sources?
gInIbukM0R
ICLR_2025
Major: - Conclusions are made before results are presented (L62, L256, L266), and sometimes some claims are made with no supporting results at all (L242, L302). Furthermore, many results are presented as "validating our hypothesis". I would suggest removing this hypothesis-based presentation and focusing on an analysis of your results. This could be done by presenting the results of section 4 first, *then* diving into the analyses of the latter half of section 3.3(.0) and section 3.3.1. - Mathematical definitions lack support and introduction. The definitions rely on (Li et al, 2023), which "provides rigorous mathematical foundation for understanding emergence in network structures" (L77). The authors would benefit from integrating into their manuscript a more thorough summary and introduction to the definitions. For instance, it is highly unclear to me how one goes from the equation on L166 to the main definition of emergence in L175, and how the latter can be derived from Eq. (8) in the Appendix. Appendix A unfortunately does not provide clarification, only copy-pasting (an exact replica with improperly formatted references, see L648 and L669) from (Li et al., 2024). - Lack of surrounding literature. At a superficial level, the reference list is quite small, with more than half of the papers being preprints/non-peer-reviewed work. More concretely, the authors would benefit from including more literature on emergence in LLMs and neural scaling laws (see e.g. (Nam et al., 2024, NeurIPS) for recently published work that surveyed some of these topics). - Circular reasoning arguments are employed many times, starting with the abstract "Our hypothesis posits that the degree of emergence [...] can predict the development of emergent behaviors in the network", then further in L381 "as the value of Emergence decreased, training accuracy change decrease, thereby supporting our hypothesis that emergence functions as a measure of prediction of emergent traits within a neural network.", and again on L438 "[...], supporting the idea that the emergence value is a predictor of emergent traits.". These are logical fallacies unfortunately and should be removed, in favor of exposition in line with my first point above. - Repeated sentences and content, for instance L44-L47 and the paragraph beginning at L71 both make similar comments on the loss landscape and its relationship with emergence, and see a further example in the circular reasoning point above. - Claims of significance should be backed by statistical tests. Minor - Repeated reference for (Li et al, 2023) - Remove "intuitively" in L251 if no intuition is provided. - Many typos, too many to list unfortunately. A thorough re-read is advised.
- Claims of significance should be backed by statistical tests. Minor - Repeated reference for (Li et al, 2023) - Remove "intuitively" in L251 if no intuition is provided.
ICLR_2023_1490
ICLR_2023
Weakness: 1. The paper devotes a great deal of space to experimental results, while the content on the method and motivation seems insufficient. 2. Not much insight has been given as to why spiking self-attention should be designed in this way. 3. Other concerns are detailed in the Summary.
2. Not much insight has been given as to why spiking self-attention should be designed in this way.
QVVSb0GMXK
ICLR_2024
1. The overall contributions (i.e., Fig. 3b and the discussion below it on page 5) appear somewhat one-dimensional and lack the significance to make this work distinct. The essence of the proposed NME appears to be a straightforward (and brute-force) rescaling by considering all potential factors (i.e., k). 2. Several claims are very arbitrary and not well supported by the evidence: - The statement "The dilemma between normalization for effective network optimization and high variation of data scales" found in the third paragraph of the introduction is ambiguous. I'd like clarity on how the authors characterize this dilemma and why a sequence of "normalization-optimization-denormalization" isn't relevant here. - The claims "... data within each window has a simple structure that can be easily modeled at a single scale" and "Instead, we may assume that data within each window has a single scale of variation given the window size is small" are arbitrary. For example, with longer time series patches, the time series within a patch might still encounter the scaling challenges depicted in Fig. 1, making this assumption fail to generalize well. - Was there any reference to PatchTST? I couldn't find the acknowledgment in this statement "We follow the tokenization process in the Vision Transformer (Dosovitskiy et al., 2020) by splitting the time series sequence into non-overlapping windows." 3. Some technical designs are not well-motivated or clearly discussed: - The reason for using BYOL isn't well-justified. BYOL emphasizes positive-only contrastive learning via distillation, yet I don't observe a strong correlation between its primary features and this work. Why not consider other well-recognized frameworks, like SimCLR? - What would be the complexity of NME when enumerating all possible scales and ensembling the embeddings across scales? 4. The experiments need further improvements to give a more robust analysis of the proposed method over existing research. - Testing on a variety of SSL frameworks and time series tasks (e.g., forecasting) would provide a more thorough evaluation of the proposal.
1. The overall contributions (i.e., Fig. 3b and the discussion below it on page 5) appear somewhat one-dimensional and lack the significance to make this work distinct. The essence of the proposed NME appears to be a straightforward (and brute-force) rescaling by considering all potential factors (i.e., k).
ARR_2022_15_review
ARR_2022
- The modeling of the mixture of multiple aspects is not explored deeply enough, and thus the so-called "first to explore" (in the Introduction) sounds more like an overclaim to me. - The qualitative results (Table 5) and human evaluation both show that there are still limitations to the proposed method and the improvement is not so obvious. - Given the arguments about the downstream applications (Line 116-117), I would like to see more downstream performance of such controllable text generation models. Think about how you would connect the proposed models to real NLG applications, and show the advantage of your model. I would say style transfer could be a good start. N/A. The authors have addressed most of my previous comments and suggestions.
- The modeling of the mixture of multiple aspects is not explored deeply enough, and thus the so-called "first to explore" (in the Introduction) sounds more like an overclaim to me.
ICLR_2023_4713
ICLR_2023
• [Major] Though CLP achieves better attack impact compared to traditional attacks with a similar or lower attack budget on average, this method requires more clients to perturb. It is not clear what the performance difference is when the traditional attack methods utilize all these clients. • [Major] In the paragraph on improved resilience against defenses, the paper simply claims that a lower average budget implies better robustness. With the varying number of malicious clients, I don't think this is obvious, and I suggest the authors provide a quantitative comparison. • [Major] Lack of evaluation, such as robustness against defenses, for SimAttack-CLP. • [Minor] If the paper claims SimAttack-CLP reduces the computational complexity, it is better to report the time difference.
• [Major] In the paragraph on improved resilience against defenses, the paper simply claims that a lower average budget implies better robustness. With the varying number of malicious clients, I don't think this is obvious, and I suggest the authors provide a quantitative comparison.
NIPS_2018_874
NIPS_2018
--- None of these weaknesses stand out as major and they are not ordered by importance. * Role of and relation to human judgement: Visual explanations are useless if humans do not interpret them correctly (see framework in [1]). This point is largely ignored by other saliency papers, but I would like to see it addressed (at least in brief) more often. What conclusions are humans supposed to make using these explanations? How can we be confident that users will draw correct conclusions and not incorrect ones? Do the proposed sanity checks help identify explanation methods which are more human-friendly? Even if the answer to the last question is no, it would be useful to discuss. * Role of architectures: Section 5.3 addresses the concern that architectural priors could lead to meaningful explanations. I suggest toning down some of the bolder claims in the rest of the paper to allude to this section (e.g. "properties of the model" -> "model parameters"; l103). Hint at the nature of the independence when it is first introduced. Incomplete or incorrect claims: * l84: The explanation of GBP seems incorrect. Gradients are set to 0, not activations. Was the implementation correct? * l86-87: GradCAM uses the gradient of the classification output w.r.t. the feature map, not the gradient of the feature map w.r.t. the input. Furthermore, the Guided GradCAM maps in figure 1 and throughout the paper appear incorrect. They look exactly (pixel for pixel) equivalent to the GBP maps directly to their left. This should not be the case (e.g., in the first column of figure 2 the GradCAM map assigns 0 weight to the top left corner, but somehow that corner is still non-0 for Guided GradCAM). The GradCAM maps look like they're correct. l194-196: These methods are only equivalent to gradient * input in the case of piecewise-linear activations. l125: Which rank correlation is used? Theoretical analysis and similarity to edge detector: * l33-34: The explanations are only somewhat similar to an edge detector, and differences could reflect model differences. Even if the same, they might result from a model which is more complex than an edge detector. This presentation should be a bit more careful. * The analysis of a conv layer is rather hand-wavy. It is not clear to me that edges should appear in the produced saliency mask as claimed at l241. The evidence in figure 6 helps, but it is not completely convincing and the visualizations do not (strictly speaking) imitate an edge detector (e.g., look at the vegetation in front of the lighthouse). It would be useful to include a conv layer initialized with a Sobel filter and a Canny edge detector in figure 6. Also, quantitative experimental results comparing an edge detector to the other visual explanations would help. Figure 14 makes me doubt this analysis more because many non-edge parts of the bird are included in the explanations. Although this work already provides a fairly large set of experiments, there are some highly relevant experiments which weren't considered: * How much does this result rely on the particular (re)initialization method? Which initialization method was used? If it was different from the one used to train the model then what justifies the choice? * How do these explanations change with hyperparameters like the choice of activation function (e.g., for non-piecewise-linear choices)? How do LRP/DeepLIFT (for non-piecewise-linear activations) perform? * What if the layers are randomized in the other direction (from input to output)?
Is it still the classifier layer that matters most? * The difference between gradient * input in Fig3C/Fig2 and Fig3A/E is striking. Point that out. * A figure and/or quantitative results for section 3.2 would be helpful. Just how similar are the results? Quality --- There are a lot of weaknesses above and some of them apply to the scientific quality of the work but I do not think any of them fundamentally undercut the main result. Clarity --- The paper was clear enough, though I point out some minor problems below. Minor presentation details: * l17: Incomplete citation: "[cite several saliency methods]" * l122/126: At first it says only the weights of a specific layer are randomized, next it says that weights from input to specific layer are randomized, and finally (from the figures and their captions) it says reinitialization occurs between logits and the indicated layer. * Are GBP and IG hiding under the input * gradient curve in Fig3A/E? * The presentation would be better if it presented the proposed approach as one metric (e.g., with a name), something other papers could cite and optimize for. * GradCAM is removed from some figures in the supplement and Gradient-VG is added without explanation. Originality --- A number of papers evaluate visual explanations but none have used this approach to my knowledge. Significance --- This paper could lead to better visual explanations. It's a good metric, but it only provides sanity checks and can't identify really good explanations, only bad ones. Optimizing for this metric would not get the community a lot farther than it is today, though it would probably help. In summary, this paper is a 7 because of novelty and potential impact. I wouldn't argue too strongly against rejection because of the experimental and presentation flaws pointed out above. If those were fixed I would argue strongly against rejection. [1]: Doshi-Velez, Finale and Been Kim. “A Roadmap for a Rigorous Science of Interpretability.” CoRR abs/1702.08608 (2017): n. pag.
* The presentation would be better if it presented the proposed approach as one metric (e.g., with a name), something other papers could cite and optimize for.
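The randomization sanity check and the rank correlation this review asks about ("Which rank correlation is used?") can be illustrated with a sketch like the following. This is an assumed setup using Spearman correlation with hypothetical saliency-map shapes, not code from the reviewed paper.

```python
import numpy as np
from scipy.stats import spearmanr

def saliency_similarity(map_trained, map_randomized):
    """Rank correlation between saliency maps computed before and after
    randomizing a layer's weights; values near zero suggest the explanation
    actually depends on the learned parameters."""
    rho, _ = spearmanr(np.abs(map_trained).ravel(), np.abs(map_randomized).ravel())
    return rho

# Hypothetical 224x224 saliency maps for the same input image
m_trained = np.random.rand(224, 224)
m_random = np.random.rand(224, 224)
print(saliency_similarity(m_trained, m_random))
```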
NIPS_2020_628
NIPS_2020
1. The model consists of spectral-normalized hidden layers to guarantee a bounded Lipschitz constant for the top NN layers, and a random Fourier feature approximated Gaussian process as the last layer. The combination is new, but the overall method is not end-to-end. Thus it can be hard to balance these two components to let them work well with each other. In the supplementary material, I saw that the entire algorithm is an alternating minimization optimization. So I'm curious about how you choose the initial parameters to make it work well. 2. The main strength is the empirical performance, but the paper does not release code.
2. The main strength is the empirical performance, but the paper does not release code.
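For reference, the random Fourier feature approximation of the Gaussian process last layer that this review describes can be sketched as follows. This is a minimal illustration of the standard Rahimi-Recht construction for an RBF kernel; the feature dimension, lengthscale, and input shapes are assumptions rather than details of the reviewed model.

```python
import numpy as np

def random_fourier_features(H, num_features=1024, lengthscale=1.0, seed=0):
    """Map hidden representations H to features whose inner products approximate
    an RBF kernel, so a Bayesian linear layer on top approximates a GP layer."""
    rng = np.random.default_rng(seed)
    d = H.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(H @ W + b)

# Hypothetical penultimate-layer outputs for 256 inputs of width 128
H = np.random.randn(256, 128)
Phi = random_fourier_features(H)   # shape (256, 1024)
```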
ARR_2022_299_review
ARR_2022
The main concern is the measurement of inference speed. The authors claimed that "the search complexity of decoding with refinement as consistent as that of the original decoding with beam search" (line 202), and empirically validated that in Table 1 (i.e., #Speed2.). Even with the local constraint, the model would conduct 5 (N=5) more softmax operations over the whole vocabulary (which is the most time-consuming part of inference) to calculate the distribution of refinement probabilities for each target position. Why do such operations only marginally decrease the inference speed (e.g., from 3.7k to 3.5k tokens/sec for the Transformer-base model)? How do we measure the inference speed? Do you follow Kasai et al. (2021) and measure inference speed when translating in mini-batches as large as the hardware allows? I guess you report the batch decoding speed since the number is relatively high. Please clarify the details and try to explain why the refinement model hardly affects the inference speed. The score will be increased if the authors can address the concern. [1] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation. ICLR 2021. 1. Line118: SelfAtt_c => Cross Attention, the attention network over the encoder representations is generally called cross attention. 2. Ablation study in Section 4.1.3 should be conducted on validation sets instead of test sets (similar to Section 4.1.2). In addition, does the refinement mask in Table 2 denote that randomly selecting future target words no greater than N in model training (i.e., Line 254)? 3. Is PPL a commonly-used metric for storytelling?
2. Ablation study in Section 4.1.3 should be conducted on validation sets instead of test sets (similar to Section 4.1.2). In addition, does the refinement mask in Table 2 denote that randomly selecting future target words no greater than N in model training (i.e., Line 254)?
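The cost of the extra vocabulary-sized projections and softmaxes that this review asks about can be estimated with a micro-benchmark along the following lines. This is only a rough sketch; the vocabulary size, hidden size, batch size, and N are assumed values, not numbers taken from the reviewed paper.

```python
import time
import torch

vocab, hidden, n_refine, batch = 32000, 512, 5, 64   # assumed sizes
W_out = torch.randn(hidden, vocab)                    # output projection
h = torch.randn(batch, n_refine, hidden)              # states at the N refinement slots

start = time.perf_counter()
for _ in range(100):
    probs = torch.softmax(h @ W_out, dim=-1)          # extra projection + softmax
elapsed = (time.perf_counter() - start) / 100
print(f"approximate extra cost per batch: {elapsed * 1e3:.2f} ms")
```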
NIPS_2020_1606
NIPS_2020
- All theoretical contributions are based on an unrealistic model in Eq. 1. The proposed model over-simplifies the problem by assuming the noise only on sources and not sensors. - The method section is very dense and difficult to follow. - Some related works (such as DL) are excluded from experimental comparisons. - The alternative methods are selectively used in different experiments. For example, why is CanICA not used in the experiment with synthetic data?
1. The proposed model over-simplifies the problem by assuming the noise only on sources and not sensors.
MbKRJUowYX
EMNLP_2023
1. In the introduction and the experimental analysis, the authors state the magnitude of the metric improvements (8.53% in PPL, 16.7% in Dist-2, 8.34% in Acc). It is suggested that the change be reported as the absolute value of the improvement rather than the percentage, to reduce ambiguity; 2. How the performance of emotion intensity labeling affects the subsequent calculation of sentiment correlations is not clarified in the paper (e.g., the relationship between the loss generated when the model labels emotion intensity and the overall loss of the model proposed by the authors); 3. The paper mentions that emotion graphs require modeling the correlation of intrinsic emotions and interconnecting all emotion nodes; the details of the conversion between the main emotion and secondary emotions should be spelled out for the calculation of emotion correlations and the subsequent generation; 4. The authors mention in the experimental section that their proposed model is compared to SOTA models. As far as I know, the baselines chosen for the paper are not the SOTA models. This can be confirmed in the following literature: Empathetic Dialogue Generation via Sensitive Emotion Recognition and Sensible Knowledge Selection
1. In the introduction and the experimental analysis, the authors state the magnitude of the metric improvements (8.53% in PPL, 16.7% in Dist-2, 8.34% in Acc). It is suggested that the change be reported as the absolute value of the improvement rather than the percentage, to reduce ambiguity;
NIPS_2019_1130
NIPS_2019
weakness is the lack of focus of the discussion. I feel that too many points are scattered and there lacks a central message on the insights gained. Below are some specific questions and concerns: 1. Line 100-101: The theoretical results in [36] and also [26] do not assume that the gradient noise $Z_k$ is Gaussian. The weak approximation results do not depend on the actual distribution of gradient noise, which only need to satisfy some moment conditions. These are always satisfied when the objective is of finite-sum form, as considered in this work. See also [B] below for more general statements. This part should be rephrased accordingly to properly represent the results of prior work. 2. Line 180: The assumption $H\sigma$ is quite restrictive, as even in the quadratic case, as long as the covariance of gradients are not constant you would expect there to be some growth. I suggest relaxing this condition by some localization arguments, since at the end your results only depend on $\sigma^*$. 3. Line 127-137: 1) The reference appears wrong, [37] does not talk about the convergence rate of SGD to SDE. 2) Note that in previous work, explicit bounds between expectations over arbitrary test functions (not just $||\nabla f||^2$) on SGD and SDE are established. These are not the same as the results presented in Appendix D, which are matching rates just on $||\nabla f||^2$ (not arbitrary test functions). Moreover, the presented results are not bounding the difference between the expectation iterates, but rather show them having similar rates. This is a weaker statement. In my opinion, this point should be better clarified to avoid confusion of what actualy is derived in this paper -- in fact, without looking at the appendix I thought that the authors obtained uniform-in-time approximation results for non-convex cases, which would certainly be interesting! As far as I know, so far only [C] provides such estimates, but require strong convexity. I suggest the authors make space for the statements of results in this section in the main paper, since you have mentioned this in your abstract as one of your main results. 4. Line 277-286: This is an interesting observation. However, I have some concerns on its validity in general settings. It is well-known that 1D SDEs with multiplicative noise can be written as a noisy gradient flow of a modified potential function, but this fails to hold in high dimensions. It appears to me that by assuming $H$ is diagonal and $\sigma$ is constant, we fall into the 1D scenario, but this analogy is not likely to generalize. Perhaps the authors can comment on this. 5. Minor typos: 1) Theorem B.2, assumption 1 should not have a square on the RHS. 2) line 194: know -> known References: [A] Smith, Samuel L., and Quoc V. Le. "A bayesian perspective on generalization and stochastic gradient descent." arXiv preprint arXiv:1710.06451 (2017). [B] Li et al. "Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations." Journal of Machine Learning Research 20.40 (2019): 1-40. [C] Feng, Yuanyuan, et al. "Uniform-in-Time Weak Error Analysis for Stochastic Gradient Descent Algorithms via Diffusion Approximation." arXiv preprint arXiv:1902.00635 (2019).
4. Line 277-286: This is an interesting observation. However, I have some concerns on its validity in general settings. It is well-known that 1D SDEs with multiplicative noise can be written as a noisy gradient flow of a modified potential function, but this fails to hold in high dimensions. It appears to me that by assuming $H$ is diagonal and $\sigma$ is constant, we fall into the 1D scenario, but this analogy is not likely to generalize. Perhaps the authors can comment on this.
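For context on the approximation being debated in this review, the first-order stochastic modified equation of Li et al. [B], i.e., the diffusion against which SGD is compared, takes roughly the following form, with eta the learning rate and Sigma the gradient-noise covariance. This is a sketch from memory of that line of work, not an equation quoted from the reviewed paper.

```latex
% First-order SDE (stochastic modified equation) approximating SGD with step size eta
\mathrm{d}X_t \;=\; -\nabla f(X_t)\,\mathrm{d}t \;+\; \sqrt{\eta}\,\Sigma(X_t)^{1/2}\,\mathrm{d}W_t
```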
HzecOxOGAS
EMNLP_2023
- A few parts lack details. E.g., Line 220, "we sift through the financial corpora to isolate sentences that include these metrics" (how are these metrics identified?). - It would be good to run the experiments multiple times and report the mean and std.
- A few parts lack details. E.g., Line 220, "we sift through the financial corpora to isolate sentences that include these metrics" (how are these metrics identified?).
ICLR_2023_4130
ICLR_2023
weakness: 1. The authors mention that “we focus on quantum vision transformers applied to image classification tasks”. Can the authors explain more about why previous quantum transformers are not suitable for the image classification task? What makes this work perform well on the classification task rather than on others? 2. Some figure captions are not clear. For example, what does \theta refer to in Fig. 4-6? Also, what do the different lines stand for? What does the panel in Fig. 7-9 mean? 3. What’s the motivation for introducing the second-order compound matrix? 4. What are the differences between the A- Orthogonal Patch-wise scheme (Table 2) and the classical transformer? It seems that both can be formulated as Vx_i.
4. What are the differences between the A- Orthogonal Patch-wise scheme (Table 2) and the classical transformer? It seems that both can be formulated as Vx_i.