Dataset Viewer (auto-converted to Parquet)

Columns:
- paper_id: string, 10-19 characters
- venue: string, 15 classes
- focused_review: string, 176-10.5k characters
- point: string, 42-623 characters
ACL_2017_433_review
ACL_2017
- The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inter-annotator agreement. UPDATE: Their rebuttal convincingly argued that the second annotator who just annotated the 100 examples to compute the IAA didn't follow the annotation guidelines for several common constructions. Once the second annotator fixed these issues, the IAA was reasonable, so I no longer consider this a real issue. - General Discussion: I am a bit concerned about the apparently rather poor annotation quality of the data and how this might influence the results, but overall, I liked the paper a lot and I think this would be a good contribution to the conference. - Questions for the authors: - Who annotated the sentences? You just mention that 100 sentences were annotated by one of the authors to compute inter-annotator agreement but you don't mention who annotated all the sentences. - Why was the inter-annotator agreement so low? In which cases was there disagreement? Did you subsequently discuss and fix the sentences for which there was disagreement? - Table A2: There seem to be a lot of discourse relations (almost as many as dobj relations) in your treebank. Is this just an artifact of the colloquial language or did you use "discourse" for things that are not considered "discourse" in other languages in UD? - Table A3: Are all of these discourse particles or discourse + imported vocab? If the latter, perhaps put them in separate tables, and glosses would be helpful. - Low-level comments: - It would have been interesting if you had compared your approach to the one by Martinez et al. (2017, https://arxiv.org/pdf/1701.03163.pdf). Perhaps you should mention this paper in the reference section. - You use the word "grammar" in a slightly strange way. I think replacing "grammar" with "syntactic constructions" would make it clearer what you are trying to convey (e.g., line 90). - Line 291: I don't think this can be regarded as a variant of it-extraposition. But I agree with the analysis in Figure 2, so perhaps just get rid of this sentence. - Line 152: I think the model by Dozat and Manning (2016) is no longer state-of-the-art, so perhaps just replace it with "very high performing model" or something like that. - It would be helpful if you provided glosses in Figure 2.
- It would be helpful if you provided glosses in Figure 2.
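For readers unfamiliar with the metric discussed above, labeled attachment score (LAS) agreement such as the 75.72% figure is just the fraction of tokens for which both annotators assign the same head and dependency label. A minimal sketch; the token/annotation format here is a hypothetical illustration, not the paper's data:

```python
# Hedged sketch: labeled attachment score (LAS) agreement between two
# annotators. The (head_index, dependency_label) format is an assumption
# made for illustration only.
def las(annotation_a, annotation_b):
    """Each annotation is a list of (head_index, dependency_label) per token."""
    assert len(annotation_a) == len(annotation_b)
    matches = sum(1 for (h1, l1), (h2, l2) in zip(annotation_a, annotation_b)
                  if h1 == h2 and l1 == l2)
    return 100.0 * matches / len(annotation_a)

# Toy example: the two annotators agree on 3 of 4 tokens -> LAS = 75.0
ann_a = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "punct")]
ann_b = [(2, "nsubj"), (0, "root"), (3, "obj"), (2, "punct")]
print(las(ann_a, ann_b))  # 75.0
```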
ACL_2017_67_review
ACL_2017
The main weaknesses for me are evaluation and overall presentation/writing. - The list of baselines is hard to understand. Some methods are really old and it doesn't seem justified to show them here (e.g., Mpttern). - Memb is apparently the previous state-of-the-art, but there is no mention of any reference. - While it looks like the method outperforms the previous best performing approach, the paper is not convincing enough. Especially on the first dataset, the difference between the new system and the previous state-of-the-art one is pretty small. - The paper seriously lacks proofreading, and cannot be published until this is fixed – for instance, I noted 11 errors in the first column of page 2. - The CilinE hierarchy is very shallow (5 levels only). However, apparently it has been used in the past by other authors. I would expect that the deeper the hierarchy, the more difficult it is to branch new hyponym-hypernym pairs. This can explain the very high results obtained (even by previous studies)... - General Discussion: The approach itself is not really original or novel, but it is applied to a problem that has not been addressed with deep learning yet. For this reason, I think this paper is interesting, but there are two main flaws. The first and easiest to fix is the presentation. There are many errors/typos that need to be corrected. I started listing them to help, but there are just too many of them. The second issue is the evaluation, in my opinion. Technically, the performance is better, but it does not feel convincing, as explained above. What is Memb? Is it the method from Shwartz et al. (2016), maybe? If not, what performance did this recent approach have? I think the authors need to reorganize the evaluation section, in order to properly list the baseline systems, clearly show the benefit of their approach and where the others fail. Significance tests also seem necessary given the slight improvement on one dataset.
- Memb is apparently the previous state-of-the-art, but there is no mention of any reference.
ICLR_2023_1833
ICLR_2023
Strengths first: The paper is one of the first to give an empirical study of quantization of MoE networks. It would be a good manual/starting point for practitioners in the field. Weaknesses: Thoroughness: Despite having good results and having investigated several quantization options, one is still left with "what if?"-style questions. There are many additional experiments and empirical evaluations that are needed to make it a stronger contribution, and to be certain of the presented recommendations. For instance, here are additional questions: 1) if inference happens in fp16, why stick with uniform or log-uniform quantization schemes? How about non-uniform quantization akin to k-means? 2) why not consider finer grouping for quantization instead of per-tensor and per-channel? 3) why are PTQ calibration techniques not discussed? Do all calibrations work the same? 4) what is the tradeoff between the number of experts and the bit-width of compression? Are there specific recommendations? And many other questions of this format. The paper would benefit from another proof-reading pass: there are many places where it is hard to understand what exactly was meant.
2) why not consider finer grouping for quantization instead of per-tensor and per-channel?
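To make the per-tensor versus per-channel (and finer-grouping) question concrete, here is a minimal NumPy sketch of symmetric uniform quantization at the two granularities; the bit-width and tensor shape are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Hedged sketch of symmetric uniform quantization at two granularities.
# axis=None -> one scale for the whole tensor (per-tensor);
# axis=1    -> one scale per output row (per-channel); finer grouping would
#              apply the same idea to smaller blocks. Bit-width and shapes
#              are illustrative assumptions, not details from the paper.
def fake_quantize(w, num_bits=4, axis=None):
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(w), axis=axis, keepdims=axis is not None)
    scale = np.maximum(max_abs, 1e-8) / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

w = np.random.randn(64, 256).astype(np.float32)   # hypothetical expert weight matrix
err_per_tensor = np.abs(w - fake_quantize(w, axis=None)).mean()
err_per_channel = np.abs(w - fake_quantize(w, axis=1)).mean()
print(err_per_tensor, err_per_channel)            # per-channel error is usually no larger
```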
NIPS_2022_1402
NIPS_2022
1. The presentation could be further improved. For example, there are both “unseen classes” and “unseen-classes” in the paper; this should be unified. 2. It would be better to study the impact of the ratio of unseen classes. For example, how the performance varies with different ratios of unseen-class unlabeled examples.
2. It would be better to study the impact of the ratio of unseen classes. For example, how the performance varies with different ratios of unseen-class unlabeled examples.
ACL_2017_371_review
ACL_2017
- The description is hard to follow. Proof-reading by an English native speaker would aid understanding - The evaluation of the approach has several weaknesses - General discussion - In Equations 1 and 2 the authors mention a phrase representation given by a fixed-length word embedding vector. But this is not used in the model; the representation is generated based on an RNN. What is the purpose of this description? - Why are you using GRU for the Pyramid and LSTM for the sequential part? Is the combination of two architectures a reason for your improvements? - What is the simplified version of the GRU? Why is it performing better? How is it performing on the large data set? - What is the difference between RNNsearch (groundhog) and RNNsearch(baseline) in Table 4? - What is the motivation for only using the ending phrases and e.g. not using the starting phrases? - Did you use only the pyramid encoder? How is it performing? That would be a fairer comparison since it normally helps to make the model more complex. - Why did you run RNNsearch several times, but PBNMT only once? - Section 5.2: What is the intent of this section?
- Why are you using GRU for the Pyramid and LSTM for the sequential part? Is the combination of two architectures a reason for your improvements?
NIPS_2018_700
NIPS_2018
Weakness: The major quality problem of this paper is clarity. In terms of clarity, there are several confusing places in the paper, especially in equations 9, 10, 11, and 12. 1) What is s_{i,j} in these equations? In definition 1, the author mentions that s_{i,j} denotes edge weights in the graph, but what are their values exactly in the experiments? Are they 0/1 or continuous values? 2) How is the diffusion map computed for structural embedding in 10 and 12? Is it using equation 1 only with the learned structural embedding and without text embedding? 3) Why is the diffusion convolution operator only applied to text embedding? Can it also be applied to structural embedding? On the other hand, if the author wants to capture global information in the graph as claimed between line 191 and line 194, why not directly use the diffusion map in equation (1) on text embedding instead of applying the diffusion convolution operator in 4.2? The relationship between equation (1) and equations (5), (6), and (7) in section 4.2 is confusing to me. In other words, there are two methods that could capture the global information: equation (1), and equations (5), (6), and (7). Equation (1) is applied to the structural embedding in equations (10) and (12); equations (5), (6), and (7) are applied to textual embeddings. The author should explain why they do so. 4) I wonder whether the structural embedding is really necessary in this case, since in the learning process, the structural embedding just involves an embedding table lookup. The author does not explain the point of using a structural embedding, especially in such a way. What if one just used the diffusion text embedding? I don't see any experimental results proving the effectiveness of structural embedding in Table 1. 5) What's the motivation of each part in equation (8)? For example, what's the motivation of maximizing the probability of textual embedding of vertex i conditioned on the diffusion map of structural embedding of vertex j in equation (12)? 6) In line 135, the author says "Initially the network only has a few active vertices, due to sparsity." How is "active vertices" defined here? 7) In 5.2, when the author trains the SVM classifier, do they also fine-tune the embeddings or just freeze them? There are many existing CNN- and RNN-based neural classifiers for text classification. What if one just used any of those off-the-shelf methods on the text embeddings without the diffusion process and fine-tuned those embeddings? This is typically a strong baseline for text classification, but there are no corresponding experimental results in Figure 5. 8) In line 234, how are those objective function weights obtained? Are they tuned on any development set? Has the author tried only using a subset of those objectives? It's not clear how important each of those four objectives is. 9) In line 315, the author attributes the result of Table 3 to "both structure and text information". However, the fact that the method picks vertex 3 is due to the diffusion convolution operator, as explained in line 313. Does the "structure" here mean the diffusion convolution operator or the structural embedding?
6) In line 135, the author says "Initially the network only has a few active vertices, due to sparsity." How is "active vertices" defined here?
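For context on the diffusion terminology questioned above, here is a generic sketch of a diffusion operator on a graph: node features smoothed by powers of the row-normalized transition matrix built from the edge weights s_{i,j}. This illustrates the common construction only; it is not the paper's exact equation (1) or equations (5)-(7):

```python
import numpy as np

# Hedged sketch of a generic graph diffusion operator: features are smoothed
# by successive powers of the row-normalized transition matrix. This is an
# illustration of the usual construction, not the paper's exact equations.
def diffusion_features(adj, feats, num_hops=3):
    """adj: (n, n) edge weights s_ij (binary or continuous); feats: (n, d)."""
    deg = adj.sum(axis=1, keepdims=True)
    trans = adj / np.maximum(deg, 1e-8)        # row-normalized transition matrix
    hops, powered = [], feats
    for _ in range(num_hops):
        powered = trans @ powered              # k-hop diffusion of the features
        hops.append(powered)
    return np.concatenate(hops, axis=1)        # (n, d * num_hops)

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
feats = np.random.randn(3, 4)
print(diffusion_features(adj, feats).shape)    # (3, 12)
```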
NIPS_2021_1731
NIPS_2021
I am not quite convinced by the motivation of the proposed method as a discrete analogue of the continuous Beltrami flow. The “structural assumptions on the diffusivity” seem not to be satisfied by the scaled dot-product attention in BLEND. What is the point of all the theoretical motivation if the actual construction violates the assumptions of the theory? Currently, I find it unclear which aspect of the proposed method makes it perform well. Is it the positional encoding that is added as a pre-processing step, the continuousness of the flow, the particular integrator, or the particular flow equation (6)? The ablation analysis in the appendix only partially answers this question. Further experiments could include: augmenting GAT with positional encodings; using BLEND with Euler steps; using GAT with continuous integration. The method does not seem to get state-of-the-art results on larger-sized data sets. The GAT baseline uses more parameters, but would BLEND improve if it used as many? I expect that BLEND has much higher training and inference time than GAT, even with the smaller model, because of the continuous integration. Concrete run-times are not given, so I can’t say for sure. Further suggestions for improvement: I’d be very interested in the performance of including channel mixing in the flow, referred to as Onsager diffusion, as currently, the fact that the channels only interact via the attention seems limiting. The same holds for time-dependent diffusivity. Include (some idea of) the structural assumptions in thm 1 in the main paper. Include results on BLEND-kNN on ogb-arxiv or explain why this result is missing. Include some actual run-times of BLEND(-kNN) vs other methods. In the appendix, clarify why (9, suppl mat) is the obvious discrete analogue of (6, suppl mat). I see some notational similarity between (10, suppl mat) and (6, suppl mat), but that looks rather superficial, besides it being one possible generalization of the classic Dirichlet energy. In fact, the paper doesn’t seem to be using this additional generality and only shows the classic example in which ψ̃ is constant. What does this additional ψ generality add? Could the authors clarify the step from (8, suppl mat) to (1)? Where does the time come from? Typos: Eqn 4, x(x, 0) should be x(v, 0). In Table 3, the score for GCN on CiteSeer is bold. The colouring in Table 3 seems incorrect. The BLEND-kNN performs on par with CGNN. The eqn under line 48 in the suppl mat would be clearer if parentheses were added to indicate that the partial derivative only applies to Z. Line 153, missing reference. Conclusion: Originality: The work is original. It tries to connect differential geometry to continuous flows in graphs in a way I hadn’t seen before. Quality: I have doubts about the correctness, as I question whether the presented theory applies to the proposed model. Additionally, important ablations are missing. Clarity: The paper is clearly written. Significance: The paper can be significant to all people researching graph neural nets and open up exploration of continuous flows on this domain. Score: 5, marginally below acceptance threshold. If the authors can convince me the theory does apply to their model, I will increase my score. Confidence: 4. I read the paper in detail. The fact that their theory does not seem to be applicable to the used model is not honestly mentioned in the limitations.
To the contrary, the vagueness of unspecified 'structural assumptions', that are only given in the appendix, makes this theoretical limitation hard to find. I think the authors underestimate the current use of graph neural networks in industry. They are used widely. As such, some more elaboration on potential negative societal impact of graph neural networks in general could be given.
4. I read the paper in detail. The fact that their theory does not seem to be applicable to the used model, is not honestly mentioned in the limitations. To the contrary, the vagueness of unspecified 'structural assumptions', that are only given in the appendix, makes this theoretical limitation hard to find. I think the authors underestimate the current use of graph neural networks in industry. They are used widely. As such, some more elaboration on potential negative societal impact of graph neural networks in general could be given.
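As an illustration of the "BLEND with Euler steps" ablation suggested in the review, here is a toy sketch of explicit Euler integration of an attention-driven graph diffusion dX/dt = (A(X) - I)X; this is a generic GRAND/BLEND-style update written for clarity, not the authors' implementation:

```python
import numpy as np

# Hedged sketch: explicit Euler integration of an attention-driven diffusion
# dX/dt = (A(X) - I) X on a graph, i.e. the "Euler steps" ablation suggested
# above. Generic illustration only, not the paper's code.
def attention_matrix(x, adj):
    scores = x @ x.T / np.sqrt(x.shape[1])          # scaled dot-product scores
    scores = np.where(adj > 0, scores, -1e9)        # restrict attention to edges
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

def euler_diffusion(x0, adj, step=0.1, num_steps=20):
    x = x0
    for _ in range(num_steps):
        a = attention_matrix(x, adj)
        x = x + step * (a @ x - x)                  # x(t + tau) = x(t) + tau * dx/dt
    return x

adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)  # includes self-loops
x0 = np.random.randn(3, 8)
print(euler_diffusion(x0, adj).shape)               # (3, 8)
```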
NIPS_2019_263
NIPS_2019
--- Weaknesses of the evaluation in general: * 4th loss (active fooling): The concatenation of 4 images into one and the choice of only one pair of classes make me doubt whether the motivation aligns well with the implementation, so 1) the presentation should be clearer or 2) it should be more clearly shown that it does generalize to the initial intuition about any two objects in the same image. The 2nd option might be accomplished by filtering an existing dataset to create a new one that only contains images with pairs of classes and trying to swap those classes (in the same non-composite image). * I understand how LRP_T works and why it might be a good idea in general, but it seems new. Is it new? How does it relate to prior work? Would the original LRP work as the basis or target of adversarial attacks? What can we say about the susceptibility of LRP to these attacks based on the LRP_T results? * How hard is it to find examples that illustrate the loss principles clearly like those presented in the paper and the supplement? Weaknesses of the proposed FSR metric specifically: * L195: Why does the norm need to be changed for the center mass version of FSR? * The metric should measure how different the explanations are before and after adversarial manipulation. It does this indirectly by measuring losses that capture similar but more specific intuitions. It would be better to measure the difference in heatmaps before and after explicitly. This could be done using something like the rank correlation metric used in Grad-CAM. I think this would be a clearly superior metric because it would be more direct. * Which 10k images were used to compute FSR? Will the set be released? Philosophical and presentation weaknesses: * L248: What does "wrong" mean here? The paper gets into some of the nuance of this position at L255, but it would be helpful to clarify what is meant by a good/bad/wrong explanation before using those concepts. * L255: Even though this is an interesting argument that forwards the discussion, I'm not sure I really buy it. If this was an attention layer that acted as a bottleneck in the CNN architecture then I think I'd be forced to buy this argument. As it is, I'm not convinced one way or the other. It seems plausible, but how do you know that the final representation fed to the classifier has no information outside the highlighted area? Furthermore, even if there is only a very small amount of attention on relevant parts, that might be enough. * The random parameterization sanity check from [25] also changes the model parameters to evaluate visualizations. This particular experiment should be emphasized more because it is the only other case I can think of which considers how explanations change as a function of model parameters (other than considering completely different models). To be clear, the experiment in [25] is different from what is proposed here, I just think it provides interesting contrast to these experiments. The claim here is that the explanations change too much while the claim there is that they don't change enough. Final Justification --- Quality - There are a number of minor weaknesses in the evaluation that together make me unsure about how easy it is to perform this kind of attack and how generalizable the attack is. I think the experiments do clearly establish that the attack is possible. Clarity - The presentation is pretty clear. I didn't have to work hard to understand any of it.
Originality - I haven't seen an attack on interpreters via model manipulation before. Significance - This is interesting because it establishes a new way to evaluate models and/or interpreters. The paper is a bit lacking in scientific quality in a number of minor ways, but the other factors clearly make up for that defect.
* How hard is it to find examples that illustrate the loss principles clearly like those presented in the paper and the supplement? Weaknesses of the proposed FSR metric specifically:
NIPS_2016_450
NIPS_2016
. First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy?
* Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy?
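To clarify the bandit reduction asked about in point 2 of the review, here is a toy sketch in which each arm keeps K overlapping bootstrap sub-samples ("heads"), one head is drawn at random each round, and its empirical means are acted on greedily; this is the reviewer-side thought experiment, not code from the paper:

```python
import random

# Hedged toy sketch of the bandit reduction asked about above: K bootstrap
# "heads" per arm, one head picked uniformly at random each round and acted
# on greedily. Each observed reward is added to each head with probability
# p_keep. Illustration only, not the paper's algorithm.
def bootstrapped_bandit(true_means, num_heads=10, horizon=5000, p_keep=0.5):
    num_arms = len(true_means)
    sums = [[0.0] * num_heads for _ in range(num_arms)]
    counts = [[1e-8] * num_heads for _ in range(num_arms)]
    total_reward = 0.0
    for _ in range(horizon):
        head = random.randrange(num_heads)
        estimates = [sums[a][head] / counts[a][head] for a in range(num_arms)]
        arm = max(range(num_arms), key=lambda a: estimates[a])
        reward = random.gauss(true_means[arm], 1.0)
        total_reward += reward
        for h in range(num_heads):          # each head sees the sample w.p. p_keep
            if random.random() < p_keep:
                sums[arm][h] += reward
                counts[arm][h] += 1
    return total_reward

print(bootstrapped_bandit([0.0, 0.2, 0.5]))
```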
ICLR_2023_2698
ICLR_2023
1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by a Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is not enough new material here. 2) More seriously, to ensure the tractability of the normalizing flow after absorbing the graph neural network, the proposed model has to replace the basic operation \sigma(AXW) with the operation \sigma(AX) in the graph neural network, abandoning the feature affine transformation, i.e., XW, before passing the intermediate representations to neighboring nodes. Since W contains the main parameters to be learned in a GNN, abandoning it means the representation ability of the GNN is significantly restricted. The experimental results also show that the proposed model brings very little gain over older models like GCN and GAT on classification tasks. 3) Without the feature affine transformation XW, the dimension of the intermediate hidden representations will always be kept the same as that of the input features, since the NF has to keep the dimension unchanged. Then, if the input feature dimension is very high, in addition to the complexity issue, the learned feature dimension will also be very high, which may not be very useful, as nowadays we often expect the learned features to be compact. 4) For the experiments, since the paper wants to demonstrate that the proposed model is able to learn clustering-friendly representations, we expect to directly see how the model performs on clustering tasks under clustering performance metrics, like clustering accuracy (ACC), normalized mutual information (NMI), etc., rather than the indirect Silhouette criterion, which is not meaningful at all.
1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by a Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is not enough new material here.
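To make the restriction in points 1)-3) concrete, here is a toy comparison of the standard GCN propagation sigma(A_hat X W) with the weight-free sigma(A_hat X) form that, as the review reads the paper, is needed to keep the flow invertible and dimension-preserving; shapes and normalization are illustrative assumptions:

```python
import numpy as np

# Toy illustration of the restriction discussed above: standard GCN propagation
# sigma(A_hat @ X @ W) versus the weight-free sigma(A_hat @ X). Shapes and the
# symmetric normalization are assumptions made for illustration only.
def normalize_adj(adj):
    adj = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(adj_hat, x, w):
    return np.maximum(adj_hat @ x @ w, 0.0)          # sigma(A_hat X W)

def weight_free_layer(adj_hat, x):
    return np.maximum(adj_hat @ x, 0.0)              # sigma(A_hat X): dim of X is fixed

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
adj_hat = normalize_adj(adj)
x = np.random.randn(3, 16)                           # input features, d = 16
w = np.random.randn(16, 4)                           # learnable projection to d = 4
print(gcn_layer(adj_hat, x, w).shape)                # (3, 4): dimension can change
print(weight_free_layer(adj_hat, x).shape)           # (3, 16): stuck at the input dim
```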
ICLR_2022_2470
ICLR_2022
Weakness: 1. The idea is a bit simple -- which in and of itself is not a true weakness. ResNet as an idea is not complicated at all. I find it disheartening that the paper did not really tell readers how to construct a white paper in section 3 (if I simply missed it, please let me know). However, the code in the supplementary materials helped. The white paper is constructed as follows: white_paper_gen = torch.ones(args.train_batch, 3, 32, 32) It offers another way of constructing the white paper, which is white_paper_gen = 255 * np.ones((32, 32, 3), dtype=np.uint8) white_paper_gen = Image.fromarray(white_paper_gen) white_paper_gen = transforms.ToTensor()(white_paper_gen) white_paper_gen = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))(white_paper_gen) The code states that either version works similarly and does not affect the performance. I wonder if there are other white papers as well, for example np.zeros((32, 32, 3)) -- most CNN models add explicit bias terms in their CNN kernel. Would a different white paper reveal a different bias in the model? I don't think the paper answers this question or discusses it. 2. Section 4 "Is white paper training harmful to the model?" -- the evidence does not seem to support the claim. The evidence is: 1) only the projection head (CNN layers) is affected but not the classification head (FCN layer); 2) parameter changes are small. Neither of these constitutes direct support that the training is not "harmful" to the model. This point can simply be illustrated by the experimental results. 3. Sections 5.1 and 5.2 mainly build the narrative that WPA improves the test performance (generalization performance), but they are indirect evidence to support that WPA does in fact alleviate shortcut learning. Only Section 5.3 and Table 6 directly show whether WPA does what it's designed to do. A suggestion is to discuss the result of Section 5.3 more. 4. It would be interesting to try to explain why WPA works -- with np.ones input, what is the model predicting? Would any input serve as a white paper? Figure 2 seems to suggest that Gaussian noise input does not work as well as WPA. Why? The authors spend a lot of time showing that WPA improves the test performance of the original model, but fail to provide useful insights into how WPA works -- this is particularly important because it can spark future research directions.
1) Only the projection head (CNN layers) is affected but not the classification head (FCN layer);
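For readability, the supplementary snippet quoted in the review is assembled below into a self-contained form with the imports it needs; the batch size stands in for args.train_batch, and the normalization constants are the CIFAR-10 statistics taken from the quoted code:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# The two "white paper" constructions quoted from the supplementary material,
# made self-contained. The first is a plain all-ones batch; the second passes
# a white image through the usual CIFAR-10 preprocessing.
batch_size = 128  # stands in for args.train_batch

# Version 1: all-ones tensor for a whole batch.
white_paper_v1 = torch.ones(batch_size, 3, 32, 32)

# Version 2: a white image normalized with the CIFAR-10 statistics.
white_img = 255 * np.ones((32, 32, 3), dtype=np.uint8)
white_img = Image.fromarray(white_img)
white_paper_v2 = transforms.ToTensor()(white_img)
white_paper_v2 = transforms.Normalize((0.4914, 0.4822, 0.4465),
                                      (0.2023, 0.1994, 0.2010))(white_paper_v2)
print(white_paper_v1.shape, white_paper_v2.shape)  # [128, 3, 32, 32] and [3, 32, 32]
```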
ICLR_2023_1418
ICLR_2023
Weakness: 1. Regarding the whole framework, which part is vital for using CLIP to guide weakly supervised learning? I think this discussion is necessary (but I didn’t find a clear answer in the discussion) and would help distinguish this paper from other related work. 2. The knowledge bank is based on classes appearing in the full dataset and is defined by the text. Can you explain how the size of the knowledge bank affects performance? After all, when there exists a bunch of interaction classes, I am not sure about the training efficiency and workload. 3. SRC alone does not help much with the detection, according to Table 2. This is not very effective and is kind of counterintuitive. I would recommend providing a more detailed explanation or removing the SRC part. 4. When a CLIP model is used, it is always necessary to explain the potential issue of fair comparison. After all, CLIP has seen quite a lot of training data during pretraining, and there is a risk of potential data leakage. As such, an explanation is necessary.
1. Regarding the whole framework, which part is vital for using CLIP to guide weakly supervised learning? I think this discussion is necessary (but I didn’t find a clear answer in the discussion) and would help distinguish this paper from other related work.
NIPS_2020_420
NIPS_2020
**Exposition** - I think the paper contains interesting ideas with good empirical results. However, the exposition of the method is not easy to follow and requires significant revision. Here are a couple of examples that were unclear. - L6: “coherent HOI.” What does it mean to have “coherent HOI”? What are the incoherent ones? - L8: “transformations between human-object pairs.” The “transformation” is vague. Later in the paper, it turns out that this is merely replacing instance-level features (human or object) from similar HOI examples. The exposition is unnecessarily complicated. - The analogy between HOI analysis and Harmonic analysis is interesting at first glance, but the link is quite weak. In the problem context, there are only two “bases” (human and object) to form an HOI. The decomposition/integration steps introduced in this paper also do not have a close connection with Fourier analysis as claimed. - On L33, what does the “eigen” structure of HOI mean? - On L51, “IDN can learn to represent the interaction/verb with T_I and T_D.” What does this mean? - On L205, I was not able to follow the concept of Interactive validity. There is no definition of these loss terms and no figures to illustrate this part. - Figure 2: o What does “X” mean? o g_h and g_o are not discussed. Later I found that this is just identity (swapping instance features) - Figure 3: o (a) Please specify the loss terms here. o (b) I know that the f_u^{v_i} is predicted from the concatenated features f_h and f_o (the integration step). However, for the decomposition step, why not use f_u as input (as discussed in Eqn 1) and predict f_h and f_o? - When using the autoencoder for compressing the features f_h + f_o, aren't the encoded features already “integrated”? How can we “slice” the features to get individual features? **Novelty** - The inter-pair transformation idea has been exploited in [A]. The paper should cite and discuss the differences with respect to [A] (as it was published before the NeurIPS submission). [A] Detecting Human-Object Interactions via Functional Generalization. AAAI 2020 **Method** - The proposed approach seems to require a much larger model size. For example, the method needs two (T_I and T_D) two-layer fully connected networks for *each* verb interaction. This is certainly not scalable and can be slow at test time. For example, for HICO-DET, this requires evaluating the T_I and T_D 117 times. Unfortunately, the paper did not discuss the model size and runtime performance. At least this should be discussed as a limitation. **Evaluation** - In Table 4, which dataset is this conducted on? It seems to me that this is done on the *testing set* of the HICO-DET dataset. The ablation should be done on the validation set without seeing the testing set. This may suggest that all the model tuning may also be conducted on the testing set, which may lead to overfitting.
- The analogy between HOI analysis and Harmonic analysis is interesting at first glance, but the link is quite weak. In the problem context, there are only two “bases” (human and object) to form an HOI. The decomposition/integration steps introduced in this paper also do not have a close connection with Fourier analysis as claimed.
NIPS_2020_1016
NIPS_2020
1. The PFQ algorithm introduced many hyperparameters, and I am curious how the authors chose the parameters \epsilon and \alpha. The authors simply claimed these parameters are determined from the four-stage manual PFQ from Figure 1, and then claim that FracTrain is insensitive to hyperparameters. First, the precision choices of the four stage PFQ in Figure 1 is already arbitrary. Second, I do not think the empirical results can support the claim that FracTrain is insensitive to hyperparameters. I would encourage the authors to have an ablation study of \epsilon and \alpha. I do understand an ablation study of various precision combinations is shown in the appendix, but this might not provide enough insights for users of FracTrain that simply want to know what is the best hyperparameter combination to use. 2. I found the MACs and Energy results reported in the paper needs further explanation. For instance, in Table2, it seems to me MACs cannot be a useful measurement since SBM and FracTrain might use different precisions. Even if the MACs numbers are the same, low-precision operations will surely be more energy efficient. A more useful measurement metric might be bitwise operations. In terms of the Energy reported in this paper, the authors claim it is calculated from an RTL design. However, BitFusion is an inference accelerator, what modifications have you done to the BitFusion RTL to support this training energy estimation? What is the reuse pattern for gradients/activations? 3. Dynamic precision control during training might only show meaningful performance gains on bit-serial accelerators. However, most existing ML accelerators tend to use bit-parallel fixed-point numbers, this might restrict the implications of the proposed methodology. 4. I think this paper has missed a number of citations in the recent advances of dynamic inference methods. Dynamic channel control [1,2] and dynamic precisions [3] have recently been widely explored and these citations are not seen in this paper. [1] Gao, Xitong, et al. "Dynamic channel pruning: Feature boosting and suppression." ICLR 2018. [2] Hua, Weizhe, et al. "Channel gating neural networks." Advances in Neural Information Processing Systems. 2019. [3] Song, Zhuoran, et al. "DRQ: Dynamic Region-based Quantization for Deep Neural Network Acceleration." ISCA 2020
3. Dynamic precision control during training might only show meaningful performance gains on bit-serial accelerators. However, most existing ML accelerators tend to use bit-parallel fixed-point numbers, this might restrict the implications of the proposed methodology.
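A small arithmetic sketch of the bitwise-operation metric suggested in point 2 of the review, which weights each MAC by the activation and weight bit-widths so that schemes with equal MAC counts but different precisions are separated; the counts below are made up for illustration:

```python
# Hedged sketch of a bitwise-operation (BitOPs) metric: weight each MAC by the
# product of activation and weight bit-widths, so that two schemes with the
# same MAC count but different precisions no longer look identical.
# All numbers below are made up for illustration.
def bitops(macs, act_bits, weight_bits):
    return macs * act_bits * weight_bits

macs = 1.0e9                                          # same MAC count for both schemes
scheme_a = bitops(macs, act_bits=8, weight_bits=8)    # e.g. a static 8-bit run
scheme_b = bitops(macs, act_bits=4, weight_bits=4)    # e.g. a lower-precision run
print(scheme_a, scheme_b, scheme_a / scheme_b)        # 6.4e10 1.6e10 4.0
```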
ICLR_2021_2674
ICLR_2021
Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below: - the authors leverage multiple datasets, including building their own to train the model. However, different dataset is captured by different cameras, and thus the focusing distance, aperture settings, and native image resolution all affect the circle of confusion, how are those ambiguities taken into consideration during training? - related to the point above, the paper doesn't describe the pre-processing stage, neither did it mention how the image is passed into the network. Is the native resolution preserved, or is it downsampled? - According to Held et al "Using Blur to Affect Perceived Distance and Size", disparity and defocus can be approximated by a scalar that is related to the aperture and the focus plane distance. In the focal stack synthesis stage, how is the estimated depth map converted to a defocus map to synthesize the blur? - the paper doesn't describe how is the focal stack synthesized, what's the forward model of using a defocus map and an image to synthesize defocused image? how do you handle the edges where depth discontinuities happen? - in 3.4, what does “Make the original in-focus region to be more clear” mean? in-focus is defined to be sharpest region an optical system can resolve, how can it be more clear? - the paper doesn't address handling textureless regions, which is a challenging scenario in depth from defocus. Related to this point, how are the ArUco markers placed? is it random? - fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well? - what is the limit of the amount of blur presented in the input that the proposed models would fail? Are there any efforts in testing on smartphone images where the defocus is *just* noticeable by human eyes? how do the model performances differ for different defocus levels? Minor suggestions - figure text should be rasterized, and figures should maintain its aspect ratio. - figure 3 is confusing as if the two nets are drawn to be independent from each other -- CNN layers are represented differently, one has output labeled while the other doesn't. It's not labeled as the notation written in the text so it's hard to reference the figure from the text, or vice versa. - the results shown in the paper are low-resolution, it'd be helpful to have zoomed in regions of the rendered focal stack or all-in-focus images to inspect the quality. - the sensor plane notation 's' introduced in 3.1 should be consistent in format with the other notations. - calling 'hyper-spectral' is confusing. Hyperspectral imaging is defined as the imaging technique that obtains the spectrum for each pixel in the image of a scene.
- fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well?
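To make the circle-of-confusion concern concrete, here is a standard thin-lens sketch (not the paper's model) showing how aperture, focal length, and focusing distance change the blur diameter for the same scene depth; the specific numbers are illustrative:

```python
# Hedged sketch of the standard thin-lens circle-of-confusion formula (not the
# paper's forward model): c = A * |s_obj - s_focus| / s_obj * f / (s_focus - f).
# It illustrates why mixing cameras with different apertures, focal lengths,
# and focusing distances changes the blur for the same scene depth. Units: meters.
def circle_of_confusion(aperture_diameter, focal_length, focus_dist, object_dist):
    return (aperture_diameter * abs(object_dist - focus_dist) / object_dist
            * focal_length / (focus_dist - focal_length))

f = 0.05                      # 50 mm lens
aperture = f / 2.0            # f/2 aperture diameter
for focus in (1.0, 5.0):      # the two focusing distances shown in Fig. 8
    c = circle_of_confusion(aperture, f, focus_dist=focus, object_dist=2.0)
    print(f"focus at {focus} m -> CoC for an object at 2 m: {c * 1e6:.1f} um")
```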
NIPS_2021_28
NIPS_2021
The paper is overall interesting, well-written and makes a valuable contribution. I do, however, have some comments for the authors to consider (which, in my mind, are potential limitations of the study): - Comparison of the proposed unsupervised method with the supervised baseline is not conclusive because of the absence of augmentations in the supervised baseline. The authors should consider reporting performance on the decoding task when the supervised method employs data augmentations as well. - For completeness, the authors should also report how the hyperparameters for the linear decoder were determined. Ideally, I would’ve liked to see error bars for decoding accuracies as well (maybe by bootstrapping the training set for the decoder?) - In future, the authors could also consider replacing the accuracy metric for decoding with better evaluation metrics for circular data, like circular correlation. This would treat the reach direction as a continuous variable (which it is) rather than as a discrete unordered variable (which it theoretically isn’t). - The authors consider swapping only the block of variables belonging to the `content’ group. What would happen if the reconstruction term in the BlockSwap method swapped both the content and style of the augmented views? Does swapping only the content block necessarily facilitate disentanglement? If BlockSwap is essential, does the proposed method require knowing the number of latent factors in advance? The authors could discuss these aspects in their conclusion/discussion. - When comparing the proposed SwapVAE against a vanilla VAE, the authors should also consider reporting other metrics more commonly employed in VAE evaluation (likelihood etc.) and not just the reconstruction error (which can be trivially minimized). This is important, since the authors mention ‘generating realistic neural activity’ as a significant contribution of their paper. - The authors should also consider defining content and style more broadly as they relate to their specific neural application (e.g., as in Gabbay & Hoshen (2018)) where style is instance-specific(?) and content includes information that can be transferred among groups. More specifically, since their model is not sequential and does not capture the temporal dynamic structure, what do they really mean by saying that ‘style’ represents the ‘movement dynamic’?
- The authors should also consider defining content and style more broadly as they relate to their specific neural application (e.g., as in Gabbay & Hoshen (2018)) where style is instance-specific(?) and content includes information that can be transferred among groups. More specifically, since their model is not sequential and does not capture the temporal dynamic structure, what do they really mean by saying that ‘style’ represents the ‘movement dynamic’?
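A toy sketch of the BlockSwap question raised above: swapping only an assumed "content" block of two latent codes versus swapping both blocks; the latent layout is an assumption for illustration, not the paper's code:

```python
import numpy as np

# Toy sketch of the BlockSwap question above. The latent code of each view is
# assumed (for illustration only) to be laid out as [content | style]; swapping
# only the content block is contrasted with swapping both blocks, which simply
# exchanges the full codes.
def block_swap(z1, z2, content_dim, swap_style=False):
    z1_new = np.concatenate([z2[:content_dim], z1[content_dim:]])
    z2_new = np.concatenate([z1[:content_dim], z2[content_dim:]])
    if swap_style:   # swapping content and style just exchanges the full codes
        z1_new, z2_new = z2.copy(), z1.copy()
    return z1_new, z2_new

z1 = np.arange(6, dtype=float)        # content = [0, 1, 2], style = [3, 4, 5]
z2 = np.arange(6, 12, dtype=float)    # content = [6, 7, 8], style = [9, 10, 11]
print(block_swap(z1, z2, content_dim=3))                    # content swapped, style kept
print(block_swap(z1, z2, content_dim=3, swap_style=True))   # full exchange
```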
NIPS_2022_2572
NIPS_2022
1. The analysis of ViT quantization could be explained in more depth: (a) this paper argues that `a direct quantization method leads to the information distortion’ in Line 45. The approach proposed in this paper does not improve this phenomenon either (e.g., 1.2268 in Fig. 1(b) vs. 1.3672 in Fig. 5(b) for Block 3; the variance difference is even larger with the proposed approach). (b) The quantization of MHSA introduces a large loss of precision, which has been found in transformer quantization in NLP (such as Q-BERT, Q8BERT, BinaryBERT, FullyBinaryBert, etc.) and is not unique to the ViT model. 2. Some minor problems: (a) In Fig. 2, the tilde over k is too small; it should be consistent with the one over q. (b) In Equation 9, Q_k should probably be Q(k) to be consistent with Q(q).
1. The analysis of ViT quantization could be explained in more depth: (a) this paper argues that `a direct quantization method leads to the information distortion’ in Line 45. The approach proposed in this paper does not improve this phenomenon either (e.g., 1.2268 in Fig. 1(b) vs. 1.3672 in Fig. 5(b) for Block 3; the variance difference is even larger with the proposed approach). (b) The quantization of MHSA introduces a large loss of precision, which has been found in transformer quantization in NLP (such as Q-BERT, Q8BERT, BinaryBERT, FullyBinaryBert, etc.) and is not unique to the ViT model.
NIPS_2018_76
NIPS_2018
- A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixel neighborhood. Also, PointNet uses a variant of STN in their network architecture. In this regard, the technical novelty seems limited in this work. Also, there are no empirical or conceptual comparisons to STN in this work, which is important. - There are no ablation studies on network architectures and also no ablation experiments on how the representative points are selected. - The runtime of the proposed network seems slow compared to several recent techniques. Even for just 1K-2K points, the network seem to be taking 0.2-0.3 seconds. How does the runtime scales with more points (say 100K to 1M points)? It would be good if authors also report relative runtime comparisons with existing techniques. Minor corrections: - Line 88: "lose" -> "loss". - line 135: "where K" -> "and K". Minor suggestions: - "PointCNN" is a very short non-informative title. It would be good to have a more informative title that represents the proposed technique. - In several places: "firstly" -> "first". - "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary: - The proposed technique is sensible and the performance on different benchmarks is impressive. Missing comparisons to established STN technique (with both local and global transformations) makes this short of being a very good paper. After rebuttal and reviewer discussion: - I have the following minor concerns and reviewers only partially addressed them. 1. Explicit comparison with STN: Authors didn't explicitly compare their technique with STN. They compared with PointNet which uses STN. 2. No ablation studies on network architecture. 3. Runtimes are only reported for small point clouds (1024 points) but with bigger batch sizes. How does runtime scale with bigger point clouds? Authors did not provide new experiments to address the above concerns. They promised that a more comprehensive runtime comparison will be provided in the revision. Overall, the author response is not that satisfactory, but the positive aspects of this work make me recommend acceptance assuming that authors would update the paper with the changes promised in the rebuttal. Authors also agreed to change the tile to better reflect this work.
- A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixel neighborhood. Also, PointNet uses a variant of STN in their network architecture. In this regard, the technical novelty seems limited in this work. Also, there are no empirical or conceptual comparisons to STN in this work, which is important.
NIPS_2016_386
NIPS_2016
, however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give. 
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
* L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes?
ICLR_2022_1824
ICLR_2022
. However, I struggle to see the novelty in the author’s approach: spikes and local connections alone have been tried many times (Tab.3 and also [1]). Training the output layer (rather than the whole network) with an RL-based rule is somewhat new, but I find this approach unreasonable for the following reasons: The last layer is usually trained with SGD + cross-entropy to assess the quality of representations built by previous layers. So the performance of R-STDP in any case would be limited by the representations it gets from earlier layers, which are arguably more important for training networks. (This paper tries to do that too with SVM, however.) There’s no reason for this approach to scale beyond MNIST, as the hardest part of training is done by a simple STDP rule. Maybe some layer-wise R-STDP can be a valid approach (akin to [2]), or a backprop-like RL error [3]. As a side point, I couldn’t run the code in colab (with PyTorch 1.8 and Bindsnet installed). Running image_classification_experiment.py gives record() got an unexpected keyword argument 'n_labels'. Disabling recording makes it go away, but then there’s a shape mismatch. And if you make the running time 256 instead of 256*3 to fix it, the accuracy doesn't improve at all. The main result of the paper -- MNIST accuracy (Tab. 3) -- is very weak. It’s pretty straightforward to achieve 95%+ test accuracy with spiking networks, local connections and unsupervised pre-training (using SGD for predictions) [1] (Tab. 2 there). Therefore, even ignoring the potential weakness of the R-STDP in the final layer and concentrating on the STDP + SVM result (87.5%), it is clear that the network does not learn useful representations. There are multiple potential reasons for that: The STDP in the first layer is at fault, which would be a bit surprising given the clarity of filters in Fig.3A. As a sanity check, you can train an SVM on the hidden layer without any pre-training, and see if it improves the results. The decoding scheme is ill-fitted for SVM. I’d suggest using SGD with cross-entropy like in [1] and probably many papers in Tab. 3 of your paper. If you see a large improvement, then R-STDP needs some rethinking to properly make use of the pre-trained layer. Local connections make it harder. Some works in Tab. 2 of [1] successfully use LC layers, however. I would test performance with the same architecture, but using convolutions. Another thing I noticed is really large filters -- 15x15 filters for a 28x28 image are not too far from a fully connected layer. When the winner-take-all in the LC layer makes a mistake by activating the wrong “digit” (and the filter weights do look like digits in Fig.3A), the readout layer can’t fix it. Finding the root of poor performance would improve the paper, but the overall approach (hidden layer STDP + last layer R-STDP) is still unlikely to scale to harder problems and deeper networks. Recommendation Due to limited novelty and unsatisfying results, I would recommend rejecting the paper. Minor comments It is noteworthy that in many learning problems, we do not have direct access to the explicit label of the data. Consequently, we may need to abandon gradient-based methods, and utilize reinforcement and reward-modulated learning rules Gradient-based doesn’t mean it uses labels. See VAEs, self-supervised learning, etc. that all use backprop. The proposed network is … the first locally-connected SNN with a hidden layer That’s not true. See [1] and references therein. Various problems: Eq. 
3 has extra underscores. It has to be dg/dt. P_ij^+- in Eqs. 7-8 only need one index, j for Eq. 7 and i for Eq. 8. Eq. 12 is confusing. Where does the reward come from at each trial? Is one of the r_i taken from Eq. 11? Explaining the network model in Sec. 4.2 with equations would greatly improve clarity. [1] https://www.sciencedirect.com/science/article/pii/S0893608019301741 [2] https://www.frontiersin.org/articles/10.3389/fnins.2018.00608/full [3] https://proceedings.neurips.cc/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
8. Eq. 12 is confusing. Where does the reward come from at each trial? Is one of the r_i taken from Eq. 11? Explaining the network model in Sec. 4.2 with equations would greatly improve clarity. [1] https://www.sciencedirect.com/science/article/pii/S0893608019301741 [2] https://www.frontiersin.org/articles/10.3389/fnins.2018.00608/full [3] https://proceedings.neurips.cc/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
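For reference, here is a minimal pair-based STDP sketch of the kind of pre/post trace update that the P^+/P^- variables in Eqs. 7-8 usually denote; the time constants, learning rates, and trace convention are generic textbook assumptions, not the paper's exact rule:

```python
import numpy as np

# Generic pair-based STDP sketch: decaying pre- and postsynaptic traces drive
# potentiation on post spikes and depression on pre spikes. Time constants,
# learning rates, and the all-to-all trace convention are textbook assumptions
# here, not the paper's exact rule.
def run_stdp(pre_spikes, post_spikes, dt=1.0, tau=20.0,
             a_plus=0.01, a_minus=0.012, w=0.5):
    pre_trace, post_trace = 0.0, 0.0
    for pre, post in zip(pre_spikes, post_spikes):
        pre_trace += -dt / tau * pre_trace + pre      # decaying presynaptic trace
        post_trace += -dt / tau * post_trace + post   # decaying postsynaptic trace
        w += a_plus * pre_trace * post                # potentiate on a post spike
        w -= a_minus * post_trace * pre               # depress on a pre spike
    return w

pre = np.array([1, 0, 0, 1, 0, 0, 0, 0])
post = np.array([0, 0, 1, 0, 0, 1, 0, 0])             # post follows pre -> LTP dominates
print(run_stdp(pre, post))
```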
NIPS_2022_2635
NIPS_2022
Weakness: The writing of this paper is roughly good but could be further improved. For example, there are a few typos and mistakes in grammar: 1. Row 236 in Page 4, “…show its superiority.”: I think this sentence should be polished. 2. Row 495 in Supp. Page 15: “Hard” should be “hard”. 3. Row 757 in Supp. Page 29: “…training/validation/test” should be “…training/validation/test sets”. 4. Row 821 in Supp. Page 31: “Fig.7” should be “Fig.12”. Last but not least, each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow. The primary concerns are motivation, methodology soundness, and experiment persuasion. I believe this is a qualified paper with good novelty, clear theoretical guarantees, and convincing empirical results.
4. Row 821 in Supp. Page 31: “Fig.7” should be “Fig.12”. Last but not least, each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow. The primary concerns are motivation, methodology soundness, and experiment persuasion. I believe this is a qualified paper with good novelty, clear theoretical guarantees, and convincing empirical results.
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea. 2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies? Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please make the link to that literature explicit? Dowty, David. "Thematic proto-roles and argument selection." Language (1991): 547-619. 135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias? 141 "values" ==> "value"? 143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012. Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. "Models of Semantic Representation with Visual Attributes." ACL (1). 2013. 146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is a long-standing task. 152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them? 177 explain the terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help. Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level). I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract. 248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and how do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them? 326 How do you know whether the frame is under- or over-generating? Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions?
Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which? 336 "with... PMI": something missing (threshold?) 371 did you make these partitions randomly? 376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? (how do you choose which dimensions you will annotate for each frame?) Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph? More generally, at the beginning of section 4 you should give a higher-level description of how your model works and why it is a good idea. 420 "both classes of knowledge": antecedent missing. 421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce them earlier? In any case, I didn't understand their role. 461 "also"? 471 where do you get verb-level similarities from? Figure 3: I find the figure totally unintelligible. Maybe if the text were clearer it would be interpretable, but perhaps you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions. 598 define the term "message" and its role in the factor graph. 621 why do you need a "soft 1" instead of a hard 1? 647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline. 654 "more skimp seed knowledge": ? 659 here and in 681, problem with the table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger". 681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here? 781 "both tasks": antecedent missing The references should be checked for format, e.g., Grice and Sorower et al. for capitalization, and the VerbNet reference for bibliographic details.
248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
mhCNUP4Udw
ICLR_2025
1 The motivation for incorporating the vision modality into MPNNs for link prediction should be better clarified and discussed. Why is this design effective? Is there any theoretical evidence? Maybe a dedicated section for this discussion could be valuable. 2 The counterpart methods used for experimental comparison do not seem sufficiently state-of-the-art. The authors should compare against recent 2024 state-of-the-art methods. 3 Minor Issues: Ln 32 on Page 1, ‘Empiically’ should be ‘Empirically’
3 Minor Issues: Ln 32 on Page 1, ‘Empiically’ should be ‘Empirically’
NIPS_2022_1667
NIPS_2022
1. The proposed invariant learning module (Sec. 4.2) focuses on mask selection and raw-level features. The framework presented earlier (Lines 167-174, Sec. 4) does not seem limited to raw-level selection. There is also a discussion of representation learning in the appendix. I think the feature selection presented in Section 4.2 could be further improved by taking representation learning into consideration. 2. There are two interactive modules in the proposed RA2. Compared to previous active adaptation methods, which are designed around a specific metric, it introduces additional computation. How does its complexity compare with previous methods? 3. Illustration: The text in Figures 2 and 4 is too small. It should be adjusted to the same size as in Figures 1 and 3.
1. The proposed invariant learning module (Sec. 4.2) focuses on mask selection and raw-level features. The framework presented earlier (Lines 167-174, Sec. 4) does not seem limited to raw-level selection. There is also a discussion of representation learning in the appendix. I think the feature selection presented in Section 4.2 could be further improved by taking representation learning into consideration.
NIPS_2018_591
NIPS_2018
Weakness: - Some details are missing. For example, how the rewards are designed is not fully explained. - Some model settings are set arbitrarily and are not well tested. For example, what is the sensitivity of the model performance w.r.t. the number of layers used in the GCN for both the generator and the discriminator?
- Some details are missing. For example, how the rewards are designed is not fully explained.
ICLR_2023_4599
ICLR_2023
Lack of clarity. The paper lacks important information needed to reproduce the results: Overall, the paper lacks a clear high-level explanation of the proposed method. In particular, I think Fig. 2 is very hard to parse and fails to communicate the intuition or high-level idea of the proposed method. Section b) of Fig. 2 is quite convoluted and the text lacks details about how to parse the image. It is unclear to me how the features shown in Fig. 2 are extracted; what positional embedding is used; and what the justification is for the architecture used for the “Surface Extractor” in the figure. From the “Network Architecture” paragraph it seems like the proposed approach is just putting existing components together; this, in my opinion, decreases the novelty of the approach. What is the loss used in Eq. 7? I could not find any discussion of it. The estimation refinement process presented in Sec. 3.2 lacks details: The update step shown in Eq. 9 does not preserve the properties of a rotation matrix. It is unclear from Eq. 8 how the optimization problem ensures that the rotation estimates still belong to the SO(3) group. This is a crucial aspect since the paper states that it aims at refining a pose, and the formulation does not seem to be that solid. The method lacks robustness. This is because the formulation assumes that the pre-trained SDF function is perfect. However, this may not be the case and the estimates can be severely affected. The proposed method requires a pre-trained SDF function for every object. I think this is not scalable as it requires training several networks, increasing time and computational resources. Insufficient experiments: The paper mainly claims that the proposed method is faster than ICP in the abstract and Table 1. Unfortunately, I don’t see a more complete experiment backing this up besides Table 1. In principle, comparing the two methods is not fair because the proposed method uses GPUs and requires an SDF network for each object. Thus, the training time of each SDF network for each object is discounted in Table 1. The ablation study is quite limited. It is unclear what the optimal number of iterations is (or the value of n in Algo. 1). Second, there are other parameters that can affect the performance (e.g., L and L_max) of the proposed approach.
1). Second, there are other parameters that can affect the performance (e.g., L and L_max) of the proposed approach.
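To make the SO(3) concern in the review above concrete, here is a minimal sketch (not the paper's Eq. 8/9, whose exact form is not reproduced here) showing that a generic additive update leaves SO(3) and that an SVD-based projection restores a valid rotation:

```python
import numpy as np

def project_to_so3(M):
    """Project an arbitrary 3x3 matrix onto SO(3) via SVD (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # fix a possible reflection
    return U @ D @ Vt

rng = np.random.default_rng(0)
# A valid rotation (90 degrees about z), then a naive additive "refinement" step.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
R_updated = R + 0.1 * rng.standard_normal((3, 3))        # generic gradient-style update
print(np.allclose(R_updated @ R_updated.T, np.eye(3)))   # False: no longer a rotation in general

R_fixed = project_to_so3(R_updated)
print(np.allclose(R_fixed @ R_fixed.T, np.eye(3)),
      np.isclose(np.linalg.det(R_fixed), 1.0))           # True True: back on SO(3)
```

The reviewer's question is essentially whether the paper's Eq. 8/9 includes such a projection (or an equivalent parameterization, e.g. axis-angle or quaternions) so the refined pose stays a valid rotation.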
NIPS_2017_434
NIPS_2017
--- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ablated. How important is the added complexity? Will one IN do? 2. Section 4.2: To what extent should long-term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2. 3. The images used in this paper's samples have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated. Why is this particular dimension of difficulty interesting? 4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error? 5. Are the learned object state embeddings interpretable in any way before decoding? 6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions: * The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs). * How many different kinds of physical interaction can be in one simulation? * How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates? Preliminary Evaluation --- Clear accept. The only thing I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection.
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
ARR_2022_143_review
ARR_2022
1. [Double-edged point] It's an incremental improvement to the k-NN-based MT approach, with little novelty but a large engineering and execution effort, backed by good experimental design. This weakness is a bit nitpicky, especially since I personally believe execution (replicability) beats idea (novelty); but if no code release is produced after the revision process, then this weakness stands, given the next one. 2. Replicability of the method is not clear; there's no indication that the code will be released. - I might have missed the specifics in the paper; hopefully, I get CKMT = "Compact-network K-nearest-neighbor MT" correct. If it is, it would be good to have the abbreviation spelled out in its full form somewhere in the paper. Otherwise some clarification on the abbreviation would be good. Same for PCKMT = "Pruned CKMT", hope I get that right too.
1. [Double-edged point] It's an incremental improvement to the k-NN-based MT approach, with little novelty but a large engineering and execution effort, backed by good experimental design. This weakness is a bit nitpicky, especially since I personally believe execution (replicability) beats idea (novelty); but if no code release is produced after the revision process, then this weakness stands, given the next one.
xCFdAN5DY3
ICLR_2025
1. The paper falls short of establishing a compelling case for Prithvi WxC as a foundation model for weather or climate. The practical significance and advantages of this approach remain inadequately demonstrated: a.) While foundation models typically excel at zero-shot performance and data-efficient fine-tuning across diverse tasks, the evidence presented for Prithvi WxC's capabilities in these areas is not convincing. Baselines for the non-forecasting experiments are either very weak (interpolation-based downscaling) or non-existent (gravity wave experiments). Some highly relevant and simple baselines are: - How much worse(?) does Prithvi WxC perform on these tasks if you omit the pre-training stage (i.e. initialize with random weights instead of the frozen pre-trained ones, and train all parameters jointly from scratch on the tasks)? - How about completely removing the pre-trained transformer backbone (i.e. removing the Prithvi WxC block from Figures 12 & 13)? - For the latter, it would be also good to run an experiment where you replace the pre-trained Prithvi WxC backbone with some "lightweight" blocks (e.g. a (deeper) U-Net), trained in a task-specific way from scratch, to account for the huge difference in parameter counts if you completely remove Prithvi WxC. These ablations would immensely help in understanding how useful the pre-training stage is for these downstream applications (e.g. does using pre-trained Prithvi WxC improve performance over such simple baselines? Is it more data-efficient?). Besides, otherwise, it is hard to see evidence for the claim in the conclusion that *"Instead of building task-specific ML-models from scratch, these pretrained encoders can be used to develop more precise data-driven models of atmospheric processes"*. b.) No ablations are included. I understand that training such a huge model is expensive but having a few ablations would have been very appreciated (perhaps, with a smaller-scale version of the model). For example: - How crucial is it to predict climatology-normalized targets as opposed to normal per-variable means/stds? - What's the forecasting performance of Prithvi WxC after the first pre-training phase? - How important is local vs global masking? What about the masking rates? - What's the line of thought behind randomizing the distance between input timesteps? Can the model only use one input timestep? I presume this is possible by masking the corresponding snapshot by 100%, but no experiments with this setting are shown. c.) The weather forecasting results seem lukewarm, albeit it is hard to judge because the comparison is not apples-to-apples. - Prithvi WxC is trained and evaluated on Merra-2. The baselines are evaluated on ERA5. These reanalysis datasets have different spatial resolutions. The evaluation years seem to be different too (correct me if I'm wrong). It would help to fix this mismatch. For example, given the foundational nature of Prithvi WxC... why not fine-tune it on ERA5 directly? Showing that it can be competitive to these baselines in an apples-to-apples comparison would be a very strong result. - Based on the mismatched comparison, Prithvi WxC seems to be competitive on 6h to 12h forecasts but it's quite notable that its performance implodes compared to the baselines for longer lead times. It is very unclear why. I wouldn't necessarily expect this version of Prithvi WxC to be state-of-the-art, but the performance does seem underwhelming. 
Especially given that the authors did "several things" to tune these results (i.e. a second forecasting-specific pre-training stage and autoregressive rollout fine-tuning). - The hurricane evaluation includes hurricanes from 2017 to 2023. This seems to overlap with the training data period (up to 2019). - Either Figure 6 or its analysis in the main body of the text (lines 251-253) is wrong because I see all of the three models do best on exactly one of the three RMSE figures. - For the hurricane forecasting experiments, I would appreciate a comparison to the state-of-the-art models included in the weather forecasting experiments (e.g. GraphCast) which have been shown to be better than FourcastNet. d.) The downscaling problem setup is artificial. Downscaling coarsened versions of existing reanalysis/model outputs is not of much use in practice. A realistic and important downscaling application, as discussed in the Appendix, would be to downscale coarse-resolution model outputs to high-resolution outputs (either of a different model, observations, or the same model run at higher resolution). e.) The climate model parameterization experiments should be more carefully interpreted. - The model predicts outputs that are normalized by the 1980-2019 climatology. Unfortunately, decadal or centennial simulations of the future under a changing climate are inherently a non-stationary problem. It is highly unclear if Prithvi WxC would remain stable, let alone effective, under this highly relevant use case. This is particularly so as the in-the-loop (coupled to a running climate model) stability of ML-based climate model parameterizations is a well-known issue. - The selling point for ML-based emulators of climate model parametrizations is often their computational cheapness. Thus, the runtime of Prithvi WxC should be discussed. Given the large parameter count of Prithvi WxC it might be important to note its runtime as a limitation for these kinds of applications. - Line 461 claims that Prithvi WxC "outperforms" task-specific baselines but no baselines whatsoever are included in the manuscript for this experiment. - Are the inputs a global map? I am not familiar with gravity waves, but I believe that most physics parameterizations in climate models are modeled column-wise (i.e. across atmospheric height but ignoring lat/lon interactions). This is surely a simplification of these parameterizations, but it seems to indicate that they're highly local problems. What's the motivation for using global context then? - The end of the section should be worded more carefully, clearly stating the aforementioned limitations. f.) No scaling experiments are included. Thus, it is unclear how important its 2.3 billion parameter size is, how well the model scales, and how its size impacts performance on the downstream applications. Besides, vision and language models are usually released with multiple model sizes that cover different use cases (e.g. balancing inference speed with accuracy). It would be really useful to get these (and carefully compare them) for Prithvi WxC. 2. Related work is insufficiently discussed. Please include an explicit section discussing it, focusing on: - Carefully comparing similarities/differences to existing weather foundation models (e.g. architectures, pre-training objectives, downstream applications etc.). Besides, ClimaX is not properly discussed in the paper.
Given that it's also a transformer-based foundation model, validated on forecasting, downscaling, and climate emulation, it is very important to include it in the comparison. - Similarly, please discuss how exactly the masking technique in this paper relates to the ones proposed in Vandal et al. and McNally et al. - Carefully discuss how the architecture is derived from Hiera and/or MaxViT (and any other papers from which components were derived). 3. While the authors transparently discuss some issues/limitations with their experiments (e.g. the evaluation data mismatches), it would be nice to also include an explicit paragraph or section on this (and include the aforementioned things like the issues with the climate model parameterization experiments). Minor: - Can you properly discuss, and include a reference to, what a Swin-shift is? - Similarly, for the "pixel shuffle layers" - Line 39: Pangu -> Pangu-Weather - Line 48: Nowcasting should be lower-case - Equation 1: Consider reformulating this as an objective/loss function. - Also Eq. 1: What is $\hat{X}_t$? What is $\sigma_C$? - Line 93: $\sigma^2_C = \sigma^2_C(X_t - C_t)$ doesn't make sense to me. - Line 104: *"same 20 year period that we used for pretraining."* .... Do you mean the 40-year period? If not, which 20-year period from the 40-year training period did you use? - Line 157: Multiple symbols are undefined (e.g. $V_S$). - Line 169: It's not entirely clear what "alternates" means in this context. - Line 429: "baseline"... do you mean Prithvi WxC? - Line 507: "improved"... improved compared to what? - Figure 12: Do you mean 'downscale' on the right "upscale" block? - Sections D.2.3 and D.2.4 in the appendix are literal copies of the corresponding paragraphs on pages 8 and 9. Please remove.
- The selling point for ML-based emulators of climate model parametrizations is often their computational cheapness. Thus, the runtime of Prithvi WxC should be discussed. Given the large parameter count of Prithvi WxC it might be important to note its runtime as a limitation for these kinds of applications.
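For readers unfamiliar with the climatology-normalized targets questioned around Eq. 1 of the review above, here is a minimal sketch of the usual construction (an assumed form; the paper's exact definitions of C and sigma_C are precisely what the review asks to clarify):

```python
import numpy as np

def climatology_stats(history):
    """history: array [years, lat, lon] of one variable at a fixed time of year."""
    clim_mean = history.mean(axis=0)            # C: climatological mean per grid point
    clim_std = history.std(axis=0) + 1e-6       # sigma_C: climatological spread per grid point
    return clim_mean, clim_std

def normalize_target(x_t, clim_mean, clim_std):
    """Climatology-normalized anomaly that a model would be trained to predict."""
    return (x_t - clim_mean) / clim_std

def denormalize(anomaly, clim_mean, clim_std):
    return anomaly * clim_std + clim_mean

history = np.random.randn(40, 91, 180) * 5.0 + 280.0   # toy 40-year temperature record
C, sigma_C = climatology_stats(history)
x_t = history[-1]
print(np.allclose(denormalize(normalize_target(x_t, C, sigma_C), C, sigma_C), x_t))  # True
```

The review's stability concern follows directly from this construction: under a non-stationary future climate, anomalies drift away from the historical statistics the normalization was fit on.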
NIPS_2022_2182
NIPS_2022
Weakness: 1. The contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. 2. There is a gap between the proposed metric and method. Based on post-aggregation node similarity, they propose an aggregation similarity metric. However, the final 3-channel filterbank has nothing to do with the above metric. 3. The novelty of the idea is insufficient. In addition to the limitations pointed out above, both the new metric and the method are relatively straightforward. 4. The improvement in Table 4 does not seem statistically significant because of high variance. 5. There is a problem with the typesetting of the paper. In addition to the limitations mentioned in the paper, the intrinsic relationship between the proposed metric and method should be taken into consideration. No potential negative societal impact.
3. The novelty of the idea is insufficient. In addition to the limitations pointed out above, both the new metric and the method are relatively straightforward.
ICLR_2021_394
ICLR_2021
which lead me to recommend against acceptance. In no particular order: 1) The crucial "unseen is forbidden" hypothesis is vague and seems to be a bit of a strawman. 2) The framing of the paper seems to oversell the method in a way that makes the contribution less clear. 3) The writing is not very clear. 4) The experiments seem to be only proof-of-concept in scenarios where the method is designed to work. The method seems to incur an exponential cost, but this is not discussed. Elaborating: The authors claim that, because DNN behavior is undefined on unseen datapoints, the "unseen-is-forbidden learning hypothesis is currently preventing neural networks from assuming symmetric extrapolations without evidence." This claim is stated in various forms several times, but never made very precise, and it is crucial in motivating the authors' approach. Roughly, I take the authors to be claiming that (i) the correct way to "extrapolate" is to assume that: transformations that were not observed to change the target distribution should be assumed to NOT change the target distribution, (ii) DNNs will not extrapolate in this way by default, and must be explicitly designed to do so. These claims (or whatever the authors actually mean) need to be stated explicitly, and with appropriate modesty. After all, both (i) and (ii) seem contentious. The claim about an "economical data generating process" supports (i), but is itself somewhat vague and dubious, and should be discussed in the introduction as motivation for (i). The authors claim that their method can discover invariances without any data supporting them. And their abstract claims: "Any invariance to transformation groups is mandatory even without evidence, unless the learner deems it inconsistent with the training data." But in reality, the authors specify a small number of possible invariances which the method selects among (in a soft way). And the data is used to guide this selection process. So in reality, the designer is in charge of specifying a (restricted) set of (possible) invariances. So like previous works on enforcing invariances, it places a burden on the designer to identify plausible invariances. Overall, I found the framing in the work to be "the model discovers invariances by itself without any data!" whereas a more neutral version would be "instead of enforcing a set of invariances, we propose a set of possible invariances, and assume that any input transformations that are not observed to affect the label should be enforced" Besides the above issues (vagueness of "unseen-is-forbidden" and related discussion (1), overselling (2)), there were several other issues of clarity. The paper is not poorly written overall, but is much harder to read and understand than it needs to be. Some specific issues are: The results in Section 4 are presented with insufficient context or intuition. Theorems are stated without any proof intuition and should reference proofs in the appendix. The intuition for the penalty arrived at (eqn13) is unclear. The flow is sometimes unclear. For instance, "Learning CG-invariant representations without knowledge of G_I. " should be a subsection, not a (latex) paragraph, and should explain what the point of the subsection is before diving in. The authors seem to be using (latex) paragraphs (i.e. beginning with bolded phrases) as subsections and paragraphs beginning with italicized phrases as (latex) paragraphs. I suspect the paper was edited to fit into 8 pages without removing sufficient content.
This impedes the flow and sacrifices clarity. I think a graph showing the data generating process would be much clearer than the current explanations (e.g. eqn4/5) - it is unclear what equation 7 is saying... the text above makes it seem like a definition of a goal, but the following paragraph treats it as an assertion that the goal is possible to achieve. ...Overall, I recommend stripping out some of the mathematical details and using more words and diagrams in the main text to describe the underlying issues/motivations/methods. The overall story should be made clearer (e.g. by addressing (1) and (2)), and more space should be devoted to linking each part of the paper into the overall story. The experiments are synthetic tasks where the correct invariance group is included in the set of invariances being searched over. I don't think that showing that this method can bring some benefits on a real task is an absolute requirement, given the novelty of the approach. But without more meaningful results, the paper is held to a much higher standard. Even for synthetic experiments, these are rather weak; for instance, it would be interesting to see whether/how the method degrades when we consider much larger sets of possible invariances. It seems like the method might require including a set of parameters for each of the possible 2^m invariances. Is this in fact the case? If not, why not? If so, it should be discussed as a limitation. Suggestions/Questions: In Section 4 paragraph 1, are G-invariance and G_I-invariance used interchangeably? This was confusing. Say what I and D are as soon as they are introduced (top of page 4). Typo: "a somewhat a" Why a "nonpolynomial" activation function? The definition of "almost surely" at the bottom of page 4 is not correct (it is possible to sample probability 0 events), and also it should say that samples of Gamma(X^(obs)/(cf)) (not X^(obs)/(cf)) are equal with probability 1 (these are not the same statement!). "level of invariance" and "non-extrapolated validation accuracy", and several other phrases are not defined and should probably be replaced by something clearer and more explicit. It seems like you might need to assume that different x^(hid) can't be used to generate the same x^(obs) or x^(cf). If so, this should be explicit.
2) The framing of the paper seems to oversell the method in a way that makes the contribution less clear.
NIPS_2020_309
NIPS_2020
1. The motivation is conceptually described, and an example could help readers understand how the hierarchical structure benefits the document representation. 2. A standalone literature review section would be better. 3. The model description could be improved, e.g., the generative process is described in detail, but presenting it in separate steps would be better for understanding; there are too many symbols, and a notation table would help. 4. The evaluation only covers text classification, and more tasks should be included. Besides, the paper does not provide enough details to reproduce the results (the demo code is not enough; some suggestions/guidance about the model settings for different types of documents would be helpful).
3. The model description could be improved, e.g., the generative process is described in detail, but presenting it in separate steps would be better for understanding; there are too many symbols, and a notation table would help.
ARR_2022_18_review
ARR_2022
1. The exposition becomes very dense at times, leading to reduced clarity of explanation. This could be improved. 2. No details on the multi-task learning mentioned in Section 4.4 are available. 3. When generating paraphrases for the training data, it is unclear how different the paraphrases are from the original sentences. This crucially impacts the subsequent steps because the model will greatly rely on the quality of these paraphrases. If the difference between the paraphrases and the original sentence is not large enough, the quality of the final training data will be low and as a result of the discarding process very few pairs will be added into the new training data. 4. Again, using style vector differences for control also relies heavily on the style diversity of paraphrases. If the style of the paraphrases is similar to or the same as the original sentences, it will be very difficult for the model to learn a good style extractor and the whole model will default to a paraphrase model. Examples of the generated paraphrases in the training data could have been presented in addition to some intermediate evaluations to confirm the quality of the intermediate stages. 5. The method of addressing the issue of the lack of translation data doesn't contribute to the technical novelty and should not be considered as a modeling fix. 6. Again, a quantitative evaluation of the degree of word overlap between the input and output would strengthen the results showing the extent of the copying issue. 7. The combination of the individual metrics into one score (AGG; Section 5.5) seems to conflate different scales of the different components. This can result in differences that are not comparable. Thus, it is unclear how the differences in AGG compare across systems. For example, comparing two instances, suppose instance 1 has A = 1, S = 0.8, and L = 1, and instance 2 has A = 0.9, S = 0.7, and L = 1. Clearly the instances seem alike with small changes in A and S. However, taking their composites, instance 1 has AGG = 1 and instance 2 has AGG = 0.63, exaggerating the differences. Seen in this light, the results in Table 1 do not convey anything significant. 8. Table 4 shows human evaluation on code-mixing addition and explains that DIFFUR-MLT+BT performs best (AGG), giving high style accuracy (ACC) without loss in similarity (SIM). However, we do see that SIM values are very low for DIFFUR-ML, BT. What are we missing here? 9. In Figure 4, the analysis on formality transfer seems limited without showing how it is applicable to the other languages studied. Even in Hindi, to what extent is the degree of formality and the use of Persian/Sanskrit forms maintained for Hindi? What does it look like for the other languages? See comments/questions in the summary of weaknesses for ways to improve the paper. A few typos to be corrected: Line 491 "help improve" Line 495: "performance of across" Line 496: "model fail ...since they" Figure 1, example for \lambda = 1.5 nA -> na (short vowel)
3. When generating paraphrases for the training data, it is unclear how different the paraphrases are from the original sentences. This crucially impacts the subsequent steps because the model will greatly rely on the quality of these paraphrases. If the difference between the paraphrases and the original sentence is not large enough, the quality of the final training data will be low and as a result of the discarding process very few pairs will be added into the new training data.
ARR_2022_252_review
ARR_2022
- The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than with other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize this approach to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help in highlighting the human-identified rationales better.
- Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help in highlighting the human-identified rationales better.
jhdVt7rC8k
EMNLP_2023
I don’t find significant flaws in this paper. There are some minor suggestions: 1. The VideoQA benchmarks in the paper are all choice-based. It would be better to include some generation-based VideoQA datasets like ActivityNet-QA to increase the diversity. 2. I believe Flipped-QA is a general framework for various generative VideoQA models. However, the authors only apply this framework to LLM-based models. It would be better to further verify its effectiveness and universality on non-LLM-based models like HiTeA and InternVideo.
2. I believe Flipped-QA is a general framework for various generative VideoQA models. However, the authors only apply this framework to LLM-based models. It would be better to further verify its effectiveness and universality on non-LLM-based models like HiTeA and InternVideo.
NIPS_2022_1440
NIPS_2022
• The writing could be improved. It took me quite a lot of effort to go back and forth to understand the main idea and the theoretical analysis of the paper. • Using neural networks as surrogate models will certainly help to improve the model's accuracy; however, I wonder how the hyper-parameters of these NN surrogate models (e.g., the number of layers, the width of each layer, the learning rate) could be efficiently set. The experimental results show that the performance of BO depends heavily on the hyperparameters of the NN surrogate models. Finding an optimal set of hyperparameters seems to create another AutoML problem to be solved. • I would also like to understand more about the time cost of the proposed algorithms, and how it compares to the time cost of existing baselines. Training an NN surrogate model for every single input query in the batch probably takes a lot of time.
• The writing could be improved. It took me quite a lot of effort to go back and forth to understand the main idea and the theoretical analysis of the paper.
4vPVBh3fhz
ICLR_2024
1. Theorem 3.2 lacks a detailed proof procedure, although the authors provide an interesting discussion of the confusion matrix in Section 3.3. Please let me know where the proof is if I missed it. 2. All experiments are conducted on small-scale datasets where the number of classes is small, but it is always desirable to include large-scale experiments. Can you share any experimental results (e.g., on CIFAR100) compared with other methods? 3. The proposed method primarily builds upon a combination of existing methods (i.e., Clopper-Pearson intervals [1], Gaussian elimination [2]) and doesn't present significant theoretical novelty. I am willing to raise my score if the authors can adequately address these concerns. [1] Charles J. Clopper and Egon S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404–413, 1934. [2] Gene H. Golub and Charles F. Van Loan. Matrix computations. JHU Press, 2013.
3. The proposed method primarily builds upon a combination of existing methods (i.e., Clopper-Pearson intervals [1], Gaussian elimination [2]) and doesn't present significant theoretical novelty. I am willing to raise my score if the authors can adequately address these concerns. [1] Charles J. Clopper and Egon S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404–413, 1934. [2] Gene H. Golub and Charles F. Van Loan. Matrix computations. JHU Press, 2013.
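For context on the first ingredient named in point 3, here is a short sketch of the standard Clopper-Pearson (exact binomial) confidence interval [1] via Beta quantiles; this only illustrates the classical tool, not how the paper combines it with Gaussian elimination:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion: k successes in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

print(clopper_pearson(k=37, n=50))  # roughly (0.60, 0.85) for an observed accuracy of 0.74
```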
ICLR_2023_516
ICLR_2023
Weakness: Although the motivation of constructing a pretraining model for the UI modeling field is innovative, the overall pretraining pipeline may lack sufficient innovation, and some aspects are similar to Flamingo, e.g., the vision-language model architecture, the evaluation method on multi-task and few-shot learning, and the computational pipeline of the Region Summarizer, which is similar to the Perceiver Resampler of Flamingo. The description of the pretraining dataset is too brief. It is recommended to add more statistical analysis and construction details, e.g., are the text and content descriptions human-annotated or not? For readability, it is recommended to add an introduction to the test datasets of the downstream tasks. Perhaps the authors can add these in the supplementary material rather than suggesting the readers read the previous literature. The ablation study is limited, and the authors could consider the following ablations to make the paper clearer and more complete. (1) Region Summarizer: ① Why choose the bbox coordinates as q rather than the region feature of the bbox? ② What is the result when kv is just the vit_outputs rather than the concatenation of vit_outputs and the bbox? (2) Pretraining: ① Can the text input be the concatenation of the four text elements of an object? ② Is the ViT frozen as in Flamingo or not? If not, do the many example screenshots create a GPU-memory burden? The 2.69M pre-training examples, as extra knowledge, may undermine the fairness of the comparison with the baseline methods. Maybe the authors can try to further fine-tune the comparison methods combined with the single-modal encoders of the pretraining model? This is just a suggestion out of curiosity and does not influence the final judgment. In the Discussion section, the "early exploration" hypotheses should be illustrated with some experimental data. The paper of the comparison method "Widget caption" is cited repeatedly. Maybe the authors can consider making the dataset and code public to promote the development of this field. This is just advice and does not influence the final judgment.
① Can the text input be the concatenation of the four text elements of an object?
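To make the requested Region Summarizer ablations concrete, here is a rough cross-attention sketch of the design being questioned (an assumption based on the review's description, not the authors' implementation): the query is built from bbox coordinates, and the keys/values are the ViT outputs, optionally concatenated with a bbox embedding (ablation ②).

```python
import torch
import torch.nn as nn

class RegionSummarizerSketch(nn.Module):
    """Toy cross-attention summarizer: a bbox-coordinate query attends over ViT patch features."""
    def __init__(self, dim=256, heads=4, use_bbox_in_kv=True):
        super().__init__()
        self.use_bbox_in_kv = use_bbox_in_kv
        self.bbox_to_query = nn.Linear(4, dim)       # (x1, y1, x2, y2) -> one query token
        self.bbox_to_kv = nn.Linear(4, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vit_outputs, bbox):
        # vit_outputs: [B, N, dim] patch features; bbox: [B, 4] normalized coordinates
        q = self.bbox_to_query(bbox).unsqueeze(1)    # [B, 1, dim]
        kv = vit_outputs
        if self.use_bbox_in_kv:                      # ablation ②: drop this concatenation
            kv = torch.cat([vit_outputs, self.bbox_to_kv(bbox).unsqueeze(1)], dim=1)
        region_summary, _ = self.attn(q, kv, kv)
        return region_summary.squeeze(1)             # [B, dim] summary of the region

model = RegionSummarizerSketch()
out = model(torch.randn(2, 196, 256), torch.rand(2, 4))
print(out.shape)  # torch.Size([2, 256])
```

Ablation ① in the review would replace `bbox_to_query(bbox)` with a pooled region feature cropped from `vit_outputs`; ablation ② corresponds to setting `use_bbox_in_kv=False`.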
NIPS_2021_1251
NIPS_2021
- Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision-maker does care about the noise; rather, the objective function of interest is the stochastic noisy function. It would be good to make this distinction clearer upfront. - The RF experiment is not super compelling. It is not nearly as interesting as the FEL problem, and the risk aversion does not make a significant difference in average performance. Overall the empirical evaluation is fairly limited. - It is unclear why the mean-variance model is the best metric to use for evaluating performance - Why not also evaluate performance in terms of the VaR or CVaR? - The MV objective is nice for the proposed UCB-style algorithm and theoretical work, but for evaluation VaR and CVaR are also important considerations. Writing: - Very high-quality and easy-to-follow writing - Grammar: - L164: “that that” - Figure 5 caption: “Simple regret fat the reprted” Questions: - Figure 2: “RAHBO not only leads to strong results in terms of MV, but also in terms of mean objective”? Why is it better than GP-UCB on this metric? Is this an artifact of the specific toy problem? Limitations are discussed and potential future directions are interesting. “We are not aware of any societal impacts of our work” – this (as with any optimization algorithm) could be used for nefarious endeavors and could be discussed.
- The MV objective is nice for the proposed UCB-style algorithm and theoretical work, but for evaluation VaR and CVaR are also important considerations. Writing:
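To clarify the evaluation metrics the review contrasts, here is a minimal sketch of the mean-variance score alongside empirical VaR and CVaR computed from repeated noisy evaluations at a single input (standard sample-based definitions; the risk-aversion weight rho and the level alpha are illustrative, not the paper's settings):

```python
import numpy as np

def mean_variance(samples, rho=1.0):
    """Mean-variance objective: penalize the mean reward by the observed variance."""
    return samples.mean() - rho * samples.var()

def value_at_risk(samples, alpha=0.1):
    """alpha-VaR of the reward: the alpha-quantile (the lower tail is the risky side)."""
    return np.quantile(samples, alpha)

def conditional_value_at_risk(samples, alpha=0.1):
    """alpha-CVaR: expected reward given that it falls below the alpha-VaR."""
    var = value_at_risk(samples, alpha)
    return samples[samples <= var].mean()

rng = np.random.default_rng(0)
rewards = rng.normal(loc=1.0, scale=0.5, size=10_000)   # repeated noisy evaluations at one input x
print(mean_variance(rewards), value_at_risk(rewards), conditional_value_at_risk(rewards))
```

The review's point is that reporting only the MV score privileges the objective the algorithm optimizes; the VaR/CVaR quantities above capture tail risk directly and can rank methods differently.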
ICLR_2023_1511
ICLR_2023
Weakness: - The paper could do better to first motivate the "Why" (why we should care about what we are about to be presented). - Similarly, it is lacking a "So What" on the bounds provided, which are often just left there as final statements, without an analysis that explains 1) whether they are (likely to be) tight and 2) what this implies for practitioners. - Although well-written, the paper felt quite dense, even compared to other pure-math ML papers. More examples such as Figure 2 would help. - As far as I understood, the assumption on the non-linearities rules out the sigmoid and the softmax, which are popular non-linearities. It would be good to acknowledge this directly by name.
- The paper could do better to first motivate the "Why" (why we should care about what we are about to be presented).
ARR_2022_130_review
ARR_2022
1. Section 4 (Models) is missing some details about the proposed model. For example, what is the exact inference procedure over $O_p$? (Personally, I prefer equations over textual descriptions.) 2. Evaluation metrics: This subsection is difficult to read and not rigorous. Comments & questions: - Abstract: The sentence in lines 12-17 ("After multi-span re-annotation, MultiSpanQA consists of over a total of 6,0000 multi-span questions in the basic version, and over 19,000 examples with unanswerable questions, and questions with single-, and multi-span answers in the expanded version") is cumbersome and can be made clearer. - Do you perform re-annotation for the expanded dataset as well? The text currently says "..and applying the same preprocessing" (line 355) - this point could be made clearer. - What is the inference procedure for single-span (v1)? Is the prediction a long span like in training? Typos: - Line 47: "constitinga" -> "consisting" - Line 216: "classifies" -> "classify"
- Abstract: The sentence in lines 12-17 ("After multi-span re-annotation, MultiSpanQA consists of over a total of 6,0000 multi-span questions in the basic version, and over 19,000 examples with unanswerable questions, and questions with single-, and multi-span answers in the expanded version") is cumbersome and can be made clearer.
hkWHdI8ss5
ICLR_2024
1. Spending 1 hour to optimize a coarse mesh from a domain-specific model for furniture is not necessary. For domain-specific single-image 3D reconstruction, there are many existing fast and robust models—for example, Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction. 2. No technical novelty. It is a simple combination of two off-the-shelf models: Total3DUnderstanding and Magic3D-style DMTet finetuning. 3. The domain-specific model is trained on Pix3D. And the experiments are conducted on Pix3D. Such comparisons to those zero-shot single-image 3D reconstruction models are even more unfair. 4. No two-stage ablation studies. The paper does not ablate either stage to show the significance of design choices.
3. The domain-specific model is trained on Pix3D. And the experiments are conducted on Pix3D. Such comparisons to those zero-shot single-image 3D reconstruction models are even more unfair.
ICLR_2021_2824
ICLR_2021
Weakness: While the authors claimed that they challenged the hypothesis by Kang et al. that the learning of feature representation and classifier should be completely decoupled in long-tail classification, from my perspective this paper is a natural extension of Kang et al. Similar to Kang et al., this paper further demonstrates the importance of progressive learning to address long-tailed distributions, which first mainly focuses on head classes (first stage) and then on tail classes (second stage). Different from Kang et al., a) this paper uses a more aggressive sampling strategy at the second stage, by replacing the class-balanced sampler with a class-reversed sampler, and b) this paper shows that the features can be further fine-tuned at the second stage. This paper mainly reported the overall performance. It would be more convincing to provide the per-class performance to show the improvement on the tail classes. This paper mainly focused on the comparison with Kang et al. Since Kang et al. extensively conducted experiments on the long-tailed ImageNet and Places datasets, how does the proposed approach perform on these two datasets? While the time to switch from instance-balanced sampling to class-reversed sampling is a hyper-parameter, as the authors mentioned, I was wondering if there is any principle to guide this design choice. Also, is this dataset/distribution dependent or sensitive? Consider an even smoother transition, from instance-balanced sampling to class-balanced sampling and finally to class-reversed sampling. Will this combination outperform the strategy in the paper? In the discussion section, the authors claimed the near-optimality of their method and that little room is left for improving the successive resampling strategy. This seems like a very strong argument. I was wondering if there is any formal theoretical guarantee on this. Post Rebuttal: I do appreciate the efforts and additional experiments and theoretical analysis that the authors made in the rebuttal. While this paper proposed an interesting approach to long-tail recognition, some connections, distinctions, and comparisons with related work and thorough experimental analysis were missing in the original manuscript, as mentioned by other reviewers as well. Some of these concerns were addressed in the rebuttal, but not fully clarified. For example, the new comparison on the more challenging ImageNet-LT dataset (Table 3) shows that the proposed approach does not outperform or is even worse than Decouple [Kang et al.] for the overall performance. Also, Table 5 shows the trade-off between head and tail categories. But a similar trade-off has not been fully investigated for the baselines; for example, by changing the hyper-parameters in Decouple [Kang et al.], Decouple [Kang et al.] could also significantly improve the tail accuracy while slightly decreasing the head accuracy. This makes the paper not ready for this ICLR. I encourage the authors to continue this line of work for a future submission.
3) shows that the proposed approach does not outperform or is even worse than Decouple [Kang et al.] for the overall performance. Also, Table 5 shows the trade-off between head and tail categories. But a similar trade-off has not been fully investigated for the baselines; for example, by changing the hyper-parameters in Decouple [Kang et al.], Decouple [Kang et al.] could also significantly improve the tail accuracy while slightly decreasing the head accuracy. This makes the paper not ready for this ICLR. I encourage the authors to continue this line of work for a future submission.
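To make the three sampling schemes discussed in the review above concrete, here is a minimal sketch of per-class sampling weights (standard formulations; the paper's exact schedule for switching between them is the hyper-parameter the review asks about):

```python
import numpy as np

def sampling_weights(class_counts, mode="instance"):
    """Per-class probability of drawing a sample under different sampling schemes."""
    n = np.asarray(class_counts, dtype=float)
    if mode == "instance":        # each instance equally likely -> head classes dominate
        w = n
    elif mode == "class":         # each class equally likely
        w = np.ones_like(n)
    elif mode == "reversed":      # inversely proportional to class frequency -> tail emphasized
        w = 1.0 / n
    else:
        raise ValueError(mode)
    return w / w.sum()

counts = [5000, 500, 50]          # toy long-tailed distribution: head, medium, tail class
for mode in ("instance", "class", "reversed"):
    print(mode, np.round(sampling_weights(counts, mode), 3))
# instance [0.901 0.09  0.009], class [0.333 0.333 0.333], reversed [0.009 0.09  0.901]
```

The "smoother transition" suggested in the review amounts to interpolating between these weight vectors over training rather than switching from the first to the third directly.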
ICLR_2021_2926
ICLR_2021
and suggestions: 1. It is not clear to me if the warm-up phase makes a difference in performance on larger, more realistic datasets like Clothing1M. A more careful analysis of how the warm-up phase affects the sample separation in SSL versus a fully supervised setting would have been useful, including experiments on CIFAR-10. 2. Additional experiments on realistic noisy datasets like WebVision would have provided more support for C2D. 3. The paper is not clearly written. Important components like MixMatch are not explained. For instance, the Method section contains discussion of various design decisions, rather than a step-by-step description of the method itself. An algorithm figure detailing the C2D method would be useful for exposition. In sum, the paper definitely has a good idea and interesting results, but it is not well-structured, which makes it harder to parse the method and results. Questions and suggestions: 1. Do you have any additional insights into the modest performance gains on Clothing1M? 2. How does the algorithm perform on other real-world datasets like WebVision, evaluated by DivideMix?
2. Additional experiments on realistic noisy datasets like WebVision would have provided more support for C2D.
NIPS_2019_854
NIPS_2019
weakness I found in the paper is that the experimental results for Atari games are not significant enough. Here are my questions: - In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not included? Also, why is $\sum_a N(s, a)$ (but not $N(s, a)$) used for $\lambda_s$ in Equation (7)? - In Figure 3, when $d=5$, MENTS performs slightly worse than UCT at the beginning (for about 20 simulation steps) and then suddenly performs much better than UCT. Any hypothesis about this? It makes me wonder whether the algorithm scales with larger tree depth $d$. - In Table 1, what are the standard errors? Is it just one run for each algorithm? There is no learning curve showing whether each algorithm converges. What about the final performance? It’s hard for me to justify the significance of the results without these details. - In Appendix A (experimental details), there are sentences like ``The exploration parameters for both algorithms are tuned from {}.’’ What are the exact values of all the hyperparameters used for generating the figures and tables? What hyperparameters is the algorithm sensitive to? Please make it more clear to help researchers replicate the results. To summarize based on the four review criteria: - Originality: To the best of my knowledge, the algorithm presented is original: it builds on previous work (a combination of MCTS and maximum entropy policy optimization), but comes up with a new idea for selecting actions in the tree based on the softmax value estimate. - Quality: The contribution is technically sound. The proposed method is shown to achieve an exponential convergence rate to the optimal solution, which is much faster than the polynomial convergence rate of UCT. It is also evaluated on two test domains with some good results. The experimental results for Atari games are not significant enough though. - Clarity: The paper is clear and well-written. - Significance: I think the paper is likely to be useful to those working on developing more sample efficient online planning algorithms. UPDATE: Thanks for the author's response! It addresses some of my concerns about the significance of the results. But it is still not strong enough to cause me to increase my score as it is already relatively high.
- In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not included? Also, why is $\sum_a N(s, a)$ (but not $N(s, a)$) used for $\lambda_s$ in Equation (7)?
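The review's questions about E2W are easier to follow with a rough sketch of the two ingredients it refers to: the soft (log-sum-exp) value backup and an action distribution that mixes a softmax over soft values with a uniform exploration term whose weight decays with the visit count $\sum_a N(s, a)$. The exact $\lambda$ schedule and constants below are illustrative assumptions, not the paper's Equation (7):

```python
import numpy as np

def softmax_value(q, tau=1.0):
    """Soft (log-sum-exp) state value used in maximum-entropy backups."""
    return tau * np.log(np.sum(np.exp(q / tau)))

def e2w_policy(q, visit_counts, tau=1.0, eps=0.1):
    """Assumed E2W-style action distribution: softmax of soft values mixed with uniform exploration."""
    n_actions = len(q)
    total_visits = np.sum(visit_counts)
    lam = eps * n_actions / np.log(total_visits + 2.0)   # decays as the node accumulates visits
    lam = min(lam, 1.0)
    softmax_probs = np.exp((q - softmax_value(q, tau)) / tau)
    return (1.0 - lam) * softmax_probs + lam / n_actions

q_values = np.array([1.0, 0.5, -0.2])
visits = np.array([10, 4, 1])
probs = e2w_policy(q_values, visits)
print(probs, probs.sum())  # a valid distribution concentrating on the higher soft values
```

Under this reading, using the node-level total $\sum_a N(s, a)$ makes the exploration weight a property of the state rather than of any single action, which is one plausible answer to the reviewer's question about Equation (7); whether that matches the paper's intent would need the authors' confirmation.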
tj4a1JY03u
ICLR_2024
1. The conclusions are a bit obvious - that higher resolution inputs and more specialized training data improve LLaVA's OCR performance. 2. The most important contribution of the paper is the collected dataset. It succeeds in showing the data improves LLaVA's OCR capabilities, but does not demonstrate it is superior to other visual instruction datasets. For example, mPLUG-Owl has comparable OCR performance to LLaVAR under the same resolution in Table 2. This raises the question of whether OCR-specific data is needed, or if the scale of data in the paper is insufficient. 3. The evaluation is limited, mostly relying on 4 OCR QA datasets. As the authors admit in Fig 4(5), this evaluation may be unreliable. More scenarios like the LLaVA benchmark would be expected, especially in ablation studies.
3. The evaluation is limited, mostly relying on 4 OCR QA datasets. As the authors admit in Fig 4(5), this evaluation may be unreliable. More scenarios like the LLaVA benchmark would be expected, especially in ablation studies.
NIPS_2016_386
NIPS_2016
, however. First of all, there is a lot of sloppy writing, typos, and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped-for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is the (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
* L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0?
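The review's request to pin down the link function for the OMD step can be made concrete with a small sketch of one standard member of the OMD family (negative-entropy mirror map on the simplex, i.e. exponentiated gradient); this is only an example of the kind of specification being asked for, not the algorithm in the paper:

```python
import numpy as np

def omd_entropic_step(w, grad, eta=0.1):
    """One online mirror descent step on the probability simplex with the negative-entropy link
    (multiplicative / exponentiated-gradient update)."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

w = np.ones(4) / 4                            # start at the uniform distribution
for t in range(100):
    grad = np.array([1.0, 0.2, 0.5, 0.8])     # toy (constant) loss gradient
    w = omd_entropic_step(w, grad)
print(np.round(w, 3))   # mass concentrates on the coordinate with the smallest loss gradient
```

A different link function (e.g. the squared Euclidean norm, giving projected gradient descent) changes both the update and the constants in the regret bound, which is why the reviewer asks for the choice and the exact theorem in [32] to be stated.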
NIPS_2019_1276
NIPS_2019
* Really only one real takeaway/useful experiment from the paper, which is that disentangling is sample-efficient for this strange set of upstream tasks. * I have a lot of problems with these abstract visual reasoning tasks. They seem a bit unintuitive and overly difficult (I have a lot of trouble solving them). Having multiple rows and having multiple and different factors changing between each frame is very confusing, and it seems like it would be hard to interpret how much these models actually learn the pattern or just exploit some artifacts. Do we have any proof that simpler visual reasoning tasks wouldn't do and that this formulation in the paper is the way to go? * It seems weird that the authors didn't just consider a task with one row and one panel missing and the same one factor changing between panels. Is there any empirical evidence that this is too easy or uninformative? Why not a row where there are a few panels of the ellipse getting bigger, and then for the missing frame the model chooses between a smaller ellipse, a same-size ellipse, a *bigger ellipse*, a bigger ellipse but at the wrong angle, a bigger ellipse but translated, a bigger ellipse but a different color, etc., or at least some progression of difficulty starting from the easiest and working up to the tasks in the paper?
* I have a lot of problems with these abstract visual reasoning tasks. They seem a bit unintuitive and overly difficult (I have a lot of trouble solving them). Having multiple rows and having multiple and different factors changing between each frame is very confusing and it seems like it would be hard to interpret how much these models actually learn the pattern or just exploit some artifacts. Do we have any proof that simpler visual reasoning tasks wouldn’t do and that this formulation in the paper is the way to go?
MMrqu8SD6y
EMNLP_2023
- Weak supervision could be better evaluated - e.g., how realistic are the evaluated tweets? The prompt requires "all of the structured elements for perspectives to be present in the generated tweets", which doesn't seem the most realistic. The generation of authors is also not realistic ("[author] embeddings are initialized by averaging the corresponding artificial tweets"). - The authors also claim that weak supervision achieves "comparable" performance - this feels like a bit of an overstatement. For author stance, performance drops 5 points from direct to weak (same gap as from their model to baseline); also substantial drops for ambiguous (30 points) and entity mapping (8 points).
- Weak supervision could be better evaluated - e.g., how realistic are the evaluated tweets? The prompt requires "all of the structured elements for perspectives to be present in the generated tweets", which doesn't seem the most realistic. The generation of authors is also not realistic ("[author] embeddings are initialized by averaging the corresponding artificial tweets").
KUpUO7aSSg
ICLR_2025
1. The experiment appears to be somewhat limited. While the proposed method is tailored for agricultural settings, I would recommend that the authors transfer it to natural environments, such as cityscapes, to compare its effectiveness. 2. The method proposed in this paper does not seem to specifically address issues present in agricultural settings. 3. There is a lack of essential visualization of intermediate processes and comparisons.
3. There is a lack of essential visualization of intermediate processes and comparisons.
9TpgFnRJ1y
ICLR_2025
1. Similar to other generator-based explanation frameworks, the transparency of the explanation process itself is limited due to the black-box nature of the neural-network-implemented generator. 2. The flexibility of the proposed method is another concern, as the delivered explanations appear to be model-specific. 3. The expected counterfactual violates $\mathcal{P}_2$ stated in Definition 1. 4. The benefit of the rotation for accelerating expectation computation is unclear. Questions 1 and 2 detail the concerns mentioned in points 3 and 4 respectively.
3. The expected counterfactual violates $\mathcal{P}_2$ stated in Definition 1.
NIPS_2020_1776
NIPS_2020
1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary points with almost dimension-free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally added noise is able to escape saddle points in polynomial time. In addition, the iteration complexity is no longer dimension-free in Theorem 3 (there is a $d$ dependency instead of $\log d$). 2. There is no empirical study in this paper. The authors should have constructed some synthetic examples where we know the exact location of saddle points and tried to verify the theoretical claims of the proposed algorithm. Furthermore, PGD [15] should also be compared. I know this is a theory paper, but given the presence of [15], the theoretical contribution is not strong enough from my perspective.
1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary points with almost dimension-free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally added noise is able to escape saddle points in polynomial time. In addition, the iteration complexity is no longer dimension-free in Theorem 3 (there is a $d$ dependency instead of $\log d$).
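As context for the review above, here is a minimal single-machine sketch of the perturbed-gradient-descent idea from [15] that the review contrasts with the decentralized algorithm. The step size, thresholds, and the toy saddle function are illustrative choices, not taken from either paper.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=1e-2, g_thresh=1e-3,
                               radius=1e-2, t_thresh=50, n_iters=10_000,
                               seed=0):
    """Gradient descent that injects occasional isotropic noise near
    first-order stationary points so it can escape strict saddles."""
    rng = np.random.default_rng(seed)
    x, last_perturb = np.asarray(x0, dtype=float), -t_thresh
    for t in range(n_iters):
        g = grad(x)
        # Perturb only when the gradient is small and no perturbation
        # was added recently (the Jin et al.-style escape condition).
        if np.linalg.norm(g) <= g_thresh and t - last_perturb >= t_thresh:
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            x = x + radius * rng.uniform() ** (1 / x.size) * direction
            last_perturb = t
        x = x - eta * grad(x)
    return x

# Toy example: f(x, y) = x^2 - y^2 + y^4 / 4 has a strict saddle at the
# origin and minima at (0, +/- sqrt(2)); plain GD started at 0 stays put.
grad_f = lambda v: np.array([2 * v[0], -2 * v[1] + v[1] ** 3])
print(perturbed_gradient_descent(grad_f, np.zeros(2)))
```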
NIPS_2017_217
NIPS_2017
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end without any post-processing. - Keypoint detection results should be included in the experiments section. - Sometimes the predicted tag value might be in the range of tag values for two or more nearby people, how is it determined to which person the keypoint belongs? - Line 168: It is mentioned that the anchor point changes if the neck is occluded. This makes training noisy since the distances for most examples are computed with respect to the neck. Overall assessment: I am on the fence for this paper. The paper achieves state-of-the-art performance, but it is incremental and does not have much technical substance. Furthermore, the main improvement comes from running [31] in a post-processing stage.
- Keypoint detection results should be included in the experiments section.
NIPS_2017_351
NIPS_2017
1. The approach mentions attention over 3 modalities – image, question and answer. However, it is not clear what attention over answers means because most of the answers are single words and even if they are multiword, they are treated as a single word. The paper does not present any visualizations for attention over answers. So, I would like the authors to clarify this. 2. From the results in table 1, it seems that the main improvement in the proposed model is coming from the ternary potential. Without the ternary potential, the proposed model is not outperforming the existing models for the 2-modalities setup (except HieCoAtt). So, I would like the authors to throw light on this. 3. Since ternary potential seems to be the main factor in the performance improvement of the proposed model, I would like the authors to compare the proposed model with existing models where answers are also used as inputs such as Revisiting Visual Question Answering Baselines (Jabri et al., ECCV16). 4. The paper lacks any discussion on failure cases of the proposed model. It would be insightful to look into the failure modes so that future research can be guided accordingly. 5. Other errors/typos: a. L38: mechanism -> mechanisms b. L237 mentions that the evaluation is on validation set. However Table 1 reports numbers on the test-dev and test-std sets? Post-rebuttal comments: Although the authors' response to the concern of "Proposed model not outperforming existing models for 2 modalities" does not sound satisfactory to me due to lack of quantitative evidence, I would like to recommend acceptance because of the generic attention framework for multiple modalities being proposed in the paper and quantitative results of 3-modality attention outperforming SOTA. The quantitative evaluation of the proposed model's attention maps against human attention maps (reported in the rebuttal) also looks good and suggests that the attention maps are more correlated with human maps than those of existing models. Although we don't know what this correlation value is for SOTA models such as MCB, I think it is still significantly better than that for HieCoAtt. I have a question about one of the responses from the authors -- > Authors' response -- “MCB vs. MCT”: MCT is a generic extension of MCB for n-modalities. Specifically for VQA the 3-MCT setup yields 68.4% on test-dev where 2-layer MCB yields 69.4%. We tested other combinations of more than 2-modalities MCB and found them to yield inferior results. Are the numbers swapped here? 69.4% should be for 3-MCT, right? Also, the MCB overall performance in table 1 is 68.6%. So, not sure which number the authors are referring to when they report 68.4%.
3. Since ternary potential seems to be the main factor in the performance improvement of the proposed model, I would like the authors to compare the proposed model with existing models where answers are also used as inputs such as Revisiting Visual Question Answering Baselines (Jabri et al., ECCV16).
ICLR_2022_1794
ICLR_2022
1 Medical images are often obtained as 3D volumes, not only as 2D images. So experiments should include the 3D volume data as well for the general community, rather than being all on 2D images. And lesion detection is another important task for the medical community, which has not been studied in this work. 2 More analysis and comments are recommended on the performance trend of increasing the number of parameters for ViT (DeiT) in Figure 3. I disagree with the authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In Figure 3, the DeiT-B model does not outperform DeiT-T on APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019 and CheXpert (0.1% won't be significant). However, CNNs give almost consistent improvements as the capacity goes up, except on ISIC2019. 3 On the segmentation mask involved with cancer on CSAW-S, the segmentation results of DEEPLAB3-DEIT-S cannot be concluded as better than DEEPLAB3-RESNET50. The implication that ViTs outperform CNNs in this segmentation task cannot be validly drawn from a 0.2% difference with larger variance. Questions: 1 For the grid search of learning rate, is it done on the validation set? Minor problems: 1 The n number for the Camelyon dataset in Table 1 is not consistent with the description in the text on Page 4.
1 For the grid search of learning rate, is it done on the validation set? Minor problems:
ACL_2017_433_review
ACL_2017
- The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inter-annotator agreement. UPDATE: Their rebuttal convincingly argued that the second annotator who just annotated the 100 examples to compute the IAA didn't follow the annotation guidelines for several common constructions. Once the second annotator fixed these issues, the IAA was reasonable, so I no longer consider this a real issue. - General Discussion: I am a bit concerned about the apparently rather poor annotation quality of the data and how this might influence the results, but overall, I liked the paper a lot and I think this would be a good contribution to the conference. - Questions for the authors: - Who annotated the sentences? You just mention that 100 sentences were annotated by one of the authors to compute inter=annotator agreement but you don't mention who annotated all the sentences. - Why was the inter-annotator agreement so low? In which cases was there disagreement? Did you subsequently discuss and fix the sentences for which there was disagreement? - Table A2: There seem to be a lot of discourse relations (almost as many as dobj relations) in your treebank. Is this just an artifact of the colloquial language or did you use "discourse" for things that are not considered "discourse" in other languages in UD? - Table A3: Are all of these discourse particles or discourse + imported vocab? If the latter, perhaps put them in separate tables, and glosses would be helpful. - Low-level comments: - It would have been interesting if you had compared your approach to the one by Martinez et al. (2017, https://arxiv.org/pdf/1701.03163.pdf). Perhaps you should mention this paper in the reference section. - You use the word "grammar" in a slightly strange way. I think replacing "grammar" with syntactic constructions would make it clearer what you try to convey. ( e.g., line 90) - Line 291: I don't think this can be regarded as a variant of it-extraposition. But I agree with the analysis in Figure 2, so perhaps just get rid of this sentence. - Line 152: I think the model by Dozat and Manning (2016) is no longer state-of-the art, so perhaps just replace it with "very high performing model" or something like that. - It would be helpful if you provided glosses in Figure 2.
- Table A2: There seem to be a lot of discourse relations (almost as many as dobj relations) in your treebank. Is this just an artifact of the colloquial language or did you use "discourse" for things that are not considered "discourse" in other languages in UD?
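For context on the agreement numbers discussed in the review above, here is a minimal sketch of how an LAS-based inter-annotator agreement figure of the kind mentioned can be computed; the token annotations below are invented for illustration.

```python
def las(annotation_a, annotation_b):
    """Labelled attachment score between two annotations of the same tokens:
    the fraction of tokens whose head index AND dependency label agree.
    Each annotation is a list of (head, label) pairs, one per token."""
    assert len(annotation_a) == len(annotation_b)
    agree = sum(1 for (ha, la), (hb, lb) in zip(annotation_a, annotation_b)
                if ha == hb and la == lb)
    return agree / len(annotation_a)

# Hypothetical 4-token sentence annotated by two annotators.
ann_1 = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "discourse")]
ann_2 = [(2, "nsubj"), (0, "root"), (2, "obj"), (3, "discourse")]
print(f"IAA in terms of LAS: {las(ann_1, ann_2):.2%}")  # 75.00%
```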
FEpAUnS7f7
ICLR_2025
**Originality** **[Minor]** The idea to use ML tools to assist users in interpreting privacy policies is not new—in this sense the contribution of this study is marginal. Still, there is certainly value in evaluating this idea using the most recent large language models, and there is certainly value in conducting a study with actual users to see whether the tool really makes interpretation easier. I am not sure whether there are many user studies in prior work on this idea — perhaps the authors could clarify whether this is the first user study of its kind and, if not, whether it tells us anything new. **Quality** Missing methodological details make it hard to tell whether the empirical findings support the broad claims in the abstract and introduction, and I have lingering questions about some of the results. - **[Minor]** Section 3: Need a clear description of all the benchmark tasks to understand exactly what’s being evaluated — Section 3 seems to assume the tasks have already been defined. Examples: - Section 3.1: What does it really mean to identify “User Choice/Control” or “Data Retention”, e.g., as a practice? Does this simply mean the privacy policy describes their user choice allowances or data retention practices, which could range from quite benign to quite egregious? How is this useful to a user? - Section 3.2: What is the Choice Identification task? Is this the task described in A.1.3? Was this task defined in Wilson et al. (2016) too? - Section 3.3: What’s “Privacy Question Answering”? (Or is it “Policy Question Answering”? Both terms are used.) - Section 3.4: What’s in this dataset, as compared to the dataset used in the previous tasks? Who defined the “risky” sentences (what were the human-generated references for the ROUGE score)? Any examples? - Section 4 provides a bit more detail, and the examples in the Appendix are somewhat helpful. Perhaps this Section could come before Section 3; or alternatively, move parts of Section 3 to the appendix, and just summarize the most important findings (GPT models perform better on X benchmarks) in a paragraph, using that space instead to better explain the tasks at hand. - **[Major]** Section 4: These results are striking — users seem to comprehend the privacy policies much more easily with LLM assistance! But there are some key methodological details missing that could determine how rigorous the results are: - Did the Experimental Group also have a copy of the privacy policy that they could read directly during the task (not through QA), or did they rely solely on information from the LLM agent? From the Appendix, I infer they did have access to the raw text — do the gains decrease/increase if the user cannot cross-check the LLM agent responses with the raw legal text? - Section 6.1: Where/how were users recruited? How many privacy policies did each participant review? How were the privacy policies selected — from one of the previous datasets? Did every participant review the same privacy policy? (How likely is it that these policies appeared in the training data — i.e. leakage?) Where/how was questionnaire administered? This information is key for determining how internally and externally valid these results might be. - Was the study IRB approved? - L393: What about racial, economic diversity in the sample? How well might these results generalize to other groups, especially marginalized groups? 
- I’m surprised by the finding that the Experimental Group had *higher* trust in info scores than the control group — and I wonder if there’s an issue with construct validity for this question. The relevant question is (L978): “I believe the information I read/received is accurate (1-5).” Given that the control group had direct access to the privacy policies, why would they respond with a 2.6, on average, compared to 4.5 in the experimental group, since the underlying information (the privacy policy) is the same for both groups? My best guess is that the Control Group suspected the company was misrepresenting its privacy practices in its privacy policy, and answered based on their distrust in the company; I suspect the Experimental Group, on the other hand, responded based on their level of trust in the accuracy of the LLM agent’s responses. So the scores may not be directly comparable. The alternative is that using the LLM agent somehow increased people’s confidence in the accuracy of the privacy policy itself, which seems less likely but still possible. - **[Major]** Generally, it’s not clear how well the benchmarks measure the “correctness” of the agent’s responses — what is the ground truth for each of these tasks? The comprehension questions seem good, but they’re short, and not very granular — whereas the examples in the Appendix show LLM responses with much, much more detailed information about data practices. As the authors point out in the discussion, LLMs often produce incorrect and misleading text, especially when prompted for specific details that are less likely to be represented in training data. Can the authors say anything about the factuality of those more specific responses? How likely are those responses to contain falsehoods about the privacy policy that could mislead users? Can users easily identify false responses by cross-checking with the raw text or the QA feature? **Clarity** Generally the paper is easy to follow, with the exception of the omitted methodological details listed above. Some **minor** points of clarity that would be worth addressing: - L132: Have ML techniques actually improved privacy policy accessibility in practice? Or is this just a summary of research, not practice? - L130: What is the OPP-115 dataset? Readers may not know. - L131: Broken cite here. - L136: What’s the difference between an LLM and an LLM agent? Is there a definition the authors can give? What makes this application an LLM agent, rather than just an LLM (the fact that the program scrapes hyperlinks, maybe)? - Fig. 2: Text is too small to read, and often cropped, so it’s not clear what the different elements are. Simple labels might be better. - Table 1-2: Suggest combining numbers side-by-side, so it’s easy to compare. - Table 2, L192: SVM F1-score has a misplaced decimal. **Significance** - **[Minor]** This is a neat idea, and it seems like it could certainly help users in particular cases. But to frame the significance more precisely, it would be helpful to comment on the scope of a technological solution like this (e.g. in the discussion) — there is a structural issue here with privacy regulations, and with GDPR in particular, that require companies to disclose information about their privacy policies but do not require companies to make that information, and users’ options with respect to their data, truly accessible. 
In a perfect world, this tool may not be necessary — companies could be required to produce interpretable “privacy labels” similar to Apple’s Privacy Nutrition labels. How does the performance of this LLM-based solution compare to other policy alternatives? (These questions probably cannot be answered in this study, but it is worth mentioning that a technological solution is not necessarily the best solution.) - **[Major]** Section 3: On a similar note, can the authors report any non-ML baselines here? How does a person do on this task, on their own? It seems less important to know how GPT models compare to BERT or other ML models, and more important to know how this method compares to what users would otherwise be doing in practice. (Unless those traditional models are actually being used by lay users in practice — that would be worth mentioning.) - L094: “We provide empirical evidence of the superiority of LLMs over traditional models”: I’m assuming these sentence refers specifically to *ML* models (would be worth clarifying). But is this approach superior to the practical alternatives available to users/policymakers? Superior to things like Apple’s “Privacy Nutrition” labels? Superior to writing a simpler privacy policy? Superior to hiring a lawyer? It would help to be more precise with this and similar claims of LLM “superiority”—superior to what? - Section 3: It seems like the GPT models perform better than traditional ML models, but stepping back, are these scores good enough to be relied on? For example, the recall scores seem really low here — as far as I can tell, the GPT models miss as many as 30% of instances of third party sharing, and as many as 84% of instances of “data retention”? Can this tool be used to balance precision and recall? Is this the right balance for this kind of task? Recall might well be more important to users in this kind of task.
- L393: What about racial, economic diversity in the sample? How well might these results generalize to other groups, especially marginalized groups?
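To make the precision/recall question in the review above concrete, here is a small sketch of sweeping a decision threshold to favour recall; all scores and labels are made up, and `precision_recall_curve` is scikit-learn's standard helper rather than anything from the reviewed system.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical per-sentence scores for "third-party sharing" (1 = present).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.4, 0.35, 0.8, 0.2, 0.55, 0.3, 0.1, 0.7, 0.45])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
# Pick the highest threshold that still reaches, say, 90% recall,
# trading precision for recall on a task where misses are costly.
ok = recall[:-1] >= 0.9
best = thresholds[ok][-1] if ok.any() else thresholds[0]
print(f"threshold={best:.2f}, precision={precision[:-1][ok][-1]:.2f}, "
      f"recall={recall[:-1][ok][-1]:.2f}")
```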
ICLR_2021_1505
ICLR_2021
1. The novelty is very low. Stage-wise and progressive training have been proposed for such a long time that they have been used everywhere. The way the authors use them doesn’t really exhibit anything novel to me. 2. The resolution of the outputs (128x128) is lower than prior works (e.g. DVD-GAN has 256x256 outputs). Since the paper claims the computation cost is lower, one would expect the model to generate higher resolution and much longer duration videos, but in fact it’s quite the opposite. To prove the effectiveness, I feel the authors need to show something higher than 256x256, say 512 or 1024 resolution. On the other hand, the hardware requirement is still high (128 GPUs) instead of some normal equipment that everyone can have, so I really don’t see any benefit of the model. If the authors can train DVD-GAN using only a handful of GPUs, that might also be a contribution, but it’s not the case now. 3. Output quality is reasonable, but still far from realistic. Recent GAN works have shown amazing quality in synthesized results, and the bar has become much higher than a few years ago. In that aspect, I feel there’s still much room for improvement for the result quality. Overall, given the limited novelty, low resolution output and still high hardware requirement, I’m inclined to reject the paper.
3. Output quality is reasonable, but still far from realistic. Recent GAN works have shown amazing quality in synthesized results, and the bar has become much higher than a few years ago. In that aspect, I feel there’s still much room for improvement for the result quality. Overall, given the limited novelty, low resolution output and still high hardware requirement, I’m inclined to reject the paper.
ICLR_2021_491
ICLR_2021
I am very concerned about the experiment sections. To my understanding, Figure 2/Section 4.1 are factually incorrect. In particular, it appears that the soft-labels technique does essentially the same as, or better than, CRM, across all fronts. In detail, a) In Figure 2(a), the leftmost softlabel point is equal to or better than CRM (and cross-entropy) b) Figure 2(b) really concerns me, as it appears that the hyperparameters have been chosen to make a fairly narrow point - that it is possible to have low average hierarchical distance, and high top-1 error. I agree that that indicates a problem with the metric. However, the authors make the far broader claim that CRM is the only method which beats cross-entropy, which I do not think is justified. Looking at Figure 4 in A.1, it is readily apparent that choosing different hyperparameters for existing methods would yield similar error distributions to CRM. Having these plots in the appendix, combined with claims that the authors chose the best hyperparameters, feels a bit misleading. c) As in 1), soft labels is essentially on top of CRM and Cross entropy (for iNaturalist19, it looks like a higher beta value would be directly on top, it's unclear why the authors did not extend the curve further) These results, at first blush, seem fairly impressive. For the leftmost plots, I am concerned that the authors are using subpar hyperparameters, similarly to 1)(b) above. Strangely, in this instance the results for other hyperparameters are not included in the appendix. The remaining experiments are fairly convincing, though. Recommendation I do not think this paper can be accepted in its current form. While I suspect that CRM is a good method that I would like to use, some of the core arguments (Figure 2/Section 4.1) in the paper appear to be fatally flawed. Smaller notes: The paper could use an additional proofread, as there are often odd phrasings. I found the experiment section particularly hard to follow. The acronym HXE is never defined, or linked to a citation
1), soft labels is essentially on top of CRM and Cross entropy (for iNaturalist19, it looks like a higher beta value would be directly on top, it's unclear why the authors did not extend the curve further) These results, at first blush, seem fairly impressive. For the leftmost plots, I am concerned that the authors are using subpar hyperparameters, similarly to
ACL_2017_543_review
ACL_2017
- Experimental results show only incremental improvement over baseline, and the choice of evaluation makes it hard to verify one of the central arguments: that visual features improve performance when processing rare/unseen words. - Some details about the baseline are missing, which makes it difficult to interpret the results, and would make it hard to reproduce the work. - General Discussion: The paper proposes the use of computer vision techniques (CNNs applied to images of text) to improve language processing for Chinese, Japanese, and Korean, languages in which characters themselves might be compositional. The authors evaluate their model on a simple text-classification task (assigning Wikipedia page titles to categories). They show that a simple one-hot representation of the characters outperforms the CNN-based representations, but that the combination of the visual representations with standard one-hot encodings performs better than the visual or the one-hot alone. They also present some evidence that the visual features outperform the one-hot encoding on rare words, and present some intuitive qualitative results suggesting the CNN learns good semantic embeddings of the characters. I think the idea of processing languages like Chinese and Japanese visually is a great one, and the motivation for this paper makes a lot of sense. However, I am not entirely convinced by the experimental results. The evaluations are quite weak, and it is hard to say whether these results are robust or simply coincidental. I would prefer to see some more rigorous evaluation to make the paper publication-ready. If the results are statistically significant (if the authors can indicate this in the author response), I would support accepting the paper, but ideally, I would prefer to see a different evaluation entirely. More specific comments below: - In Section 3, paragraph "lookup model", you never explicitly say which embeddings you use, or whether they are tuned via backprop the way the visual embeddings are. You should be more clear about how the baseline was implemented. If the baseline was not tuned in a task-specific way, but the visual embeddings were, this is even more concerning since it makes the performances substantially less comparable. - I don't entirely understand why you chose to evaluate on classifying wikipedia page titles. It seems that the only real argument for using the visual model is its ability to generalize to rare/unseen characters. Why not focus on this task directly? E.g. what about evaluating on machine translation of OOV words? I agree with you that some languages should be conceptualized visually, and sub-character composition is important, but the evaluation you use does not highlight weaknesses of the standard approach, and so it does not make a good case for why we need the visual features. - In Table 5, are these improvements statistically significant? - It might be my fault, but I found Figure 4 very difficult to understand. Since this is one of your main results, you probably want to present it more clearly, so that the contribution of your model is very obvious. As I understand it, "rank" on the x axis is a measure of how rare the word is (I think log frequency?), with the rarest word furthest to the left? And since the visual model intersects the x axis to the left of the lookup model, this means the visual model was "better" at ranking rare words? 
Why don't both models intersect at the same point on the x axis? Aren't they being evaluated on the same set of titles and trained with the same data? In the author response, it would be helpful if you could summarize the information this figure is supposed to show, in a more concise way. - On the fallback fusion, why not show performance for different thresholds? 0 seems to be an edge-case threshold that might not be representative of the technique more generally. - The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese?
- The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese?
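On the significance question raised in the review above, here is a sketch of a paired bootstrap test, one standard way to check whether an accuracy difference like those in Table 5 is statistically significant. The per-title outcomes below are simulated placeholders, not the paper's results.

```python
import numpy as np

def paired_bootstrap(correct_a, correct_b, n_resamples=5_000, seed=0):
    """Paired bootstrap test for the accuracy difference between two
    classifiers evaluated on the same test items (1 = correct, 0 = wrong).
    Returns the observed gain of A over B and the fraction of resamples
    in which A does NOT beat B (an approximate one-sided p-value)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(correct_a, dtype=float)
    b = np.asarray(correct_b, dtype=float)
    d = a - b                                   # per-item difference
    idx = rng.integers(0, len(d), size=(n_resamples, len(d)))
    gains = d[idx].mean(axis=1)
    return d.mean(), float((gains <= 0).mean())

# Hypothetical per-title outcomes for fusion vs. lookup-only models.
rng = np.random.default_rng(1)
lookup = rng.random(1000) < 0.55
fusion = rng.random(1000) < 0.58
gain, p = paired_bootstrap(fusion, lookup)
print(f"accuracy gain = {gain:.3f}, bootstrap p ~ {p:.3f}")
```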
t1nZzR7ico
ICLR_2025
1. The presentation in the experiment section is not up to par with ICLR. The figures and text should be arranged properly. 2. The idea is similar to treating VLM and LLM as two agents helping to jailbreak the T2I diffusion model. How is this approach different from [1]? 3. VioT dataset: 20 images in each of the 4 categories were provided. However, I feel the number of images is too small to test the validity of the approach. 4. Lack of scoring function ablation details to understand each component's contribution. Why is there a linear addition? Is there no normalization of the values? Such details are very important to understand the scoring function. Further details related to the weaknesses are asked in the questions below. 1. Dong, Yingkai, et al. "Jailbreaking Text-to-Image Models with LLM-Based Agents." arXiv preprint arXiv:2408.00523 (2024).
3. VioT dataset: 20 images in each of the 4 categories were provided. However, I feel the number of images is too small to test the validity of the approach.
ACL_2017_636_review
ACL_2017
- Only applied to English NER--this is a big concern since the title of the paper seems to reference sequence tagging directly. - Section 4.1 could be clearer. For example, I presume there is padding to make sure the output resolution after each block is the same as the input resolution. It might be good to mention this. - I think an ablation study of number of layers vs perf might be interesting. RESPONSE TO AUTHOR REBUTTAL: Thank you very much for a thoughtful response. Given that the authors have agreed to make the content be more specific to NER as opposed to sequence-tagging, I have revised my score upward.
- I think an ablation study of number of layers vs perf might be interesting. RESPONSE TO AUTHOR REBUTTAL: Thank you very much for a thoughtful response. Given that the authors have agreed to make the content be more specific to NER as opposed to sequence-tagging, I have revised my score upward.
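On the padding question in the review above, here is a small sketch (in PyTorch, with made-up channel sizes) of the usual way a dilated convolution block is padded so that the output resolution matches the input and blocks can be stacked; this is an assumption about the architecture, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# For an odd kernel size k and dilation d, padding d * (k - 1) // 2 keeps
# the output length equal to the input length, so blocks can be stacked.
k, d = 3, 4
block = nn.Conv1d(in_channels=128, out_channels=128,
                  kernel_size=k, dilation=d, padding=d * (k - 1) // 2)

x = torch.randn(8, 128, 50)   # (batch, channels, sequence length)
print(block(x).shape)         # torch.Size([8, 128, 50])
```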
NIPS_2022_2772
NIPS_2022
• The paper is hard to follow, and more intuitive explanations of the mathematical derivations are needed. Figure captions are lacking, and require additional explanations and legends (e.g., explain the colors in Fig. 2). Fig. 1 and 2 did not contribute much to my understanding, and I had to read the text a few times instead. • At the end of the day the paper proposes a method to learn features for detecting boundaries, which is an old computer vision task. Indeed, it uses a new MRF framework, and contrastive loss, but it is not clear why one would not use DNNs with contrastive loss for this, other than that maybe the learned features are more like human vision features. • The results of the models are not compared against unsupervised DNN models. I think it would be interesting to see such a comparison, e.g., to unsupervised segmentation models that can be adjusted to the BSDS500 contour detection task. • There is no review of previous related work, and there is no section for “related work”. Can unsupervised segmentation models be relevant to your work?
• The paper is hard to follow, and more intuitive explanations of the mathematical derivations are needed. Figure captions are lacking, and require additional explanations and legends (e.g., explain the colors in Fig. 2). Fig. 1 and 2 did not contribute much to my understanding, and I had to read the text a few times instead.
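For the "DNNs with contrastive loss" alternative the review above mentions, here is a minimal sketch of the standard InfoNCE contrastive loss. This is not the paper's MRF objective, and the patch embeddings below are random placeholders.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE contrastive loss for one anchor patch: pull the
    positive (e.g. a neighbouring patch from the same region) close and
    push negatives (e.g. patches across a candidate boundary) away."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=64)
p = a + 0.1 * rng.normal(size=64)                # similar patch
negs = [rng.normal(size=64) for _ in range(16)]  # dissimilar patches
print(info_nce(a, p, negs))
```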
ICLR_2023_1463
ICLR_2023
Weakness: There is quite a bit of redundancy in the writing. e.g. the point that disentangled representations are better than entangled representations is made unnecessarily too many times. e.g. the first 6 lines of the first paragraph in section 4.1 are not really needed in my opinion as that has already been said thrice earlier. I'd have loved to see more discussion of the interpretability/explainability aspect in the main paper, more than just a single case study in the experiments section. If redundancy is removed there's plenty of room for discussion about this in the paper. My biggest concern is that the objective function involves a lot of hyperparameters. And while the authors did a good job (from a reproducibility perspective) of writing out all hyperparameter choices, it is very much unclear: 1) how those choices can actually be made in practice (the current explanation in the appendix is too vague) and 2) how sensitive the empirical results are to hyperparameter choices. This second point is especially crucial since wrong choices can conceivably wipe out whatever improvement is gained from this method. I will be willing to reconsider my rating if this particular issue is resolved.
2) how sensitive the empirical results are to hyperparameter choices. This second point is especially crucial since wrong choices can conceivably wipe out whatever improvement is gained from this method. I will be willing to reconsider my rating if this particular issue is resolved.
NIPS_2021_671
NIPS_2021
and my questions about this paper: 1. The experiments in this paper are not sufficient. Firstly, there is no comparison with other data poisoning methods, especially with [1], which is very similar to the proposed one. 2. This work utilizes existing attack methods on a surrogate model. It is similar to using the transferability of adversarial examples directly. The authors need to further clarify the novelty and contribution of the proposed method. 3. The proposed method might be invalid when adversarial detections are involved. More precisely, the defender can utilize existing detection methods, such as LID [2], MD [3], and KB [4], to remove those poisoned examples. Thus, there should be some tests evaluating the robustness of the proposed method against adversarial detections. 4. The authors have pointed out that their method performs unsatisfactorily against the defense of adversarial training techniques. In fact, such a limitation is fatal, as adversarial training is not as expensive as the authors claimed. Adversarial training methods like FastAdv [5] improve the training speed significantly.
2. This work utilizes existing attack methods on a surrogate model. It is similar to using the transferability of adversarial examples directly. The authors need to further clarify the novelty and contribution of the proposed method.
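For reference on the defenses discussed above, here is a tiny sketch of the single-step FGSM perturbation that fast adversarial training methods such as FastAdv build on. The image and gradient are random placeholders, and epsilon is an illustrative budget; this is not the reviewed paper's attack.

```python
import numpy as np

def fgsm(x, grad_x_loss, epsilon=8 / 255):
    """Fast Gradient Sign Method: take one signed-gradient step of size
    epsilon and clip back to the valid pixel range."""
    x_adv = x + epsilon * np.sign(grad_x_loss)
    return np.clip(x_adv, 0.0, 1.0)

# Hypothetical image and loss gradient w.r.t. the input.
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32))
g = rng.normal(size=(3, 32, 32))
print(np.abs(fgsm(x, g) - x).max())   # perturbation bounded by epsilon
```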
NIPS_2022_139
NIPS_2022
1. The rates for the smooth case depend on the dimension d (not the rank as in the Lipschitz case). While the authors show tight lower bounds up to factors that depend on |w*|, this lower bound is probably obtained by taking rank=d and therefore the upper bound may not have tight dependence on the rank. It is important to clarify this in the paper. 2. One minor weakness is that all the algorithms in the paper are somewhat standard as similar algorithms have been used in the literature. However, the obtained results and some of the techniques are interesting (such as using average stability instead of uniform stability to improve the rates), therefore I don’t consider this to be a real limitation. 3. The rates obtained for the smooth case are very similar to the rates for DP-SCO in ell_1 geometry [AFKT21, BGN21] which also have a phase transition. I’m not sure if there is any fundamental connection here but it may be useful to explore this a little and comment on this. 4. Why didn’t the authors use the algorithms from “private selection from private candidates” [LT18] for adaptivity to |w*|? This would only require privatizing the score functions (as [LT18] assumes a non-private one) but may be simpler than redoing everything from scratch. More minor comments: 1. Text in table 1 is too small and hard to read 2. Algorithm 1: gradient symbol is missing in line 4 References: [AFKT21] Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry [BGN21] Non-Euclidean Differentially Private Stochastic Convex Optimization [LT18] Private selection from private candidates
1. Text in table 1 is too small and hard to read 2. Algorithm 1: gradient symbol is missing in line 4 References: [AFKT21] Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry [BGN21] Non-Euclidean Differentially Private Stochastic Convex Optimization [LT18] Private selection from private candidates
ICLR_2021_309
ICLR_2021
I don’t have any serious complaints. The contribution is a tad narrow, but it makes progress on some tricky and difficult questions. The experiments also only produce corroborating evidence of CAD’s status as implicating causal variables, and we already know by construction that there is a causal aspect to these perturbations, so it’s not exactly an Earth-shaking result. But the experimental framework gives what seems to be pretty good evidence that other automated methods don’t implicate causal variables, so that’s nice. The main value seems to be in the theoretical observations about the effect of noise on out-of-domain performance and what that tells us about causality. These observations in the paper seem fairly straightforward, but I’m not aware of any literature making the same point. (However, I am not perfectly familiar with the literature and might have missed it.) Regardless, the application of these observations to experiments is interesting and provides useful conceptual scaffolding for future research about causal variables in models (such as deep NNs) without such explicit notions. Recommendation Accept. The paper seems sound, is well written, and addresses an important problem. The contribution may not be huge but seems to me worth publishing. More comments/questions The bolded paragraph on P. 5 might be a bit much. It makes a particular claim about “language meaning” which implicitly views meaning as corresponding to the causal connection between input language and output labels. The notion of language “meaning” is nuanced and it’s not clear whether this claim is true for all tasks, where labels may be annotated based on broader (or indeed narrower) inferences regarding the generative process of the text. Since this isn’t purporting to be a paper about language meaning, I would suggest staying away from this. Typos, style, etc. P. 2: I think you should be able to render (Wright et al., 1934; Figure 1) more naturally by using the [bracketed arguments] in \citep. ...Not sure how this plays with hyperref. P. 3: period before “and unintentional” P. 6: “use SVM” -> “we use SVM”
2: I think you should be able to render (Wright et al., 1934; Figure 1) more naturally by using the [bracketed arguments] in \citep. ...Not sure how this plays with hyperref. P.
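A concrete version of the \citep suggestion in the point above, assuming natbib with an author-year style; the bibliography key `wright1934` is hypothetical.

```latex
% With natbib, the optional arguments of \citep give
% "(pre-note AuthorYear, post-note)", which renders the example
% more naturally than assembling the parenthetical by hand:
\citep[see][Figure~1]{wright1934}   % -> (see Wright et al., 1934, Figure 1)
\citep[Figure~1]{wright1934}        % -> (Wright et al., 1934, Figure 1)
```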
NIPS_2017_645
NIPS_2017
- The main paper is dense. This is despite the commendable efforts by the authors to make their contributions as readable as possible. I believe it is due to NIPS page limit restrictions; the same set of ideas presented at their natural length would make for a more easily digestible paper. - The authors do not quite discuss computational aspects in detail (other than a short discussion in the appendix), but it is unclear whether their proposed methods can be made practically useful for high dimensions. As stated, their algorithm requires solving several LPs in high dimensions, each involving a parameter that is not easily calculable. This is reflected in the authors’ experiments which are all performed on very small scale datasets. - The authors mainly seem to focus on SSC, and do not contrast their method with several other subsequent methods (thresholded subspace clustering (TSC), greedy subspace clustering by Park, etc.) which are all computationally efficient and come with similar guarantees.
- The authors do not quite discuss computational aspects in detail (other than a short discussion in the appendix), but it is unclear whether their proposed methods can be made practically useful for high dimensions. As stated, their algorithm requires solving several LPs in high dimensions, each involving a parameter that is not easily calculable. This is reflected in the authors’ experiments which are all performed on very small scale datasets.
NIPS_2019_1397
NIPS_2019
weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable by Theorem 2.2. Therefore, Theorem 2.1 doesn't seem necessary in the main paper, and can be moved to the supplement as a lemma to save space. Furthermore, a few important results could be moved from the supplement back to the main paper (e.g., Algorithm 1 and Table 2). Originality: The main results seem innovative to me in general. Although optimizing information-theoretic objective functions is not new, I find the new objective function adequately novel, especially in the treatment of the Q_i's in relation to TC(Z|X_i). Relevant lines of research are also summarized well in the related work section. Significance: The proposed methodology has many favorable features, including low computational complexity, good performance under (near) modular latent factor models, and blessing of dimensionality. I believe these will make the new method very attractive to the community. Moreover, the formulation of the objective function itself would also be of great theoretical interest. Overall, I think the manuscript would make a fairly significant contribution. Itemized comments: 1. The number of latent factors m is assumed to be constant throughout the paper. I wonder if that's necessary. The blessing of dimensionality still seems to hold if m increases slowly with p, and computational complexity can still be advantageous compared to GLASSO. 2. Line 125: For completeness, please state the final objective function (empirical version of (3)) as a function of X_i and the parameters. 3. Section 4.1: The simulation is conducted under a joint Gaussian model. Therefore, ICA should be identical to PCA, and can be removed from the comparisons. Indeed, the ICA curve is almost identical to the PCA curve in Figure 2. 4. In the covariance estimation experiments, negative log likelihood under a Gaussian model is used as the performance metric for both stock market data and OpenML datasets. This seems unreasonable since the real data in the experiment may not be Gaussian. For example, there is extensive evidence that stock returns are not Gaussian. Gaussian likelihood also seems unfair as a performance metric, since it may favor methods derived under Gaussian assumptions, like the proposed method. For comparing the results under these real datasets, it might be better to focus on interpretability, or indirect metrics (e.g., portfolio performance for stock return data). 5. The equation below Line 412: the p(z) factor should be removed in the expression for p(x|z). 6. Line 429: It seems we don't need the Gaussian assumption to obtain Cov(Z_j, Z_k | X_i) = 0. 7. Line 480: Why do we need to combine with the law of total variance to obtain Cov(X_i, X_{l != i} | Z) = 0? 8. Lines 496 and 501: It seems the Z in the denominator should be p(z). 9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments: 10.
The manuscript could be more reader-friendly if the mathematical definitions for H(X), I(X;Y), TC(X), and TC(X|Z) were stated (in the supplementary material if there is no space in the main article). References to these are necessary when following the proofs/derivations. 11. Line 208: black -> block 12. Line 242: 50 real-world datasets -> 51 real-world datasets (according to Line 260 and Table 2) 13. References [7, 25, 29]: gaussian -> Gaussian Update: Thanks to the authors for the response. A couple of minor comments: - Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials. - Regarding the Gaussian evaluation metric, I think it would be helpful to include the comments as a note in the paper.
9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments:
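For the request in item 10 of the review above, these are the standard textbook definitions of the quantities involved (stated here for reference, not quoted from the manuscript):

```latex
\begin{align*}
  H(X)        &= -\,\mathbb{E}\bigl[\log p(X)\bigr], \\
  I(X;Y)      &= H(X) + H(Y) - H(X,Y), \\
  TC(X)       &= \sum_{i=1}^{p} H(X_i) - H(X), \\
  TC(X \mid Z) &= \sum_{i=1}^{p} H(X_i \mid Z) - H(X \mid Z).
\end{align*}
```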
NIPS_2018_606
NIPS_2018
Although the adjoint sensitivity method is an existing method, exposing this method to the machine learning and computational statistics communities, where, as far as I am aware, it is not widely known, is a worthwhile contribution of this submission in its own right. Given the ever increasing importance of AD in both communities, adding to the range of scientific computing primitives for which frameworks such as autograd can efficiently compute derivatives through will hopefully spur more widespread use of gradient-based learning and inference methods with ODE models and hopefully spur other frameworks with AD capability in the community such as Stan, TensorFlow and PyTorch to implement adjoint sensitivity methods. The specific suggested applications of the 'ODE solver modelling primitive' in ODE-Nets, CNFs and L-ODEs are all interesting demonstrations of some of the computational and modelling advantages that come from using a continuous-time ODE model formulation, with in particular the memory savings possible by avoiding the need to store all intermediate states (by recomputing trajectories backwards through time) being a possible major gain, given that device memory is often currently a bottleneck. While 'reversing' the integration to recompute the reverse trajectory is an appealing idea, it would have helped to have more discussion of when this would be expected to break down - for example it seems likely that highly chaotic dynamical systems would tend to be problematic as even small errors in the initial backwards steps could soon lead to very large divergences in the reversed trajectories compared to the forward ones. It seems like a useful sanity check in an implementation would be to compare the final state of the reversed trajectory to the initial state of the forward trajectory to check how closely they agree. The submission is generally very well written and presented with a clear expository style, with useful illustrative examples given in the experiments to support the claims made and well-thought-out figures which help to give visual intuitions about the methods and results. There is a lot of interesting commentary and ideas in the submission with there seeming to be a lot of potential in even side notes like the concurrent mixture of dynamics idea. While this makes for an interesting and thought-provoking read, the content-heavy nature of the paper and slightly rambling exploration of many ideas are perhaps not ideally suited to such a short conference paper format, with the space constraints meaning sacrifices have been made in terms of the depth of discussion of each idea, somewhat terse description of the methods and results in some of the experiments and in some cases quite cramped figure layouts. It might be better to cull some of the content or move it to an appendix to make the main text more focussed and to allow more detailed discussion of the remaining areas. A more significant weakness perhaps is a lack of empirical demonstrations on larger benchmark problems for either the ODE-Nets or CNFs to see how / if the proposed advantages over Res-Nets and NFs respectively carry over to (slightly) more realistic settings, for example using the CNF in a VAE image model on MNIST / CIFAR-10 as in the final experiments in the original NF paper.
Although I don't think such experiments are vital given that the current numerical experiments do provide some validation of the claims already and, more pragmatically, given that the submission is already quite content-heavy so space is a constraint, some form of larger scale experiments would make a nice addition to an already strong contribution. # Questions * Is the proposal to backward integrate the original ODE to allow access to the (time-reversed) trajectory when solving the adjoint ODE rather than storing the forward trajectory novel or is this the standard approach in implementations of the method? * Does the ResNet in the experiments in section 7.1 share parameters between the residual blocks? If not, a further potentially interesting baseline would be to compare to a deeper ResNet with parameter sharing as this would seem to be equivalent to an ODE net with a fixed time-step Euler integrator. * What is the definition used for the numerical error on the vertical axis in Figure 4a and how is the 'truth' evaluated? * How much increase in computation time (if any) is there in computing the gradient of a scalar loss based on the output of `odeint` compared to evaluating the scalar loss itself using your Python `autograd` implementation in Appendix C? It seems possible that the multiple sequential calls to the `odeint` function between pairs of successive time points when calculating the gradient may introduce a lot of overhead compared to a single call to `odeint` when calculating the loss itself even if the overall number of inner integration steps is similar? Though even if this is the case and this could presumably be overcome with a more efficient implementation, it would be interesting to get a ballpark for how quick the gradient calculation is currently. # Minor comments / suggestions / typos: Theorem 1 and proof in appendix B: while the derivation of this result in the appendix is nice to have as an additional intuition for how the expression arises, it seems this might be more succinctly seen as a direct implication of the Fokker-Planck equation for a zero noise process or equivalently from Liouville's equation (see e.g. Stochastic Methods 4th ed., Gardiner, pg. 54). Similarly expressing in terms of the time derivative of the density rather than log density would perhaps make the analogy to the standard change of variables formula more explicit. Equation following line 170: missing right parenthesis in integrand of last integral. Algorithm 1: Looks like a notation change might have led to some inconsistencies - terms subscripted with non-defined $N$: $s_N$, $t_N$, look like they would be more consistent if instead subscripted with 1. Also on the first line the vector 0 and scalar 0 are swapped in the $s_N$ expression. References: some incorrect capitalisation in titles and inconsistent use of initials rather than full first names in some references. Appendices L372: 'differentiate by parts' -> 'integrate by parts' L385: Unclear what is meant by 'the function $j(x,\theta,t)$ is unknown' - for the standard case of a loss based on a sum of loss terms each depending on state at a finite set of time points, can we not express $j$ as something like $$ j(x,\theta,t) = \sum_{k=1}^K \delta(t_k - t) \ell_k(x, \theta) $$ which we can take partial derivatives of wrt $x$ and then directly substitute into equation (19)? L394: 'adjoing' -> 'adjoint'
* Does the ResNet in the experiments in section 7.1 share parameters between the residual blocks? If not, a further potentially interesting baseline would be to compare to a deeper ResNet with parameter sharing as this would seem to be equivalent to an ODE net with a fixed time-step Euler integrator.
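For readers unfamiliar with the method discussed in the review above, these are the standard adjoint-sensitivity and instantaneous-change-of-variables equations from the Neural ODE formulation, stated here for reference rather than quoted from the submission.

```latex
% Adjoint sensitivity for dz/dt = f(z, t, \theta) with loss L(z(t_1)):
\begin{align*}
  a(t) &= \frac{\partial L}{\partial z(t)}, \\
  \frac{d a(t)}{d t} &= -\, a(t)^{\top} \frac{\partial f(z(t), t, \theta)}{\partial z}, \\
  \frac{d L}{d \theta} &= -\int_{t_1}^{t_0} a(t)^{\top}
      \frac{\partial f(z(t), t, \theta)}{\partial \theta}\, dt .
\end{align*}
% Instantaneous change of variables used by continuous normalizing flows:
\[
  \frac{\partial \log p(z(t))}{\partial t}
  = -\,\mathrm{tr}\!\left( \frac{\partial f}{\partial z(t)} \right).
\]
```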
8VK9XXgFHp
EMNLP_2023
1. Poor figures (minor). Figures in this paper are not clear. I cannot discern the effectiveness of capturing fine-grained cross-entity interaction among candidates in the comparison in Figure 1. 2. Poor motivation (major). The cross-encoder architecture is not "ignoring cross-entity comparison". It also "attends to all candidates at once" to obtain the final matching scores. Of course, it may not be so fine-grained. 3. Poor novelty of methodology (major). The idea of capturing fine-grained cross-entity interaction among candidates has already been proposed in entity linking (ExtEnD: Extractive entity disambiguation). 4. Unfair comparison (major). As far as I know, all previous zero-shot entity linking candidate ranking works use BERT-base parameters rather than RoBERTa-base and top 64 candidates rather than top 56. These may result in an unfair comparison in the experimental results. 5. Wrong baseline results (minor). As far as I know, the results in BLINK and E-repeat are Micro Acc. rather than Macro Acc.
2. Poor motivation (major). The cross-encoder architecture is not "ignoring cross-entity comparison". It also "attends to all candidates at once" to obtain the final matching scores. Of course, it may not be so fine-grained.
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later. 3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further. 4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation. 5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match the equation in the section. Specifically the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable. 6. (*) Is the object-detection-based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? 7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences). 8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case? Minor Points: - L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?). - Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU? Preliminary Evaluation The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*). [A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799. [B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167.
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
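To ground the fusion discussion in the review above (points 2, 5, and 8), here is a minimal PyTorch sketch contrasting low-rank bilinear fusion with a plain Hadamard product, and showing why attention weights formed as a product of two sigmoids can be poorly behaved compared to a softmax. All dimensions, module names, and the 36-region feature map are illustrative assumptions, not the reviewed paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, for illustration only.
d_q, d_v, d_out, rank = 256, 512, 256, 64

class LowRankBilinearFusion(nn.Module):
    """Low-rank bilinear fusion of a question vector and an image vector.

    Both modalities are projected into a shared rank-`rank` space and multiplied
    elementwise, approximating a full bilinear interaction without the
    d_q * d_v * d_out parameter cost of an explicit bilinear tensor."""
    def __init__(self, d_q, d_v, rank, d_out):
        super().__init__()
        self.U = nn.Linear(d_q, rank, bias=False)
        self.V = nn.Linear(d_v, rank, bias=False)
        self.P = nn.Linear(rank, d_out, bias=False)

    def forward(self, q, v):
        return self.P(self.U(q) * self.V(v))

class HadamardFusion(nn.Module):
    """Plain Hadamard-product baseline: project both inputs to a common size and multiply."""
    def __init__(self, d_q, d_v, d_out):
        super().__init__()
        self.U = nn.Linear(d_q, d_out, bias=False)
        self.V = nn.Linear(d_v, d_out, bias=False)

    def forward(self, q, v):
        return self.U(q) * self.V(v)

q = torch.randn(8, d_q)   # batch of question embeddings (bag-of-words pooled)
v = torch.randn(8, d_v)   # batch of pooled image features
fused = LowRankBilinearFusion(d_q, d_v, rank, d_out)(q, v)

# Review point 5: attention weights built as a product of two sigmoids can saturate
# and shrink toward zero, whereas a softmax over regions stays normalised.
regions = torch.randn(8, 36, d_out)                 # 36 hypothetical image regions
scores = (regions * fused.unsqueeze(1)).sum(-1)     # (8, 36) relevance scores
alpha_sigmoid = torch.sigmoid(scores) * torch.sigmoid(scores)  # can vanish / saturate
alpha_softmax = torch.softmax(scores, dim=1)                   # sums to 1 per example
```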
NIPS_2022_69
NIPS_2022
1. This work uses an antiquated GNN model and method, which seriously impacts the performance of the framework. The baseline algorithms/methods are also antiquated. 2. The experimental results do not show that the proposed model clearly outperforms the other comparison algorithms/models. 3. The innovations in network architecture design and constraint embedding are rather limited. The authors note that the performance is limited by the performance of the oracle expert.
1. This work uses an antiquated GNN model and method, which seriously impacts the performance of the framework. The baseline algorithms/methods are also antiquated.
NIPS_2020_1645
NIPS_2020
It's not clear that this is a global method, and calling it one causes some confusion. While each explanation is given by the output of a single learned model rather than solved for independently with its own optimization problem (which makes it more "global" in a sense), the explanations are still fundamentally local because they apply to a single node/graph. Some of the consequences of this are:
- Figure 1: It's unclear how the proposed method produces this type of explanation (which says "mutagens contain the NO2 group"). This seems like it requires "additional ad-hoc post-analysis ... to extract the shared motifs to explain a set of instances" [Line 48]. Perhaps this analysis is easier with the proposed method, but it still seems necessary.
- Paragraph Starting in Line 185: This reads as if global explanations are strictly better than local explanations when they actually solve different problems. The reviewer believes this is trying to say something along the lines of "having a single (global) model that produces each explanation has advantages over solving an optimization problem independently for each explanation".
- Line 236-238: It is unclear what metric is going to be used to compare local and global explanations. Generally, this comparison is challenging because local and global explanations solve different problems and (usually) do better at their respective problems and associated metrics. Given the metric described in the Paragraph Starting in Line 294, it seems like a local (per point) metric is used for both GNNExplainer and the proposed method.
- Figure 1: It's unclear how the proposed method produces this type of explanation (which says "mutagens contain the NO2 group"). This seems like it requires "additional ad-hoc post-analysis ... to extract the shared motifs to explain a set of instances" [Line 48]. Perhaps this analysis is easier with the proposed method, but it still seems necessary.
NIPS_2021_422
NIPS_2021
Experimental results leave some questions open, namely:
- One experiment estimates the quality of the uncertainty estimates by measuring how often the true feature importance lies within a 95% credible interval. However, the experiment uses pseudo feature importance because no true feature importance is available. The correctness of the pseudo feature importance relies on Prop. 3.2 and on a large enough perturbation value being chosen. This makes it difficult to judge to what degree the experiment can be trusted because the difference between the tested method and the pseudo feature importance is only the number of perturbations. The experiment could be strengthened in two ways. 1. Set up a (toy) dataset where the true feature importance is clearly defined. 2. What are the results when choosing BayesSHAP with N=10k perturbations vs. BayesLIME with N=10k? These should be nearly identical; otherwise the assumption of the pseudo feature importance being (nearly) equal to the true feature importance is compromised. Results could also be reported for BayesSHAP N=100 vs. BayesLIME N=10k and vice versa.
- Correctness of Estimated Number of Perturbations: How can you be sure that G doesn’t simply overestimate? Maybe a value a lot less than G would have been sufficient?
- Human Evaluation Experiment: Users were asked to guess a number from an image where the explanation was masked. Figure 8 indicates that each user was given 30 such images, and line 230 suggests that all 30 images were a masked version of the number “4”. This seems like an unbalanced setup, which makes it difficult to determine the meaningfulness of this experiment.
Questions
User Study: Why did you decide to erase the explanations and measure failure to guess the correct image rather than keeping just the explanations and measuring success at guessing the correct image?
B.1: Why is it okay to make these assumptions?
Theorem 3.3: How do we know that S is large enough?
After Authors' Response: All my questions and weakness concerns were appropriately addressed; therefore I raised my score. A "Broader Impacts and Limitations" section could be added that discusses potential dangers of trusting explanations. It could, for example, discuss that the choice of N is important or that explanations can still be wrong.
- One experiment estimates the quality of the uncertainty estimates by measuring how often the true feature importance lies within a 95% credible interval. However, the experiment uses pseudo feature importance because no true feature importance is available. The correctness of the pseudo feature importance relies on Prop. 3.2 and on a large enough perturbation value being chosen. This makes it difficult to judge to what degree the experiment can be trusted because the difference between the tested method and the pseudo feature importance is only the number of perturbations. The experiment could be strengthened in two ways.
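The reviewer's first suggestion can be sketched directly: build a toy black box whose true feature importances are known by construction (a linear model), fit a generic Bayesian linear surrogate on local perturbations, and check how often the 95% credible intervals cover the true coefficients. This is only an illustration of the coverage check, not the paper's exact BayesLIME/BayesSHAP estimator; all constants and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": a linear model, so the true local feature importance is true_w everywhere.
true_w = np.array([2.0, -1.0, 0.0, 0.5])

def black_box(X):
    return X @ true_w

def bayesian_local_explanation(x0, n_perturbations, scale=0.1, noise_var=0.01):
    """Bayesian linear surrogate around x0: returns posterior mean and std of the
    local importance coefficients (a rough stand-in for a BayesLIME-style estimator)."""
    X = x0 + scale * rng.normal(size=(n_perturbations, x0.size))
    y = black_box(X) + np.sqrt(noise_var) * rng.normal(size=n_perturbations)
    prior_prec = 1e-4                                    # weak Gaussian prior on coefficients
    A = X.T @ X / noise_var + prior_prec * np.eye(x0.size)
    cov = np.linalg.inv(A)
    mean = cov @ (X.T @ y) / noise_var
    return mean, np.sqrt(np.diag(cov))

# Coverage check: how often does the 95% credible interval contain the true coefficient?
hits, trials = 0, 0
for _ in range(200):
    x0 = rng.normal(size=true_w.size)
    mean, std = bayesian_local_explanation(x0, n_perturbations=100)
    lo, hi = mean - 1.96 * std, mean + 1.96 * std
    hits += int(np.sum((true_w >= lo) & (true_w <= hi)))
    trials += true_w.size
print("empirical coverage of 95% credible intervals:", hits / trials)
```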
NIPS_2018_641
NIPS_2018
Weakness: First, the main result, Corollary 10, is not very strong. It is asymptotic, and requires the iterates to lie in a "good" set of regular parameters; the condition on the iterates was not checked. Corollary 10 only requires a lower bound on the regularization parameter; however, if the parameter is set so large that the regularization term dominates, then the output will be statistically meaningless. Second, there is an obvious gap between the interpretation and what has been proved. Even if Corollary 10 holds under more general and acceptable conditions, it only says that uncertainty sampling iterates along the descent directions of the expected 0-1 loss. I don't think that one may claim that uncertainty sampling is SGD merely based on Corollary 10. Furthermore, existing results for SGD require some regularity conditions on the objective function, and the learning rate should be chosen properly with respect to the conditions; as the conditions were not checked for the expected 0-1 loss and the "learning rate" in uncertainty sampling was not specified, it seems not very rigorous to explain empirical observations based on existing results for SGD. The paper is overall well-structured. I appreciate the authors' attempt to provide some intuitive explanations of the proofs, though there are some over-simplifications in my view. The writing looks very hasty; there are many typos and minor grammar mistakes. I would say that this work is a good starting point for an interesting research direction, but currently not very sufficient for publication.
Other comments:
1. ln. 52: Not all convex programs can be efficiently solved. See, e.g., "Gradient methods for minimizing composite functions" by Yu. Nesterov.
2. ln. 55: I don't see why the regularized empirical risk minimizer will converge to the risk minimizer without any condition on, for example, the regularization parameter.
3. ln. 180--182: Corollary 10 only shows that uncertainty sampling moves in descent directions of the expected 0-1 loss; this does not necessarily mean that uncertainty sampling is not minimizing the expected convex surrogate.
4. ln. 182--184: Non-convexity may not be an issue for the SGD to converge, if the function Z has some good properties.
5. The proofs in the supplementary material are too terse.
4. ln. 182--184: Non-convexity may not be an issue for the SGD to converge, if the function Z has some good properties.
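For readers who want the reviewer's distinction to be concrete, the following toy sketch shows the reading being scrutinized: uncertainty sampling that repeatedly queries the least-certain point (smallest |margin|) and takes one gradient step on a convex surrogate there, after which the empirical 0-1 loss can be inspected. This is a caricature for intuition only; it does not verify the regularity conditions, the "good" parameter set, or the learning-rate schedule discussed above, and the dataset and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool for binary classification, labelled by a ground-truth linear rule plus noise.
n, d = 500, 2
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.5, -1.0]) + 0.1 * rng.normal(size=n))

def uncertainty_sampling_steps(X, y, steps=200, lr=0.5, reg=1e-3):
    """At each round, pick the pool point with the smallest |margin| under the current
    model and take one gradient step on the regularized logistic loss at that point."""
    w = 0.01 * rng.normal(size=X.shape[1])
    for _ in range(steps):
        i = int(np.argmin(np.abs(X @ w)))                  # most uncertain point
        z = y[i] * (X[i] @ w)
        grad = -y[i] * X[i] / (1.0 + np.exp(z)) + reg * w  # d/dw [log(1+e^{-z}) + reg/2 ||w||^2]
        w -= lr * grad
    return w

w = uncertainty_sampling_steps(X, y)
print("empirical 0-1 loss of the final iterate:", np.mean(np.sign(X @ w) != y))
```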
NIPS_2021_725
NIPS_2021
Comparing the occupational statistics computed by GPT2 vs. those reported by the United States is very interesting and informative. However, the presentation of the methodology and the subsequent discussion are confusing to me. In particular, in Section 3.4, I am not sure what “adj.” in equation (1) means and why “adj. Pred” is appropriate as a scaling factor. I would appreciate it if the authors could clarify and make this section clearer. The analysis of intersection effects is interesting, but I fail to see a clear presentation of the statistical significance of these results. It may be clearer if the authors could specify p-values for some regressors and offer some discussion. From Table 3, I also do not believe that average pseudo-R2 is necessarily a meaningful measure for the individual factor. The authors claim the contribution of “benchmarking the extent of bias relative to inherently skewed societal distributions of occupation associations”. However, I have some reservations, as 1) the authors did not propose any quantitative measurement of the extent of occupation bias relative to real distributions in society; 2) the authors did not compare any models other than GPT2. Several sections of the paper are confusing to me. There is a missing citation / reference in Line 99, section 3.1. The notation \hat{D}(c) from Line 165, section 3.4 is unreferenced. The authors made a great effort to acknowledge the limitations of their work.
2) the authors did not compare any models other than GPT2. Several sections of the paper are confusing to me. There is a missing citation / reference in Line 99, section 3.1. The notation \hat{D}(c) from Line 165, section 3.4 is unreferenced. The authors made a great effort to acknowledge the limitations of their work.
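The comparison the review describes (model-implied occupation shares vs. official statistics) can be sketched as follows with the Hugging Face transformers library. The prompt, the candidate occupations, the reference shares, and the total-variation comparison are all illustrative assumptions; this is not the paper's Equation (1) or its "adj. Pred" scaling.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt, continuation):
    # Sum of log P(token | prefix) over the continuation tokens only.
    enc = tokenizer(prompt + continuation, return_tensors="pt")
    prompt_len = len(tokenizer(prompt)["input_ids"])
    with torch.no_grad():
        logits = model(**enc).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    ids = enc["input_ids"][0]
    return sum(log_probs[0, i - 1, ids[i]].item() for i in range(prompt_len, len(ids)))

prompt = "The woman worked as a"
occupations = [" nurse", " engineer", " teacher", " carpenter"]   # leading spaces for GPT-2 BPE
scores = torch.tensor([continuation_logprob(prompt, occ) for occ in occupations])
model_dist = torch.softmax(scores, dim=0)   # model-implied shares over this candidate set

# Purely illustrative reference shares; real work would use official labour statistics.
reference = torch.tensor([0.30, 0.20, 0.35, 0.15])
tv_distance = 0.5 * torch.sum(torch.abs(model_dist - reference)).item()
print(dict(zip(occupations, model_dist.tolist())), "TV distance:", tv_distance)
```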
Jszf4et48m
ICLR_2025
1. **Presentation of this work requires thorough improvement.**
- The authors should use `\citep` for most cases in the manuscript, which places the authors' names and the year in parentheses. The current use of `\cite` makes the manuscript cluttered and difficult to read.
- The manuscript contains many incoherent parts. For instance, in Lines 474-479, the authors state: "we used x_0 ... and employed LoRA fine-tuning for efficiency. **But** these two variants are crucial to make it work". The use of the transitional word "but" in this context is confusing. Later, the authors assert that "we **must** fine-tune the entire model instead of using LoRA". Then why is LoRA fine-tuning described as crucial if it is not the recommended approach?
- Notations are inconsistent in the manuscript. For example, in Equation (2), $x_t=\alpha_t x_0^i+(1-\alpha_t)y^i+\sigma_t^2 \epsilon_t$, whereas in Equations (6)-(16), $x_t=(1-\alpha_t) x_0+\alpha_t y+\sigma_t \epsilon_t$.
- What is the meaning of "$:\sigma_t^2=\alpha_t-\alpha_t^2$" in Equation (2)?
2. **Many statements in this work are incorrect or overclaimed.**
- In the abstract: "surpassing Stable-Diffusion (LDM) performance". However, all experiments conducted in the main content are compared with an LDM trained on small datasets, which is NOT Stable Diffusion (v1/v2/v3).
- The training objective of LDM or Stable Diffusion is the noise $\epsilon$. However, $\epsilon$ is NOT $x_t-x_0$ in LDM as stated in Line 266 and Table 4.
- In Lines 507-509, by the SNR definition in this work, SNR does not tend to 0 as $t \rightarrow T$, as the concatenated condition $y$ also provides some signal, to a certain extent similar to the proposed framework, even at $t=T$. SNR should NOT be the reason that distinguishes between different conditioning methods (Concatenation vs. Schrödinger Bridge).
- What is the meaning of "$:\sigma_t^2=\alpha_t-\alpha_t^2$" in Equation (2)?
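A short worked version of the reviewer's SNR remark, using the bridge parameterization quoted in the review, $x_t=(1-\alpha_t)x_0+\alpha_t y+\sigma_t\epsilon_t$ with $\sigma_t^2=\alpha_t-\alpha_t^2$; the boundary behaviour $\alpha_T\to 1$ is assumed for illustration.

```latex
\begin{align}
x_t &= (1-\alpha_t)\,x_0 + \alpha_t\,y + \sigma_t\,\epsilon_t,
\qquad \sigma_t^2 = \alpha_t - \alpha_t^2 = \alpha_t(1-\alpha_t), \\
\mathrm{SNR}_{x_0}(t) &:= \frac{(1-\alpha_t)^2}{\sigma_t^2}
 = \frac{(1-\alpha_t)^2}{\alpha_t(1-\alpha_t)}
 = \frac{1-\alpha_t}{\alpha_t} \;\longrightarrow\; 0
 \quad \text{as } \alpha_t \to 1.
\end{align}
% Yet at t = T (alpha_T -> 1, sigma_T -> 0) the sample collapses onto the condition,
% x_T -> y, so the conditioning signal is fully preserved, just as it is when y is
% concatenated noise-free at every step. A vanishing SNR with respect to x_0 alone
% therefore does not by itself separate the bridge from concatenation-based conditioning.
```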
NIPS_2022_789
NIPS_2022
(-) It would be nice to show and discuss failure cases, or situations when the proposed approach does not outperform the others.
Minor comments:
- table X, figure Y, section Z, etc. --> Table X, Figure Y, Section Z, etc.
- Eq. 3: x_t --> x(t)
- Fix punctuation at the end of Eqs. 6 and 9
- L71: R^n --> R^m
- L76: utilize
- L77: rely
- Eqs. 13 and 14: for consistency, write them in the discrete setting
- L144: uses
- L154: recall the definition of p_0 (it was only defined in Section 2)
- SuppMat, L465-466: check the sentence
- SuppMat, Fig. 6: there are two lines in red that should be in green
- SuppMat, L502: ϵ_θ --> z_θ
- SuppMat, L507: (4) --> Table 4
- SuppMat, L509: (1) --> Algorithm 1
6: there are two lines in red that should be in green; SuppMat, L502: ϵ_θ --> z_θ; SuppMat, L507: (4) --> Table 4; SuppMat, L509: (1) --> Algorithm 1
NuMemgzPYT
EMNLP_2023
1. Minor - I think a much more comprehensive and data-intensive analysis would improve this paper significantly, but since it is a short paper, this isn't a strong negative against what the authors have done. 2. I am unsure about the technical novelty of the approach - the paper appears to be simply doing prompt engineering on ChatGPT to meet an end goal. I like the domain and the framework, but the experimental results, while positive, show the strength of ChatGPT (LLMs) in performing qualitative analysis and not the authors' claim of reducing the human burden of performing TA. 3. Minor - At least one qualitative analysis of how the machine coder and human coder select similar or different "codes" is needed for completeness.
1. Minor - I think a much more comprehensive and data-intensive analysis would improve this paper significantly, but since it is a short paper, this isn't a strong negative against what the authors have done.
ICLR_2021_2821
ICLR_2021
Weakness:
1: AdpCLR_pre looks intuitive since it uses a pre-trained self-supervised model (SimCLR); therefore, we can get a high-quality similarity measure between pairs of image embeddings. In AdpCLR_full, however, the authors mention that no pre-trained model is used; then we can only ensure that P_same is a correct pair, and for the rest of the top-K pairs we cannot guarantee anything. The similarity measure on any other embedding vector will not be of high quality, and incorrect pairs will degrade the result more compared to correct ones. How will you get the correct top-K positive pairs in the AdpCLR_full setup? Maybe I am missing something, or a proper explanation is not provided. In the algorithm, if pretrain=False, the encoder F will be random and will provide poor embeddings; hence we expect a poor similarity measure. Please explain the embedding process in the case of AdpCLR_full if no pre-trained model is used.
2: I believe the convergence and generalization should be a function of the number of incorrect positive pairs. In the generalization error, we can see that it is a function of K, but at the same time, the error will depend on the expected error in the positive pairs. It is counter-intuitive that the AdpCLR_full convergence does not have any dependency on the correctness of the positive pairs. If we have a few/many incorrect positive pairs, learning will be too difficult, and convergence will slow down, or it will oscillate and not converge at all. Please provide a detailed explanation.
3: The experimental settings are not described properly; reproducing the results from the provided information would be difficult. The authors do not provide the code.
4: Can you provide results for the AdpCLR_full approach with the ResNet-50 (1x, 2x, 4x) architectures, like the results provided for AdpCLR_pre in Table 1?
5: How is the pseudo label defined?
3: The experimental settings are not described properly; reproducing the results from the provided information would be difficult. The authors do not provide the code.
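To ground the review's first question, here is a small sketch of top-K positive-pair selection from encoder embeddings, with a toy comparison showing how strongly the quality of the selected pairs depends on whether the encoder is informative (e.g., pre-trained) or random. This illustrates the general idea only, not the AdpCLR algorithm as specified in the paper; all sizes and names are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def topk_positive_pairs(embeddings, k):
    """For each sample, pick the k most cosine-similar other samples as extra 'positives'."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(-float("inf"))        # a sample is not its own extra positive
    return sim.topk(k, dim=1).indices        # (N, k) indices of candidate positives

# Toy check of the reviewer's concern: a random (untrained) encoder yields near-arbitrary
# "positives", while class-structured embeddings yield mostly correct ones.
N, d, k = 200, 64, 5
labels = torch.randint(0, 10, (N,))
random_emb = torch.randn(N, d)                                        # untrained encoder
informative_emb = F.one_hot(labels, 10).float() @ torch.randn(10, d) \
                  + 0.1 * torch.randn(N, d)                           # class-structured

for name, emb in [("random encoder", random_emb), ("informative encoder", informative_emb)]:
    idx = topk_positive_pairs(emb, k)
    agreement = (labels[idx] == labels.unsqueeze(1)).float().mean().item()
    print(name, "- fraction of top-K positives sharing the true class:", round(agreement, 3))
```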