Dataset Viewer

paper_id (string, 10 chars) | title (string, 17-149 chars) | abstract (string, 468-2.59k chars) | pdf_url (string, 71 chars) | reviews (list, 2-7 items)
---|---|---|---|---|
zzOOqD6R1b | Stress-Testing Capability Elicitation With Password-Locked Models | To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM’s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models, LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel method of evaluating capabilities elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords. Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models but may be unreliable when high-quality demonstrations are not available, e.g., as may be the case when models’ (hidden) capabilities exceed those of human demonstrators. | https://openreview.net/pdf/060fc5a68cf9e8cd99067fa71d86b9b2407c68af.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "Hgk9jK64zF",
"review_text": "The paper studies whether fine-tuning can elicit the hidden capabilities of LLMs, especially motivated by the setting of dangerous capabilities evaluations. \n\nTo provide a specific experimental setup, the paper considers password-locked models, which responds with weak answers unless provided with a fixed password in the prompts. By giving strong demonstrations generated from a strong base model to the locked model, they show that SFT typically can recover most of the capabilities hidden in the locked model, achieving unlocking. \n\nThe paper continues to study RL (with reward but not explicit demonstrations) and a toy setting where the password locking is achieved from scratch.\n\nIn my view, the paper studies a well motivated concern in current safety research agenda on dangerous capability eval. The password-locking scenario is a novel instantiation that we can concretely study capability elicitation. I find that simply making this formal and executable is already a meaningful contribution.\n\nThe paper is nicely written and well structured. \n\nThe two main lines of the experiments on SFT and RL are solid, and the results are novel.\n\nFor section 5, a minor concern I have is that the password-locking is achieved by SFT as well. This introduces this weird \"causal structure\" in the experimental setup: specifically, the capability elicitation (in section 5) and the password-locking are both via SFT. I can imagine that we get the results in section 5, not (only) because SFT is good in capability elicitation, but because it's relatively weak in robustly password-locking the model. This might happen, for example, if password-locking a kind of an unnatural task, and a base LM trained on natural language corpus won't robustly lock itself via SFT. To be fair, I think this is not the most likely explanation of the results in section 5. But I am wondering if the author(s) has any take on the relative power of SFT. (Also, I think section 7 partially addresses this concern anyway, though I look forward to seeing more evidence against this weird hypothesis.)\n\nThe paper would generally be stronger if there is a straighforward baseline on capability elicitation via few-shot/many-shot prompting only (no messing with the model weights). Similarly, I'd love to see how RepE / steering vector perform on password-locked models; i.e., can you find a direction in the activation space to unlock the model? (I would guess you can.)\n\nHow robustly has SFT password-locked the models? Any baseline attempts to jailbreak it, via say few-shot prompting?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "1iJNEK19Nx",
"review_text": "To determine whether models are safe to deploy, it's important for labs and regulators to be able to \"elicit\" model capabilities, including ones that are somehow \"hidden\" (whether by a malicious deployer, by some unforeseen property of the training data, or by an unaligned model that is able to plan). One way to evaluate for the presence of some capability is to fine-tune the model to perform the capability and check whether the model is able to do it. Does this work? The authors study this question by introducing and training \"password-locked\" models that are trained to only perform particular tasks well if a password is present in the prompt. The authors find that given sufficiently good demonstrations, fine-tuning and RL succeed at eliciting capabilities, but very poor demonstrations are insufficient.\n\n- The strongest contribution of this paper is the framing of the problem---motivating the problem of capability elicitation and showing some preliminary experiments in the space. The problem is pretty similar to backdoor detection but the new framing does provide different constraints which I find interesting: (1) the test model (in this case a password-locked model) doesn't necessarily have to be poisoned in a sneaky way like in backdoor attacks, instead the goal is to produce a test model in any way that somehow hides capabilities in a way that realistically imitates \"scheming\" or \"sandbagging\"; (2) the attacker is not necessarily given access to the training data or high-quality data.\n- Though I think it's unsurprising that fine-tuning successfully unlocks these password-locked models, I think the finding that fine-tuning on lower quality data doesn't work as well is an interesting contribution (a priori I could imagine that fine-tuning on lower quality data would still work to fully unlock the model if the password-locking were implemented internally as a simple conditional). The result in Fig 10 where domain generalization still works when each domain has a different password is also slightly surprising.\n- Overall I like the experimental methodology (testing both supervised fine-tuning and RL, modulating the quality of the fine-tuning data, testing generalization across domains---all choices motivated by their problem framing).\n\nThe technical contributions here are somewhat limited. I'd say password-locking is a fairly simple way of modeling hidden capabilities and could be quite unlike the threat models that the authors are actually interested in (situationally-aware LLMs or AI labs deliberately hiding capabilities from auditors; in the case of situationally-aware models, presumably the model does some kind of complex internal reasoning to determine their next action, whereas these password-locked models seem likely to just learn some kind of simple conditional to output X or Y). Indeed, the authors acknowledge this briefly in the limitations section. To some extent I think this is a result of the these threats not being understood well enough for us to know how to model them in a more realistic way than these password-locked models. Given the simplicity of password-locking I'd say that the success of the authors' elicitation methods are mostly unsurprising and also may not generalize to their real threat models.\n\n- Are there any techniques from past work that may be relevant here? 
I know you mention that most backdoor mitigation work is focused on detecting poisoned training data, but surely there are some techniques that don't rely on the training data---do none of them apply in this setting? or do they all seem weaker than fine-tuning or wouldn't provide any useful insight? I think this would also help clarify how this problem setting is different from backdooring.\n- It seems the most important future work here would be to come up with more realistic models than these password-locked models. Do you have ideas how how to do this? If so it's probably worth mentioning in the Limitations or Conclusion section.\n\n\nOther remarks that don't require a response:\n- I agree that a weakness of attacking the model via prompting is that it may underestimate capabilities, and in that sense doing fine-tuning seems like the correct choice. However I do wonder if attacking the latent space (embedding space) of the model (which should be much easier than a prompt attack due to operating on a continuous space) would address the limitations of prompt attacks in this setting.\n- Fig 18 caption: I think the last sentence has a typo (a repeated phrase)"
},
{
"confidence": 3,
"rating": 5,
"review_id": "VHhxlYfeDy",
"review_text": "The authors study the problem of testing advanced LLMs for non-obvious capabilities in a toy setting, where a model has been fine-tuned to only perform well when presented with a fixed password, and to perform poorly otherwise. This is meant to simulate the difficulty of finding the correct prompt for eliciting a particular capability among many other suboptimal prompts. The authors then study various methods of fine-tuning to trigger the strong performance even without the right password, such as supervised fine-tuning on high/low-quality demonstrations as well as various forms of RL which do not require demonstrations at all. They find that performance on par with original with-password performance can be easily achieved, suggesting that prompt fragility and variability may not be difficult problems.\n\n* The problem is interesting, and the authors present a creative reduction to the toy problem of password-locked models\n* A wide array of elicitation methods is considered, from many different settings of SFT as well as various demonstration-free methods using reinforcement learning to explore the more difficult where we do not have access to high quality demonstrations. Editorially, I do not think this is a very important setting, since it seems unlikely that we would ever be testing models for capabilities for which we have no idea what successful behaviors look like. And the dependence on RL approaches, which are in general much flakier than supervised learning, casts a lot of doubt on whether results in this setting would generalize in the real world. But the authors think this is important and study it carefully.\n* Lots of analysis on a diverse set of task settings\n\n* The authors explain the definition and process of password-locking models quite thoroughly, but their paper would be better served by a clearer and earlier motivation for studying password-locked models.\n* In general, the authors lean heavily on references to prior work to explain various decisions in their experimental design, or different analyses of model performance. It would be helpful if these choices were just explained directly, and then separately credited prior work for motivating these decisions.\n* There are various alternative approaches to eliciting stronger performance without the correct password, such as prompt optimization (e.g. with GCG or even gradient-free search) or few-shot prompting, which are not explored in this paper\n* The nomenclature throughout the paper is a little confusing. The pi^weak_base notation was hard to keep track of, as well as the locked with password/locked without password/unlocked distinctions\n* Given the close connection to neural backdoors/trojans, the authors should explain more clearly why or why not backdoor removal methods are not applicable here\n\n* As I understand, the reason for studying password-locked models is that they are a rough approximation of prompt fragility in LLMs, i.e. the behavior with/without the right prompt (or in this case, the right password) varies greatly. But is prompt fragility really a big problem in practice when conducting safety audits of LLMs? And if so, why are password-locked models the right simplification to study, rather than just looking at real settings where performance is varies highly depending on the prompt used?\n* What evidence do the authors have that their password-locking was successful? 
A \"null hypothesis\" for their results is that the password-locking was simply not very strong and thus very easily erased. Do perturbations of the password also work? What about prompt optimizers/automated prompt engineering like DSPy, etc?\n* How do we know the fine-tuning isn't just improving overall task performance? Would the fine-tuning explored in this paper improve performance to a similar degree for a model for which no password elicited the strong performance?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "KgdS6LAkEd",
"review_text": "This paper studies the problem of eliciting hidden capabilities from language models, through fine-tuning. In order to plant 'locked' capabilities and then do controlled test of elicitation, the authors lock the capabilities using passwords they bake into the prompts at fine-tuning. Then, they try to elicit the capability without the password in a zero shot way, and then after fine-tuning with a few good samples, or using RL and existing judgment labels. They find that using a few good FT samples the capabilities are almost completely attainable.\n\n1. I liked the idea of using passwords to control for the model behavior, and lock the capabilities.\n\n1. I am not sure about the motivation of the paper. Is model locking something that people really do in practice? Using a set of words in a prompt is pretty weak in practice, there might be stronger ways to lock a model, such as posthoc methods at decoding time. Also the findings and insights are not that surprising. Many safety alignment and jailbreaking papers show that alignment is 'shallow' and can be easily reversed [1,2]\n\n2. Using fine-tuning and RL at decoding time is a pretty strong assumption, as having access to model parameters, training a model and also having access to high quality data is not that realistic.\n\n[1] Patil, Vaidehi et al. “Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks.” ArXiv abs/2309.17410 (2023): n. pag.\n\n[2] Yang, Xianjun, et al. \"Shadow alignment: The ease of subverting safely-aligned language models.\" arXiv preprint arXiv:2310.02949 (2023).\n\n1. I wonder how the findings would defer if you dont do FT, instead do pre-fix tuning (i.e soft token prompts, see [1] below) or if you do zero-shot prompts and prompt optimization methods like GCG.\n\n[1] Li XL, Liang P. Prefix-Tuning: Optimizing Continuous Prompts for Generation. InProceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) 2021 Aug (pp. 4582-4597)."
}
] |
zxSWIdyW3A | Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging | Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their performance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt to directly collect multi-hardware data and perform centralized training, which is impractical due to severe user data privacy concerns and hardware heterogeneity across different platforms/institutions. In this study, we explicitly consider data privacy and heterogeneity in cooperatively optimizing SCI systems by proposing a Federated Hardware-Prompt learning (FedHP) framework. Rather than mitigating the client drift by rectifying the gradients, which only takes effect on the learning manifold but fails to solve the heterogeneity rooted in the input data space, FedHP learns a hardware-conditioned prompter to align inconsistent data distribution across clients, serving as an indicator of the data inconsistency among different hardware (e.g., coded apertures). Extensive experimental results demonstrate that the proposed FedHP coordinates the pre-trained model to multiple hardware configurations, outperforming prevalent FL frameworks by 0.35dB under challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous Dataset has been built upon multiple practical SCI systems. Data and code are available at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git | https://openreview.net/pdf/9784818faf1b61e993e8c55556f64ad6c612ecad.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "OYqo48ZEnI",
"review_text": "The authors present a Federated Hardware-Prompt learning (FedHP) framework to address the fact that compressive snapshot spectral imaging devices may not be easily tuneable against changes in the coded aperture, and that in fact the said access to coded apertures may not be possible due to privacy reasons. The authors solve this by hardware prompt learning, which essentially learns from observing diverse coded aperture samples of all clients, regularizing the input data space and achieving the goal of coping with heterogeneity sourcing from hardware. The results show on a specific dataset improvement across all 10 samples in terms of spectral reconstruction quality. The comparison is primarily in the sense of federated learning approaches.\n\nTypo: figure 3 caption -> colorblue shouldn’t be there\n\nThe presentation is somewhat accessible to a generally knowledgeable non-expert in federated learning, in that the purposes are clear.\n\nThe biggest weakness is arguably that the paper covers a somewhat very niche topic, which is the application of a federated learning scheme to compressive snapshot spectral imaging. To some extent one would expect the technique to abstract away from the specific case of CASSI, as the solution does not particularly pertain to CASSI.\n\nIn addition, due to limited data available in this setup and to very limited size datasets, it is difficult to ascertain the significance of the findings.\n\nCan the authors extend this to any other compressive imaging scheme? Or perhaps disentangle the improvements in terms of FedHP, from those specific to the application? This would also broaden the data available for validating the experiments."
},
{
"confidence": 4,
"rating": 5,
"review_id": "eO0MWT6rHh",
"review_text": "The paper addresses the challenges faced in snapshot compressive imaging (SCI) systems due to hardware shifts and the need for adaptability across multiple hardware configurations. By introducing a hardware-prompt network and leveraging federated learning, the framework enhances the adaptability and performance of SCI models across different hardware configurations.\n\n1. The manuscript is well-organized with a clear and logical structure that enhances the readability of the content.\n2. The paper provides a detailed background on SCI and FL. The planned release of the Snapshot Spectral Heterogeneous Dataset (SSHD) will significantly aid future research.\n3. Using different coded apertures for different clients closely mirrors real-world scenarios, adding significant practical relevance to the study.\n\n1. The literature review on federated learning (FL) heterogeneity in the Introduction section lacks comprehensiveness. There are numerous recent papers addressing heterogeneity in FL that are not cited here. Additionally, the references included are somewhat outdated. Including more current and diverse references would strengthen the review and provide a more accurate context for the study.\n2. the manuscript explains that the coded apertures for each client follow a specific distribution Pc, it does not provide further details about the exact nature or type of this distribution.\n3. There are many ways to partition data to construct heterogeneous scenarios, such as practical and pathological methods. The approach of equally splitting the training dataset according to the number of clients is not very convincing. The authors should try different partitioning methods.\n4. It is unclear which datasets were used to obtain the experimental results in Tables 1 and 2. The authors did not specify this, which creates confusion in the experimental analysis.\n\n1. What is the rationale for using adaptors, and what is their function?\n2. What network models are used in the comparison methods? It is necessary to clearly state the fairness of the validated methods.\n3. The explanation for Figure 3 is not detailed enough. For example, what is \"Patch\"?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "QAXeKTfhaH",
"review_text": "Most existing reconstruction models in snapshot compressive imaging systems are trained using a single hardware configuration, making them highly susceptible to hardware variations. Previous approaches attempted to address this issue by centralizing data from multiple hardware configurations for training, but this proved difficult due to hardware heterogeneity across different platforms and privacy concerns. This paper proposes a Federated Hardware-Prompt Learning (FedHP) framework, which aligns data distributions across different hardware configurations by correcting the data distribution at the source, thereby enabling the trained model to adapt to multiple hardware configurations. The performance on existing datasets shows an improvement compared to previous popular training frameworks. Additionally, the authors have released their own created dataset and code.\n\n1.Previous work focused on the data itself, directly correcting various types of data through network models. In contrast, the authors of this paper focus on the root cause of the differences—hardware. They address the issue from the perspective of learning the differences in hardware.\n2.The method proposed by the authors has achieved excellent performance compared to existing mainstream methods, and the average performance has also improved.\n\n1.The number of clients used in the experiments is still relatively small. Although a simple comparison of the impact of different numbers of clients was made, there is not much difference in performance compared to other methods when the number of clients is larger.\n2.Although good results were reported on simulated data, more results on real data should be included to evaluate the effectiveness of the proosed method.\n\nWhy does the prompter lead to such a significant improvement, while the effect of the adaptor is not as pronounced? Please provide an in-depth analysis."
},
{
"confidence": 4,
"rating": 6,
"review_id": "Aa3IgBYymf",
"review_text": "The paper introduces FedHP, a reconstruction method for snapshot compressive imaging systems, which addresses the challenge of cross-hardware learning by proposing a federated learning approach. The key contribution lies in using a hardware-conditioned prompter to align data distributions across different hardware configurations, thereby enhancing the adaptability of pre-trained models without compromising data privacy.\n\n1. The writing of the paper is good, making it easy to read and follow with clear arguments.\n2. The problem defined in the paper is novel with a clear motivation, providing good inspiration for solving the issue of inconsistent device configurations in snapshot compressive imaging.\n3. The proposed method is clear and the conclusions are relatively convincing. Overall, it is an interesting work.\n\n1. There are some typos in the writing. For example, the caption of Figure 3 and the bold parts in the second row of Table 1 and the eighth row of Table 2 are confusing.\n2. The proposed FedHP method is relatively straightforward and lacks deeper insights. Moreover, it does not show a significant performance improvement compared to FedAvg.\n3. The experiments are not comprehensive enough. Given that this work aims to address the snapshot compressive imaging (SCI) problem, I suggest adding experiments to test the applicability of other SCI systems, such as Coded Aperture Compressive Temporal Imaging (CACTI).\n4. There is a lack of sufficient real-world experiments. It would be beneficial to set up multiple independent SCI systems to test the algorithm's performance. Including reconstruction results obtained from these real-world systems is recommended.\n\n1. All the experiments in this paper are based on the SD-CASSI model. Can the same FedHP model be simultaneously applicable to both DD-CASSI and SD-CASSI architectures, which have significantly different designs?\n2. Although the proposed method outperforms other algorithms in terms of performance metrics, there are still many artifacts in the reconstructed images. While I understand that this is maybe due to the precision issues of the CASSI system, it is crucial for evaluating the practical usability of the algorithm. Additionally, I am not sure whether the spectral accuracy of the reconstructed images is also optimal in statistical terms, which is vital for spectral imaging systems.\n3. Furthermore, if possible, I hope the authors can also address the concerns I raised in the Weaknesses section."
}
] |
zw2K6LfFI9 | PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation | Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following.
While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and lead to potential ambiguity. Considering that humans often imagine and visualize sub-instructions reasoning out before acting, the imagined subgoal images can provide more intuitive guidance and enhance the reliability of decomposition. Inspired by this, we propose **PERIA**(**PE**rceive, **R**eason, **I**magine, **A**ct), a novel framework that integrates holistic language planning and vision planning for long-horizon manipulation tasks with complex instructions, leveraging both logical and intuitive aspects of task decomposition.
Specifically, we first perform a lightweight multimodal alignment on the encoding side to empower the MLLM to perceive visual details and language instructions.
The MLLM is then jointly instruction-tuned with a pretrained image-editing model to unlock capabilities of simultaneous reasoning of language instructions and generation of imagined subgoals. Furthermore, we introduce a consistency alignment loss to encourage coherent subgoal images and align with their corresponding instructions, mitigating potential hallucinations and semantic conflicts between the two planning manners.
Comprehensive evaluations across three task domains demonstrate that PERIA, benefiting from holistic language and vision planning, significantly outperforms competitive baselines in both instruction following accuracy and task success rate on complex manipulation tasks. | https://openreview.net/pdf/2a39fcbdd8617cd0a7fbe9312a20b9b51ea8ab74.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "hd3aedGTvC",
"review_text": "The paper proposes a framework that integrates large multimodal language models (MLLMs) and diffusion models to enable holistic language planning and vision planning for long-horizon robotic manipulation tasks with complex instructions. The authors jointly train the MLLM and diffusion model for language reasoning and visual imagination through latent image token generation. An explicit consistency loss aligns the reasoned instructions with the imagined subgoal images.\n\n1. Novel motivation for integrating of multiple modalities for providing better guidance.\n\n2. Principled design of the framework components like the encoding-side alignment and the latent image token generation approach.\n\n1. Weak experimental evaluation (see below questions).\n\n1. While the authors acknowledge that training and inference costs are significant, the current draft lacks a more in-depth analysis of these problems. In particular, what are the various tradeoffs associated with different MLLMs that can be used, taking into consideration training time/FLOPs/MACs? How does varying these choices impact performance? Experiments answering these questions are equally as important as the ablations being run on training design choices (e.g. alignment loss).\n\n2. Lack of real-world evaluation. Many works ([1], [2]) in this problem setting leveraging foundation models for robotic manipulation demonstrate the advantages of these large MLLMs/generative models in real-world settings, where the distribution of objects is extremely long-tailed. Can the authors show that PERIA can operate with similar success in this regime?\n\n[1] [Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning](https://arxiv.org/abs/2311.17842)\n\n[2] [Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models](https://arxiv.org/abs/2310.10639)"
},
{
"confidence": 4,
"rating": 6,
"review_id": "yF6zORO4R5",
"review_text": "The paper tackles the problem of long-horizon task planning on pick-and-place tasks in the Ravens domain. Given a dataset of trajectories, it first learns the projection to align the vision and language encoder for a multimodal LLM. Then it finetunes both the multimodal LLM and a diffusion model to generate a step action in language, where the diffusion model is used to generate a conditioning subgoal image, which is proposed as an intermediate step that helps with the step action generation in language.\n\n- The paper is overall well-written and the figures are helpful for understanding the method.\n\n- It is unclear, at least from the experiments in the paper, that the diffusion model is actually useful, especially when the output is still in language space. For example, it seems that the tasks studied in the paper can be easily tackled by a modern multimodal language model (likely even the open-sourced ones), by simply providing the the initial image and appropriate prompting. However, this is missing as an important baseline in the paper (and this does not require additional training data). Furthermore, to demonstrate the effectiveness of an image subgoal in addition to a language subgoal, the evaluation would have to be done on tasks that have subgoals that are difficult to describe in language but easy to describe in visual space, but all the evaluated tasks are the contrary.\n- A related work “Video Language Planning” also seems to be missing from the paper, despite it might involve closed-sourced models. However, the idea seems quite relevant and it’s unclear if the paper provides additional insights for the community.\n\nSee \"weaknesses\" section above."
},
{
"confidence": 4,
"rating": 7,
"review_id": "rVdP3LcARR",
"review_text": "The paper proposes a holistic vision-language planning method for long-horizon robot manipulation, by learning a multi-modal large language model (MLLM). The MLLM generates interleaved language actions and keyframe images based on language goal and the initial image. Each pair of generated language and keyframe image is used as conditioning of a learned motion policy for robot manipulation.\n\nBased on a pretrained MLLM model, the paper first learns a projector to align visual encoding to with language on image captioning tasks tailored to robot manipulation. Then it applies instruction tuning to fine-tune the MLLM, an output projector, and a diffusion model to generate interleaved language and images. Additional, the authors propose another training objective to align the generated language and images. All large models are fine-tuned with LoRA.\n\nOn simulated robot manipulatio benchmarks, the proposed method outperforms imitation learning, language planning, and vision planning methods. The paper also systematically evaluates capabilities of the MLLM along different axes, and justifies the benefits introduced by each loss design via ablation studies.\n\n- The paper tackles the important challenge of robot long-horizon planning. The proposed method plans jointly in the language and image space, providing rich information for the low-level policy to condition on.\n- The paper exploits the capabilities of MLLM to generate language and images for robot manipulation, used with a separate low-level policy. I think this is good practice as MLLM is not naturally suitable to generate robot motion.\n- The experiments are comprehensive and provide useful information on understanding the capability of the trained MLLM.\n- The paper is in general well-written and easy to follow.\n\n- The explanation of low-level policy is missing from the main paper. This part is very important - the MLLM outputs language and images only, and it's not clear how these modalities are bridged with robot motion.\n- The contribution of the alignment loss between generated image and language is not sufficiently justified in the experiment. It will be helpful if the authors can provide the task success rate when the loss is absent.\n\n- I wonder which of the three pretraining tasks is the most important for vision-language alignment in the context of robot manipulation. It will be interesting if the authors can show some ablation studies on this."
},
{
"confidence": 5,
"rating": 6,
"review_id": "KqvZedn6p7",
"review_text": "This paper focuses on robotic manipulation with complex instructions. It proposes PERIA, a framework that integrates MLLM and diffusion models to incorporate both language planning and visual planning for long-horizon language-instructed manipulation tasks. Specifically, PERIA first performs a lightweight multi-modal alignment to consolidate the multi-modal perception capabilities. Then, PERIA performs multi-modal instruction tuning, where it outputs both subgoal language descriptions and visual tokens, both of which are fed to a diffusion model to generate subgoal images. PERIA introduces an additional consistency loss between and generated subgoal image and language descriptions. Experimental results demonstrate that PERIA significantly outperforms competitive baselines.\n\n•\tThis work follows a natural and reasonable pipeline to tackle the manipulation tasks with complex language instructions. Combining language planning and visual generation for manipulation is a sound approach.\n\n•\tThe alignment stage empowers the overall capabilities, as demonstrated in the experimental part.\n\n•\tPERIA achieves convincing experimental results compared with previous works. The authors also conduct extensive ablative study to mine more insights.\n\n•\tEnd-to-end learning for such a large system requires considerable cost. Such a comprehensive framework may lead to powerful performances but the resources may be a limitation. This paper does not present how much resources PERIA uses or related experiments to address such potential concerns.\n\n•\tOne of my concerns is that the consistency objective, which forces the MLLM to output subgoal language descriptions, may suffer from accumulative error. This is because when the generated subgoal image is not the desired image but is a natural image that can be reached within one-step action, the MLLM would learn an incorrect subgoal description.\n\n•\tMore literature references and related baselines should be incorporated.\n\n•\tThe ablation in visual planning lacks an experiment where PERIA generates subgoal images with either subgoal descriptions or generated visual tokens, which should reveal more insights into what leads to the improvements in visual planning.\n\n•\tYou generate subgoal images with subgoal descriptions and generate visual tokens. Why not use 1) subgoal descriptions and observation or 2) generated visual tokens alone? The former resembles a world model, and the latter sounds like a decoding of an imagined visual subgoal, both of which sound more natural. I guess you have tried the latter but found it was not as good as adding subgoal language.\n\n•\tWhat LLM do you use? It is possible that a powerful LLM accounts for superior performance to some extent. Have you compared the LLMs of different works?"
}
] |
zv9gYC3xgF | Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models | We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution.
While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary $n$ remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate $O(1/\sqrt{t})$. This is the first global convergence result for Gaussian mixtures with more than $2$ components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps. | https://openreview.net/pdf/01089f1b9d7a3757d7fe8abda681870c3db968be.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "6LkxBEXYgp",
"review_text": "The paper studies the convergence of EM for learning mixtures of Gaussians. Specifically, they consider a simplified setting where the Gaussians are in $d$-dimensions and all have covariance $I_d$. They consider an overparameterized version of the problem where they parametrize the mixture they are trying to learn by a mixture of $n$ Gaussians with means $\\mu_1, \\dots , \\mu_n$ and the ground truth distribution generating the data just consists of a single Gaussian $N(\\mu^* , I_d)$. The paper analyzes the dynamics of gradient EM for this problem. The main result of the paper is proving that for this overparametrized variant, gradient EM converges to the true distribution at a rate of $1/\\sqrt{t}$ with additional constants depending exponentially on the distance between the initialized means and the true mean, which they show is necessary.\n\nThere has been a long line of work on understanding the convergence of EM or gradient EM for learning mixtures of Gaussians. Without overparametrization, provable convergence is known for mixtures of two Gaussians and it is also known that convergence fails in general for mixtures of three or more components. For overparamterized settings, a previous work [Dwivedi et. al. 2018] shows that if we parametrize a mixture of two Gaussians and try to learn a ground truth distribution consisting of a single Gaussian, then EM converges at a $1/\\sqrt{t}$ rate (as long as the mixing weights are set to be different). This is in contrast to when we parametrize with only a single Gaussian and EM converges exponentially fast. The results of the current paper can be seen as generalizing the results of [Dwivedi et. al. 2018] to more than two components. The paper empirically validates their theoretical results with experiments on simple synthetic datasets.\n\nThe paper makes progress on a well-studied problem of understanding convergence of EM for learning GMMs. They give the first global convergence results for mixtures with more than two components.\n\nThe paper overcomes nontrivial technical barriers to extend previous results to more than two components.\n\nThe results of the paper only work when the ground truth is \"trivial\" i.e. a single Gaussian.\n\nThe results are qualitatively similar to previous work on overparametrized mixtures of two Gaussians. The contributions of the paper are mostly technical and it is a bit difficult to find a nice conceptual takeaway \\--- the previous work for two components already showed that overparametrization can lead to drastically slower convergence. It would be much more exciting and novel, say, if we could prove something when the ground truth were not just a single Gaussian.\n\n."
},
{
"confidence": 3,
"rating": 6,
"review_id": "Q83DuxxS9R",
"review_text": "This paper talks about the gradient-EM algorithm for over-parameterized GMM. The paper mostly shows the GLOBAL convergence and its rate when using this model to learn a single Gaussian.\n\nI believe any non-convex global convergence optimization problem is valuable. It is an extension of Dwivedi et al. 2019.\n\n1. The over-parametrized model may have severe overfitting problem. \n2. The based distribution is quite easy: a single normal, with known variance. In the paper, the covariance is fixed as the identity, which simplifies the problem in a deep way. Actually for symmetric 2-GMM, there are already faster algorithms to learn both mean and cov. \n3. I feel confused about the consistency and convergence in the paper. In Line 96, the convergence of KL divergence also contains the convergence of MLE, ie consistency. The convergence to the MLE is another loss function. Also in Remark 6, the convergence when sample size to infinity seems more easily ensured by WLLN.\n\nBesides the weakness above, I also have following questions:\n4. If you only learn the single normal, how is the algorithm compared with Dwivedi et al. 2019 or just 2-GMM? Is it necessary to use more? Is it overfitting so the performance seems better?\n5. I don’t get why the paper introduces Fact 1. It seems obvious. \n6. The mean is convergent to 0 (true) instead of the MLE."
},
{
"confidence": 2,
"rating": 6,
"review_id": "am4YAV6doi",
"review_text": "The paper focuses on the setting of a Gaussian Mixture Model with several summands and an input vector produced by one Gaussian distribution, where it employs the Expectation-Maximization rule to infer the model's parameters. Since the problem of having arbitrary number of summands has been unsolved, the paper provides an innovative scheme which includes the computation of the likelihood function and shows that the EM algorithm converges with sublinear complexity. \n\nThe authors also show that there exist neighborhoods of slow convergence rates.\n\n- The paper is well written, the theorems, lemmata and algorithmic steps are described gradually.\n- From a first overview of the literature, the result about global convergence seems novel. \n- Across section 4, there is intuition and remarks provided about the necessity of the steps.\n\n- The experimental evaluation is used as a proof of concept and thus is limited. The authors could have (potentially) experimented with several datasets, with varying weights in the GMM, and try to benchmark their algorithm to compare the emergent convergence rates.\n\nNA."
},
{
"confidence": 4,
"rating": 6,
"review_id": "pTgLGsoIvx",
"review_text": "The paper considers fitting a single Gaussian with multiple-component Gaussian mixture models (GMM) through the Gradient EM algorithm. While the two balanced over-specified Gaussian setting has been widely studied in the previous work, generalizing it to multiple-component GMM requires significant algebraic efforts. The entirety of the paper is to show the $1/\\sqrt{t}$ convergence rate of the population EM algorithm. In particular, the paper characterizes the explicit convergence rate of $1/\\sqrt{T}$ with constants exponential in the number of components, the phenomenon that coincides with the exponential lower bound for the parameter estimation of general GMMs with no separation.\n\n-\tExtending some existing two-component results to general multiple-component GMM is non-trivial and significant. The paper nicely characterizes the convergence rate that captures some important properties of learning GMM that can be achieved by GMM. \n\n-\tThe paper is well-written, emphasizing important aspects of the results and well-contrasting their techniques to existing results. \n\n-\tProof sketch is nicely written to help readers understand their key results.\n\n-\tWhile the lower bound result (Theorem 7) is a nice addition to the literature, I believe that the gap between this lower bound and the upper bound is large, since the upper bound is exponentially slow in the number of components. \n\n-\tOne important result from two specified GMM is the $n^{-1/4}$ (n is the number of samples here) statistical rate after convergence. I would like to see $n^{-1/2k}$ style results in general k-component GMM settings. At least, the authors should have discussed this aspect of previous work and contrasted the implications to k-GMM settings. \n\n-\tThe experiment would have been nicer if the final statistical rates were compared.\n\n-\tMaybe authors can elaborate on how their results can imply learning k-GMM with small separations?\n\n-\tIn Theorem 7, there is no restriction on the step size $\\eta$. I believe that the lower bound should also be able to tell that $\\eta$ cannot be set too large.\n\n-\tWhy only on the gradient EM? Can the analysis in the paper imply some convergence rates of the standard EM algorithm as well? I think it would make the paper much stronger if it could show that the same results hold for standard EM."
}
] |
zv4UISZzp5 | IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation | As Large Language Models (LLMs) become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs so that the evaluation set continually updates and refines according to model abilities.
Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains.
To produce high-quality data, we incorporate a self-correct mechanism into our generalization framework and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2.
The results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works.
We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs. | https://openreview.net/pdf/74ed0078ffe00fb63ba32cc447f4540054349fbb.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "ecG0hpo8bm",
"review_text": "This paper proposes a method of generating prompts for evaluating large language models such that the prompts are dynamic and allow for showing meaningful performance gaps between different language models.The authors show that the generated data is more-challenging and discriminative than prior datasets.\n\n- Work is very timely and addresses a major issue in how we can better evaluate LLMs which are continuously improving and saturating existing benchmarks.\n- Good to see that the generated prompts are indeed harder than baseline datasets - this should indicate that the prompts are challenging enough to provide decent signal on a language model's capabilities.\n- Experimented with many SOTA models and compared with several baseline datasets.\n\nThe main weakness of this work is that much of the pipeline relies prompting language models to modify seed data. This means that the performance of the language model plays a huge role in the quality of the resulting data. Given that the pipeline seems to have many different steps, each of these steps can introduce errors since LLMs are not fully reliable. It then becomes crucial to have a way of verifying that the generated questions are of high quality. There's also a concern that the ground truth answers might not be entirely accurate. The authors mention both of these issues as limitations.\n\n- If a particular language model is used to generate data using the proposed method, is there any bias where that model will perform better at solving those problems? For example, if Claude generates the prompt set, will the prompt set be easier for Claude than GPT?\n- Is the data generation done for a set of language models or for each individual language model? In other words, are the prompts being dynamically changed with respect to a single language model's response or all language model responses? Specifically, Section 2.2 says that the method \"rephrases the question based on the response from the LLM\" - which LLM is this statement referring to?\n- Are there any experiments to verify the robustness of each individual step in the pipeline? It seems like the current experiments are meant to verify the final output of the pipeline, not the in-between steps."
},
{
"confidence": 3,
"rating": 4,
"review_id": "kLpxn5sGzh",
"review_text": "The paper proposes a prompt synthesis framework for evaluating LLMs to accurately reflect different Large Language Model abilities. The authors develop two models to measure LLMs’ question discriminative power and difficulty. This study presents “instruction gradient” and “response gradient” methods to exploit rule sets to generalize questions.\n\nThe paper focuses on the generation of a large number of queries and corresponding answers on general language and mathematical topics. They have released a set of over 3000 questions for LLM evaluation. Their proposed metrics (discrimination index and difficulty score) show significant improvement in the quality of the benchmark datasets.\n\nAlthough the paper tries to solve a crucial research area in the scope of LLM evaluation, the study lacks in many different ways. The textual flow is difficult to follow. Many of the concepts introduced were not properly described or not cited with previous work’s references. These issues restricted the reviewability of this study.\n\n1. The proposed methods - “Instruction gradient” and “response gradient” are not properly described in the manuscript. Authors should write the working procedure of these methods in detail in the main manuscript, as these are the centerpiece of the whole question generation process.\n\n 2. “Generalizing questions from seed data based on the \"instruction gradient\" restricts the diversity and confines the content to specific topics” - is unclear. Consider explaining.\n\n 3. In section 2.3 - Assessing the Usability of General Text Questions: How is the assessment done? Is it done manually with human input? Or by an autonomic process/model?\n 4. In section 2.3 - CoT Check for Mathematical Questions: “we use Hunyuan to assess the reasonableness of the question, which successfully identifies the unreasonableness of the problem and corrects it based on the assessment process.” - How can it be ensured that the model successfully identifies the unreasonableness? Provide a theoretical/experimental study.\n 5. In section 2.4 - Acquiring reference answers: lines 133-136, are the answers scored by human participants?\n 6. In section 2.4 - Acquiring reference answers: line 140, What is meant by a “collective voting mechanism”? Please explain clearly.\n 7. In section 2.5 - lines 148-149, what are “label discrimination indexes”?\n a. In line 149, “the prompt includes four features” - How did you select these features? Provide some analysis.\n b. In lines 162-164, How did you select the threshold values? (e.g., “Low” means less than or equal to 0.1, “High” means values greater than 0.25, etc.). \n c. In line 168, “discrimination level label ranging from 0-3” - Is this range acquired by observations? Or have you performed some analyses on the score expressions?\n 8. In equation 4, what does the “score” mean? Is it the evaluation score that is depicted in Table 1?\n a. If you are using the same “score” to calculate the difficulty score and the discrimination indexes, does that mean a question is more difficult if a question is more discriminative?"
},
{
"confidence": 2,
"rating": 4,
"review_id": "UZwVXcmL62",
"review_text": "The paper introduces a novel framework for evaluating Large Language Models LLMs) based on Item Discrimination ID theory, which generates adaptive, high- quality prompts to effectively differentiate model performance. Key contributions include a dynamic evaluation set that evolves with LLM advancements, a self- correct mechanism for prompt precision, and models to estimate prompt discrimination and difficulty. The authors validate their framework by testing it on five state-of-the-art models and release a dataset of over 3,000 prompts to aid further research, demonstrating enhanced challenge and discrimination over previous methods.\n\nThe paper proposes a novel prompt generation method to produce more challenging evaluation data.\nThe paper is well-structured and clearly written. The methodology and evaluation criteria are explained clearly, making the paper accessible to a broad audience.\n\nThe paper only used one LLM Hunyuan) to generalize data and did not verify whether the proposed method can generalize to other LLMs.\nIt is debatable whether using test data generated by an LLM to evaluate the performance of LLMs has practical value. The paper lacks validation of the effectiveness of the machine-generated test set, such as comparing its metrics with those of other human-annotated datasets.\nThe paper lacks an analysis of the diversity of the data used to produce the test set.\n\nThe concerns are included in the weaknesses."
}
] |
zuwpeRkJNH | Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation | Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle these issues, we propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. The proposed knowledge augmentation approach uses large language models (LLM) to refine and enrich surgical concepts, thus providing comprehensive language supervision and reducing the risk of overfitting. The PeskaVLP framework combines language supervision with visual self-supervision, constructing hard negative samples and employing a Dynamic Time Warping (DTW) based loss function to effectively comprehend the cross-modal procedural alignment. Extensive experiments on multiple public surgical scene understanding and cross-modal retrieval datasets show that our proposed method significantly improves zero-shot transferring performance and offers a generalist visual representation for further advancements in surgical scene understanding. The source code will be available at https://github.com/CAMMA-public/PeskaVLP. | https://openreview.net/pdf/b754552d7cad51cf70357809a56df08d88257ab9.pdf | [
{
"confidence": 5,
"rating": 8,
"review_id": "x9lmNImh2H",
"review_text": "The paper addresses challenges in surgical video-language pretraining (VLP) due to the knowledge domain gap and scarcity of multi-modal data. It proposes a hierarchical knowledge augmentation approach and the Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. This approach enhances data efficacy and tackles spatial-temporal challenges by combining language supervision with visual self-supervision. Extensive experiments demonstrate significant improvements in zero-shot transferring performance and the generalist visual representation for surgical scene understanding.\n\nThe paper presents a unique approach to surgical video-language pretraining by employing hierarchical knowledge augmentation using LLMs, significantly improving textual data quality and diversity. The PeskaVLP framework innovatively integrates visual and language supervision, addressing the spatial-temporal challenges in surgical scene understanding. The methodology is meticulously validated through extensive zero-shot and linear-probing evaluations on datasets such as Cholec80 and AutoLaparo, demonstrating substantial performance improvements. The clarity of the presentation, with well-organized sections and effective visual aids, facilitates comprehension. The significant contribution lies in enhancing surgical scene understanding and cross-modal retrieval, making it highly valuable for the NeurIPS community. The paper's originality in using hierarchical pretraining and the detailed discussion on model architectures and initialization underscore its quality and significance in advancing surgical data science.\n\nFirstly, the dataset size is relatively small, with 1,007 videos for phase-level pretraining and 920 for video-level pretraining, which may limit the generalizability of the findings (as mentioned in the supplementary material). I know the difficulty in collecting medical data, but we must be sure that the presented approach can be generalized to different domains and hospitals. Furthermore, I doubt the methodology's potential to process \"noisy\" videos. \nExpanding the dataset and including more diverse surgical procedures would improve robustness. \n\nSecondly, while the paper mentions ASR errors in transcriptions, it does not provide a detailed methodology for handling them. Providing specific techniques for improving transcription accuracy would strengthen the study. \n\nAdditionally, the practical implementation of the PeskaVLP framework in real-world surgical contexts is not thoroughly discussed. Detailing strategies for integration into clinical workflows and addressing potential technological barriers would be beneficial.\n\n1. How do you plan to address the limited sample size and diversity in future studies to improve the generalizability of your findings? Consider expanding the dataset to include a more extensive and more diverse sample of surgical procedures to enhance robustness and applicability.\n\n2. What specific methods did you use to handle ASR errors in transcriptions? How did these errors impact your analysis?\n\n3. How do you manage the computational overhead associated with the hierarchical pretraining and dynamic time-warping processes?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "fLzJ6lMID0",
"review_text": "The paper presents a novel approach for enhancing surgical video analysis by incorporating procedural awareness. The authors propose a system that integrates knowledge of surgical procedures to improve the identification, segmentation, and annotation of surgical activities in video footage. This approach aims to address challenges such as the variability of surgical techniques and the complexity of visual data in operating rooms. The contributions of the paper include the development of a procedural model that can be aligned with video data, the creation of annotated datasets for training and evaluation, and the demonstration of improved performance over traditional video analysis methods.\n\n1.The integration of procedural knowledge into surgical video analysis is a highly original concept. This approach not only enhances the accuracy of video analysis but also opens new avenues for improving surgical training and documentation.\n\n2.Introduces a novel hierarchical knowledge augmentation technique using large language models to refine surgical concepts. Employs a Dynamic Time Warping-based loss function for effective cross-modal procedural alignment. Demonstrates significant improvements in zero-shot transfer performance across multiple surgical datasets. Provides a robust general visual representation beneficial for various surgical scene understanding tasks.\nWeaknesses:\n\n3.The potential applications of this research in surgical training, intraoperative assistance, and postoperative review are significant. The approach addresses a critical need in medical video analysis, making it highly relevant and impactful.\n\nDataset Limitations: The annotated datasets used for training and evaluation are crucial for the model's success. Expanding the diversity and volume of these datasets would enhance the generalizability of the findings.\n\nGeneralizability: How does the system perform across different types of surgeries (like ophthalmic surgery)? Have you tested its effectiveness in various surgical domains beyond the initial scope?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "7Q5nQkdlIh",
"review_text": "This paper proposes a Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) method that enriches language supervision with LLM-refined surgical concepts. It further constructs hard negative samples by reversing the text orders at the phase and video levels and employs a Dynamic Time Warping (DTW) based loss to align multimodal procedures. Extensive experiments on multiple surgical procedures and comprehensive evaluations demonstrate the effectiveness of this framework.\n\n- The paper is overall well-written, with the background and motivation well-stated.\n- Using LLM to augment surgical video text descriptions is a good idea to enhance the quality of surgical text narration. It establishes a good baseline and guideline for future works that aim to apply LLM in surgical narratives.\n- A more comprehensive parent-child level cross-modal correspondence was designed using DTW than existing works.\n- Demonstration of the proposed method can close the representation gap for different modality, and analysed both successful and complicated examples.\n\n- By reading the enriched dataset by LLM in Appendix H, I am concerning that the variation and diversity of narration will be removed by the augmentation. Will that cause any problems?\n- In my opinion, using LLM to refine the text description of surgical videos is the most important contribution of this paper. It would be interesting to see if other components are also effective enough without the knowledge augmentation.\n\n- Beyond the current ablation study on PeskaVLP components, would applying the hierarchical knowledge-augmented text data in HecVL improve its performance and if this could yield results competitive with PeskaVLP. This would provide powerful support to verify the extent to which the other components in PeskaVLP contribute to performance, apart from the augmented texts.\n- Although LLM can enhance surgical text quality, is there a concern that the text may become overly standardized? Given that surgeons' narratives in the operating room tend to be more oral, concise, and sometimes include jargon, will there be a performance degradation in real-world, real-time applications where LLM augmentation is impractical?\n- In Appendix E, Figure 4, it would also be interesting if the authors could visualize the embeddings of HecVL, since it performs better than SurgVLP.\n- In Table 3, on Cholec80, Moco pre-trained on Cholec80 (V) has better performance but wasn't in bold, do I misinterpret something?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "y1b7xOz8Eh",
"review_text": "The paper presents a new framework called PeskaVLP for surgical video-language pretraining. A hierarchical knowledge augmentation approach is used for enriching text information. The pretraining is implemented with the proposed language supervision and visual self-supervision. A new training objective is proposed for surgical procedural understanding. Extensive experiments are conducted to demonstrate the effectiveness on the surgical phase recognition task and cross-modal retrieval task on multiple downstream dataset.\n\n1. This paper addresses the problem of VLP in the surgical scene. A hierarchical knowledge augmentation is proposed to tackle the problem of lack of textual information in the surgical field.\n2. The paper is generally well-written and easy to follow.\n\n1. The explanation of method details is not clear enough, and there is a lack of discussion on some experimental results\n2. The proposed method is based on certain assumptions but lacks a comprehensive consideration of applicability.\n\n1. What types of surgeries are included in the SVL dataset used in the paper? Is it suitable for the pretraining task? Could it affect the results on the downstream dataset?\n\n2. In Section 3.2, where hierarchical knowledge is augmented by GPT, the authors need to discuss the ability of LLMs to generate accurate textual information to describe the surgical steps in the domain-specific surgical context, especially considering the fine-grained image-text alignment in the clip-level (only 4 frames).\n\n3. In Section 3.2, the authors calculate textual similarity between the pseudo step generated by the LLM and the narration. How is this similarity calculated? Is there an ablation study on the effectiveness of the three behavior in knowledge augmentation?\n\n4. In Section 3.3.1, the authors implement visual self-supervision based on augmentation. Which specific augmentations were used? Do the augmentations affect the corresponding text's semantic information? For example, using flipping could impact descriptions related to left/right information in surgical operation.\n\n5. In Section 3.3.2, procedural information based on surgical phases is used. However, in surgical datasets, such as the cholec80 and AutoLaparo mentioned in the paper, the surgical process does not always follow a linear order defined by Phase 1-N and may include repeated phases. The authors should discuss the applicability of the method design in such situations.\n\n6. In Table 3, for the experimental results on cholec80, Moco (third row) provides the best results, but this is not highlighted in bold in the table. This needs to be corrected and the corresponding discussion should be provided. The same issue appears with the results using Moco (second row) on the StrasBypass70 dataset."
}
] |
zuwLGhgxtQ | A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers | We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees i.e., samplers that require only $\mathcal{O}(\log(1/\varepsilon))$ versus $\Omega(\text{poly}(1/\varepsilon))$ iterations to output a sample which is $\varepsilon$-close to the target in $\chi^2$-divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved. | https://openreview.net/pdf/bd86dfe1f5fac662f55df1bccfbb1134cf9043ed.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "e2ERikvqJN",
"review_text": "The paper investigates the complexity of sampling from heavy-tailed distributions and presents a distinction between obtaining high-accuracy and low-accuracy guarantees. It analyzes two types of proximal samplers: those based on Gaussian oracles and those based on stable oracles. The main findings are that Gaussian oracle-based samplers can only achieve low-accuracy guarantees when sampling from heavy-tailed distributions, while stable oracle-based samplers can achieve high-accuracy guarantees. Additionally, the paper establishes lower bounds for samplers using the stable oracle, indicating that the presented upper bounds are optimal and cannot be fundamentally improved.\n\n1. The problem is well-motivated and interesting. \n2. Designed the algorithms and derived the upper bounds and lower bounds for different settings. \n3. The authors also provided insightful discussion.\n4. The authors provided solid theoretical proof for the results.\n\nThere is no experiment to verify the theoretical findings.\n\n1. Can you give an example in the real-world to motivate your problem?\n2. Is it possible to run some experiments to verify your results?"
},
{
"confidence": 1,
"rating": 7,
"review_id": "K2UBZSWIwI",
"review_text": "This paper studies the problem of heavy-tailed sampling. First, the paper shows that while the gaussian proximal samplers are efficient for light-tailed targets, they are not accurate for heavy-tailed ones; the paper develops a lower bounds for the Gaussian proximal samplers, which reveals a fundamental challenge in heavy-tailed settings.\n\nThen, the paper proceeds to develop a novel samplers based on restricted alpha-stable oracle; the insight is to replace the standard heat equation in gaussian oracle with a fractional heat flow. The paper proves that under suitable conditions the proposed sampler is efficient for heavy-tailed targets. Additionally, the paper proposes a practical implementation for a particular case of alpha=1.\n\n- Novel theoretical analysis for the gaussian oracle sampler, which provides a new insight to developing sampling algorithms\n\n- A novel methodology for heavy-tailed sampling\n\n- The paper is purely theoretical and lacks experimental evaluation; it would be nice to at least have a toy illustration for the implementable algorithm 2+3 in the alpha=1 case.\n\n- As the authors discussed in Sec5, the current paper does not present implementable algorithms for general alpha values in (0,2).\n\n- I wonder if the efficiency rejection sampling efficiency in Alg.3 has been taken into account of the sampler's theoretical complexity and practical complexity?\n\n- Maybe I am missing this -- what is the impact of alpha?"
},
{
"confidence": 1,
"rating": 7,
"review_id": "5Ofh7FZ5zb",
"review_text": "The paper focus on studying the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees. Their results are presented for proximal samplers that are based on Gaussian versus stable oracles. Authors show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. They also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.\n\nAlthough I am not an expert in this field, I find this work quite interesting. The authors provide new material and support their statements with proofs.\n\nThe paper is not tested in any way on a numerical experiment. I am convinced that a paper presented at this type of conference should be both motivated by a real-world application and tested numerically, e.g., on a near-real-world formulation of the problem.\n\n**After a rebuttal process**, the authors agreed with this weakness and promised to add the experiments to the final version of the paper.\n\nN/A"
},
{
"confidence": 3,
"rating": 8,
"review_id": "uc6KPPkdL0",
"review_text": "The authors provide a lower bound for sampling from heavy tailed distributions under the Gaussian oracle of order $O(\\textup{poly}(1/\\varepsilon))$. They then propose an alternative proximal sampling algorithm using the $\\alpha$-stable oracle that achieves a convergence rate of $O(\\log(1/\\varepsilon))$ for heavy-tailed distributions satisfying a fractional Poincare inequality. They then provide a practical implementation of the stable proximal sampler, and lower bounds on its convergence rate.\n\n- This work presents a very nice combination of results showing a separation in the performance of stable and Gaussian proximal samplers. The combination of lower and upper bounds separating the two methods makes the work a particularly interesting contribution.\n\n- The addition of a practical implementation of the stable proximal sampler is nice to have, demonstrating that it is viable in practice.\n\n- The work is generally clearly presented and the authors are clear about their contributions.\n\n- Overall, I consider this to be a very sound piece of theoretical work.\n\nI have no major concerns about this paper. The presentation is somewhat dense in places, though this is mostly just a consequence of it being a very technical paper and not a flaw as such. If the authors want to make the claim that practicioners should use the stable proximal sampler in applied settings, then they may want to provide empirical evidence of its performance compared to the Gaussian proximal sampler. However, I understand that this is not the main purpose of this theoretical paper.\n\nI have no clarifications to request."
},
{
"confidence": 2,
"rating": 6,
"review_id": "9OSFu4H7g1",
"review_text": "This paper studies the complexity of sampling heavy-tailed distributions. It provides lower bounds on the complexity of Gaussian-based samplers for a class of heavy-tailed targets. Then, the paper constructs proximal samplers based on stable oracles, which improve the sampling complexity.\n\n* This paper is well-written. The background of sampling and the research problems regarding sampling complexity are clearly introduced. The contributions of the lower bound on Gaussian-based samplers for heavy-tailed targets and the improved complexity using stable oracles are clearly presented.\n* The paper is technically sound. The definitions and assumptions are discussed clearly, and the theoretical results are supported by proof sketches.\n\nThe contribution of the paper could be improved with empirical experiments to evaluate the sampling algorithms and their complexity.\n\n* Is there any intuition that a Gaussian-based sampler has lower accuracy for heavy-tailed targets than for non-heavy-tailed targets?\n* How would a Gaussian-based sampler compare with a stable oracle for not heavy-tailed targets?"
}
] |
zuWgB7GerW | How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning | We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_{1}$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit for example the composition of two functions $f^{*}=h\circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the modulo map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_{1}$ norm. We compute scaling laws empirically, and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory. | https://openreview.net/pdf/d47299e76cea5209510c750a7137c8f8ce0de3bd.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "HoG0k5Pjq5",
"review_text": "This paper introduces Accordion Networks (AccNets), a novel neural network structure composed of multiple shallow networks. The authors propose a generalization bound for AccNets that leverages the F1-norms and Lipschitz constants of the subnetworks, demonstrating that these networks can break the curse of dimensionality by efficiently learning compositions of Sobolev functions. The paper also provides theoretical insights and empirical validation, showcasing the superior performance of AccNets in learning complex compositional tasks compared to shallow networks and kernel methods.\n\nThe introduction of Accordion Networks (AccNets) as a novel neural network structure is a creative and original contribution. The paper provides a thorough theoretical analysis supported by empirical evidence, ensuring the soundness of its claims. The ability of AccNets to break the curse of dimensionality by learning compositional functions efficiently addresses a fundamental challenge in high-dimensional learning tasks.\n\n1. The practical implementation of the proposed regularization methods might be challenging, particularly the first one requiring infinite width. \n\n2. The paper mentions the difficulty in optimizing Lipschitz constants, which could be a limitation in practical applications.\n\n3. Additional experiments on more diverse real-world datasets could further demonstrate the robustness and generalizability of AccNets.\n\n4. Although the author has discussed the differences between DNN and AccNet, there is still not enough information for me to be sure in which settings to use AccNet and in which settings to use DNN. More clear differences and applicable conditions, especially the shortcomings of each need to be pointed out.\n\nCan the authors provide more details on the computational complexity of training Accordion Networks compared to traditional DNNs?\n\nHow sensitive are the generalization bounds to the choice of hyperparameters, particularly the Lipschitz constants and F1-norms?\n\nAre there any specific types of tasks or datasets where Accordion Networks might not perform as well as traditional methods?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "QgS94f64r7",
"review_text": "The authors present a generalization bound for deep neural networks that describes how depth enables models to learn functions that are compositions of Sobolev functions. To do this, they both prove a generalization bound for compositions of accordion networks (densely connected networks with a low-rank weight structure) and for compositions of Sobolev functions. They then present a sample efficiency result for different kinds of regularization on accordion networks.\n\nI really liked this paper and would like to see it accepted to NeurIPS. It addresses an important question: how does depth change generalization bounds for deep neural networks? To my knowledge, not many papers so far have addressed this question and I found the findings presented here very interesting and well embedded within prior methodology.\n\nI also found the paper very well written. I found it easy to follow along despite the highly technical nature of the results (note that I did not check the proofs in particular detail). I especially appreciated the remarks explaining different potential extensions and limitations.\n\nFinally, the theory appears to be able to explain certain empirical phenomena (in networks trained under realistic paradigms) at least qualitatively (though note that I had a few questions I will mention under weaknesses and questions). This indicates to me that it is a promising way for thinking about generalization in deep neural networks.\n\n1. I would like to see a more thorough comparison with shallow networks and generalization bounds, as this comparison is a central argument for the usefulness of the presented theory. While it is clear how the findings for the shallow network are a special case of the findings on the deep networks (as presented in Thm. 1), it remains a bit unclear to me how the theory can explain improved generalization in deep compared to shallow networks. The authors certainly present different several pieces of evidence on this: both Fig. 1 and Fig. 3 demonstrate that shallow networks exhibit worse scaling. I also appreciated the theoretical explanation of a particular contrast in l. 256-261. However, I think it would be really useful to provide a general theoretical explanation for this difference and test it empirically: would it be possible to extend the theoretical comparison in l. 256-261 to the general experimental setup studied in the figures --- and if so, would this theoretical comparison predict the conditions under which deep networks have the strongest advantages over shallow networks (or perhaps the conditions under which they don't perform that much better)? Not only would this serve as a useful validation of the theory, I think it would also provide a more extensive intuition for the authors' findings.\n\n2. I appreciated the fact that the authors compare their findings with related work wherever this becomes relevant. However, I think a (potentially brief) section comparing the results here to other theoretical investigations of depth in deep networks (perhaps using different approaches) would be useful. \n\n3. The linked codebase does not contain the notebooks indicated in the README as far as I can tell and therefore currently can't be used to directly reproduce the findings.\n\n4. I believe the figures would still benefit from error bars or some other indication of the overall statistical error in the findings. 
I agree that the main contribution of this paper is theoretical, but since the experiments test the empirical validity of the theory, I believe it is nevertheless important to get a sense for the overall deviation in these findings (e.g. across model seeds). If the authors are concerned about a lack of clarity, they could leave the bars out of the main figures but add supplementary figures with error bars. Moreover, some of the lines in Fig. 1 do contain error bars and it would be good to clarify what these error bars represent.\n\n1. Do you think my suggestion in point 1 of the weaknesses make sense or do you have a reason why you see it as unnecessary?\n\n2. As far as I understand, the reason for the asymmetry between $\\nu_g$ and $\\nu_h$ in Fig. 2 is the different dimensionality, correct? It would be good to mention these dimensionalities, as I was only able to find them in the appendix.\n\n3. Could you clarify why in Fig. 2, you're using the scalings from Prop 3 rather than from Thm. 5?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "2IngJYVbr1",
"review_text": "The authors introduce accordion networks (AccNets), which are compositions of multiple shallow networks. By leveraging prior workthat computes norm-based generalization bounds for shallow two-layer networks, the authors bound the complexity of a deep AccNet (as measured by its F1 norm) but the sum of the complexities of the individual shallow networks. They empirically observe that the rates predicted on real-world data are roughly representative of the trained networks, and are indeed much better than those for kernels trained on the same tasks. They put forth a nontrivial scaling law for the excess risk: $N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{mid})}$ for an Acc Net compared to $\\mathcal L \\sim N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{in})}$ for a kernel in terms of the dimensionalities $d$ and Sobolev constants $\\nu$ of the respective spaces and functions. From this, the authors obtain predictions of several phases, that they put forth experiments to verify.\n\nThe paper tackles a very important open question in the theory of deep learning, for which not much progress has been made. By creatively leveraging results for shallow network in composition, the authors arrive at a nontrivial bound for deep nets. The empirics are a very compelling and welcome part of the paper. The phase diagrams illustrate the nontrivial predictivity of the theory, especially at the level of the rates. This may have important implications for scaling laws. Modulo minor revisions in discussion and exposition, the whole paper is quite readable for a relatively broad audience.\n\nI am not sure how compelling the phase plots in Figure 2 are. The bounds in general are extremely loose, however the comparison of the rates in Figure 2c and Figure 3 is very promising. In general, however, it is the experience of the reviewer that measuring a rate is an extremely finicky business. It is therefore important to add a section in the appendix explicitly stating how the rates were obtained and measured. I also strongly encourage the authors to make the code for all figures public. \n\nBecause they are used very early on throughout the paper, it is the opinion of the reviewer that the notions of F1 distance and Sobolev norm should be defined earlier on in the paper. Without this, it seems like the audience will be constrained to the set of learning theorists familiar with these terms. However, if these terms are defined early on, the paper becomes remarkably accessible to a much broader audience.\n\nThe plot labels in Figures 2 and 3 are very difficult to read. \n\nA small comment: I have not seen the term \"modulo space\" used before. Often the term is \"quotient space\" \n\nThe sentence defining the $F_1$ ball (above theorem 1) is confusing, circular, and difficult to read. Please rewrite it.\n\nThe excess rate formula $\\mathcal L \\sim N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{mid})}$ is a very important result and I recommend that it be formatted for display, not inline.\n\nHow are you measuring \"dimension\" in 4.1.1? A high-dimensional gaussian with spectral decay of its covariance going as $k^{-\\alpha}$ for capacity exponent $\\alpha$ is nominally \"full dimensional\" since it is not strictly speaking constrained to a sub-manifold, and yet basic results in kernel theory and high-dimensional linear regression can show that the generalization error achieves a much better rate at larger values of $\\alpha$. 
Specifically, a model with capacity exponent $\\alpha$ and source exponent $r$ achieves a rate of $N^{-2\\alpha min(r, 1)}$. See, e.g. https://arxiv.org/abs/2105.15004 . Such power law anisotropy is in abundant in natural data. In particular shallow two layer networks in the lazy limit can achieve this scaling for such 'easy tasks' with quick spectral decay. On the other hand, the bounds that you state cannot decay faster than $N^{-1/2}$. \n* In this sense, it seems that the bounds (shallow or deep) presented are certainly not tight for some datasets. Am I incorrect in concluding this? Do you have an intuition for what causes the breakdown in correctly predicting the error rates in this case?\n* Given that they breakdown in that setting, what about the datasets that you study makes it so that the scaling law predictions seem to hold?"
}
] |
ztwl4ubnXV | OxonFair: A Flexible Toolkit for Algorithmic Fairness | We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positive, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and PyTorch and is available at https://github.com/oxfordinternetinstitute/oxonfair. | https://openreview.net/pdf/1198c251b0f5664b73f1ec30b356982f81f81fc7.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "eHIhFf9cWw",
"review_text": "The paper introduces \"AnonFair,\" a toolkit designed to enforce algorithmic fairness across various domains, including NLP, computer vision, and traditional tabular data. It is compatible with popular machine learning frameworks like sklearn, AutoGluon, and PyTorch. Unlike well-established fairness tools like FairLearn and AIF360, AnonFair extends to different types of data, including NLP and vision.\n\nOther tools offer many methods but limited control over them, while AnonFair uses a single, highly customizable method that allows for per-group thresholding.\n\nIt specifically addresses the issue of overfitting by utilizing validation data, making it more reliable when traditional methods might fail.\n\nEmpirical evidence presented shows that AnonFair performs well, often matching or surpassing other methods in fairness benchmarks without being specifically optimized for complex or high-dimensional scenarios.\n\nAnonFair seems to provide a robust and adaptable solution for implementing fairness in machine learning, in ways that other tools do not currently offer.\n\n- The paper does well in positioning AnonFair against competing tools by demonstrating its performance on standard fairness metrics and its versatility across a variety of use cases.\n- AnonFair supports NLP and computer vision classification tasks, allowing broader applicability.\n- The toolkit uses validation data to combat overfitting, ensuring that fairness measures remain robust across both training and unseen data.\n\n- The toolkit not only competes well in terms of accuracy and fairness metrics but also offers significant advantages in computational efficiency.\n\n- Some sections are overly detailed, such as the introduction, while others are missing necessary depth:\n - Section 3 could use a clearer structure, possibly with a diagram, to help readers understand how to interact with the toolkit.\n - The section on toolkit expressiveness needs more detailed examples and explanations of how the supported fairness measures are implemented. \n - Results discussion is kept very brief and could benefit from specific numerical examples, like percentage improvements compared to other methods.m actual numbers, such as how much % improvement in comparison to method XY and such.\n\n- The paper assumes readers are familiar with fairness terminology and metrics without adequate explanations or definitions for some acronyms (e.g., DEO in Table 3 and 4).\n - Subsection 4.3 lists supported fairness measures but fails to provide examples or brief explanations, making it less informative for those not familiar with these terms.\n\n- Lack of consistency in terminology usage; for example, \"EOp\" in Figure 1 (top right) vs. \"EO\" in Section 5.2, “AnonFair” missing before \"Frontier\" in Figure 1 (left), and inconsistent references like \"See Figure\" vs. \"See fig..\"\n\n- A stronger call to action for community engagement, such as through open-source collaboration or empirical validation studies, could significantly enhance the broader impact and encourage more widespread adoption and refinement of AnonFair.\n\n- The paper would benefit from a summary of explicit cases and recommendations advising users on the best scenarios for using the tool.\n\n- Figure 2 is not referred to in the paper, or did I miss this part.\n\n1. 
The paper mentions that hard assignment is more efficient than soft assignment, while appendix A adds some operational details, it remains unclear how these methods specifically compare in terms of quantitative metrics. Could the authors provide specific metrics or comparisons that demonstrate the efficiency and performance benefits of hard assignment?\n2. The discussed limitations reads a bit out of context given provided evidence in the paper. What makes the mentioned solutions suboptimal, and how significant are these shortcomings? Also it was not clear to me, after finishing reading, when it is adequate to use this tool and what could be use cases when it fails. Including this into the conclusion could make the reader grasping the full picture. \n3. Is Figure 6 part of the Appendix or misplaced?"
},
{
"confidence": 1,
"rating": 7,
"review_id": "suISL3caiH",
"review_text": "This paper describes a new toolkit for algorithmic fairness, enabling the optimization of any fairness measure that is a function of the confusion matrix. Experiments on vision and NLP demonstrated the effectiveness of the proposed toolkit.\n\nAn easy-to-use toolkit for enforcing algorithmic fairness.\n\nPresentation could be made more self-contained, e.g. a table listing the supported fairness metrics, as functions of the confusion matrix. This would help readers not familiar with the field.\n\nIt seems that only binary classification is supported. How can such metrics be extended to other tasks?\n\nSome minimal code snippets for the interface could be shown as examples.\n\n- L5: \"True positives, false positives, ...\" => \"the confusion matrix\"\n - L6: \"extendable\" => \"extensible\""
},
{
"confidence": 4,
"rating": 6,
"review_id": "wfWExSdRPF",
"review_text": "The paper introduces a new toolkit designed to enhance algorithmic fairness with greater expressiveness. Unlike existing toolkits, this one offers more customization options to optimize user-defined objectives and fairness constraints. Although the proposed toolkit currently includes only one method, it supports both computer vision and natural language processing (NLP) tasks. The authors compare the efficiency of this method, finding that the toolkit is relatively more efficient than Fairlearn. Comprehensive experiments were conducted on various datasets, and the results were compared with those from other popular toolkits.\n\n- The paper introduces a versatile toolkit that supports both NLP and computer vision tasks, unlike existing toolkits which lack this capability.\n- The proposed toolkit employs efficient optimization techniques that accelerate the evaluation process.\n\n- The formulation presented in Subsection 4.2 of the paper is limited to a single-layer model, which restricts its applicability across different machine learning models. To enhance the flexibility of the method, I recommend adopting a more generic notation, particularly if we aim to incorporate pretrained language models.\n- The abstract is quite unclear, especially the part that mentions \"9/9 and 10/10 of the group metrics of two popular review papers.\" I suggest rephrasing the abstract for better clarity and comprehension.\n\n- In Figure 3, the proposed toolkit appears to encounter scaling issues when reaching 5 groups. Could you provide more details on why this occurs and elaborate on the underlying reasons for this limitation?\n- The paper presents results on multilingual datasets. Do you have any specific findings for each language, particularly regarding the effectiveness of the toolkit for individual languages?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "Fzy3CesDFd",
"review_text": "The paper describes details of a fairness toolkit (\"AnonFair\"), which confers fairness to any given machine learning classifier by exploring a wide range of prediction thresholds for different groups (which are either provided upfront or inferred through an auxiliary classifier). The toolkit is designed to be quite expressive, as it can optimize several different metrics, e.g., false positives/negatives, true positives, etc. The toolkit can work across all classifiers (which can output class probabilities), including ones trained on vision and NLP tasks.\n\nThe paper introduces and describes a toolkit that implements several fairness strategies and can support any fairness measure that can be expressed in terms of true positives, false positives, true negatives and false negatives. These techniques primarily rest upon adjusting the classification thresholds of different groups, and the paper also incorporates tricks to speed up their computations of precision and recall across different thresholds. The fairness techniques that this paper implements are (largely) classifier agnostic, and can be applied to a wide range of classifiers including NLP and vision classifiers (as this paper shows). Overall, I appreciate that expressivity and broad applicability of their toolkit.\n\nWhile the toolkit might turn out to be useful for some practitioners, it is a relatively straightforward implementation of well-known (and simple) technique of adjusting prediction thresholds across groups. Exploring different thresholds can be computationally prohibitive, for which the authors use a standard trick to speed up their explorations (which I appreciate). The paper acknowledges and cites relevant papers/techniques that they implement. Overall, the originality and novelty of their work is significantly limited, as the toolkit is an implementation of known and simple fairness techniques. Further, the underlying fairness techniques (not from the authors) are themselves applicable to most classifiers, so any implementation of the same could work for NLP and vision tasks—which is claimed to be one of the major contributions of this work.\n\nI feel that the current version is a good starting point (in terms of implementation) of existing fairness techniques and speeding them up and trying them out on vision and NLP tasks. To improve the paper, I would suggest clearly outlining the important problems that this toolkit now can enable researchers to answer (which was not possible before) and answer a few of those questions in the paper."
},
{
"confidence": 3,
"rating": 6,
"review_id": "r0Qce3R7mX",
"review_text": "This paper presents AnonFair, a cutting-edge open-source toolkit designed to promote algorithmic fairness. Authors claim the following contributions:\n(1) Comprehensive support for NLP and Computer Vision classification, as well as standard tabular problems.\n(2) Enhanced robustness against overfitting challenges through the ability to enforce fairness on validation data.\n(3) Versatility in optimizing any measure that is a function of True Positives, False Positives, False Negatives, and True Negatives, making it easily adaptable and more expressive than other toolkits.\n(4) Seamless integration with popular ML toolkits such as sklearn, Autogluon, and pytorch.\n(5) AnonFair supports 9/9 and 10/10 of the group metrics of two prominent review papers and is accessible online at no cost.\n\nThis toolkit progresses in algorithmic fairness and enhances multidisciplinary collaborations, it is design to integrate the intervention of policy-makers.\n\nThe paper includes a complete section of experiments and comparison with existing toolkits. \n\nAnonFair key contributions include support to popular and relevant NLP and Computer vision areas.\n\n* Lack of clarity in some reported experiments, e.g. results tables are not cited in the text, metrics are not well-contextualized (e.g. larger or lower scores are better?)\n\n* Lack of analysis, examples or human evaluation to better understand contributions and limitations of the method in each of the experiments.\n\n(1) Could you provide more high-level context for each of the experiments that you are running in order to make the paper more self-contained?\n(2) for NLP experiments, why do you think mitigation works for Twitter and not for Jigsaw?"
}
] |
zsXbGJJ7Oo | G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training | Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically relevant visual features by leveraging both medical images and their associated radiology reports, current medical VLP methods primarily focus on aligning images with entire reports. This focus hinders the learning of dense (pixel-level) visual features and is suboptimal for dense prediction tasks (e.g., medical image segmentation). To address this challenge, we propose a novel medical VLP framework, named **Global to Dense level representation learning (G2D)**, which aims to learn global and dense visual features simultaneously using only image-text pairs without extra annotations. In particular, G2D designs a **Pseudo Segmentation (PS)** task, which enables the model to learn dense visual features during VLP. Notably, generating PS masks can be performed on the fly during VLP, which does not incur extra trainable parameters. With this simple yet effective idea, G2D achieves superior performance across 5 medical imaging tasks and 25 diseases. Particularly, in the segmentation task which requires dense visual features, **G2D surpasses existing models even with just 1% of the training data for finetuning, compared to 100% used by other models**. The code can be found in https://github.com/cheliu-computation/G2D-NeurIPS24/tree/main. | https://openreview.net/pdf/266314e449f23eb30c332e9f0688da33556f643c.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "wPn9WWqSQg",
"review_text": "This paper proposes G2D, a novel vision-language pre-training (VLP) framework for medical imaging that aims to learn both global and dense visual representations from radiography images and their associated radiology reports. The key innovation is a pretext task called Pseudo Segmentation (PS), which uses a pseudo mask derived from attention maps to guide the learning of dense visual features during pre-training. The authors demonstrate that G2D outperforms existing medical VLP approaches on various downstream tasks including classification, segmentation, object detection, and zero-shot visual grounding across multiple medical imaging datasets. Notably, G2D shows strong performance on segmentation tasks even when fine-tuned on very limited data.\n\nNovel approach: The paper introduces an innovative method for learning dense visual representations in medical VLP without requiring pixel-level annotations, addressing a key limitation of existing approaches.\n\nWell-motivated: The authors provide a clear rationale for why learning dense representations is important for medical imaging tasks and why existing VLP methods struggle with this.\n\nComprehensive evaluation: The method is evaluated on a wide range of downstream tasks and datasets, demonstrating its versatility and effectiveness across different medical imaging applications.\n\nStrong results: G2D consistently outperforms existing methods, especially on segmentation tasks where it achieves impressive results with very limited fine-tuning data.\n\nAblation studies: The paper includes thorough ablation experiments to validate key design choices and components of the method.\n\nPotential impact: The proposed approach could significantly reduce the need for large annotated datasets in medical imaging, which is a major bottleneck in the field.\n\nLimited theoretical analysis: While the method is empirically strong, there is little theoretical justification for why the pseudo segmentation task leads to improved dense representations.\n\nComplexity of the approach: The method involves several components and processing steps, which may make it challenging to implement and potentially limit its adoption.\n\nComputational resources: The pre-training process appears to be computationally intensive (16 A100 GPUs), which could be a barrier for researchers with limited resources.\n\nGeneralization to other domains: While the focus on medical imaging is valuable, it's unclear how well this approach would generalize to other vision-language domains.\n\nComparison to more recent baselines: Some of the baselines used for comparison (e.g., ConVIRT, GLoRIA) are somewhat older.\n\nComparison to more recent medical VLP methods would strengthen the evaluation.\n\nMajor concerns:\nMy primary concern revolves around the authors' claim that current medical VLP methods primarily align images with entire text reports. This assertion appears to be inconsistent with the facts, as evidenced by several papers that have employed local alignment between image regions and text. This factual contradiction significantly undermines the novelty of the present work. 
For instance:\n\nGLoRIA (Huang et al., ICCV 2021): \"Global-Local Representation Alignment for Improved Visual Recognition in Medical Imaging\"\nThis paper introduced a global-local alignment approach, learning finer-grained representations by aligning image patches with text tokens.\nMGCA (Wang et al., arXiv 2022): \"Multi-Granularity Cross-Modal Alignment for Generalized Medical Visual Representation Learning\"\nThis method employed a multi-granularity alignment strategy, including global, local, and fine-grained levels of alignment.\nBioViL (Boecking et al., ECCV 2022): \"Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing\"\nThis work proposed a method to improve biomedical vision-language processing by leveraging text semantics, which includes local alignment strategies.\nMedKLIP (Wu et al., medRxiv 2023): \"Medical Knowledge Enhanced Language-Image Pre-training\"\nThis approach utilized external knowledge bases to enhance local alignment, achieving more fine-grained image-text matching.\nGiven these existing works, the authors' characterization of the current state of medical VLP appears inaccurate. This misrepresentation significantly weakens the claimed novelty of their approach. The authors should provide a more accurate description of existing methods and clearly articulate how their approach differs from or improves upon these established local alignment strategies.\n\nOther minor concerns:\nHave you explored the quality of the learned representations at different levels of the network? Are there significant differences in the quality of features at different scales?\nHow sensitive is the method to the choice of threshold used in pseudo mask construction? The ablation shows results for a few values, but is there a principled way to choose this threshold?\nHave you investigated the potential of using the pseudo masks generated during pre-training for weakly supervised segmentation tasks?\nHow does the performance of G2D change as the amount of pre-training data is varied? Is there a clear relationship between pre-training data volume and downstream task performance?\nGiven the computational requirements for pre-training, have you explored any techniques for making the approach more efficient, such as progressive training or curriculum learning?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "QpQD7IdrMu",
"review_text": "This manuscript describes a medical vision-language pre-training framework called Global to Dense level representation learning (G2D), that learns global and dense visual features simultaneously with only image-text pairs, by exploiting the aggregated attention map from the vision encoder for a pseudo segmentation pretext task. The improved (frozen) vision encoder is then utilized as part of the model pipeline for a number of downstream tasks (e.g. segmentation, classification)\n\n- Pseudo segmentation pretext task enables dense segmentation during pre-training, and avoids external resources as for alignment-based methods, and limitations on high-level semantic representations in reconstruction-based methods\n - Importance of associating semantic meaning verified via experiment\n\n- Unclear if specific sentence/phrase to individual image region alignment is achieved, for dense learning\n - Lack of fine-grained pixel-level evaluation of masks\n\n1. The accuracy of the initial aggregated attention map appears possibly non-optimal, given that additional thresholding by body mask is required. As such, it might be considered to quantify the accuracy of these maps, possibly against segmentation ground truth.\n\n2. In Section 3.2, it is stated that a threshold is applied (at 85%) to transform the aggregated attention map into a binary mask, before smoothing. It might be clarified if the need for smoothing (and related smoothing parameters) was empirically determined.\n\n3. In Section 3.3, it is stated that \"This decoder takes visual feature V_i as input and utilises the pseudo mask ˜M_i as the supervisory signal for the pretext task\". It might be clarified as to whether and how specific text can be matched to specific (separate) image regions, as in Figure 4 of Section A.7. In other words, while Figure 4 shows specific text descriptions corresponding to specific image regions, were these correspondences/alignments indicated by the proposed G2D model, or are they external manual observations? A.1 suggests no, but this might be explicitly stated.\n\n4. In Section 4, the choice of ResNet-50 as the encoder over other plausible choices (e.g. U-Net encoder) might be briefly explained.\n\n5. For Table 1, it might be clarified as to what \"encoder-decoder\" refers to - the updating of both encoder and decoder?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "jyhquw0TsQ",
"review_text": "The paper proposes an encoder-decoder medical VLP approach for global-to-dense visual representation learning. Pseudo segmentation is adopted for dense level learning. Rich experiments validate the effectiveness of the proposed method.\n\n1. The motivation behind the work is clear. Pseudo-segmentation supervision is effective, which is validated by experiments.\n2. The experiments are rich and ablation analysis shows the contributions of each component and design.\n3. The illustrations are clear and easy to understand.\n4. The improvements are consistent and sometimes substantial.\n\n1. The comparisons with MGCA and MRM in the CXR14 dataset are not included in Table 3, but Table 4 includes the comparisons with MGCA and MRM. What are the reasons behind this?\n2. Transformer-based vision encoder is not analyzed.\n3. The balance between VLA and PA losses is not analyzed.\n\nIs it not applicable to compare with MGCA and MRM in the CXR14 dataset?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "vgnjuXg82b",
"review_text": "The paper proposes a new medical vision-language model, G2D, which employs vision-language alignment (VLA) and pixel alignment (PA) strategies, combined with a pseudo segmentation (PS) pre-training task, to learn global and dense visual representations from medical images. The VLA strategy is used to learn global representations of images and texts, while the PS task constructs pseudo masks through a parameter-free mechanism to facilitate the learning of dense representations. The method is comprehensively validated across five downstream tasks (image segmentation, object detection, zero-shot image visual grounding, zero-shot image classification, and fine-tuned image classification), demonstrating its effectiveness in handling both unimodal and cross-modal tasks.\n\n+ The paper is well-written, with the motivation, method, and results clearly presented. A minor concern is the reference format; it should be [1] instead of (1) according to the NeurIPS template.\n\n+ A significant concern with most existing works is that they operate primarily at the Image-Text Retrieval level, similar to the perceptual level of CLIP, and do not effectively capture dense features between modalities. The G2D model addresses this issue by integrating Vision-Language Alignment (VLA) and Pseudo Segmentation (PS) tasks to facilitate simultaneous learning of global and dense visual features. This multi-level feature learning significantly enhances the model's performance in tasks requiring dense feature perception, such as segmentation.\n\n+ During pre-training, the G2D method utilizes only image-text pairs without the need for additional annotated data. By generating pseudo masks on the fly through the PS task, it reduces the cost and complexity associated with data annotation.\n\n+ The G2D method is novel, and the experiments are robust. Experimental results on five medical imaging tasks involving 25 diseases demonstrate that the G2D model outperforms existing models, even with minimal fine-tuning data. Notably, in segmentation tasks requiring dense visual features, G2D achieves excellent results with just 1% of the training data for fine-tuning.\n\nMajor concerns:\n\n- The attention maps could introduce errors in pseudo mask, and these errors may propagate throughout the training process. To address this, a clear validation strategy needs to be outlined. For instance, in Figure 2, aggregated attention map might incorrectly highlight irrelevant regions. It is essential to establish methods for **detecting** and **measuring** these errors to ensure the reliability of the model. I hope the authors could quantify the errors in aggregated attention map and pseudo mask during the rebuttal period.\n\nMinor concerns:\n\n- The training and validation of the model rely on specific datasets, which may introduce biases and potentially affect the model's generalizability to different datasets.\n\n- It is uncertain whether the method can be effectively extended to vision-language tasks involving 3D imaging (e.g., CT and MRI), presenting a limitation in its current scope of application.\n\n- How do you detect and correct the errors made by aggregated attention map?"
}
] |
zqLAMwVLkt | Generative Semi-supervised Graph Anomaly Detection | This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes. | https://openreview.net/pdf/3c33b4f4c3c23708a8d12f3c6cbda3a20a9ca71e.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "Z2QN5ZkVlh",
"review_text": "This paper works on node anomaly detection in the novel semi-supervised setting where few labeled normal nodes are given and proposes to generate new anomaly nodes to boost the training data. The anomaly generation algorithm is inspired by the empirical observation that:\n\n(1) Anomaly nodes have lower affinity score than normal nodes\n(2) Feature distribution of anomaly nodes are similar to normal nodes if they share similar neighborhood patterns.\n\n(1) The setting is novel and aligned to the real-world situation where normal nodes are typically known compared with anomaly nodes.\n\n(2) The motivation for the proposed two regularization losses is very intuitive and clear.\n\n(3) The experimental results are very impressive.\n\n(1) The proposed two regularization losses are heavily based on the empirical analysis, which might not transfer to other anomalies in other datasets. \n\n(2) For the second prior, its assumption that anomaly nodes sharing similar local structures would share a similar feature distribution has not been empirically verified.\n\n(3) Experiments miss the comparison with diffusion-based generative anomaly detection baseline.\n\n(1) As stated in the weakness, the core regularization loss terms are designed based on two assumptions:\n* The anomaly nodes have a lower affinity score than normal nodes. However, there is no comprehensive experimental verification of the other datasets on this. It might be better to provide the verification like Figure 1 but on more different datasets.\n* Anomaly nodes sharing similar neighborhood structures should possess similar feature distributions to their corresponding normal nodes. Although some references have been attached to justify this hypothesis, it might be better to include some empirical verification on this as well.\n\nFurthermore, there might be some contradiction between these two assumptions by themselves. First, if assumption 1 holds, it means anomaly nodes should share different local subgraphs with the normal nodes, which indicates that assumption 2 cannot hold. How do we mediate this situation?\n\n(2) Is there any difficulty when optimizing the loss according to Eq. (4) and Eq. (5) at the same time? Firstly, for Eq. (4), since the fixed terms would be embeddings of normal nodes and their neighbors, the embeddings of abnormal nodes ($\\hat{\\mathbf{h}}_i$ in Eq. (2)) would be optimized towards being further away from the neighbors' embeddings. However, Eq. (5) would also enforce the $\\hat{\\mathbf{h}}_i$ to be close to the normal one $\\mathbf{h}_i$. These two directions seem to be contradictory to each other. \n\n(3) Joint optimization according to Eq. (7) does not make sense under this generative augmentation setting. Here we use a generative model to augment the training data. This therefore should be that the training model is fixed. Moreover, if we jointly optimize the anomaly detection term and the other two generative terms, it would lead to the gradient for anomaly detection leaks to classification. This is quite confusing to me and might need more clarification.\n\n(4) How many layers of the subgraphs are used in optimizing the affinity score? If we use 2-hop neighbors, it might cause the computation to consider the significantly large number of nodes. If not, how should we decide on this parameter?\n\n(5) The comparison misses the baseline [1]\n\n[1] Liu, Kay, et al. \"Graph diffusion models for anomaly detection.\" (2024)."
},
{
"confidence": 3,
"rating": 6,
"review_id": "rA2IjZH4UJ",
"review_text": "The paper proposes a novel approach called GGAD aimed at improving anomaly detection in graphs under a semi-supervised framework. GGAD generates pseudo anomaly nodes that serve as negative samples for training a one-class classifier. This method is built on two\nkey priors: asymmetric local affinity and egocentric closeness, which help in generating reliable outlier nodes that mimic real anomalies in terms of both graph structure and feature representation. Extensive experimental results demonstrate the effectiveness of the method across diverse graph anomaly detection datasets.\n\n1.The method is innovative. The proposed graph anomaly detection method can exploit the feature and structure information of normal nodes more effectively in the studied semi-supervised scenario compared to existing methods. The proposed two priors provide a meaningful characterization of desired properties of outliers in this semi-supervised setting and can be utilized to explore other beneficial priors further. \n\n2.The experiments in the paper are comprehensive and thorough.\n\n1. The model relies on prior knowledge to generate anomaly points. This prior knowledge can limit the model’s application scenarios. The model performs best only when the real anomalies align with this prior knowledge. For anomaly types that do not conform to the prior knowledge, the model may not effectively detect them.\n\n2.The model does not perform best on the Photo dataset in Table 1, and the article lacks an explanation of the results at the overall data level.\n\n3. This model employs a semi-supervised approach that uses some positive samples for training. However, it does not consider the issue of noise interference within the positive samples, namely, how the model overcomes interference when some positive samples are mislabeled.\n\n4. During the initialization step, only the initial feature of outliers are obtained while the connections between the outliers and normal nodes are not well illustrated in the paper. From Figure 2, one outlier is connected to more than one normal node while the feature of the outlier is generated according to single normal node. The neighborhood of outliers is important since the it involves the computation of node affinity score of outliers.\n\nsee weakness"
},
{
"confidence": 5,
"rating": 5,
"review_id": "25Yt6Bnugi",
"review_text": "This paper introduces a novel generative-based GAD approach, named GGAD, tailored for the semi-supervised scenario. Unlike existing GAD frameworks, the authors highlight the feasibility and importance of a semi-supervised setting where labels for normal nodes are relatively easy to obtain during training, but labeled abnormal nodes are very limited. In this context, the paper proposes generating pseudo-anomaly nodes to serve as substitutes for real anomaly nodes in training, thus aiding in anomaly detection. These pseudo-anomalies are generated through two unique loss-guidance mechanisms. Experimental results demonstrate the effectiveness of GGAD.\n\nHowever, the description of the semi-supervised setting in this paper lacks clarity and unconvincing. Additionally, there is minimal differentiation between the proposed method and existing works that generate pseudo-anomaly samples for data augmentation. I think this paper's novelty is limited. I still think that doing unsupervised GAD is more necessary, and if the authors can prove that the pseudo-outlier proposed by GGAD can benefit unsupervised GAD as a general module, I can up my score.\n\n1.The complete experiment shows the effectiveness of the method and the necessity of each component.\n\n2.Some visual illustrations help the reader understand, although the shapes of the images seem to be compressed.\n\n1. I am still confused about the motivation for performing semi-supervised GAD. Why do most methods emphasize unsupervised scenarios? The cost of labeling normal nodes seems too expensive, as the authors themselves state on lines 268 to 269, yet they assert again on line 31 that labels for normal nodes are easy to obtain.This inconsistency hinders a clear understanding of the necessity and practical applications of semi-supervised GAD, which significantly undermines the motivation for this work.\n\n2. While the first loss function proposed by the authors appears intuitively valid, the second loss function aims to generate outliers similar to normal nodes. In my opinion, optimizing these two losses together is unreasonable because they conflict with each other. It seems that they should correspond to different outlier generation processes\n\n3. The paper validates the improvement of unsupervised GAD using labeled normal nodes and claims that GGAD remains superior. I think the authors ignore the fact that unsupervised methods do not obtain this outlier like GGAD and this comparison is not reasonable.\n\n1. why semi-supervised GAD is more important than unsupervised GAD, How do you overcome the labeling cost?\n2. If unsupervised GAD methods use outliers in GGAD, is it beneficial for them?\n3. why Eq.5 need Gaussian noise?\n4.In addition to the outlier generation methods mentioned on lines 376-396 (they seem overly simplistic), are there more advanced methods for generating outliers similar to GGAD? How does GGAD compare to them?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "oNvnPj5Plf",
"review_text": "This paper explores the problem of semi-supervised graph anomaly detection (GAD), where some nodes are known to be normal, in contrast to the typical unsupervised setting with no labeled data. The authors show that even a small percentage of labeled normal nodes can improve the performance of existing unsupervised GAD methods when adapted to the semi-supervised scenario. The paper proposes a novel Generative GAD approach (GGAD) to better exploit normal nodes by generating pseudo anomaly nodes, called 'outlier nodes', to provide effective negative samples for training a one-class classifier. GGAD generates these outlier nodes using priors about anomaly nodes, such as asymmetric local affinity and egocentric closeness, to mimic anomalies in structure and features. Experiments on six real-world GAD datasets show that GGAD outperforms state-of-the-art methods in both unsupervised and semi-supervised settings.\n\n+ This paper studies a new problem of semi-supervised GAD that has not been widely studied. \n\n+ The proposed method is simple and effective from the empirical perspective.\n\n+ The experiments are extensive including effectiveness and efficiency analyses and the method has been tested on real-world large-scale graphs to verify the scalability.\n\n- The two priors that are used to generate outlier nodes are heuristic or based on empirical evidence. There is no theoretical analysis provided to better guarantee the effectiveness of the proposed method.\n\n- It will be more interesting and helpful to show the generated outlier nodes can capture the characteristics of anomalous nodes in addition to comparing their representations.\n\n- The experimental settings of anomaly contamination are not very clear: how the contamination is introduced?\n\n- Overall experimental settings. What hardware has been used in the experiments, e.g., memory, and why are the experiments conducted on CPUs?\n\n1. Theoretical analysis of the proposed method, especially these two priors.\n\n2. Experimental settings including hardware and anomaly contamination.\n\n3. Analysis of the generated outlier nodes."
},
{
"confidence": 4,
"rating": 7,
"review_id": "JVY0ZfV1dW",
"review_text": "The paper studies an under-explored graph anomaly detection problem where the detection models have access to a set of labeled normal nodes. To tackle this problem, it introduces a generative approach namely GGAD that generates pseudo anomaly nodes, called outlier nodes, to support the training of a discriminative one-class classifier. The key idea underlying this approach is to generate the outlier nodes in a way that can well simulate real anomaly nodes in both graph structure and feature representation perspectives. To achieve this, GGAD defines and incorporates two priors, including asymmetric local affinity and egocentric closeness, into its optimization objectives, with the former prior focusing on the alignment on the graph structure aspect and the latter on the feature representation aspect. The method is evaluated on six large real-world datasets and shows impressive detection performance compared to existing state-of-the-art methods.\n\n- The paper is generally well-written and easy-to-follow.\n- The problem setting is practical since labeled normal samples are easy to obtain in many real-world applications. Compared to the commonly studied unsupervised setting, this semi-supervised setting often results in better detection performance.\n- The proposed method GGAD is novel. There have been many generative anomaly detection methods, but as far as I know, they are unable to consider the graph structure and the neighboring nodes’ representations. By introducing the two new priors, GGAD addresses this issue well. Fig.1 and Fig. 3 help demonstrate this effect.\n- The method is compared with a range of unsupervised and semi-supervised methods on 6 real-world datasets with diverse genuine anomalies, and gains largely improved detection performance over these competing methods.\n- The ablation study is plausible and justifies the contribution of each proposed prior.\n\n- The outlier node generation in GGAD may cause non-trivial computational overhead.\n- Despite better performance than the competing methods, GGAD gains an AUC of only around 0.6 on some datasets, such as DGraph and Reddit.\n- In Fig. 4 (b), GGAD shows a fast AUPRC growth with increasing training size, but the other methods have a flat performance trend. What would be the reason behind?\n\nSee the weakness"
}
] |
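As an aside on the GGAD row above: the two anomaly priors named in the abstract, asymmetric local affinity and egocentric closeness, can be pictured as two simple losses over generated outlier embeddings. The Python sketch below is an illustration only, not the authors' released code; the cosine-similarity affinity, the margin, and the noise scale are placeholder assumptions rather than the paper's actual losses (the Eq. 4 and Eq. 5 discussed in the first review).

```python
import torch
import torch.nn.functional as F

def generate_outliers(h_normal, noise_std=0.1):
    # Egocentric-closeness prior: start each outlier near the representation
    # of a labeled normal node, perturbed with Gaussian noise.
    return h_normal + noise_std * torch.randn_like(h_normal)

def affinity(h_nodes, h_neighbors):
    # Local affinity measured as cosine similarity to the aggregated neighborhood.
    return F.cosine_similarity(h_nodes, h_neighbors, dim=-1)

def ggad_style_losses(h_normal, h_neighbors, margin=0.5, noise_std=0.1):
    h_out = generate_outliers(h_normal, noise_std)
    # Asymmetric local affinity: push outlier affinity below normal affinity
    # by at least `margin`.
    loss_affinity = F.relu(margin - (affinity(h_normal, h_neighbors)
                                     - affinity(h_out, h_neighbors))).mean()
    # Egocentric closeness: keep outlier embeddings near their source nodes.
    loss_closeness = F.mse_loss(h_out, h_normal)
    return loss_affinity, loss_closeness
```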
zpw6NmhvKU | RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting | The Rashomon effect is a mixed blessing in responsible machine learning. It enhances the prospects of finding models that perform well in accuracy while adhering to ethical standards, such as fairness or interpretability. Conversely, it poses a risk to the credibility of machine decisions through predictive multiplicity. While recent studies have explored the Rashomon effect across various machine learning algorithms, its impact on gradient boosting---an algorithm widely applied to tabular datasets---remains unclear. This paper addresses this gap by systematically analyzing the Rashomon effect and predictive multiplicity in gradient boosting algorithms. We provide rigorous theoretical derivations to examine the Rashomon effect in the context of gradient boosting and offer an information-theoretic characterization of the Rashomon set. Additionally, we introduce a novel inference technique called RashomonGB to efficiently inspect the Rashomon effect in practice. On more than 20 datasets, our empirical results show that RashomonGB outperforms existing baselines in terms of improving the estimation of predictive multiplicity metrics and model selection with group fairness constraints. Lastly, we propose a framework to mitigate predictive multiplicity in gradient boosting and empirically demonstrate its effectiveness. | https://openreview.net/pdf/838fbeed0eab05add105305af9fefdf722fe747f.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "tcA0QhNUXj",
"review_text": "This paper proposes a method (RashomonGB ) to estimate the Rashomon sets/predictive multiplicity of gradient boosting models. It estimates multiple ($m$) models at each stage (effectively performing a local exploration) and then combine all such models in the end to construct $m^T$ models for Rashomon set computation, where $T$ is the number of iterations of the boosting. On several datasets the paper shows that RashomonGB performs better than re-training with $m$ seeds, in that at the fix $\\epsilon$ (loss difference) level, RashomonGB tends to show more predictive multiplicity.\n\nPredictive multiplicity is an important topic. The paper is generally clear and well-written. The proposed method is a sensible first method for boosting algorithms, which was previously underexplored. I think the proposed method is likely adopted by people who care about this problem as it's intuitive and easy to implement.\n\n1. The current exploration strategy is fast to compute, but I'm not sure if this follows the motivation of Rashomon set very well. While the authors mention one example on the Contraception dataset where re-training underestimates the predictive multiplicity, in general RashomonGB might create models that are more correlated than normal (because the \"backbone\" is the same GB model), thus underestimating the predictive multiplicity. Right now, the conclusion shows otherwise probably because the number of re-training is too small. \n\n2. Regarding the experiment, if I read this correctly, currently we use more compute for RashomonGB as well (by combining different weak models), so it is also not quite a fair comparison in my opinion. I would be very interested to see some estimate of how much compute RashomonGB saves against re-training, by running more re-training and see when are the metrics in Fig3 in the two methods become comparable.\n\n\n\nminor: one \"RashomonGB\" in L290 should be \"re-training\".\n\n1. What's $\\epsilon_{t_1}$ (and $\\epsilon_{t_2}$) in L243-L244? Isn't epsilon a quantity set by the user? \n\n\n2. In L282-283, do we construct 10 final models and 1024 for re-training and RashomonGB, respectively? If only 2 out of $m$ models are used why train $m$ of them (L282-283) for RashomonGB? \n\n3. Related to the above, I originally thought there is a model \"filtering\" step in each iteration $t$, and wonder how $\\epsilon_t$ is set for each iteration. However, from L282-283 it seems like we just randomly pick a few models and brute-force combine all weak models for the final Rashomon set exploration. Could the authors clarify?\n\n4. Are Fig 4 measured on the test set? If so, then it's not clear how useful this is as we cannot choose models basing on test performance - did the authors try picking models on the frontier basing on the validation set and then plot this on the test set? Right now, due to the sheer number of final models generated by RashomonGB, it's unclear if the models with better trade-off are just lucky."
},
{
"confidence": 2,
"rating": 5,
"review_id": "0pO4zAVBB0",
"review_text": "This paper presents an approach that compute Rashomon set for gradient boosting algorithm where the set can be obtained through products over weak learners at each step rather than sampling them through retraining. The authors further proposed a dataset related Rashomon bound through sub-Gaussian assumption, where mutual information between hypothesis space and dataset shows the predictive multiplicity, which can further decomposed into model uncertainty and quality of data. Experiments show the proposed solution offers more models in Rashomon set than retraining given the same computation budget.\n\nThe rough idea of the proposed approach is straightforward since decomposing Rashomon set search on boosting algorithm can be a \"standard\" operation given the unique residual learning property of boosting algorithms. The novelty of the proposed approach is probably more from \"our work is the first to explore the Rashomon effect for gradient boosting\".\n\nThe dataset related Rashomon set bound seems an interesting point. But it needs some justification for the key assumption of it (sub-Gaussian). Proposition 2 seems make sense given the positive relation between number of boosting iterations and Rashomon set (also for dataset size).\n\nExperiments in 4.2 seem interesting. I would love to see more experiments like it.\n\nI got some difficult time to understand the introduction and abstract of this paper even I have read some literatures about Rashomon effect and predictive multiplicity. It is simply hard to read given the narrative there. Especially the second paragraph of introduction; it gets me confused and self-questioning my understanding of Rashomon effect from other works.\n\nWhy boosting algorithms? \nFurther justification about the dataset related Rashomon set bound?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "ouYKGioKR8",
"review_text": "The paper studies the Rashomon effect in gradient boosting, a commonly used algorithm for tabular datasets, but something that has not received enough attention in multiplicity literature. The paper provides several theoretical discussions on the size of the Rashomon set and the impact of the number of iterations on multiplicity in GBRTs. Furthermore, the paper proposes RashomonGB, a method to create an exponential number of ‘near-optimal models’ by training only a polynomial number of models. With more models in the Rashomon set, the use of RashomonGB can create several downstream benefits without any extra cost of training, shown empirically by the authors.\n\n- Multiplicity in GBRTs, or generally any gradient-boosting algorithm, has not been studied in the literature, and so the authors provided a novel discussion, especially given the importance of these algorithms in tabular settings.\n- The paper provides several theoretical discussions backed by empirical support. The insights on the growing Rashomon set with iterations were quite interesting, although I have concerns about the validity of these insights (see Weaknesses).\n- Multiplicity quantification can be quite costly, and various methods in pursuit of reducing this cost can significantly benefit further auditing. The use of RashomonGB, as proposed by the authors, can be an important step in that direction for gradient-boosted algorithms.\n\n- While the presentation of the rest of the concepts and the theoretical discussion were easy to follow, important details about the RashomonGB method and the details of the empirical setup were either missing (even from the Appendix) or imprecise. For instance, the Rashomon set of the gradient boosting algorithm isn’t going to simply be the iterative extension of Rashomon sets at every residual level, i.e., equation 4 is imprecise. Similarly, it seems that the epsilon value of the Rashomon set increases with more iterations, and thus it is confusing to me whether the insight that more iterations create bigger Rashomon sets is a result of multiple iterations or simply a result of bigger epsilon. See the section ‘Questions’ for more detailed comments and some follow-up questions. Edit after rebuttal: Acknowledged, correct and clarified.\n- There are other methods to measure predictive uncertainty in gradient-boosted algorithms. Some examples based on a cursory search (there might be more, as I’m not too familiar with GBRTs) - https://arxiv.org/abs/2205.11412 https://arxiv.org/pdf/1910.03225 https://arxiv.org/abs/2106.01682 -
While I understand that prediction uncertainty is not the same as predictive multiplicity, the two are closely related, and when proposing a better method to measure multiplicity, the paper should compare itself with other stronger baselines than just retraining. Just as previous works have proposed using Monte Carlo Dropout (which was initially created as a method to measure uncertainty) as a measure of multiplicity, uncertainty measurement baselines for GBRTs could have been adopted to create reasonable baselines, and would have made the results a lot stronger. Edit after rebuttal: Acknowledged and added.\n\nMy questions and comments mostly revolve around the RashomonGB formulation.\n- I don’t believe equation 4 is correct. A model formed from residual models that are present in their Rashomon sets at every step does not necessarily make a model that will be present in the Rashomon set overall. That’s because the composition of GBRTs occurs at the prediction level, while Rashomon sets are defined by the authors at the loss level. Equation 4 probably would have been true if the loss function had a linear relationship with the model predictions, which is not an assumption I see being made anywhere in the paper. This also makes me question the empirical results, because if the RashomonGB formulation isn’t precise, do the models across which the authors calculate multiplicity even belong to the same Rashomon set? Edit after rebuttal: Acknowledged and corrected.\n- Can the authors comment on why they compare two situations with different Rashomon parameters and make claims on their multiplicity? For example, Proposition 3 and the following paragraph. A Rashomon set would of course be bigger with a larger value of epsilon, and having that variability when talking about other trends doesn’t seem convincing to me. Edit after rebuttal: Confusion clarified.\n- What was the exact epsilon value used for the experiment? I couldn’t find it anywhere in the paper. Moreover, I hope that given the Rashomon sets for the RashomonGB setup were defined with T*epsilon as the new epsilon value, the same freedom was also given to retraining. Again, if the comparison was done across methods with different epsilon values (which might not be the case, but I don’t know the details), that does not make sense to me. Edit after rebuttal: Appropriate information added."
},
{
"confidence": 2,
"rating": 6,
"review_id": "8Elq8CwQT8",
"review_text": "The paper explores the concept of predictive multiplicity in gradient boosting models. The Rashomon effect refers to the existence of multiple models that perform similarly well on a given dataset. The authors formalize this effect in the context of gradient boosting, introduce a new method called RashomonGB to efficiently explore this multiplicity, and demonstrate its application on various datasets. The paper aims to improve the estimation of predictive multiplicity and model selection, especially with considerations for group fairness.\n\n1. The introduction of RashomonGB represents a novel method for exploring the Rashomon set in gradient boosting, offering an exponential search space as opposed to traditional linear methods.\n2. The paper provides a robust theoretical foundation using statistical learning and information theory to analyze the Rashomon effect, enhancing the understanding of this phenomenon in gradient boosting.\n3. The authors demonstrate the practical utility of RashomonGB on a wide range of real-world datasets, including tabular and image data, showcasing its versatility and effectiveness.\n\n1. While the paper discusses the positive societal impacts of RashomonGB, it lacks a thorough exploration of potential negative impacts or misuse of the method.\n2. The theoretical analysis relies on several assumptions that may not hold in all practical scenarios, potentially limiting the generalizability of the findings.\n3. The paper mentions the intention to release code post-review, but the lack of immediate open access to code and data can hinder reproducibility and independent validation by other researchers.\n4. Implementing RashomonGB might be complex for practitioners without a strong background in the theoretical aspects of machine learning and gradient boosting, potentially limiting its adoption in the industry.\n\n1. Can the method be extended or adapted for other types of machine learning models beyond gradient boosting?\n2. How does the choice of hyperparameters in RashomonGB affect the stability and reliability of the results?\n3. What are the practical challenges faced during the implementation of RashomonGB, and how can they be addressed to facilitate broader adoption?"
}
] |
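To make the construction discussed in the RashomonGB reviews above more concrete: the idea, as described by the reviewers, is to fit m candidate weak learners per boosting stage and then pick one candidate per stage, giving m^T boosted models from only m*T trained learners. The snippet below is a rough, hypothetical sketch of that cross-product (it is not the paper's algorithm); fitting every stage's candidates on the residuals of a single reference path is a simplification, and all hyperparameters are placeholders.

```python
import itertools
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def stagewise_candidates(X, y, n_stages=3, m=2, lr=0.1, seed=0):
    # At each boosting stage, fit `m` candidate weak learners on the current
    # residuals (diversity here comes only from feature subsampling and seeds).
    rng = np.random.RandomState(seed)
    pred = np.zeros(len(y), dtype=float)
    stages = []
    for _ in range(n_stages):
        residual = np.asarray(y, dtype=float) - pred
        candidates = [
            DecisionTreeRegressor(
                max_depth=2, max_features=0.8, random_state=rng.randint(10_000)
            ).fit(X, residual)
            for _ in range(m)
        ]
        stages.append(candidates)
        # Continue boosting along one reference candidate (a simplification).
        pred = pred + lr * candidates[0].predict(X)
    return stages

def enumerate_boosted_models(stages, X, lr=0.1):
    # Picking one candidate per stage yields m**n_stages additive predictors,
    # the cross-product construction the reviews describe.
    for combo in itertools.product(*stages):
        yield sum(lr * tree.predict(X) for tree in combo)
```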
znBiAp5ISn | TAS-GNN: Topology-Aware Spiking Graph Neural Networks for Graph Classification | The recent integration of spiking neurons into graph neural networks has been gaining much attraction due to its superior energy efficiency. Especially because the irregular connection among graph nodes fits the nature of the spiking neural networks, spiking graph neural networks are considered strong alternatives to vanilla graph neural networks. However, there is still a large performance gap for graph tasks between the spiking neural networks and artificial neural networks. The gaps are especially large when they are adapted to graph classification tasks, where none of the nodes in the testset graphs are connected to the training set graphs. We diagnose the problem as the existence of neurons under starvation, caused by the irregular connections among the nodes and the neurons. To alleviate the problem, we propose TAS-GNN. Based on a set of observations on spiking neurons on graph classification tasks, we devise several techniques to utilize more neurons to deliver meaningful information to the connected neurons. Experiments on diverse datasets show up to 27.20% improvement, demonstrating the effectiveness of the TAS-GNN. | https://openreview.net/pdf/7ce7c8cc5374dbd6686b378ef8174a06b76e4183.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "3IYgelN3ZX",
"review_text": "There's a large performance gap for graph tasks, especially graph classification tasks, between the spiking neural networks and artificial neural networks. The authors proposes the problems as the neuron's under starvation and illustrated the reason of the problem. To solve the problem, TAS-GNN was proposed.\n\nThe main contributions of the paper are as follows:\n1: Starvation problem of spiking neurone in GNNs in graph classification tasks are identified.\n\n2: A strategy was proposed to address the spike frequency deviations on the basis of the correlation between graph topology and spike frequency patterns.\n\nThe authors conduct experiments on 5 popular datasets and use several different designs of GNN layer. The results show competitive potential of the TAS-GNN.\n\n1:This is a well-written paper, from the formulation of the problem to the solution. The author's motivation for the use of graph topology is clear.\n\n2:The method of using topology-awaregroup-adaptive neurons shows competitive results compared with other baselines. The ablation study makes the result more persuasive. \n\n3: The Figures in the paper are quite straightforward, easy to follow.\n\n1: The name of the paper is \"Topology-Aware Spiking Graph Neural Networks\". However, as I can tell the only graph topology used in the method is nodes degree, which is used to group the neurons. I wonder if it is appropriate to name it as \"topology aware\", or the author can explain it more.\n\n2: The analysis regarding the performance of the method is lack of discussion. For instance, in some datasets, such as MUTAG and IMDB-Binary, the proposed method achieve quite competitive results while in PROTEINS it doesn't. It's betted to explain what cause the phenomenon, like the characteristics of the datasets? Also, in table 2, the results of GAT and GAT+TAG in IMDB-Binary are the same. It's better to make an explanation about them.\n\n3: There're several typos and basic grammar mistakes in the paper that will affect the presentation of the paper. In line 120 \" and apply is to\"; The sentence in line 123 is hard to understand\n\n1: In section 3 the authors mentioned the hypothesis that the phenomenon mentioned above is caused by the topology of the real-world graphs. What motivates you to have the hypothesis?"
},
{
"confidence": 5,
"rating": 7,
"review_id": "B4knkzsRSl",
"review_text": "This paper primarily discusses integrating Spiking Neural Networks (SNNs) into Graph Neural Networks (GNNs) to address several key challenges in graph classification tasks. Specifically, the paper proposes a new method called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) which leverages the topology of graphs to improve the performance of spiking neural networks in graph classification tasks.\n\n(1)The authors clearly articulate the performance gap between existing Graph Neural Networks (GNNs) and Spiking Neural Networks (SNNs) in graph classification tasks.\n(2)The authors conduct an in-depth analysis of the performance degradation of spiking neural networks in graph classification tasks and introduce the \"neuron starvation\" problem.\n(3)The authors propose topology-aware group-adaptive neurons (TAG) based on the graph's topology, a novel approach that helps address the neuron starvation issue.\n(4)The authors provide a detailed description of how to convert input graphs into spike representations, perform message passing, and classify the graphs.\n(5)The authors validate the method's generalizability and effectiveness by using multiple public datasets (such as MUTAG, PROTEINS, ENZYMES, NCI1, IMDB-BINARY) in the experimental section.\n\n(1)The authors mention several application areas and challenges, but the references and comparisons to existing literature are not sufficiently comprehensive.\n(2)Although the methodology section describes the main steps, it lacks detailed descriptions of some key aspects such as threshold initialization and the specific training process.\n(3)Although there are some ablation studies, the analysis of the individual contributions of each component is insufficient, making it difficult to determine the specific impact of each component on the overall performance improvement.\n\n(1)Could you provide more details on how the neuron starvation problem was diagnosed? Specifically, what metrics or observations were used to identify this issue in SNNs for graph classification?\n(2)The paper mentions the use of learnable initial thresholds for neurons. Could you elaborate on how these initial values are set and what specific strategies or heuristics were used to determine them?\n(3)Conduct a more thorough ablation study to analyze the independent contributions of each component (e.g., TAG, learnable initial thresholds) to the overall performance. This will help readers understand the significance of each part of the proposed method.\n(4)The sensitivity analysis shows variations in performance with different initial thresholds and learning rates. Could you explain why certain thresholds or learning rates were more effective and how they were chosen?\n(5)How does TAS-GNN scale with very large graphs in terms of computational efficiency and memory usage? Are there any specific optimizations or techniques used to handle large-scale datasets?\n(6)While the paper compares TAS-GNN with several baseline methods, could you consider including comparisons with more recent or advanced GNN models that have shown strong performance in graph classification tasks?\n(7)Have you tested TAS-GNN on any real-world applications or datasets beyond the ones mentioned? If so, could you share the results and insights gained from these experiments?"
},
{
"confidence": 4,
"rating": 3,
"review_id": "QplC2giKwy",
"review_text": "The paper presents a novel approach called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) to address the performance gap between spiking neural networks (SNNs) and artificial neural networks (ANNs) in graph classification tasks. The authors identify a \"starvation\" problem in spiking neurons within GNNs, where many neurons do not emit any spikes during inference, leading to severe information loss. This problem is more pronounced in graph classification tasks, where the test set graphs are independent from the training set, unlike in transductive or inductive learning settings.\n\n1.\tThis paper identifies a critical \"starvation\" problem in spiking neurons within Graph Neural Networks (GNNs), where many neurons do not emit any spikes during inference, leading to severe information loss. This problem is more pronounced in graph classification tasks, where the test set graphs are independent from the training set\n2.\tThe paper proposes a novel approach called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) to address the graph classification problem.\n\n1.\tThe authors use the node degree instead of the concept of topology, there’s a large gap between the graph topology and node degree.\n2.\tThe authors solve the graph classification task as a contribution, which is not a significant challenge for spiking graph neural networks.\n3.\tThe advantage of Spiking Neural Networks (SNN) is their low energy consumption. However, the paper does not mention the feature, so it is unclear why graph neural networks should be combined with SNN. The motivation behind TAS-GNN is not clear.\n\nThe important points listed in weakness 1-3."
},
{
"confidence": 4,
"rating": 6,
"review_id": "iP8GDzFwhc",
"review_text": "This paper proposes topology-aware spiking graph neural networks with adaptive thresholds based on a group of neurons for graph classification. The paper first diagnoses the poor performance as the existence of neurons under starvation caused by the graph structure. Then the paper proposes the adaptive threshold among neurons partitioned by degrees, as well as the learnable initial threshold and decay rate to reduce the sensitivity. Experiments on several datasets show superior performance of the proposed method.\n\n1. This paper proposes the first SNN design to target graph classification.\n\n2. This paper identifies the starvation problem and proposes a novel topology-aware group-adaptive technique.\n\n3. Experiments show superior performance on several datasets, some outperforming ANNs.\n\n1. The proposed method seems to be a hybrid ANN-SNN model rather than a pure SNN design. The paper did not discuss how this will affect the deployment of the model on potential neuromorphic hardware, since SNNs mainly target those hardware to obtain energy efficiency.\n\n2. The paper did not discuss the (theoretical) energy efficiency estimation, which is a major motivation for considering SNNs as stated in Introduction.\n\n3. Or if the motivation is to get models with better performance than ANN, then Table 1 does not include state-of-the-art ANN results for comparisons.\n\nSome recent works also study SNN for link prediction tasks in graphs [1] besides node-level classification, which may be discussed.\n\n[1] Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning. ICML 2024."
}
] |
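As a concrete reading of the "topology-aware group-adaptive neuron" idea in the TAS-GNN row above, one can imagine leaky integrate-and-fire neurons whose firing threshold is learned per node-degree group, so that poorly connected nodes are not starved of spikes. The layer below is purely illustrative and not the paper's architecture: the grouping rule, decay, and reset scheme are assumptions, and the hard spike function would need a surrogate gradient to be trainable.

```python
import torch
import torch.nn as nn

class DegreeGroupLIF(nn.Module):
    # Leaky integrate-and-fire neurons with one learnable threshold per
    # node-degree group (a stand-in for "topology-aware group-adaptive").
    def __init__(self, n_groups=4, init_threshold=1.0, decay=0.9):
        super().__init__()
        self.threshold = nn.Parameter(torch.full((n_groups,), init_threshold))
        self.decay = nn.Parameter(torch.tensor(decay))

    def forward(self, x_seq, group_idx):
        # x_seq: (T, N, D) input currents over T time steps;
        # group_idx: (N,) long tensor giving each node's degree-group id.
        v = torch.zeros_like(x_seq[0])
        thr = self.threshold[group_idx].unsqueeze(-1)  # (N, 1), broadcasts over D
        spikes = []
        for x_t in x_seq:
            v = self.decay * v + x_t        # leaky integration of input current
            s = (v >= thr).float()          # fire when the potential crosses the
            v = v - s * thr                 # group threshold, then soft-reset
            spikes.append(s)
        return torch.stack(spikes)          # (T, N, D) spike trains
```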
zn6s6VQYb0 | GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction | Graph-structured data is integral to many applications, prompting the development of various graph representation methods. Graph autoencoders (GAEs), in particular, reconstruct graph structures from node embeddings. Current GAE models primarily utilize self-correlation to represent graph structures and focus on node-level tasks, often overlooking multi-graph scenarios. Our theoretical analysis indicates that self-correlation generally falls short in accurately representing specific graph features such as islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts.To address these limitations, we introduce a cross-correlation mechanism that significantly enhances the GAE representational capabilities. Additionally, we propose the GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks and ensures robust structural reconstruction, through a mirrored encoding-decoding process. This model also tackles the challenge of representation bias during optimization by implementing a loss-balancing strategy. Both theoretical analysis and numerical evaluations demonstrate that our methodology significantly outperforms existing self-correlation-based GAEs in graph structure reconstruction. | https://openreview.net/pdf/57096dd4679d0699198e3899786b24845b43c7a8.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "JVFBYcSJ2e",
"review_text": "This paper proposes a cross-correlation autoencoder for graph structural reconstruction. The authors first analyze the problems of existing self-correlation encoder. Then, a cross-correlation autoencoder is designed. Experimental results show the effectiveness of the cross-correlation autoencoder.\n\n1. The motivation is clear and the cross-correlation autoencoder is reasonable.\n2. The paper is well-written and easy to follow.\n3. The experiments are comprehensive.\n\n1. The authors mention that the current self-correlation methods can not address specific (sub)graph structures. But this paper only presents an overall experimental performance. It is unclear how the proposed cross-correlation autoencoder performs given a specific graph structure. \n\n2. It is not clear whether the graph dataset used in the paper is a directed or undirected graph. Since the cross-correlation autoencoder can represent the directed graph effectively, it is suggested to consider the directed graph dataset.\n\n3. More different architectures of the encoder and decoder should be employed to further verify the effectiveness of the cross-correlation mechanism.\n\nsee Weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "wozjB4vJhN",
"review_text": "This paper proposed a method to address the limitations of existing graph autoencoder (GAE) models that primarily rely on self-correlation for graph structure representation. They claim existing GAE often fail to accurately represent complex structures like islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts. The proposed model, GraphCroc, introduces a cross-correlation mechanism that aims at enhancing the representational capabilities of GAEs. It employs a mirrored encoding-decoding process to ensure robust structural reconstruction and introduces a loss-balancing strategy to tackle representation bias during optimization.\n\n1. The idea to introduce two latent space for reconstructing the graph structure is \"simple and intuitive\". \n\n2. The writing is clear and easy to follow.\n\n3. The experimental results are sound.\n\n1. This paper lacks discussion on related works. There already exists some works trying to solve the graph autoencoder structure recovering issues. For example, including position encoding [1] or adding extra node labels [2]. How the proposed method is compared with these methods, from the perspective of effectiveness and efficiency?\n\n[1] You, Jiaxuan, Rex Ying, and Jure Leskovec. \"Position-aware graph neural networks.\" International conference on machine learning. PMLR, 2019.\n\n[2] M. Zhang, P. Li, Y. Xia, K. Wang, and L. Jin, Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning, Advances in Neural Information Processing Systems (NeurIPS-21), 2021.\n\n2. As the proposed method generate two latent embeddings, I wonder if there exists some techniques to control them to be different with each other? Otherwise I am concerned that whether the two embeddings could converge to each others.\n\nsee above weakness"
},
{
"confidence": 4,
"rating": 5,
"review_id": "ZjrWIOhtku",
"review_text": "This paper theoretically analyzes the limitations of existing graph autoencoders (GAE) in representing special graph features such as islands, symmetrical structures, and directional edges. To address this, the paper proposes a new GAE method, GraphCroc, which employs a cross-correlation mechanism that significantly enhances the representational capabilities of GAEs.\n\n1. The paper clearly shows the limitations of existing GAEs through theoretical analysis.\n\n2. The experimental results demonstrate the advantages of the proposed method in structural reconstruction and graph classification tasks.\n\n3. The paper is easy to follow.\n\n1. In Table 1, the improvements of GraphCroc are evident only on two datasets.\n\n2. While the proposed cross-correlation method performs better than the general self-correlation method on island, symmetric structures, and directed graphs, it would be beneficial to include more results in reconstruction visualization, particularly regarding island or directed edge reconstruction.\n\n3. Some related works [1] need to be discussed.\n\n[1] Liu, Chuang, et al. \"Where to Mask: Structure-Guided Masking for Graph Masked Autoencoders.\" arXiv preprint arXiv:2404.15806 (2024).\n\n1. How about the performance of the proposed method on directed graphs?"
}
] |
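The self-correlation versus cross-correlation contrast in the GraphCroc row above comes down to which inner product the decoder uses to score edges. The two functions below are a minimal illustration under the common sigmoid-inner-product GAE decoder; they are not taken from the paper's implementation.

```python
import torch

def self_correlation_decode(z):
    # Classic GAE decoder: A_hat = sigmoid(Z Z^T). The result is symmetric by
    # construction and has large diagonal entries, which is the kind of
    # limitation the abstract raises for islands, symmetric structures,
    # and directed edges.
    return torch.sigmoid(z @ z.t())

def cross_correlation_decode(q, k):
    # Two separate node embeddings give A_hat = sigmoid(Q K^T), which need not
    # be symmetric and can drive self-loop (diagonal) scores low.
    return torch.sigmoid(q @ k.t())
```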
zm1LcgRpHm | Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations | Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is the most optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play neural network layer called Segment, Shuffle, and Stitch (S3) designed to improve representation learning in time-series models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence along with the original sequence. S3 is modular and can be stacked to achieve different levels of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification, forecasting, and anomaly detection, improving performance on certain datasets by up to 68\%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. The code is available at https://github.com/shivam-grover/S3-TimeSeries. | https://openreview.net/pdf/d5ba68bdf83d04632580f0b9e7ac80199a8c19c5.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "WUjKloq9SX",
"review_text": "This paper introduces a new method for time-series representation learning that enhances the modeling of non-adjacent segment dependencies. Specifically, the proposed method segments, shuffles in a learned manner and stitches the shuffled segments to combine with original time series. The proposed method is model-agnostic without adding significant parameter overhead and shows performance improvement across multiple classification and forecasting base models.\n\n1. The proposed method permutes the original segments to better capture inter-relations between distant segments. It is model-agnostic and introduces minimal parameter overhead to the original model.\n\n2. Extensive experiments on various base models for both classification and forecasting tasks demonstrate the effectiveness of the proposed method.\n\n1. It it not clear how the sorting process, specifically the calculation of permutation $\\sigma$ from $P$, is made differentiable.\n\n2. The compared forecasting baselines such as Informer are no longer state-of-the-art methods. Adding more recent baselines such as Time-LLM, GPT4TS, DLinear, PatchTST would provide a clearer understanding of the proposed method's comparative benefits.\n\n3. The basic assumption for S3 is that modeling non-adjacent dependencies is important. However, the paper lacks detailed case studies that demonstrate the specific types of non-adjacent dependencies effectively captured by S3, which are not addressed by existing models. Additionally, there is no case study to validate that the learned shuffling weights accurately represent these segment dependencies.\n\n1. The results in Tables 1, 2, and 3 seem to indicate more significant improvements in multivariate than in univariate time series tasks. Any reason behind this?\n\n2. What does the \"number of segments\" represent in Figure 6 and Figure A3? Is it the number of segments for the first layer or the final layer? If it refers to \"n\", then in Figure A3, this number seems to perform the best when it is larger than 100 for some datasets?\n\n3. Could you describe the inference process for the S3 method? Additionally, what are the computational overheads for training and inference times for S3?"
},
{
"confidence": 3,
"rating": 4,
"review_id": "Kb7xCYTdD8",
"review_text": "This paper introduces a plug-and-play mechanism called Segment, Shuffle, and Stitch (S3) designed to enhance time-series representation learning in existing models. S3 operates by dividing the original sequence into non-overlapping segments and shuffling them in a learned manner that is optimal for the given task. It then reattaches the shuffled segments and performs a learned weighted sum with the original input to capture both the newly shuffled sequence and the original sequence. This proposed model can enhance the performance of specific models in classification and prediction tasks.\n\nThe paper is easily comprehensible and straightforward.\n\nSufficient experiments are conducted to confirm the effectiveness of the method.\n\nLack of comparative methods:\nIn fact, the proposed method seems to share the same spirit as data augmentation methods in the time series field[1-4]. Why hasn't any data augmentation method been compared?\n\n\nSelection of baseline models:\nThe selected baseline model, Informer, seems somewhat outdated. Why not choose a more recent model, e.g., iTransformer[5] or PatchTST[6]?\n\n\nDataset for prediction task:\nThe author conducted experiments on three ETT datasets, but for prediction tasks, more datasets should be considered, e.g., traffic, electricity, and weather.\n\n\nTime-Series Representation Claim:\n As the author pointed out, more tasks should be considered for time series representation learning.\n\n\n[1]FRAUG: FREQUENCY DOMAIN AUGMENTATION FOR TIME SERIES FORECASTING [2]Time Series Data Augmentation for Deep Learning: A Survey [3]SimPSI: A Simple Strategy to Preserve Spectral Information in Time Series Data Augmentation [4]TOWARDS DIVERSE AND COHERENT AUGMENTATION FOR TIME-SERIES FORECASTING [5]ITRANSFORMER: INVERTED TRANSFORMERS ARE EFFECTIVE FOR TIME SERIES FORECASTING [6]A TIME SERIES IS WORTH 64 WORDS: LONG-TERM FORECASTING WITH TRANSFORMERS\n\nWhat are the essential differences between the proposed method and other data augmentation methods?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "PQ6MFEkGOn",
"review_text": "This paper proposes a new neural network design element which segments, shuffles, and stitches time series for improved representation learning. They evaluate their methods on forecasting and classification tasks, and show that S3 benefits some widely used baselines.\n\n1. To the best of my knowledge, the idea is novel, and fundamentally challenges and changes how to learn representations for time series data\n2. The paper is well written and easy to follow\n3. Experiments are well-designed, and results are promising\n\nI have not found any major weaknesses in the methodology or experimental design. However, I think that the paper might benefit from showing what the S3 module is actually learning. For example, the authors can include the segmented, shuffled, and stitched time series on a particular dataset as an example, along with the weighted time series (used as input to the model), and the original time series. This might provide some intuition as to how this design element improves predictive performance. \n\nI think there's always scope to improve experimental design. TS2Vec is a excellent choice for classification, but not for forecasting. I would recommend that the authors use methods such as PatchTST (transformer-based) or iTransformer, TimesNet (CNN-based), N-BEATs or N-HITS (MLP-based) etc. for time series forecasting. For classification, it would also be good to compare with fully supervised methods such as ResNet1D (see [1]). \n\n### References\n[1] Ismail Fawaz, Hassan, et al. \"Deep learning for time series classification: a review.\" Data mining and knowledge discovery 33.4 (2019): 917-963.\n\nI do not have questions per se, but I am listing some things that I am curious about below:\n\nI would also encourage the authors to evaluate the benefits of S3 on some recent time series foundation models such as MOMENT [2], Chronos [3], Moirai [4], TimesFM [5], and/or LagLLama [6]. The MOMENT model does both classification and forecasting, so it might be interesting to see how S3 benefits pre-trained models, say by just training the S3 layer and freezing the pre-trained backbone (or some variation of this experiment).\n\nOn a similar note, I wonder if S3 improves generalization and hurts memorization, or vice versa. It would be interesting to do some transfer learning experiments where you train on some time series data and evaluate the model on other time series data (see MOMENT or PatchTST for inspiration). \n\n### References\n[2] Goswami, Mononito, et al. \"Moment: A family of open time-series foundation models.\" arXiv preprint arXiv:2402.03885 (2024).\n[3] Ansari, Abdul Fatir, et al. \"Chronos: Learning the language of time series.\" arXiv preprint arXiv:2403.07815 (2024).\n[4] Woo, Gerald, et al. \"Unified training of universal time series forecasting transformers.\" arXiv preprint arXiv:2402.02592 (2024).\n[5] Das, Abhimanyu, et al. \"A decoder-only foundation model for time-series forecasting.\" arXiv preprint arXiv:2310.10688 (2023).\n[6] Rasul, Kashif, et al. \"Lag-llama: Towards foundation models for time series forecasting.\" arXiv preprint arXiv:2310.08278 (2023)."
},
{
"confidence": 4,
"rating": 6,
"review_id": "RhsdsSBMVs",
"review_text": "The paper paper introduces a new approach called Segment, Shuffle, and Stitch (S3) to enhance time-series representation learning. The method involves segmenting the time-series into non-overlapping parts, shuffling them optimally, and stitching them back together along with the original sequence.\n\nKey contributions include:\n\n- Proposing the S3 mechanism to improve time-series representation learning by dynamically reordering segments.\n- Demonstrating that S3 can be integrated with existing neural architectures like CNNs and Transformers, resulting in significant performance improvements.\n- Showing through extensive experiments that S3 enhances performance in time-series classification and forecasting tasks, with improvements up to 68%.\n\n- Code is available, making reproducing this paper easier.\n- Paper is clear.\n- Results appear good, when considered on the set of baselines and dataset picked by the authors.\n\n- Tables 1 and 2 focus on the ETT datasets, which are only a (highly intra-correlated) subset of the common forecasting datasets: Electricity, Traffic, Weather, Illness...\n- I see no mention of CoST in the results tables, despite being cited in the paper. This is usually a very strong baseline for contrastive approaches. Including it would certainly paint a more complete picture of the results landscape. On a related note this also applies to e.g. more recent transformer baselines. Informer is relevant, but also very far from state of the art.\n- Error bars would help one better contextualize the results.\n- The lack of an ablation study makes understanding the reason this works more complicated.\n\n- The 3 points in weaknesses are also questions in the sense that they ask for some new experiments to be performed. Addressing those points would be my first recommendation.\n- Intuitively, it feels like this work is to some extent a form of bootstrap (as data augmentation) combined with a mixup-like sample interpolation. I may be wrong on this and am happy to discuss. If so, could the authors do more of an ablation study connected to this. I.e. how does the approach outperform other (non-permutation)-based data augmentation strategies combined with the same summation operation?\n\nEdit: I have read the author's rebuttal. They have addressed questions I had and I am as a result raising my score to a 6."
},
{
"confidence": 4,
"rating": 6,
"review_id": "Lhb1G9uQox",
"review_text": "The paper introduces a simple but effective differentiable module that performs pre-processing to input multivariate time-series before being fed into any differentiable model for arbitrary task. The pre-processing involves segmenting, shuffling the segments and stiching them together. The novelty include making this seemingly discrete operations into a differentiable module. This simple idea yields significant improvement in performance of different kinds of models over variety of datasets.\n\n1. The method is simple and easy to add to most deep learning models\n2. The technical details are well-motivated and explained\n3. The method also improves training efficiency and convergence time along with performance with very little increate in model complexity\n4. Experimental results across different tasks are strong\n\n1. Visualization and any qualitative study on the shuffling and segments generalted by S3 would greatly benefit the readers.\n2. How well does it optimize transformer based models, especially those that already do segmentation like PatchTST since the attention module captures the relations all pairs of segments already?\n3. Does the representations due to S3 generalize to multiple tasks at a time or do we need to retrain for each task?\n\nSee weaknesses"
}
] |
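The Segment, Shuffle, and Stitch operation described in the S3 row above can be pictured as a small wrapper layer around any backbone. The sketch below substitutes a row-softmax mixing matrix for the learned hard permutation debated in the reviews, purely to keep the example differentiable and self-contained; the segment count, blending parameter, and tensor layout are placeholder assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn

class SoftSegmentShuffleStitch(nn.Module):
    # Segment the sequence, mix segments with a learned (soft) permutation,
    # stitch them back together, then blend with the original input.
    def __init__(self, n_segments, alpha_init=0.5):
        super().__init__()
        self.n = n_segments
        self.mix_logits = nn.Parameter(torch.zeros(n_segments, n_segments))
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        # x: (batch, length, channels); length must be divisible by n_segments.
        b, t, c = x.shape
        seg = x.reshape(b, self.n, t // self.n, c)       # (B, n, seg_len, C)
        mix = torch.softmax(self.mix_logits, dim=-1)     # soft "shuffle"
        shuffled = torch.einsum("ij,bjlc->bilc", mix, seg)
        stitched = shuffled.reshape(b, t, c)             # "stitch"
        return self.alpha * stitched + (1 - self.alpha) * x
```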
zlgfRk2CQa | Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints | Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence/termination at the solution. This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints (DT-L)) with many fewer parameters and providing more reliable solutions. Additionally our DT-L formulation provides guarantees of convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate DT-L is capable of robustly learning algorithms which extrapolate to harder problems than in the training set. We benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system in an NP-hard problem where DT fails to learn. | https://openreview.net/pdf/0735617b982a5aca1dad5a07d887a2347d77d249.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "MnvTMTLcWc",
"review_text": "To solve the stability of Deep Thinking models, this paper proposes to constrain activation functions to be Lipshitz-1 functions. The original DT and DT-R models have training stability problem, basically because of scale explosion or vanishing. The authors revealed the stability problem, attribute the problem to Lipschitz constants, proposed ways to ensure Lipschitz smoothness, and show the effectiveness of their approach through a few examples used in the original DT paper, as well as include the traveling salesman problem.\n\n* This paper is clearly written and well motivated. \n* The storyline is very reasonable: identify problems => propose ways to solve the problem => show the approach actually works\n* This approach is mathematically grounded.\n* Experiments are thorough, by running many random seeds and report error bars.\n\n* The idea is quite straight-forward (may not be a bad thing, but make technical contributions smaller)\n* In the TSP problems, DT-L's results seem worse than NN Tours and BNN Tours. At least some explanation is warranted. \n* I'm not fully convinced by the significance of this paper. The examples shown in the paper are quite toy. Are there more examples you expect DT-L would work?\n* I'd appreciate more visualizations that can intuitively show the benefits of DT-L over DT/DT-R? Maybe some Figures like in the original DT paper. \n* The title is not very informative. Might be better to mention Lipschitz smoothness in the title.\n\n* In Line 141-142, I don't quite get this comment \"Although any Lipschitz constant less than 1 would guarantee convergence, the nature of the problem solving mechanism we seek to learn intuitively means that we do not want fast convergence.\" Why don't we want faster convergence.\n* In Figure 6 left, it looks like DT-L is worse than DT-R? Why is that? More stability leads to worse performance?\n* What about DT and DT-R for TSP?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "JseNGVVPPN",
"review_text": "This paper identifies and rectifies an issue with a particular type of iterative neural network called Deep Thinking Networks. The problem arises in exploding latent representations and unstable training routines. The authors of this work propose an update to the architecture where they add Lipschitz constraints to the model. They show three major benefits: (I) The models train more stably/predictably; (II) the inference-time behavior is better as the latent representations converge with iterations of the recurrent model; and (III) this new approach can learn how to solve NP-Hard problems where the old methods fail.\n\n1. This paper is original to my knowledge. I am aware of much of the work on Deep Thinking Networks and the issues raised and the solutions proposed in this work are novel.\n1. The quality of the work is high. For the most part the experiments are done well and cover many natural questions that would arise from reading the abstract/intro.\n1. The clarity is good. I think the writing is clear and the results are compelling.\n1. The results are significant for those interested in easy-to-hard generalization. These Deep Thinking Networks have strong extrapolation of toy problems and with the proposed updates to the methods they show strong performance even for TSP solving.\n\n1. Clarity: A couple things could be more clear. \n i. I think IPT stands for Incremental Progress Training, but I don't see the acronym defined anywhere. \n ii. Table 1 the units are unclear. I gather there are tour lengths, but that isn't stated in the table or the caption. \n iii. The violin plot in Figure 2 is hard to parse (no harder than any other violin plot). This type of graphic does look nice, but offers little quantitative context. For example, there is no indication of the units/scale of the width of each violin. This is not the right type of plot for a conference paper.\n\n1. Can the authors make the clarifications needed to address my first two points in the Weaknesses section?\n1. Have the authors looked at transformer architectures at all? I'm not asking for results to be added to the paper, but I'm curious about how these techniques, which are independent from the parameterization of any given layer in some ways, might apply to modern large model architectures."
},
{
"confidence": 4,
"rating": 6,
"review_id": "EUnbVSCPjG",
"review_text": "The paper addresses the positive feedback issue in the so called Deep Thinking networks, where the inference computation may involve more recurrent computations than encountered in training. The proposed solution is to normalise the state vector that undergoes the recurrence, i.e. make the mapping contractive, i.e. ensure negative (but just) feedback.\n\nThe paper is well written and clear to follow, the proposed method is pretty straight forward and effective.\n\nAs far as I can tell, it is pretty straight forward control theory stuff for addressing positive feedback. Nothing wrong with the proposed solution, but I would assume this is such a fundamentally well known issue in any recurrent/feedback system that we can leave this to be addressed by the designer at implementation time with any choice of normalisation. It is somewhat disappointing that with the proposed method there is still the need for batch normalisation.\n\nDoes batch normalisation alone not do a good job of stabilising the feedback?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "gpRnDDIEof",
"review_text": "The paper introduces Deep Thinking with Lipschitz Constraints (DT-L), an improved version of the Deep Thinking (DT) networks, designed to enhance the stability and performance of iterative algorithm learning models. The authors address the instability issues inherent in DT networks by analyzing intermediate representation growth and applying Lipschitz constraints. The DT-L model guarantees convergence to a unique solution and demonstrates robustness in learning algorithms that extrapolate to more complex problems. The paper furthermore benchmarks DT-L on the Traveling Salesperson Problem (TSP) as well other than the datasets used in the Deep Thinking models. It compares its performance against existing DT models.\n\n- Introducing Lipschitz constraints into the DT framework enhances the models' reasoning capabilities. This approach addresses instability issues in training and inference, offering theoretical guarantees for convergence.\n- DT-L demonstrates the ability to scale to larger problems effectively, maintaining stability and performance, which is crucial for real-world applications.\n- The comprehensive evaluation on various problem classes, including prefix sums, mazes, chess puzzles, and TSP, highlights the robustness and versatility of the DT-L model.\n- The paper provides a thorough analysis of the issues with DT networks and clearly explains how the proposed modifications address these problems.\n\n- The modifications and theoretical underpinnings of the DT-L model, such as the Lipschitz constraints and orthogonal transformations, add complexity to the model, which might hinder its adoption and understanding by a broader audience.\n- While the DT-L model shows improvement, its performance on the TSP is not impressive, indicating room for further optimization and refinement.\n\n- How does the introduction of Lipschitz constraints impact the computational complexity and training time of the DT-L model compared to traditional DT models?\n- Can the proposed DT-L model be extended to other types of iterative algorithms beyond the ones tested in this paper? If so, what modifications would be necessary?\n- Can this applied for transformer architectures like looped transformers?\n- Can the insights gained from this work be applied to improve the interpretability of the learned algorithms, making the decision-making process of the DT-L model more transparent?"
}
] |
zkhyrxlwqH | Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization | Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Most early methods, though, assume that the given image pairs are from the same camera or have minor lighting differences. Consequently, while these methods perform effectively under such conditions, they generally fail when input image pairs come from different domains, referred to as multimodal image pairs.
To address these limitations, we propose AltO, an unsupervised learning framework for estimating homography in multimodal image pairs. Our method employs a two-phase alternating optimization framework, similar to Expectation-Maximization (EM), where one phase reduces the geometry gap and the other addresses the modality gap. To handle these gaps, we use Barlow Twins loss for the modality gap and propose an extended version, Geometry Barlow Twins, for the geometry gap. As a result, we demonstrate that our method, AltO, can be trained on multimodal datasets without any ground-truth data. It not only outperforms other unsupervised methods but is also compatible with various architectures of homography estimators.
The source code can be found at: https://github.com/songsang7/AltO | https://openreview.net/pdf/dbd7c26b2dae2f1c86abaa70a60fb6e9e683d675.pdf | [
{
"confidence": 5,
"rating": 3,
"review_id": "hXC6dl8P6M",
"review_text": "The paper proposes an unsupervised homography estimation method for multimodal image pairs using an alternating optimization approach. The claimed key innovation is the introduction of the Geometry Barlow Twins loss function for the alternating optimization. The authors show that their approach works on 3 multimodal datasets and different homography estimation architecutres.\n\nThe alternating optimization framework together with Geometry Barlow Twins loss seem to be a fresh perspective on unsupervised multimodal homography estimation.\n\nWeaknesses\n1. Discussion on the Feasibility and Rationality of the Proposed Method: First, for unsupervised training of networks based on iterative prediction, such as RAFT, to ensure stability during training, related methods [1-2] typically apply some form of direct supervision to the motion predicted by the network. This is different from the approach proposed in this paper, which only uses the Geometry Barlow Twins loss for brightness supervision. Second, how RAFT can be used for homography estimation should also be explained, because it is designed for optical flow estimation. Moreover, the paper does not explain how the proposed Geometry Barlow Twins loss supervises the intermediate stages of iterative prediction, whereas RAFT, IHN, and RHWF, along with methods leveraging their structures [1-2], generally provide details on their supervision mechanisms on the intermediate stages. This raises concerns about the feasibility of the proposed supervision method in this paper. Additionally, the effectiveness of the Modality-Agnostic Representation Learning (MARL) introduced in section 4.3 is questionable because it lacks spatial information in its supervision. As mentioned in section 3.2, the projector removes spatial information from the feature maps. The authors should provide a convincing and thorough explanation for these issues. \n\n2. Doubt about the Effectiveness of the Proposed Method: For example, the paper proposes the alternating optimization (AltO) method but does not provide sufficient experimental results to demonstrate its superiority over other strategies, such as directly cascading all the modules. Furthermore, the paper lacks a comparative demonstration of the features extracted with and without the MARL phase, making the advantages of introducing this phase less convincing.\n\n3. Insufficient Experimental Validation: The paper conducts experiments on only 3 cross-modal datasets, among which only the GoogleMap dataset exhibits significant modality differences. The GoogleEarth dataset mainly consists of images taken in different seasons [3]. Part of the DeepIR dataset is simulated multispectral data [4], which will significantly reduce the difficulty of homography estimation. It would be beneficial to conduct experiments on more challenging multimodal datasets, such as those involving VIS-SAR modalities.\n\n[1] Stone, A., Maurer, D., Ayvaci, A., Angelova, A., & Jonschkowski, R. (2021). Smurf: Self-teaching multi-frame unsupervised raft with full-image warping. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (pp. 3887-3896).\n[2] Liang, Y., Liu, J., Zhang, D., & Fu, Y. (2023). Mpi-flow: Learning realistic optical flow with multiplane images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 13857-13868).\n[3] Zhao, Y., Huang, X., & Zhang, Z. (2021). Deep lucas-kanade homography for multimodal image alignment. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 15950-15959).\n[4] Sa, I., Lim, J. Y., Ahn, H. S., & MacDonald, B. (2022). deepNIR: Datasets for generating synthetic NIR images and improved fruit detection system using deep learning techniques. Sensors, 22(13), 4721.\n\nPlease refer to the Weaknesses."
},
{
"confidence": 4,
"rating": 7,
"review_id": "jsmFxFHfsr",
"review_text": "This paper proposes a new unsupervised homography estimation approach for multimodal images. This method is designed as a two-phase optimization framework named AltO. The first phase named \"Geometry Learning\" trains a registration network to align the input multimodal images geometrically. The second phase named \"Modality-Agnostic Representation Learning\" trains an encoder and a projector to extract the image-level features invariant to modality changes. Experimental results demonstrate that AltO outperforms several existing unsupervised approaches on the multimodal registration datasets.\n\n1. The proposed framework is intuitive and interesting. This framework trains a registration network to align the input multimodal images geometrically, and trains another encoder to match the image-level features of the warped multimodal images. This framework has the potential to capture the pixel-level and image-level information in an unsupervised manner.\n2. The organization and presentation of this paper are good. I think I can understand the core idea of this paper.\n\n**1. Some central claims of this paper lack experimental evidence.**\n\n1.1 The \"alternating\" optimization framework is a central design in this paper. However, why is \"alternating\" optimization necessary? Will optimizing the \"geometry loss\" and \"modality loss\" simultaneously hurt performance?\n\n1.2 The superiority of the proposed Geometry Barlow Twins (GBT) loss was not verified. The original Barlow Twins loss can be straightforwardly applied to the proposed model by considering both the spatial axis (indexed with \"h,w\") and batch axis (indexed with \"n\") as the batch dimension. This straightforward implementation should be compared with the proposed GBT loss.\n\n1.3 The proposed approaches should be compared with some recent unsupervised approaches. Here are some approaches with released codes.\n\n[1] Unsupervised global and local homography estimation with motion basis learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.\n\n[2] A Multiscale Framework with Unsupervised Learning for Remote Sensing Image Registration, IEEE Transactions on Geoscience and Remote Sensing, 2022.\n\n**2. This paper did not discuss the recent hand-crafted approaches for multimodal image registration.**\n\nMany recent hand-crafted methods have been published in the top journals, so this kind of approach should not be ignored. The experiment should also compare the proposed approaches with the recent hand-crafted approaches. Here are some hand-crafted approaches with released code.\n\n[3] Histogram of the orientation of the weighted phase descriptor for multi-modal remote sensing image matching. ISPRS Journal of Photogrammetry and Remote Sensing, 2023.\n\n[4] POS-GIFT: A geometric and intensity-invariant feature transformation for multimodal images. Information Fusion, 2024.\n\n**3. The discussion of the motivation is not sufficient.**\n\nThe Introduction section mentioned some typical unsupervised approaches designed for the images from the same modality (e.g., UDHN and biHomE). However, the unsupervised approaches [2,5] designed for multimodal image registration are not discussed. What is the motivation of the proposed method compared with this kind of approach? \n\n[5] A Novel Coarse-to-Fine Deep Learning Registration Framework for Multi-Modal Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing, 2023.\n\n**4. 
This paper misses some references to hand-crafted and unsupervised approaches.**\n\nI have listed some of them in the above weaknesses. The authors should further survey more papers and carefully revise the \"Related Work\" section.\n\nPlease provide more discussions and experimental results to address the above weaknesses.\n\nMoreover, is the 3D reconstruction task related to \"Homography Estimation\" (line 21)? Generally, 3D reconstruction focuses on non-planar scenes, while homography estimation is designed for the planar scenes. Is there some literature that mentions the relationship between 3D reconstruction and homography estimation?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "W69deQd2wr",
"review_text": "The paper addresses unsupervised homography estimation from multi-modal image pairs. The authors propose to cope with the issue of 1) modality, 2) registration in two distinct networks that are trained in an interleaved fashion. The networks architecture derives from the Barlow Twins framework, with changes in the loss function. Results are illustrated on several public benchmark of small images (128^2) and compares favorably wrt to related unsupervised approach.\n\n1- I enjoy reading the paper. I walked through the paper, first with curiosity and skepticism, then with strong interest. The approach is intuitive (adjust the two representations then compute the transformation) and compelling. I am somehow surprised that it works :) The constrastive-like loss used in Barlow Twins is contributing much for the network to learn the correct solution. \n\n2- Overall, the authors are tackling an important problem (unsupervised learning) for which an original solution is proposed --while based on previous recent work. The methodology is clearly presented. Results are convincing (thought only on small images 128x128) and illustrated on various modality pairs. Quantitative results show improvement wrt related unsupervised work\n\n1- Not a weakness, but a points which could have been discussed: why not simply transforming the inputs into edge maps before learning a matching/homography function (and putting aside the modality discrepancy). It would not be a very fancy approach, but I believe it could be a baseline for comparison. \n\n2- The approach would be more convincing if each of the two modules (GL and MARL) had demonstrated their effectiveness also individually (ie same image pair modality using only GL).\n\n- What is the size of the embedding? What is the training time? Are the Barlow Twins trained from scratch?\n\n- Illustration seems to show strong geometric features (ie lines) in the input images. Is it a strong limitation of the approach?"
}
] |
End of preview. Expand